diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 9863afb34aa7e..bbb01426fded3 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -38,6 +38,11 @@ If you have a bugfix or new feature that you would like to contribute to Elastic We enjoy working with contributors to get their code accepted. There are many approaches to fixing a problem and it is important to find the best approach before writing too much code. +Note that it is unlikely the project will merge refactors for the sake of refactoring. These +types of pull requests have a high cost to maintainers in reviewing and testing with little +to no tangible benefit. This especially includes changes generated by tools. For example, +converting all generic interface instances to use the diamond operator. + The process for contributing to any of the [Elastic repositories](https://github.com/elastic/) is similar. Details for individual projects can be found below. ### Fork and clone the repository @@ -106,9 +111,16 @@ then `File->New Project From Existing Sources`. Point to the root of the source directory, select `Import project from external model->Gradle`, enable `Use auto-import`. Additionally, in order to run tests directly from -IDEA 2017.2 and above it is required to disable IDEA run launcher, -which can be achieved by adding `-Didea.no.launcher=true` -[JVM option](https://intellij-support.jetbrains.com/hc/en-us/articles/206544869-Configuring-JVM-options-and-platform-properties) +IDEA 2017.2 and above it is required to disable IDEA run launcher to avoid +finding yourself in "jar hell", which can be achieved by adding the +`-Didea.no.launcher=true` [JVM +option](https://intellij-support.jetbrains.com/hc/en-us/articles/206544869-Configuring-JVM-options-and-platform-properties) +or by adding `idea.no.launcher=true` to the +`idea.properties`[https://www.jetbrains.com/help/idea/file-idea-properties.html] +file which can be accessed under Help > Edit Custom Properties within IDEA. You +may also need to [remove `ant-javafx.jar` from your +classpath][https://github.com/elastic/elasticsearch/issues/14348] if that is +reported as a source of jar hell. The Elasticsearch codebase makes heavy use of Java `assert`s and the test runner requires that assertions be enabled within the JVM. This diff --git a/TESTING.asciidoc b/TESTING.asciidoc index db23e24a1537b..9f64d1dd0afb8 100644 --- a/TESTING.asciidoc +++ b/TESTING.asciidoc @@ -352,6 +352,7 @@ These are the linux flavors the Vagrantfile currently supports: * centos-6 * centos-7 * fedora-25 +* fedora-26 * oel-6 aka Oracle Enterprise Linux 6 * oel-7 aka Oracle Enterprise Linux 7 * sles-12 @@ -471,28 +472,30 @@ is tested depends on the branch. On master, this will test against the current stable branch. On the stable branch, it will test against the latest release branch. Finally, on a release branch, it will test against the most recent release. -=== BWC Testing against a specific branch +=== BWC Testing against a specific remote/branch Sometimes a backward compatibility change spans two versions. A common case is a new functionality that needs a BWC bridge in and an unreleased versioned of a release branch (for example, 5.x). -To test the changes, you can instruct gradle to build the BWC version from a local branch instead of -pulling the release branch from GitHub. You do so using the `tests.bwc.refspec` system property: +To test the changes, you can instruct gradle to build the BWC version from a another remote/branch combination instead of +pulling the release branch from GitHub. 
You do so using the `tests.bwc.remote` and `tests.bwc.refspec` system properties: ------------------------------------------------- -gradle check -Dtests.bwc.refspec=origin/index_req_bwc_5.x +gradle check -Dtests.bwc.remote=${remote} -Dtests.bwc.refspec=index_req_bwc_5.x ------------------------------------------------- -The branch needs to be available on the local clone that the BWC makes of the repository you run the -tests from. Using the `origin` remote is a handy trick to make sure that a branch is available -and is up to date in the case of multiple runs. +The branch needs to be available on the remote that the BWC makes of the +repository you run the tests from. Using the remote is a handy trick to make +sure that a branch is available and is up to date in the case of multiple runs. Example: -Say you need to make a change to `master` and have a BWC layer in `5.x`. You will need to: -. Create a branch called `index_req_change` off `master`. This will contain your change. +Say you need to make a change to `master` and have a BWC layer in `5.x`. You +will need to: +. Create a branch called `index_req_change` off your remote `${remote}`. This +will contain your change. . Create a branch called `index_req_bwc_5.x` off `5.x`. This will contain your bwc layer. -. If not running the tests locally, push both branches to your remote repository. -. Run the tests with `gradle check -Dtests.bwc.refspec=origin/index_req_bwc_5.x` +. Push both branches to your remote repository. +. Run the tests with `gradle check -Dtests.bwc.remote=${remote} -Dtests.bwc.refspec=index_req_bwc_5.x`. == Coverage analysis @@ -548,3 +551,17 @@ included as part of the build by checking the projects of the build. --------------------------------------------------------------------------- gradle projects --------------------------------------------------------------------------- + +== Environment misc + +There is a known issue with macOS localhost resolve strategy that can cause +some integration tests to fail. This is because integration tests have timings +for cluster formation, discovery, etc. that can be exceeded if name resolution +takes a long time. +To fix this, make sure you have your computer name (as returned by `hostname`) +inside `/etc/hosts`, e.g.: +.... +127.0.0.1 localhost ElasticMBP.local +255.255.255.255 broadcasthost +::1 localhost ElasticMBP.local` +.... 
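A quick way to confirm that the macOS name-resolution issue described above is affecting a machine is to time a lookup of the name returned by `hostname`. The following standalone Groovy sketch is illustrative only and not part of the build; multi-second results suggest the `/etc/hosts` entries shown above are needed:

----
// Times resolution of the machine's hostname; slow results explain integration-test timeouts.
String name = ['hostname'].execute().text.trim()
long start = System.nanoTime()
java.net.InetAddress.getAllByName(name)                          // forces a resolver round trip
println "resolving ${name} took ${(System.nanoTime() - start).intdiv(1_000_000)} ms"
----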
diff --git a/Vagrantfile b/Vagrantfile index 830a72322cdbb..487594bba8a1a 100644 --- a/Vagrantfile +++ b/Vagrantfile @@ -64,6 +64,10 @@ Vagrant.configure(2) do |config| config.vm.box = "elastic/fedora-25-x86_64" dnf_common config end + config.vm.define "fedora-26" do |config| + config.vm.box = "elastic/fedora-26-x86_64" + dnf_common config + end config.vm.define "opensuse-42" do |config| config.vm.box = "elastic/opensuse-42-x86_64" opensuse_common config diff --git a/build.gradle b/build.gradle index 7af6ff736ced0..cfc8401a934e0 100644 --- a/build.gradle +++ b/build.gradle @@ -23,6 +23,7 @@ import org.eclipse.jgit.lib.Repository import org.eclipse.jgit.lib.RepositoryBuilder import org.gradle.plugins.ide.eclipse.model.SourceFolder import org.apache.tools.ant.taskdefs.condition.Os +import org.elasticsearch.gradle.BuildPlugin import org.elasticsearch.gradle.VersionProperties import org.elasticsearch.gradle.Version @@ -60,6 +61,10 @@ configure(subprojects.findAll { it.projectDir.toPath().startsWith(rootPath) }) { } } } + plugins.withType(BuildPlugin).whenPluginAdded { + project.licenseFile = project.rootProject.file('LICENSE.txt') + project.noticeFile = project.rootProject.file('NOTICE.txt') + } } /* Introspect all versions of ES that may be tested agains for backwards @@ -122,6 +127,16 @@ if (currentVersion.bugfix == 0) { } } +// build metadata from previous build, contains eg hashes for bwc builds +String buildMetadataValue = System.getenv('BUILD_METADATA') +if (buildMetadataValue == null) { + buildMetadataValue = '' +} +Map buildMetadataMap = buildMetadataValue.tokenize(';').collectEntries { + def (String key, String value) = it.split('=') + return [key, value] +} + // injecting groovy property variables into all projects allprojects { project.ext { @@ -131,6 +146,7 @@ allprojects { // for backcompat testing indexCompatVersions = versions wireCompatVersions = versions.subList(prevMinorIndex, versions.size()) + buildMetadata = buildMetadataMap } } @@ -188,33 +204,15 @@ task branchConsistency { } subprojects { - project.afterEvaluate { - // include license and notice in jars - tasks.withType(Jar) { - into('META-INF') { - from project.rootProject.rootDir - include 'LICENSE.txt' - include 'NOTICE.txt' - } - } - // ignore missing javadocs - tasks.withType(Javadoc) { Javadoc javadoc -> - // the -quiet here is because of a bug in gradle, in that adding a string option - // by itself is not added to the options. By adding quiet, both this option and - // the "value" -quiet is added, separated by a space. This is ok since the javadoc - // command already adds -quiet, so we are just duplicating it - // see https://discuss.gradle.org/t/add-custom-javadoc-option-that-does-not-take-an-argument/5959 - javadoc.options.encoding='UTF8' - javadoc.options.addStringOption('Xdoclint:all,-missing', '-quiet') - /* - TODO: building javadocs with java 9 b118 is currently broken with weird errors, so - for now this is commented out...try again with the next ea build... - javadoc.executable = new File(project.javaHome, 'bin/javadoc') - if (project.javaVersion == JavaVersion.VERSION_1_9) { - // TODO: remove this hack! gradle should be passing this... - javadoc.options.addStringOption('source', '8') - }*/ - } + // ignore missing javadocs + tasks.withType(Javadoc) { Javadoc javadoc -> + // the -quiet here is because of a bug in gradle, in that adding a string option + // by itself is not added to the options. By adding quiet, both this option and + // the "value" -quiet is added, separated by a space. 
This is ok since the javadoc + // command already adds -quiet, so we are just duplicating it + // see https://discuss.gradle.org/t/add-custom-javadoc-option-that-does-not-take-an-argument/5959 + javadoc.options.encoding='UTF8' + javadoc.options.addStringOption('Xdoclint:all,-missing', '-quiet') } /* Sets up the dependencies that we build as part of this project but @@ -272,6 +270,27 @@ subprojects { } } } + + // Handle javadoc dependencies across projects. Order matters: the linksOffline for + // org.elasticsearch:elasticsearch must be the last one or all the links for the + // other packages (e.g org.elasticsearch.client) will point to core rather than + // their own artifacts. + if (project.plugins.hasPlugin(BuildPlugin)) { + String artifactsHost = VersionProperties.elasticsearch.endsWith("-SNAPSHOT") ? "https://snapshots.elastic.co" : "https://artifacts.elastic.co" + Closure sortClosure = { a, b -> b.group <=> a.group } + Closure depJavadocClosure = { dep -> + if (dep.group != null && dep.group.startsWith('org.elasticsearch')) { + String substitution = project.ext.projectSubstitutions.get("${dep.group}:${dep.name}:${dep.version}") + if (substitution != null) { + project.javadoc.dependsOn substitution + ':javadoc' + String artifactPath = dep.group.replaceAll('\\.', '/') + '/' + dep.name.replaceAll('\\.', '/') + '/' + dep.version + project.javadoc.options.linksOffline artifactsHost + "/javadoc/" + artifactPath, "${project.project(substitution).buildDir}/docs/javadoc/" + } + } + } + project.configurations.compile.dependencies.findAll().toSorted(sortClosure).each(depJavadocClosure) + project.configurations.provided.dependencies.findAll().toSorted(sortClosure).each(depJavadocClosure) + } } } @@ -281,7 +300,7 @@ subprojects { // the dependency is added. gradle.projectsEvaluated { allprojects { - if (project.path == ':test:framework' || project.path == ':client:test') { + if (project.path == ':test:framework') { // :test:framework:test cannot run before and after :core:test return } diff --git a/buildSrc/build.gradle b/buildSrc/build.gradle index 0839b8a22f8fa..727996ab7b049 100644 --- a/buildSrc/build.gradle +++ b/buildSrc/build.gradle @@ -92,8 +92,9 @@ dependencies { compile 'com.netflix.nebula:gradle-info-plugin:3.0.3' compile 'org.eclipse.jgit:org.eclipse.jgit:3.2.0.201312181205-r' compile 'com.perforce:p4java:2012.3.551082' // THIS IS SUPPOSED TO BE OPTIONAL IN THE FUTURE.... 
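Looking back at the `BUILD_METADATA` handling added to the root build.gradle above: the environment variable is read as a `;`-separated list of `key=value` pairs (for example hashes used by bwc builds) and turned into a map. A minimal standalone illustration of the same `tokenize`/`collectEntries` idiom, using a made-up sample value:

----
String buildMetadataValue = 'bwc_refspec=deadbeef;bwc_remote=elastic'   // hypothetical sample
Map buildMetadataMap = buildMetadataValue.tokenize(';').collectEntries {
  def (String key, String value) = it.split('=')
  return [key, value]
}
assert buildMetadataMap == [bwc_refspec: 'deadbeef', bwc_remote: 'elastic']
----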
- compile 'de.thetaphi:forbiddenapis:2.3' + compile 'de.thetaphi:forbiddenapis:2.4.1' compile 'org.apache.rat:apache-rat:0.11' + compile "org.elasticsearch:jna:4.4.0-1" } // Gradle 2.14+ removed ProgressLogger(-Factory) classes from the public APIs diff --git a/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingPlugin.groovy b/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingPlugin.groovy index d3d07db0d2072..d0a686e29c2c1 100644 --- a/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingPlugin.groovy +++ b/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingPlugin.groovy @@ -61,7 +61,7 @@ class RandomizedTestingPlugin implements Plugin { ] RandomizedTestingTask newTestTask = tasks.create(properties) newTestTask.classpath = oldTestTask.classpath - newTestTask.testClassesDir = oldTestTask.testClassesDir + newTestTask.testClassesDir = oldTestTask.project.sourceSets.test.output.classesDir // hack so check task depends on custom test Task checkTask = tasks.findByPath('check') diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/BuildPlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/BuildPlugin.groovy index 4fa90853b169e..add518822e07a 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/BuildPlugin.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/BuildPlugin.groovy @@ -30,25 +30,25 @@ import org.gradle.api.Project import org.gradle.api.Task import org.gradle.api.XmlProvider import org.gradle.api.artifacts.Configuration -import org.gradle.api.artifacts.Dependency import org.gradle.api.artifacts.ModuleDependency import org.gradle.api.artifacts.ModuleVersionIdentifier import org.gradle.api.artifacts.ProjectDependency import org.gradle.api.artifacts.ResolvedArtifact import org.gradle.api.artifacts.dsl.RepositoryHandler +import org.gradle.api.file.CopySpec import org.gradle.api.plugins.JavaPlugin import org.gradle.api.publish.maven.MavenPublication import org.gradle.api.publish.maven.plugins.MavenPublishPlugin import org.gradle.api.publish.maven.tasks.GenerateMavenPom import org.gradle.api.tasks.bundling.Jar import org.gradle.api.tasks.compile.JavaCompile +import org.gradle.api.tasks.javadoc.Javadoc import org.gradle.internal.jvm.Jvm import org.gradle.process.ExecResult import org.gradle.util.GradleVersion import java.time.ZoneOffset import java.time.ZonedDateTime - /** * Encapsulates build configuration for elasticsearch projects. 
*/ @@ -79,7 +79,7 @@ class BuildPlugin implements Plugin { configureConfigurations(project) project.ext.versions = VersionProperties.versions configureCompile(project) - configureJavadocJar(project) + configureJavadoc(project) configureSourcesJar(project) configurePomGeneration(project) @@ -123,12 +123,20 @@ class BuildPlugin implements Plugin { } println " Random Testing Seed : ${project.testSeed}" - // enforce gradle version - GradleVersion minGradle = GradleVersion.version('3.3') - if (GradleVersion.current() < minGradle) { + // enforce Gradle version + final GradleVersion currentGradleVersion = GradleVersion.current(); + + final GradleVersion minGradle = GradleVersion.version('3.3') + if (currentGradleVersion < minGradle) { throw new GradleException("${minGradle} or above is required to build elasticsearch") } + final GradleVersion gradle42 = GradleVersion.version('4.2') + final GradleVersion gradle43 = GradleVersion.version('4.3') + if (currentGradleVersion >= gradle42 && currentGradleVersion < gradle43) { + throw new GradleException("${currentGradleVersion} is not compatible with the elasticsearch build") + } + // enforce Java version if (javaVersionEnum < minimumJava) { throw new GradleException("Java ${minimumJava} or above is required to build Elasticsearch") @@ -231,7 +239,7 @@ class BuildPlugin implements Plugin { /** Return the configuration name used for finding transitive deps of the given dependency. */ private static String transitiveDepConfigName(String groupId, String artifactId, String version) { - return "_transitive_${groupId}:${artifactId}:${version}" + return "_transitive_${groupId}_${artifactId}_${version}" } /** @@ -270,8 +278,8 @@ class BuildPlugin implements Plugin { }) // force all dependencies added directly to compile/testCompile to be non-transitive, except for ES itself - Closure disableTransitiveDeps = { Dependency dep -> - if (dep instanceof ModuleDependency && !(dep instanceof ProjectDependency) && dep.group.startsWith('org.elasticsearch') == false) { + Closure disableTransitiveDeps = { ModuleDependency dep -> + if (!(dep instanceof ProjectDependency) && dep.group.startsWith('org.elasticsearch') == false) { dep.transitive = false // also create a configuration just for this dependency version, so that later @@ -289,7 +297,7 @@ class BuildPlugin implements Plugin { project.configurations.provided.dependencies.all(disableTransitiveDeps) } - /** Adds repositores used by ES dependencies */ + /** Adds repositories used by ES dependencies */ static void configureRepositories(Project project) { RepositoryHandler repos = project.repositories if (System.getProperty("repos.mavenlocal") != null) { @@ -454,6 +462,13 @@ class BuildPlugin implements Plugin { } } + static void configureJavadoc(Project project) { + project.tasks.withType(Javadoc) { + executable = new File(project.javaHome, 'bin/javadoc') + } + configureJavadocJar(project) + } + /** Adds a javadocJar task to generate a jar containing javadocs. 
*/ static void configureJavadocJar(Project project) { Jar javadocJarTask = project.task('javadocJar', type: Jar) @@ -473,8 +488,10 @@ class BuildPlugin implements Plugin { project.assemble.dependsOn(sourcesJarTask) } - /** Adds additional manifest info to jars, and adds source and javadoc jars */ + /** Adds additional manifest info to jars */ static void configureJars(Project project) { + project.ext.licenseFile = null + project.ext.noticeFile = null project.tasks.withType(Jar) { Jar jarTask -> // we put all our distributable files under distributions jarTask.destinationDir = new File(project.buildDir, 'distributions') @@ -498,6 +515,20 @@ class BuildPlugin implements Plugin { jarTask.manifest.attributes('Change': 'Unknown') } } + // add license/notice files + project.afterEvaluate { + if (project.licenseFile == null || project.noticeFile == null) { + throw new GradleException("Must specify license and notice file for project ${project.path}") + } + jarTask.into('META-INF') { + from(project.licenseFile.parent) { + include project.licenseFile.name + } + from(project.noticeFile.parent) { + include project.noticeFile.name + } + } + } } } @@ -526,8 +557,6 @@ class BuildPlugin implements Plugin { systemProperty 'tests.artifact', project.name systemProperty 'tests.task', path systemProperty 'tests.security.manager', 'true' - // Breaking change in JDK-9, revert to JDK-8 behavior for now, see https://github.com/elastic/elasticsearch/issues/21534 - systemProperty 'jdk.io.permissionsUseCanonicalPath', 'true' systemProperty 'jna.nosys', 'true' // default test sysprop values systemProperty 'tests.ifNoTests', 'fail' diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/RestTestsFromSnippetsTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/RestTestsFromSnippetsTask.groovy index 0395b31786f63..8491c5b45920e 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/RestTestsFromSnippetsTask.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/RestTestsFromSnippetsTask.groovy @@ -93,7 +93,7 @@ public class RestTestsFromSnippetsTask extends SnippetsTask { * * `sh` snippets that contain `curl` almost always should be marked * with `// CONSOLE`. In the exceptionally rare cases where they are - * not communicating with Elasticsearch, like the xamples in the ec2 + * not communicating with Elasticsearch, like the examples in the ec2 * and gce discovery plugins, the snippets should be marked * `// NOTCONSOLE`. 
*/ return snippet.language == 'js' || snippet.curl diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/SnippetsTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/SnippetsTask.groovy index 94af22f4aa279..7132361e16361 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/SnippetsTask.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/SnippetsTask.groovy @@ -19,6 +19,10 @@ package org.elasticsearch.gradle.doc +import groovy.json.JsonException +import groovy.json.JsonParserType +import groovy.json.JsonSlurper + import org.gradle.api.DefaultTask import org.gradle.api.InvalidUserDataException import org.gradle.api.file.ConfigurableFileTree @@ -117,6 +121,23 @@ public class SnippetsTask extends DefaultTask { + "contain `curl`.") } } + if (snippet.testResponse && snippet.language == 'js') { + String quoted = snippet.contents + // quote values starting with $ + .replaceAll(/([:,])\s*(\$[^ ,\n}]+)/, '$1 "$2"') + // quote fields starting with $ + .replaceAll(/(\$[^ ,\n}]+)\s*:/, '"$1":') + JsonSlurper slurper = + new JsonSlurper(type: JsonParserType.INDEX_OVERLAY) + try { + slurper.parseText(quoted) + } catch (JsonException e) { + throw new InvalidUserDataException("Invalid json " + + "in $snippet. The error is:\n${e.message}.\n" + + "After substitutions and munging, the json " + + "looks like:\n$quoted", e) + } + } perSnippet(snippet) snippet = null } diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesExtension.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesExtension.groovy index 1251be265da9a..712c8a22154c6 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesExtension.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesExtension.groovy @@ -46,6 +46,10 @@ class PluginPropertiesExtension { @Input boolean hasClientJar = false + /** True if the plugin requires the elasticsearch keystore to exist, false otherwise. */ + @Input + boolean requiresKeystore = false + /** A license file that should be included in the built plugin zip. 
*/ @Input File licenseFile = null diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesTask.groovy index 91efe247a016b..bd0765cc6763f 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesTask.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesTask.groovy @@ -80,7 +80,8 @@ class PluginPropertiesTask extends Copy { 'elasticsearchVersion': stringSnap(VersionProperties.elasticsearch), 'javaVersion': project.targetCompatibility as String, 'classname': extension.classname, - 'hasNativeController': extension.hasNativeController + 'hasNativeController': extension.hasNativeController, + 'requiresKeystore': extension.requiresKeystore ] } } diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/ForbiddenPatternsTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/ForbiddenPatternsTask.groovy index ed62e88c567fa..e574d67f2ace1 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/ForbiddenPatternsTask.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/ForbiddenPatternsTask.groovy @@ -63,11 +63,6 @@ public class ForbiddenPatternsTask extends DefaultTask { patterns.put('nocommit should be all lowercase or all uppercase', /((?i)nocommit)(? - // include SelfResolvingDependency with files in the validation - if (dependency instanceof FileCollectionDependency) { - return true - } - return dependency.group && dependency.group.startsWith("org.elasticsearch") == false + dependency.group.startsWith("org.elasticsearch") == false }); // we don't want provided dependencies, which we have already scanned. e.g. don't diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterConfiguration.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterConfiguration.groovy index ab618a0fdc7f7..af84a44233aa3 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterConfiguration.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterConfiguration.groovy @@ -63,13 +63,11 @@ class ClusterConfiguration { boolean debug = false /** - * if true each node will be configured with discovery.zen.minimum_master_nodes set - * to the total number of nodes in the cluster. This will also cause that each node has `0s` state recovery - * timeout which can lead to issues if for instance an existing clusterstate is expected to be recovered - * before any tests start + * Configuration of the setting discovery.zen.minimum_master_nodes on the nodes. + * In case of more than one node, this defaults to (number of nodes / 2) + 1 */ @Input - boolean useMinimumMasterNodes = true + Closure minimumMasterNodes = { getNumNodes() > 1 ? 
getNumNodes().intdiv(2) + 1 : -1 } @Input String jvmArgs = "-Xms" + System.getProperty('tests.heap.size', '512m') + diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterFormationTasks.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterFormationTasks.groovy index 90c839720fb7f..72c6eef6685dc 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterFormationTasks.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterFormationTasks.groovy @@ -206,7 +206,19 @@ class ClusterFormationTasks { for (Map.Entry command : node.config.setupCommands.entrySet()) { // the first argument is the actual script name, relative to home Object[] args = command.getValue().clone() - args[0] = new File(node.homeDir, args[0].toString()) + final Object commandPath + if (Os.isFamily(Os.FAMILY_WINDOWS)) { + /* + * We have to delay building the string as the path will not exist during configuration which will fail on Windows due to + * getting the short name requiring the path to already exist. Note that we have to capture the value of arg[0] now + * otherwise we would stack overflow later since arg[0] is replaced below. + */ + String argsZero = args[0] + commandPath = "${-> Paths.get(NodeInfo.getShortPathName(node.homeDir.toString())).resolve(argsZero.toString()).toString()}" + } else { + commandPath = node.homeDir.toPath().resolve(args[0].toString()).toString() + } + args[0] = commandPath setup = configureExecTask(taskName(prefix, node, command.getKey()), project, setup, node, args) } @@ -299,13 +311,14 @@ class ClusterFormationTasks { // Define a node attribute so we can test that it exists 'node.attr.testattr' : 'test' ] - // we set min master nodes to the total number of nodes in the cluster and - // basically skip initial state recovery to allow the cluster to form using a realistic master election - // this means all nodes must be up, join the seed node and do a master election. This will also allow new and - // old nodes in the BWC case to become the master - if (node.config.useMinimumMasterNodes && node.config.numNodes > 1) { - esConfig['discovery.zen.minimum_master_nodes'] = node.config.numNodes - esConfig['discovery.initial_state_timeout'] = '0s' // don't wait for state.. just start up quickly + int minimumMasterNodes = node.config.minimumMasterNodes.call() + if (minimumMasterNodes > 0) { + esConfig['discovery.zen.minimum_master_nodes'] = minimumMasterNodes + } + if (node.config.numNodes > 1) { + // don't wait for state.. 
just start up quickly + // this will also allow new and old nodes in the BWC case to become the master + esConfig['discovery.initial_state_timeout'] = '0s' } esConfig['node.max_local_storage_nodes'] = node.config.numNodes esConfig['http.port'] = node.config.httpPort @@ -317,7 +330,7 @@ class ClusterFormationTasks { esConfig['cluster.routing.allocation.disk.watermark.flood_stage'] = '1b' } // increase script compilation limit since tests can rapid-fire script compilations - esConfig['script.max_compilations_per_minute'] = 2048 + esConfig['script.max_compilations_rate'] = '2048/1m' esConfig.putAll(node.config.settings) Task writeConfig = project.tasks.create(name: name, type: DefaultTask, dependsOn: setup) @@ -337,7 +350,11 @@ class ClusterFormationTasks { if (node.config.keystoreSettings.isEmpty()) { return setup } else { - File esKeystoreUtil = Paths.get(node.homeDir.toString(), "bin/" + "elasticsearch-keystore").toFile() + /* + * We have to delay building the string as the path will not exist during configuration which will fail on Windows due to + * getting the short name requiring the path to already exist. + */ + final Object esKeystoreUtil = "${-> node.binPath().resolve('elasticsearch-keystore').toString()}" return configureExecTask(name, project, setup, node, esKeystoreUtil, 'create') } } @@ -345,14 +362,19 @@ class ClusterFormationTasks { /** Adds tasks to add settings to the keystore */ static Task configureAddKeystoreSettingTasks(String parent, Project project, Task setup, NodeInfo node) { Map kvs = node.config.keystoreSettings - File esKeystoreUtil = Paths.get(node.homeDir.toString(), "bin/" + "elasticsearch-keystore").toFile() Task parentTask = setup + /* + * We have to delay building the string as the path will not exist during configuration which will fail on Windows due to getting + * the short name requiring the path to already exist. 
+ */ + final Object esKeystoreUtil = "${-> node.binPath().resolve('elasticsearch-keystore').toString()}" for (Map.Entry entry in kvs) { String key = entry.getKey() String name = taskName(parent, node, 'addToKeystore#' + key) Task t = configureExecTask(name, project, parentTask, node, esKeystoreUtil, 'add', key, '-x') + String settingsValue = entry.getValue() // eval this early otherwise it will not use the right value t.doFirst { - standardInput = new ByteArrayInputStream(entry.getValue().getBytes(StandardCharsets.UTF_8)) + standardInput = new ByteArrayInputStream(settingsValue.getBytes(StandardCharsets.UTF_8)) } parentTask = t } @@ -403,7 +425,7 @@ class ClusterFormationTasks { Project pluginProject = plugin.getValue() verifyProjectHasBuildPlugin(name, node.nodeVersion, project, pluginProject) - String configurationName = "_plugin_${prefix}_${pluginProject.path}" + String configurationName = pluginConfigurationName(prefix, pluginProject) Configuration configuration = project.configurations.findByName(configurationName) if (configuration == null) { configuration = project.configurations.create(configurationName) @@ -432,13 +454,21 @@ class ClusterFormationTasks { return copyPlugins } + private static String pluginConfigurationName(final String prefix, final Project project) { + return "_plugin_${prefix}_${project.path}".replace(':', '_') + } + + private static String pluginBwcConfigurationName(final String prefix, final Project project) { + return "_plugin_bwc_${prefix}_${project.path}".replace(':', '_') + } + /** Configures task to copy a plugin based on a zip file resolved using dependencies for an older version */ static Task configureCopyBwcPluginsTask(String name, Project project, Task setup, NodeInfo node, String prefix) { Configuration bwcPlugins = project.configurations.getByName("${prefix}_elasticsearchBwcPlugins") for (Map.Entry plugin : node.config.plugins.entrySet()) { Project pluginProject = plugin.getValue() verifyProjectHasBuildPlugin(name, node.nodeVersion, project, pluginProject) - String configurationName = "_plugin_bwc_${prefix}_${pluginProject.path}" + String configurationName = pluginBwcConfigurationName(prefix, pluginProject) Configuration configuration = project.configurations.findByName(configurationName) if (configuration == null) { configuration = project.configurations.create(configurationName) @@ -468,6 +498,7 @@ class ClusterFormationTasks { } Copy installModule = project.tasks.create(name, Copy.class) installModule.dependsOn(setup) + installModule.dependsOn(module.tasks.bundlePlugin) installModule.into(new File(node.homeDir, "modules/${module.name}")) installModule.from({ project.zipTree(module.tasks.bundlePlugin.outputs.files.singleFile) }) return installModule @@ -476,13 +507,18 @@ class ClusterFormationTasks { static Task configureInstallPluginTask(String name, Project project, Task setup, NodeInfo node, Project plugin, String prefix) { final FileCollection pluginZip; if (node.nodeVersion != VersionProperties.elasticsearch) { - pluginZip = project.configurations.getByName("_plugin_bwc_${prefix}_${plugin.path}") + pluginZip = project.configurations.getByName(pluginBwcConfigurationName(prefix, plugin)) } else { - pluginZip = project.configurations.getByName("_plugin_${prefix}_${plugin.path}") + pluginZip = project.configurations.getByName(pluginConfigurationName(prefix, plugin)) } // delay reading the file location until execution time by wrapping in a closure within a GString - Object file = "${-> new File(node.pluginsTmpDir, 
pluginZip.singleFile.getName()).toURI().toURL().toString()}" - Object[] args = [new File(node.homeDir, 'bin/elasticsearch-plugin'), 'install', file] + final Object file = "${-> new File(node.pluginsTmpDir, pluginZip.singleFile.getName()).toURI().toURL().toString()}" + /* + * We have to delay building the string as the path will not exist during configuration which will fail on Windows due to getting + * the short name requiring the path to already exist. + */ + final Object esPluginUtil = "${-> node.binPath().resolve('elasticsearch-plugin').toString()}" + final Object[] args = [esPluginUtil, 'install', file] return configureExecTask(name, project, setup, node, args) } diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/JNAKernel32Library.java b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/JNAKernel32Library.java new file mode 100644 index 0000000000000..4d069cd434fc0 --- /dev/null +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/JNAKernel32Library.java @@ -0,0 +1,25 @@ +package org.elasticsearch.gradle.test; + +import com.sun.jna.Native; +import com.sun.jna.WString; +import org.apache.tools.ant.taskdefs.condition.Os; + +public class JNAKernel32Library { + + private static final class Holder { + private static final JNAKernel32Library instance = new JNAKernel32Library(); + } + + static JNAKernel32Library getInstance() { + return Holder.instance; + } + + private JNAKernel32Library() { + if (Os.isFamily(Os.FAMILY_WINDOWS)) { + Native.register("kernel32"); + } + } + + native int GetShortPathNameW(WString lpszLongPath, char[] lpszShortPath, int cchBuffer); + +} diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/NodeInfo.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/NodeInfo.groovy index 9aadc5cf4a441..77da1c8ed7824 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/NodeInfo.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/NodeInfo.groovy @@ -18,11 +18,17 @@ */ package org.elasticsearch.gradle.test +import com.sun.jna.Native +import com.sun.jna.WString import org.apache.tools.ant.taskdefs.condition.Os import org.elasticsearch.gradle.Version import org.gradle.api.InvalidUserDataException import org.gradle.api.Project +import java.nio.file.Files +import java.nio.file.Path +import java.nio.file.Paths + /** * A container for the files and configuration associated with a single node in a test cluster. */ @@ -85,10 +91,10 @@ class NodeInfo { String executable /** Path to the elasticsearch start script */ - File esScript + private Object esScript /** script to run when running in the background */ - File wrapperScript + private File wrapperScript /** buffer for ant output when starting this node */ ByteArrayOutputStream buffer = new ByteArrayOutputStream() @@ -132,14 +138,26 @@ class NodeInfo { args.add('/C') args.add('"') // quote the entire command wrapperScript = new File(cwd, "run.bat") - esScript = new File(homeDir, 'bin/elasticsearch.bat') + /* + * We have to delay building the string as the path will not exist during configuration which will fail on Windows due to + * getting the short name requiring the path to already exist. 
+ */ + esScript = "${-> binPath().resolve('elasticsearch.bat').toString()}" } else { executable = 'bash' wrapperScript = new File(cwd, "run") - esScript = new File(homeDir, 'bin/elasticsearch') + esScript = binPath().resolve('elasticsearch') } if (config.daemonize) { - args.add("${wrapperScript}") + if (Os.isFamily(Os.FAMILY_WINDOWS)) { + /* + * We have to delay building the string as the path will not exist during configuration which will fail on Windows due to + * getting the short name requiring the path to already exist. + */ + args.add("${-> getShortPathName(wrapperScript.toString())}") + } else { + args.add("${wrapperScript}") + } } else { args.add("${esScript}") } @@ -158,18 +176,59 @@ class NodeInfo { args.add("${property.key.substring('tests.es.'.size())}=${property.value}") } } - env.put('ES_PATH_CONF', pathConf) - if (Version.fromString(nodeVersion).major == 5) { - args.addAll("-E", "path.conf=${pathConf}") + if (Os.isFamily(Os.FAMILY_WINDOWS)) { + /* + * We have to delay building the string as the path will not exist during configuration which will fail on Windows due to + * getting the short name requiring the path to already exist. + */ + env.put('ES_PATH_CONF', "${-> getShortPathName(pathConf.toString())}") + } + else { + env.put('ES_PATH_CONF', pathConf) } if (!System.properties.containsKey("tests.es.path.data")) { - args.addAll("-E", "path.data=${-> dataDir.toString()}") + if (Os.isFamily(Os.FAMILY_WINDOWS)) { + /* + * We have to delay building the string as the path will not exist during configuration which will fail on Windows due to + * getting the short name requiring the path to already exist. This one is extra tricky because usually we rely on the node + * creating its data directory on startup but we simply can not do that here because getting the short path name requires + * the directory to already exist. Therefore, we create this directory immediately before getting the short name. + */ + args.addAll("-E", "path.data=${-> Files.createDirectories(Paths.get(dataDir.toString())); getShortPathName(dataDir.toString())}") + } else { + args.addAll("-E", "path.data=${-> dataDir.toString()}") + } } if (Os.isFamily(Os.FAMILY_WINDOWS)) { args.add('"') // end the entire command, quoted } } + Path binPath() { + if (Os.isFamily(Os.FAMILY_WINDOWS)) { + return Paths.get(getShortPathName(new File(homeDir, 'bin').toString())) + } else { + return Paths.get(new File(homeDir, 'bin').toURI()) + } + } + + static String getShortPathName(String path) { + assert Os.isFamily(Os.FAMILY_WINDOWS) + final WString longPath = new WString("\\\\?\\" + path) + // first we get the length of the buffer needed + final int length = JNAKernel32Library.getInstance().GetShortPathNameW(longPath, null, 0) + if (length == 0) { + throw new IllegalStateException("path [" + path + "] encountered error [" + Native.getLastError() + "]") + } + final char[] shortPath = new char[length] + // knowing the length of the buffer, now we get the short name + if (JNAKernel32Library.getInstance().GetShortPathNameW(longPath, shortPath, length) == 0) { + throw new IllegalStateException("path [" + path + "] encountered error [" + Native.getLastError() + "]") + } + // we have to strip the \\?\ away from the path for cmd.exe + return Native.toString(shortPath).substring(4) + } + /** Returns debug string for the command that started this node. 
*/ String getCommandString() { String esCommandString = "\nNode ${nodeNum} configuration:\n" diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantSupportPlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantSupportPlugin.groovy index d3b7e3aa880b8..9dfe487e83018 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantSupportPlugin.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantSupportPlugin.groovy @@ -45,7 +45,7 @@ class VagrantSupportPlugin implements Plugin { } String version = pipe.toString().trim() if (runResult.exitValue == 0) { - if (version ==~ /Vagrant 1\.(8\.[6-9]|9\.[0-9])+/) { + if (version ==~ /Vagrant 1\.(8\.[6-9]|9\.[0-9])+/ || version ==~ /Vagrant 2\.[0-9]+\.[0-9]+/) { return [ 'supported' : true ] } else { return [ 'supported' : false, diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantTestPlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantTestPlugin.groovy index 17343cd5cacd9..8c94d48fcc43a 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantTestPlugin.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantTestPlugin.groovy @@ -20,6 +20,7 @@ class VagrantTestPlugin implements Plugin { 'debian-8', 'debian-9', 'fedora-25', + 'fedora-26', 'oel-6', 'oel-7', 'opensuse-42', diff --git a/buildSrc/src/main/resources/checkstyle_suppressions.xml b/buildSrc/src/main/resources/checkstyle_suppressions.xml index bd6d321bb1710..54dfe661f35a9 100644 --- a/buildSrc/src/main/resources/checkstyle_suppressions.xml +++ b/buildSrc/src/main/resources/checkstyle_suppressions.xml @@ -596,7 +596,6 @@ - @@ -740,11 +739,6 @@ - - - - - diff --git a/buildSrc/src/main/resources/forbidden/http-signatures.txt b/buildSrc/src/main/resources/forbidden/http-signatures.txt index 0ef7056579941..dcf20bbb09387 100644 --- a/buildSrc/src/main/resources/forbidden/http-signatures.txt +++ b/buildSrc/src/main/resources/forbidden/http-signatures.txt @@ -15,31 +15,31 @@ # language governing permissions and limitations under the License. 
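The Windows-related changes above (in ClusterFormationTasks and NodeInfo) repeatedly rely on lazily evaluated GStrings: a `${-> ...}` placeholder holds a closure that is only invoked when the string is finally converted with `toString()`, so a path that does not yet exist at configuration time (a prerequisite for `getShortPathName`) can still be resolved later at execution time. A minimal standalone illustration of that Groovy behaviour, with a hypothetical path:

----
def dir = null
def lazyPath = "${-> dir.toUpperCase()}"    // closure is stored, not called, so the null is harmless
dir = 'c:\\elasticsearch\\bin'              // set later, "at execution time"
assert lazyPath.toString() == 'C:\\ELASTICSEARCH\\BIN'
----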
@defaultMessage Explicitly specify the ContentType of HTTP entities when creating -org.elasticsearch.client.http.entity.StringEntity#(java.lang.String) -org.elasticsearch.client.http.entity.StringEntity#(java.lang.String,java.lang.String) -org.elasticsearch.client.http.entity.StringEntity#(java.lang.String,java.nio.charset.Charset) -org.elasticsearch.client.http.entity.ByteArrayEntity#(byte[]) -org.elasticsearch.client.http.entity.ByteArrayEntity#(byte[],int,int) -org.elasticsearch.client.http.entity.FileEntity#(java.io.File) -org.elasticsearch.client.http.entity.InputStreamEntity#(java.io.InputStream) -org.elasticsearch.client.http.entity.InputStreamEntity#(java.io.InputStream,long) -org.elasticsearch.client.http.nio.entity.NByteArrayEntity#(byte[]) -org.elasticsearch.client.http.nio.entity.NByteArrayEntity#(byte[],int,int) -org.elasticsearch.client.http.nio.entity.NFileEntity#(java.io.File) -org.elasticsearch.client.http.nio.entity.NStringEntity#(java.lang.String) -org.elasticsearch.client.http.nio.entity.NStringEntity#(java.lang.String,java.lang.String) +org.apache.http.entity.StringEntity#(java.lang.String) +org.apache.http.entity.StringEntity#(java.lang.String,java.lang.String) +org.apache.http.entity.StringEntity#(java.lang.String,java.nio.charset.Charset) +org.apache.http.entity.ByteArrayEntity#(byte[]) +org.apache.http.entity.ByteArrayEntity#(byte[],int,int) +org.apache.http.entity.FileEntity#(java.io.File) +org.apache.http.entity.InputStreamEntity#(java.io.InputStream) +org.apache.http.entity.InputStreamEntity#(java.io.InputStream,long) +org.apache.http.nio.entity.NByteArrayEntity#(byte[]) +org.apache.http.nio.entity.NByteArrayEntity#(byte[],int,int) +org.apache.http.nio.entity.NFileEntity#(java.io.File) +org.apache.http.nio.entity.NStringEntity#(java.lang.String) +org.apache.http.nio.entity.NStringEntity#(java.lang.String,java.lang.String) @defaultMessage Use non-deprecated constructors -org.elasticsearch.client.http.nio.entity.NFileEntity#(java.io.File,java.lang.String) -org.elasticsearch.client.http.nio.entity.NFileEntity#(java.io.File,java.lang.String,boolean) -org.elasticsearch.client.http.entity.FileEntity#(java.io.File,java.lang.String) -org.elasticsearch.client.http.entity.StringEntity#(java.lang.String,java.lang.String,java.lang.String) +org.apache.http.nio.entity.NFileEntity#(java.io.File,java.lang.String) +org.apache.http.nio.entity.NFileEntity#(java.io.File,java.lang.String,boolean) +org.apache.http.entity.FileEntity#(java.io.File,java.lang.String) +org.apache.http.entity.StringEntity#(java.lang.String,java.lang.String,java.lang.String) @defaultMessage BasicEntity is easy to mess up and forget to set content type -org.elasticsearch.client.http.entity.BasicHttpEntity#() +org.apache.http.entity.BasicHttpEntity#() @defaultMessage EntityTemplate is easy to mess up and forget to set content type -org.elasticsearch.client.http.entity.EntityTemplate#(org.elasticsearch.client.http.entity.ContentProducer) +org.apache.http.entity.EntityTemplate#(org.apache.http.entity.ContentProducer) @defaultMessage SerializableEntity uses java serialization and makes it easy to forget to set content type -org.elasticsearch.client.http.entity.SerializableEntity#(java.io.Serializable) +org.apache.http.entity.SerializableEntity#(java.io.Serializable) diff --git a/buildSrc/src/main/resources/forbidden/jdk-signatures.txt b/buildSrc/src/main/resources/forbidden/jdk-signatures.txt index 52b28ee072629..b17495db6bfb8 100644 --- a/buildSrc/src/main/resources/forbidden/jdk-signatures.txt +++ 
b/buildSrc/src/main/resources/forbidden/jdk-signatures.txt @@ -107,3 +107,6 @@ java.util.Collections#EMPTY_MAP java.util.Collections#EMPTY_SET java.util.Collections#shuffle(java.util.List) @ Use java.util.Collections#shuffle(java.util.List, java.util.Random) with a reproducible source of randomness + +@defaultMessage Avoid creating FilePermission objects directly, but use FilePermissionUtils instead +java.io.FilePermission#(java.lang.String,java.lang.String) diff --git a/buildSrc/src/main/resources/plugin-descriptor.properties b/buildSrc/src/main/resources/plugin-descriptor.properties index 67c6ee39968cd..31388a5ca79b0 100644 --- a/buildSrc/src/main/resources/plugin-descriptor.properties +++ b/buildSrc/src/main/resources/plugin-descriptor.properties @@ -42,3 +42,6 @@ elasticsearch.version=${elasticsearchVersion} # # 'has.native.controller': whether or not the plugin has a native controller has.native.controller=${hasNativeController} +# +# 'requires.keystore': whether or not the plugin needs the elasticsearch keystore be created +requires.keystore=${requiresKeystore} diff --git a/buildSrc/version.properties b/buildSrc/version.properties index eb39b486e6426..020ff236d9bf3 100644 --- a/buildSrc/version.properties +++ b/buildSrc/version.properties @@ -1,15 +1,17 @@ # When updating elasticsearch, please update 'rest' version in core/src/main/resources/org/elasticsearch/bootstrap/test-framework.policy elasticsearch = 7.0.0-alpha1 -lucene = 7.0.0-snapshot-a128fcb +lucene = 7.1.0 # optional dependencies spatial4j = 0.6 jts = 1.13 -jackson = 2.8.6 -snakeyaml = 1.15 -# When updating log4j, please update also docs/java-api/index.asciidoc -log4j = 2.8.2 +jackson = 2.8.10 +snakeyaml = 1.17 +# when updating log4j, please update also docs/java-api/index.asciidoc +log4j = 2.9.1 slf4j = 1.6.2 + +# when updating the JNA version, also update the version in buildSrc/build.gradle jna = 4.4.0-1 # test dependencies diff --git a/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/rest/RestClientBenchmark.java b/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/rest/RestClientBenchmark.java index ddf5316bd75a2..9210526e7c81c 100644 --- a/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/rest/RestClientBenchmark.java +++ b/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/rest/RestClientBenchmark.java @@ -18,17 +18,17 @@ */ package org.elasticsearch.client.benchmark.rest; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.HttpHeaders; -import org.elasticsearch.client.http.HttpHost; -import org.elasticsearch.client.http.HttpStatus; -import org.elasticsearch.client.http.client.config.RequestConfig; -import org.elasticsearch.client.http.conn.ConnectionKeepAliveStrategy; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.entity.StringEntity; -import org.elasticsearch.client.http.impl.nio.client.HttpAsyncClientBuilder; -import org.elasticsearch.client.http.message.BasicHeader; -import org.elasticsearch.client.http.nio.entity.NStringEntity; +import org.apache.http.HttpEntity; +import org.apache.http.HttpHeaders; +import org.apache.http.HttpHost; +import org.apache.http.HttpStatus; +import org.apache.http.client.config.RequestConfig; +import org.apache.http.conn.ConnectionKeepAliveStrategy; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; +import org.apache.http.impl.nio.client.HttpAsyncClientBuilder; +import 
org.apache.http.message.BasicHeader; +import org.apache.http.nio.entity.NStringEntity; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.client.Response; import org.elasticsearch.client.RestClient; diff --git a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/search/NoopSearchRequestBuilder.java b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/search/NoopSearchRequestBuilder.java index f40941a602859..529182aa98f7d 100644 --- a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/search/NoopSearchRequestBuilder.java +++ b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/search/NoopSearchRequestBuilder.java @@ -33,7 +33,7 @@ import org.elasticsearch.search.aggregations.PipelineAggregationBuilder; import org.elasticsearch.search.builder.SearchSourceBuilder; import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder; -import org.elasticsearch.search.rescore.RescoreBuilder; +import org.elasticsearch.search.rescore.RescorerBuilder; import org.elasticsearch.search.slice.SliceBuilder; import org.elasticsearch.search.sort.SortBuilder; import org.elasticsearch.search.sort.SortOrder; @@ -142,8 +142,8 @@ public NoopSearchRequestBuilder setRouting(String... routing) { /** * Sets the preference to execute the search. Defaults to randomize across shards. Can be set to - * _local to prefer local shards, _primary to execute only on primary shards, or - * a custom value, which guarantees that the same order will be used across different requests. + * _local to prefer local shards or a custom value, which guarantees that the same order + * will be used across different requests. */ public NoopSearchRequestBuilder setPreference(String preference) { request.preference(preference); @@ -397,25 +397,25 @@ public NoopSearchRequestBuilder suggest(SuggestBuilder suggestBuilder) { /** * Clears all rescorers on the builder and sets the first one. To use multiple rescore windows use - * {@link #addRescorer(org.elasticsearch.search.rescore.RescoreBuilder, int)}. + * {@link #addRescorer(org.elasticsearch.search.rescore.RescorerBuilder, int)}. * * @param rescorer rescorer configuration * @return this for chaining */ - public NoopSearchRequestBuilder setRescorer(RescoreBuilder rescorer) { + public NoopSearchRequestBuilder setRescorer(RescorerBuilder rescorer) { sourceBuilder().clearRescorers(); return addRescorer(rescorer); } /** * Clears all rescorers on the builder and sets the first one. To use multiple rescore windows use - * {@link #addRescorer(org.elasticsearch.search.rescore.RescoreBuilder, int)}. + * {@link #addRescorer(org.elasticsearch.search.rescore.RescorerBuilder, int)}. 
* * @param rescorer rescorer configuration * @param window rescore window * @return this for chaining */ - public NoopSearchRequestBuilder setRescorer(RescoreBuilder rescorer, int window) { + public NoopSearchRequestBuilder setRescorer(RescorerBuilder rescorer, int window) { sourceBuilder().clearRescorers(); return addRescorer(rescorer.windowSize(window)); } @@ -426,7 +426,7 @@ public NoopSearchRequestBuilder setRescorer(RescoreBuilder rescorer, int window) * @param rescorer rescorer configuration * @return this for chaining */ - public NoopSearchRequestBuilder addRescorer(RescoreBuilder rescorer) { + public NoopSearchRequestBuilder addRescorer(RescorerBuilder rescorer) { sourceBuilder().addRescorer(rescorer); return this; } @@ -438,7 +438,7 @@ public NoopSearchRequestBuilder addRescorer(RescoreBuilder rescorer) { * @param window rescore window * @return this for chaining */ - public NoopSearchRequestBuilder addRescorer(RescoreBuilder rescorer, int window) { + public NoopSearchRequestBuilder addRescorer(RescorerBuilder rescorer, int window) { sourceBuilder().addRescorer(rescorer.windowSize(window)); return this; } diff --git a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/search/TransportNoopSearchAction.java b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/search/TransportNoopSearchAction.java index 280e0b08f2c72..a6796c76f9279 100644 --- a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/search/TransportNoopSearchAction.java +++ b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/search/TransportNoopSearchAction.java @@ -42,8 +42,8 @@ public class TransportNoopSearchAction extends HandledTransportActionIndices API on elastic.co + */ +public final class IndicesClient { + private final RestHighLevelClient restHighLevelClient; + + public IndicesClient(RestHighLevelClient restHighLevelClient) { + this.restHighLevelClient = restHighLevelClient; + } + + /** + * Deletes an index using the Delete Index API + *

+ * See + * Delete Index API on elastic.co + */ + public DeleteIndexResponse deleteIndex(DeleteIndexRequest deleteIndexRequest, Header... headers) throws IOException { + return restHighLevelClient.performRequestAndParseEntity(deleteIndexRequest, Request::deleteIndex, DeleteIndexResponse::fromXContent, + Collections.emptySet(), headers); + } + + /** + * Asynchronously deletes an index using the Delete Index API + *

+ * See + * Delete Index API on elastic.co + */ + public void deleteIndexAsync(DeleteIndexRequest deleteIndexRequest, ActionListener listener, Header... headers) { + restHighLevelClient.performRequestAsyncAndParseEntity(deleteIndexRequest, Request::deleteIndex, DeleteIndexResponse::fromXContent, + listener, Collections.emptySet(), headers); + } +} diff --git a/client/rest-high-level/src/main/java/org/elasticsearch/client/Request.java b/client/rest-high-level/src/main/java/org/elasticsearch/client/Request.java old mode 100644 new mode 100755 index 64813482e14c2..4da68e98e2db9 --- a/client/rest-high-level/src/main/java/org/elasticsearch/client/Request.java +++ b/client/rest-high-level/src/main/java/org/elasticsearch/client/Request.java @@ -19,16 +19,17 @@ package org.elasticsearch.client; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.client.methods.HttpDelete; -import org.elasticsearch.client.http.client.methods.HttpGet; -import org.elasticsearch.client.http.client.methods.HttpHead; -import org.elasticsearch.client.http.client.methods.HttpPost; -import org.elasticsearch.client.http.client.methods.HttpPut; -import org.elasticsearch.client.http.entity.ByteArrayEntity; -import org.elasticsearch.client.http.entity.ContentType; +import org.apache.http.HttpEntity; +import org.apache.http.client.methods.HttpDelete; +import org.apache.http.client.methods.HttpGet; +import org.apache.http.client.methods.HttpHead; +import org.apache.http.client.methods.HttpPost; +import org.apache.http.client.methods.HttpPut; +import org.apache.http.entity.ByteArrayEntity; +import org.apache.http.entity.ContentType; import org.apache.lucene.util.BytesRef; import org.elasticsearch.action.DocWriteRequest; +import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest; import org.elasticsearch.action.bulk.BulkRequest; import org.elasticsearch.action.delete.DeleteRequest; import org.elasticsearch.action.get.GetRequest; @@ -42,6 +43,7 @@ import org.elasticsearch.action.update.UpdateRequest; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.Strings; +import org.elasticsearch.common.SuppressForbidden; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.lucene.uid.Versions; import org.elasticsearch.common.unit.TimeValue; @@ -57,34 +59,52 @@ import java.io.ByteArrayOutputStream; import java.io.IOException; +import java.nio.charset.Charset; import java.util.Collections; import java.util.HashMap; import java.util.Locale; import java.util.Map; +import java.util.Objects; import java.util.StringJoiner; -final class Request { +public final class Request { static final XContentType REQUEST_BODY_CONTENT_TYPE = XContentType.JSON; - final String method; - final String endpoint; - final Map params; - final HttpEntity entity; + private final String method; + private final String endpoint; + private final Map parameters; + private final HttpEntity entity; - Request(String method, String endpoint, Map params, HttpEntity entity) { - this.method = method; - this.endpoint = endpoint; - this.params = params; + public Request(String method, String endpoint, Map parameters, HttpEntity entity) { + this.method = Objects.requireNonNull(method, "method cannot be null"); + this.endpoint = Objects.requireNonNull(endpoint, "endpoint cannot be null"); + this.parameters = Objects.requireNonNull(parameters, "parameters cannot be null"); this.entity = entity; } + public String getMethod() { + return method; + } + + public String getEndpoint() { + 
return endpoint; + } + + public Map getParameters() { + return parameters; + } + + public HttpEntity getEntity() { + return entity; + } + @Override public String toString() { return "Request{" + "method='" + method + '\'' + ", endpoint='" + endpoint + '\'' + - ", params=" + params + + ", params=" + parameters + ", hasBody=" + (entity != null) + '}'; } @@ -104,6 +124,17 @@ static Request delete(DeleteRequest deleteRequest) { return new Request(HttpDelete.METHOD_NAME, endpoint, parameters.getParams(), null); } + static Request deleteIndex(DeleteIndexRequest deleteIndexRequest) { + String endpoint = endpoint(deleteIndexRequest.indices(), Strings.EMPTY_ARRAY, ""); + + Params parameters = Params.builder(); + parameters.withTimeout(deleteIndexRequest.timeout()); + parameters.withMasterTimeout(deleteIndexRequest.masterNodeTimeout()); + parameters.withIndicesOptions(deleteIndexRequest.indicesOptions()); + + return new Request(HttpDelete.METHOD_NAME, endpoint, parameters.getParams(), null); + } + static Request info() { return new Request(HttpGet.METHOD_NAME, "/", Collections.emptyMap(), null); } @@ -139,8 +170,8 @@ static Request bulk(BulkRequest bulkRequest) throws IOException { bulkContentType = XContentType.JSON; } - byte separator = bulkContentType.xContent().streamSeparator(); - ContentType requestContentType = ContentType.create(bulkContentType.mediaType()); + final byte separator = bulkContentType.xContent().streamSeparator(); + final ContentType requestContentType = createContentType(bulkContentType); ByteArrayOutputStream content = new ByteArrayOutputStream(); for (DocWriteRequest request : bulkRequest.requests()) { @@ -231,7 +262,7 @@ static Request bulk(BulkRequest bulkRequest) throws IOException { static Request exists(GetRequest getRequest) { Request request = get(getRequest); - return new Request(HttpHead.METHOD_NAME, request.endpoint, request.params, null); + return new Request(HttpHead.METHOD_NAME, request.endpoint, request.parameters, null); } static Request get(GetRequest getRequest) { @@ -268,7 +299,7 @@ static Request index(IndexRequest indexRequest) { parameters.withWaitForActiveShards(indexRequest.waitForActiveShards()); BytesRef source = indexRequest.source().toBytesRef(); - ContentType contentType = ContentType.create(indexRequest.getContentType().mediaType()); + ContentType contentType = createContentType(indexRequest.getContentType()); HttpEntity entity = new ByteArrayEntity(source.bytes, source.offset, source.length, contentType); return new Request(method, endpoint, parameters.getParams(), entity); @@ -352,7 +383,7 @@ static Request clearScroll(ClearScrollRequest clearScrollRequest) throws IOExcep private static HttpEntity createEntity(ToXContent toXContent, XContentType xContentType) throws IOException { BytesRef source = XContentHelper.toXContent(toXContent, xContentType, false).toBytesRef(); - return new ByteArrayEntity(source.bytes, source.offset, source.length, ContentType.create(xContentType.mediaType())); + return new ByteArrayEntity(source.bytes, source.offset, source.length, createContentType(xContentType)); } static String endpoint(String[] indices, String[] types, String endpoint) { @@ -372,6 +403,17 @@ static String endpoint(String... parts) { return joiner.toString(); } + /** + * Returns a {@link ContentType} from a given {@link XContentType}. 
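For readers skimming the diff, the helper introduced here is the single place allowed to turn an `XContentType` into an Apache `ContentType`; it passes a `null` charset so that no `; charset=...` parameter is appended, which is also why callers move from `mediaType()` to `mediaTypeWithoutParameters()`. A minimal sketch of the intended behaviour, assuming the rest-high-level-client jar is on the classpath:

-------------------------------------------------
import org.elasticsearch.client.Request;
import org.elasticsearch.common.xcontent.XContentType;

public class ContentTypeDemo {
    public static void main(String[] args) {
        // The returned ContentType carries only the bare MIME type,
        // e.g. "application/json", with no charset parameter attached.
        System.out.println(Request.createContentType(XContentType.JSON).getMimeType());
    }
}
-------------------------------------------------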
+ * + * @param xContentType the {@link XContentType} + * @return the {@link ContentType} + */ + @SuppressForbidden(reason = "Only allowed place to convert a XContentType to a ContentType") + public static ContentType createContentType(final XContentType xContentType) { + return ContentType.create(xContentType.mediaTypeWithoutParameters(), (Charset) null); + } + /** * Utility class to build request's parameters map and centralize all parameter names. */ @@ -419,6 +461,10 @@ Params withFetchSourceContext(FetchSourceContext fetchSourceContext) { return this; } + Params withMasterTimeout(TimeValue masterTimeout) { + return putParam("master_timeout", masterTimeout); + } + Params withParent(String parent) { return putParam("parent", parent); } diff --git a/client/rest-high-level/src/main/java/org/elasticsearch/client/RestHighLevelClient.java b/client/rest-high-level/src/main/java/org/elasticsearch/client/RestHighLevelClient.java old mode 100644 new mode 100755 index 0e749f7b310c8..e4827cf31c00d --- a/client/rest-high-level/src/main/java/org/elasticsearch/client/RestHighLevelClient.java +++ b/client/rest-high-level/src/main/java/org/elasticsearch/client/RestHighLevelClient.java @@ -19,8 +19,8 @@ package org.elasticsearch.client; -import org.elasticsearch.client.http.Header; -import org.elasticsearch.client.http.HttpEntity; +import org.apache.http.Header; +import org.apache.http.HttpEntity; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.ElasticsearchStatusException; import org.elasticsearch.action.ActionListener; @@ -43,6 +43,7 @@ import org.elasticsearch.action.search.SearchScrollRequest; import org.elasticsearch.action.update.UpdateRequest; import org.elasticsearch.action.update.UpdateResponse; +import org.elasticsearch.common.CheckedConsumer; import org.elasticsearch.common.CheckedFunction; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.xcontent.ContextParser; @@ -75,6 +76,8 @@ import org.elasticsearch.search.aggregations.bucket.nested.ReverseNestedAggregationBuilder; import org.elasticsearch.search.aggregations.bucket.range.DateRangeAggregationBuilder; import org.elasticsearch.search.aggregations.bucket.range.GeoDistanceAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.range.IpRangeAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.range.ParsedBinaryRange; import org.elasticsearch.search.aggregations.bucket.range.ParsedDateRange; import org.elasticsearch.search.aggregations.bucket.range.ParsedGeoDistance; import org.elasticsearch.search.aggregations.bucket.range.ParsedRange; @@ -119,6 +122,8 @@ import org.elasticsearch.search.aggregations.metrics.stats.extended.ParsedExtendedStats; import org.elasticsearch.search.aggregations.metrics.sum.ParsedSum; import org.elasticsearch.search.aggregations.metrics.sum.SumAggregationBuilder; +import org.elasticsearch.search.aggregations.metrics.tophits.ParsedTopHits; +import org.elasticsearch.search.aggregations.metrics.tophits.TopHitsAggregationBuilder; import org.elasticsearch.search.aggregations.metrics.valuecount.ParsedValueCount; import org.elasticsearch.search.aggregations.metrics.valuecount.ValueCountAggregationBuilder; import org.elasticsearch.search.aggregations.pipeline.InternalSimpleValue; @@ -138,6 +143,7 @@ import org.elasticsearch.search.suggest.phrase.PhraseSuggestion; import org.elasticsearch.search.suggest.term.TermSuggestion; +import java.io.Closeable; import java.io.IOException; import java.util.ArrayList; import java.util.Collections; @@ 
-157,39 +163,80 @@ /** * High level REST client that wraps an instance of the low level {@link RestClient} and allows to build requests and read responses. - * The provided {@link RestClient} is externally built and closed. - * Can be sub-classed to expose additional client methods that make use of endpoints added to Elasticsearch through plugins, or to - * add support for custom response sections, again added to Elasticsearch through plugins. + * The {@link RestClient} instance is internally built based on the provided {@link RestClientBuilder} and it gets closed automatically + * when closing the {@link RestHighLevelClient} instance that wraps it. + * In case an already existing instance of a low-level REST client needs to be provided, this class can be subclassed and the + * {@link #RestHighLevelClient(RestClient, CheckedConsumer, List)} constructor can be used. + * This class can also be sub-classed to expose additional client methods that make use of endpoints added to Elasticsearch through + * plugins, or to add support for custom response sections, again added to Elasticsearch through plugins. */ -public class RestHighLevelClient { +public class RestHighLevelClient implements Closeable { private final RestClient client; private final NamedXContentRegistry registry; + private final CheckedConsumer doClose; + + private final IndicesClient indicesClient = new IndicesClient(this); + + /** + * Creates a {@link RestHighLevelClient} given the low level {@link RestClientBuilder} that allows to build the + * {@link RestClient} to be used to perform requests. + */ + public RestHighLevelClient(RestClientBuilder restClientBuilder) { + this(restClientBuilder, Collections.emptyList()); + } /** - * Creates a {@link RestHighLevelClient} given the low level {@link RestClient} that it should use to perform requests. + * Creates a {@link RestHighLevelClient} given the low level {@link RestClientBuilder} that allows to build the + * {@link RestClient} to be used to perform requests and parsers for custom response sections added to Elasticsearch through plugins. */ - public RestHighLevelClient(RestClient restClient) { - this(restClient, Collections.emptyList()); + protected RestHighLevelClient(RestClientBuilder restClientBuilder, List namedXContentEntries) { + this(restClientBuilder.build(), RestClient::close, namedXContentEntries); } /** * Creates a {@link RestHighLevelClient} given the low level {@link RestClient} that it should use to perform requests and * a list of entries that allow to parse custom response sections added to Elasticsearch through plugins. + * This constructor can be called by subclasses in case an externally created low-level REST client needs to be provided. + * The consumer argument allows to control what needs to be done when the {@link #close()} method is called. + * Also subclasses can provide parsers for custom response sections added to Elasticsearch through plugins. 
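Since the reworked class Javadoc above now centres on building the client from a `RestClientBuilder` and closing it together with the wrapped low-level `RestClient`, a short, hedged usage sketch may help; the host, port and index name below are illustrative only:

-------------------------------------------------
import org.apache.http.HttpHost;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexResponse;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

import java.io.IOException;

public class HighLevelClientDemo {
    public static void main(String[] args) throws IOException {
        // The high-level client now owns the low-level client it builds,
        // so try-with-resources closes both when done.
        try (RestHighLevelClient client =
                 new RestHighLevelClient(RestClient.builder(new HttpHost("localhost", 9200)))) {
            DeleteIndexResponse response =
                client.indices().deleteIndex(new DeleteIndexRequest("my_index"));
            System.out.println("acknowledged: " + response.isAcknowledged());
        }
    }
}
-------------------------------------------------

The `indices()` accessor and the delete-index round trip rely on the `IndicesClient` added earlier in this diff.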
*/ - protected RestHighLevelClient(RestClient restClient, List namedXContentEntries) { - this.client = Objects.requireNonNull(restClient); + protected RestHighLevelClient(RestClient restClient, CheckedConsumer doClose, + List namedXContentEntries) { + this.client = Objects.requireNonNull(restClient, "restClient must not be null"); + this.doClose = Objects.requireNonNull(doClose, "doClose consumer must not be null"); this.registry = new NamedXContentRegistry( Stream.of(getDefaultNamedXContents().stream(), getProvidedNamedXContents().stream(), namedXContentEntries.stream()) .flatMap(Function.identity()).collect(toList())); } + /** + * Returns the low-level client that the current high-level client instance is using to perform requests + */ + public RestClient getLowLevelClient() { + return client; + } + + @Override + public final void close() throws IOException { + doClose.accept(client); + } + + /** + * Provides an {@link IndicesClient} which can be used to access the Indices API. + * + * See Indices API on elastic.co + */ + public final IndicesClient indices() { + return indicesClient; + } + /** * Executes a bulk request using the Bulk API * * See Bulk API on elastic.co */ - public BulkResponse bulk(BulkRequest bulkRequest, Header... headers) throws IOException { + public final BulkResponse bulk(BulkRequest bulkRequest, Header... headers) throws IOException { return performRequestAndParseEntity(bulkRequest, Request::bulk, BulkResponse::fromXContent, emptySet(), headers); } @@ -198,14 +245,14 @@ public BulkResponse bulk(BulkRequest bulkRequest, Header... headers) throws IOEx * * See Bulk API on elastic.co */ - public void bulkAsync(BulkRequest bulkRequest, ActionListener listener, Header... headers) { + public final void bulkAsync(BulkRequest bulkRequest, ActionListener listener, Header... headers) { performRequestAsyncAndParseEntity(bulkRequest, Request::bulk, BulkResponse::fromXContent, listener, emptySet(), headers); } /** * Pings the remote Elasticsearch cluster and returns true if the ping succeeded, false otherwise */ - public boolean ping(Header... headers) throws IOException { + public final boolean ping(Header... headers) throws IOException { return performRequest(new MainRequest(), (request) -> Request.ping(), RestHighLevelClient::convertExistsResponse, emptySet(), headers); } @@ -213,7 +260,7 @@ public boolean ping(Header... headers) throws IOException { /** * Get the cluster info otherwise provided when sending an HTTP request to port 9200 */ - public MainResponse info(Header... headers) throws IOException { + public final MainResponse info(Header... headers) throws IOException { return performRequestAndParseEntity(new MainRequest(), (request) -> Request.info(), MainResponse::fromXContent, emptySet(), headers); } @@ -223,7 +270,7 @@ public MainResponse info(Header... headers) throws IOException { * * See Get API on elastic.co */ - public GetResponse get(GetRequest getRequest, Header... headers) throws IOException { + public final GetResponse get(GetRequest getRequest, Header... headers) throws IOException { return performRequestAndParseEntity(getRequest, Request::get, GetResponse::fromXContent, singleton(404), headers); } @@ -241,7 +288,7 @@ public void getAsync(GetRequest getRequest, ActionListener listener * * See Get API on elastic.co */ - public boolean exists(GetRequest getRequest, Header... headers) throws IOException { + public final boolean exists(GetRequest getRequest, Header... 
headers) throws IOException { return performRequest(getRequest, Request::exists, RestHighLevelClient::convertExistsResponse, emptySet(), headers); } @@ -250,7 +297,7 @@ public boolean exists(GetRequest getRequest, Header... headers) throws IOExcepti * * See Get API on elastic.co */ - public void existsAsync(GetRequest getRequest, ActionListener listener, Header... headers) { + public final void existsAsync(GetRequest getRequest, ActionListener listener, Header... headers) { performRequestAsync(getRequest, Request::exists, RestHighLevelClient::convertExistsResponse, listener, emptySet(), headers); } @@ -259,7 +306,7 @@ public void existsAsync(GetRequest getRequest, ActionListener listener, * * See Index API on elastic.co */ - public IndexResponse index(IndexRequest indexRequest, Header... headers) throws IOException { + public final IndexResponse index(IndexRequest indexRequest, Header... headers) throws IOException { return performRequestAndParseEntity(indexRequest, Request::index, IndexResponse::fromXContent, emptySet(), headers); } @@ -268,7 +315,7 @@ public IndexResponse index(IndexRequest indexRequest, Header... headers) throws * * See Index API on elastic.co */ - public void indexAsync(IndexRequest indexRequest, ActionListener listener, Header... headers) { + public final void indexAsync(IndexRequest indexRequest, ActionListener listener, Header... headers) { performRequestAsyncAndParseEntity(indexRequest, Request::index, IndexResponse::fromXContent, listener, emptySet(), headers); } @@ -277,7 +324,7 @@ public void indexAsync(IndexRequest indexRequest, ActionListener *
* See Update API on elastic.co */ - public UpdateResponse update(UpdateRequest updateRequest, Header... headers) throws IOException { + public final UpdateResponse update(UpdateRequest updateRequest, Header... headers) throws IOException { return performRequestAndParseEntity(updateRequest, Request::update, UpdateResponse::fromXContent, emptySet(), headers); } @@ -286,99 +333,101 @@ public UpdateResponse update(UpdateRequest updateRequest, Header... headers) thr *
* See Update API on elastic.co */ - public void updateAsync(UpdateRequest updateRequest, ActionListener listener, Header... headers) { + public final void updateAsync(UpdateRequest updateRequest, ActionListener listener, Header... headers) { performRequestAsyncAndParseEntity(updateRequest, Request::update, UpdateResponse::fromXContent, listener, emptySet(), headers); } /** - * Deletes a document by id using the Delete api + * Deletes a document by id using the Delete API * * See Delete API on elastic.co */ - public DeleteResponse delete(DeleteRequest deleteRequest, Header... headers) throws IOException { + public final DeleteResponse delete(DeleteRequest deleteRequest, Header... headers) throws IOException { return performRequestAndParseEntity(deleteRequest, Request::delete, DeleteResponse::fromXContent, Collections.singleton(404), headers); } /** - * Asynchronously deletes a document by id using the Delete api + * Asynchronously deletes a document by id using the Delete API * * See Delete API on elastic.co */ - public void deleteAsync(DeleteRequest deleteRequest, ActionListener listener, Header... headers) { + public final void deleteAsync(DeleteRequest deleteRequest, ActionListener listener, Header... headers) { performRequestAsyncAndParseEntity(deleteRequest, Request::delete, DeleteResponse::fromXContent, listener, Collections.singleton(404), headers); } /** - * Executes a search using the Search api + * Executes a search using the Search API * * See Search API on elastic.co */ - public SearchResponse search(SearchRequest searchRequest, Header... headers) throws IOException { + public final SearchResponse search(SearchRequest searchRequest, Header... headers) throws IOException { return performRequestAndParseEntity(searchRequest, Request::search, SearchResponse::fromXContent, emptySet(), headers); } /** - * Asynchronously executes a search using the Search api + * Asynchronously executes a search using the Search API * * See Search API on elastic.co */ - public void searchAsync(SearchRequest searchRequest, ActionListener listener, Header... headers) { + public final void searchAsync(SearchRequest searchRequest, ActionListener listener, Header... headers) { performRequestAsyncAndParseEntity(searchRequest, Request::search, SearchResponse::fromXContent, listener, emptySet(), headers); } /** - * Executes a search using the Search Scroll api + * Executes a search using the Search Scroll API * * See Search Scroll * API on elastic.co */ - public SearchResponse searchScroll(SearchScrollRequest searchScrollRequest, Header... headers) throws IOException { + public final SearchResponse searchScroll(SearchScrollRequest searchScrollRequest, Header... headers) throws IOException { return performRequestAndParseEntity(searchScrollRequest, Request::searchScroll, SearchResponse::fromXContent, emptySet(), headers); } /** - * Asynchronously executes a search using the Search Scroll api + * Asynchronously executes a search using the Search Scroll API * * See Search Scroll * API on elastic.co */ - public void searchScrollAsync(SearchScrollRequest searchScrollRequest, ActionListener listener, Header... headers) { + public final void searchScrollAsync(SearchScrollRequest searchScrollRequest, + ActionListener listener, Header... 
headers) { performRequestAsyncAndParseEntity(searchScrollRequest, Request::searchScroll, SearchResponse::fromXContent, listener, emptySet(), headers); } /** - * Clears one or more scroll ids using the Clear Scroll api + * Clears one or more scroll ids using the Clear Scroll API * * See * Clear Scroll API on elastic.co */ - public ClearScrollResponse clearScroll(ClearScrollRequest clearScrollRequest, Header... headers) throws IOException { + public final ClearScrollResponse clearScroll(ClearScrollRequest clearScrollRequest, Header... headers) throws IOException { return performRequestAndParseEntity(clearScrollRequest, Request::clearScroll, ClearScrollResponse::fromXContent, emptySet(), headers); } /** - * Asynchronously clears one or more scroll ids using the Clear Scroll api + * Asynchronously clears one or more scroll ids using the Clear Scroll API * * See * Clear Scroll API on elastic.co */ - public void clearScrollAsync(ClearScrollRequest clearScrollRequest, ActionListener listener, Header... headers) { + public final void clearScrollAsync(ClearScrollRequest clearScrollRequest, + ActionListener listener, Header... headers) { performRequestAsyncAndParseEntity(clearScrollRequest, Request::clearScroll, ClearScrollResponse::fromXContent, listener, emptySet(), headers); } - protected Resp performRequestAndParseEntity(Req request, + protected final Resp performRequestAndParseEntity(Req request, CheckedFunction requestConverter, CheckedFunction entityParser, Set ignores, Header... headers) throws IOException { return performRequest(request, requestConverter, (response) -> parseEntity(response.getEntity(), entityParser), ignores, headers); } - protected Resp performRequest(Req request, + protected final Resp performRequest(Req request, CheckedFunction requestConverter, CheckedFunction responseConverter, Set ignores, Header... headers) throws IOException { @@ -389,7 +438,7 @@ protected Resp performRequest(Req request, Request req = requestConverter.apply(request); Response response; try { - response = client.performRequest(req.method, req.endpoint, req.params, req.entity, headers); + response = client.performRequest(req.getMethod(), req.getEndpoint(), req.getParameters(), req.getEntity(), headers); } catch (ResponseException e) { if (ignores.contains(e.getResponse().getStatusLine().getStatusCode())) { try { @@ -412,7 +461,7 @@ protected Resp performRequest(Req request, } } - protected void performRequestAsyncAndParseEntity(Req request, + protected final void performRequestAsyncAndParseEntity(Req request, CheckedFunction requestConverter, CheckedFunction entityParser, ActionListener listener, Set ignores, Header... headers) { @@ -420,7 +469,7 @@ protected void performRequestAsyncAndParseEnti listener, ignores, headers); } - protected void performRequestAsync(Req request, + protected final void performRequestAsync(Req request, CheckedFunction requestConverter, CheckedFunction responseConverter, ActionListener listener, Set ignores, Header... 
headers) { @@ -438,10 +487,10 @@ protected void performRequestAsync(Req request } ResponseListener responseListener = wrapResponseListener(responseConverter, listener, ignores); - client.performRequestAsync(req.method, req.endpoint, req.params, req.entity, responseListener, headers); + client.performRequestAsync(req.getMethod(), req.getEndpoint(), req.getParameters(), req.getEntity(), responseListener, headers); } - ResponseListener wrapResponseListener(CheckedFunction responseConverter, + final ResponseListener wrapResponseListener(CheckedFunction responseConverter, ActionListener actionListener, Set ignores) { return new ResponseListener() { @Override @@ -486,7 +535,7 @@ public void onFailure(Exception exception) { * that wraps the original {@link ResponseException}. The potential exception obtained while parsing is added to the returned * exception as a suppressed exception. This method is guaranteed to not throw any exception eventually thrown while parsing. */ - ElasticsearchStatusException parseResponseException(ResponseException responseException) { + protected final ElasticsearchStatusException parseResponseException(ResponseException responseException) { Response response = responseException.getResponse(); HttpEntity entity = response.getEntity(); ElasticsearchStatusException elasticsearchException; @@ -506,8 +555,8 @@ ElasticsearchStatusException parseResponseException(ResponseException responseEx return elasticsearchException; } - Resp parseEntity( - HttpEntity entity, CheckedFunction entityParser) throws IOException { + protected final Resp parseEntity(final HttpEntity entity, + final CheckedFunction entityParser) throws IOException { if (entity == null) { throw new IllegalStateException("Response body expected but not returned"); } @@ -570,6 +619,8 @@ static List getDefaultNamedXContents() { map.put(SignificantLongTerms.NAME, (p, c) -> ParsedSignificantLongTerms.fromXContent(p, (String) c)); map.put(SignificantStringTerms.NAME, (p, c) -> ParsedSignificantStringTerms.fromXContent(p, (String) c)); map.put(ScriptedMetricAggregationBuilder.NAME, (p, c) -> ParsedScriptedMetric.fromXContent(p, (String) c)); + map.put(IpRangeAggregationBuilder.NAME, (p, c) -> ParsedBinaryRange.fromXContent(p, (String) c)); + map.put(TopHitsAggregationBuilder.NAME, (p, c) -> ParsedTopHits.fromXContent(p, (String) c)); List entries = map.entrySet().stream() .map(entry -> new NamedXContentRegistry.Entry(Aggregation.class, new ParseField(entry.getKey()), entry.getValue())) .collect(Collectors.toList()); diff --git a/client/rest-high-level/src/main/resources/forbidden/rest-high-level-signatures.txt b/client/rest-high-level/src/main/resources/forbidden/rest-high-level-signatures.txt new file mode 100644 index 0000000000000..fb2330f3f083c --- /dev/null +++ b/client/rest-high-level/src/main/resources/forbidden/rest-high-level-signatures.txt @@ -0,0 +1,21 @@ +# Licensed to Elasticsearch under one or more contributor +# license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright +# ownership. Elasticsearch licenses this file to you under +# the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on +# an 'AS IS' BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, +# either express or implied. See the License for the specific +# language governing permissions and limitations under the License. + +@defaultMessage Use Request#createContentType(XContentType) to be sure to pass the right MIME type +org.apache.http.entity.ContentType#create(java.lang.String) +org.apache.http.entity.ContentType#create(java.lang.String,java.lang.String) +org.apache.http.entity.ContentType#create(java.lang.String,java.nio.charset.Charset) +org.apache.http.entity.ContentType#create(java.lang.String,org.apache.http.NameValuePair[]) diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/CrudIT.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/CrudIT.java index 5605790697122..e36c445082ed6 100644 --- a/client/rest-high-level/src/test/java/org/elasticsearch/client/CrudIT.java +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/CrudIT.java @@ -19,8 +19,8 @@ package org.elasticsearch.client; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.entity.StringEntity; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.ElasticsearchStatusException; import org.elasticsearch.action.DocWriteRequest; @@ -39,7 +39,6 @@ import org.elasticsearch.action.update.UpdateResponse; import org.elasticsearch.common.Strings; import org.elasticsearch.common.bytes.BytesReference; -import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.ByteSizeUnit; import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -50,7 +49,6 @@ import org.elasticsearch.script.Script; import org.elasticsearch.script.ScriptType; import org.elasticsearch.search.fetch.subphase.FetchSourceContext; -import org.elasticsearch.threadpool.ThreadPool; import java.io.IOException; import java.util.Collections; @@ -614,14 +612,14 @@ public void afterBulk(long executionId, BulkRequest request, Throwable failure) } }; - ThreadPool threadPool = new ThreadPool(Settings.builder().put("node.name", getClass().getName()).build()); // Pull the client to a variable to work around https://bugs.eclipse.org/bugs/show_bug.cgi?id=514884 RestHighLevelClient hlClient = highLevelClient(); - try(BulkProcessor processor = new BulkProcessor.Builder(hlClient::bulkAsync, listener, threadPool) - .setConcurrentRequests(0) - .setBulkSize(new ByteSizeValue(5, ByteSizeUnit.GB)) - .setBulkActions(nbItems + 1) - .build()) { + + try (BulkProcessor processor = BulkProcessor.builder(hlClient::bulkAsync, listener) + .setConcurrentRequests(0) + .setBulkSize(new ByteSizeValue(5, ByteSizeUnit.GB)) + .setBulkActions(nbItems + 1) + .build()) { for (int i = 0; i < nbItems; i++) { String id = String.valueOf(i); boolean erroneous = randomBoolean(); @@ -631,7 +629,7 @@ public void afterBulk(long executionId, BulkRequest request, Throwable failure) if (opType == DocWriteRequest.OpType.DELETE) { if (erroneous == false) { assertEquals(RestStatus.CREATED, - highLevelClient().index(new IndexRequest("index", "test", id).source("field", -1)).status()); + highLevelClient().index(new IndexRequest("index", "test", 
id).source("field", -1)).status()); } DeleteRequest deleteRequest = new DeleteRequest("index", "test", id); processor.add(deleteRequest); @@ -653,10 +651,10 @@ public void afterBulk(long executionId, BulkRequest request, Throwable failure) } else if (opType == DocWriteRequest.OpType.UPDATE) { UpdateRequest updateRequest = new UpdateRequest("index", "test", id) - .doc(new IndexRequest().source(xContentType, "id", i)); + .doc(new IndexRequest().source(xContentType, "id", i)); if (erroneous == false) { assertEquals(RestStatus.CREATED, - highLevelClient().index(new IndexRequest("index", "test", id).source("field", -1)).status()); + highLevelClient().index(new IndexRequest("index", "test", id).source("field", -1)).status()); } processor.add(updateRequest); } @@ -676,8 +674,6 @@ public void afterBulk(long executionId, BulkRequest request, Throwable failure) assertNull(error.get()); validateBulkResponses(nbItems, errors, bulkResponse, bulkRequest); - - terminate(threadPool); } private void validateBulkResponses(int nbItems, boolean[] errors, BulkResponse bulkResponse, BulkRequest bulkRequest) { diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/CustomRestHighLevelClientTests.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/CustomRestHighLevelClientTests.java index 191b9e561f07a..f8c191252804a 100644 --- a/client/rest-high-level/src/test/java/org/elasticsearch/client/CustomRestHighLevelClientTests.java +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/CustomRestHighLevelClientTests.java @@ -19,25 +19,29 @@ package org.elasticsearch.client; -import org.elasticsearch.client.http.Header; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.HttpHost; -import org.elasticsearch.client.http.HttpResponse; -import org.elasticsearch.client.http.ProtocolVersion; -import org.elasticsearch.client.http.RequestLine; -import org.elasticsearch.client.http.client.methods.HttpGet; -import org.elasticsearch.client.http.entity.ByteArrayEntity; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.message.BasicHeader; -import org.elasticsearch.client.http.message.BasicHttpResponse; -import org.elasticsearch.client.http.message.BasicRequestLine; -import org.elasticsearch.client.http.message.BasicStatusLine; +import org.apache.http.Header; +import org.apache.http.HttpEntity; +import org.apache.http.HttpHost; +import org.apache.http.ProtocolVersion; +import org.apache.http.RequestLine; +import org.apache.http.client.methods.HttpGet; +import org.apache.http.entity.ByteArrayEntity; +import org.apache.http.entity.ContentType; +import org.apache.http.message.BasicHeader; +import org.apache.http.message.BasicRequestLine; +import org.apache.http.message.BasicStatusLine; import org.apache.lucene.util.BytesRef; import org.elasticsearch.Build; import org.elasticsearch.Version; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.main.MainRequest; import org.elasticsearch.action.main.MainResponse; +import org.elasticsearch.action.support.PlainActionFuture; +import org.elasticsearch.client.Request; +import org.elasticsearch.client.Response; +import org.elasticsearch.client.ResponseListener; +import org.elasticsearch.client.RestClient; +import org.elasticsearch.client.RestHighLevelClient; import org.elasticsearch.cluster.ClusterName; import org.elasticsearch.common.SuppressForbidden; import org.elasticsearch.common.xcontent.XContentHelper; @@ -48,10 +52,14 @@ import 
java.io.IOException; import java.lang.reflect.Method; import java.lang.reflect.Modifier; +import java.util.Arrays; +import java.util.Collections; +import java.util.List; +import java.util.stream.Collectors; import static java.util.Collections.emptyMap; import static java.util.Collections.emptySet; -import static org.elasticsearch.client.ESRestHighLevelClientTestCase.execute; +import static org.hamcrest.Matchers.containsInAnyOrder; import static org.mockito.Matchers.any; import static org.mockito.Matchers.anyMapOf; import static org.mockito.Matchers.anyObject; @@ -59,6 +67,7 @@ import static org.mockito.Matchers.eq; import static org.mockito.Mockito.doAnswer; import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; /** * Test and demonstrates how {@link RestHighLevelClient} can be extended to support custom endpoints. @@ -91,31 +100,45 @@ public void testCustomEndpoint() throws IOException { final MainRequest request = new MainRequest(); final Header header = new BasicHeader("node_name", randomAlphaOfLengthBetween(1, 10)); - MainResponse response = execute(request, restHighLevelClient::custom, restHighLevelClient::customAsync, header); + MainResponse response = restHighLevelClient.custom(request, header); assertEquals(header.getValue(), response.getNodeName()); - response = execute(request, restHighLevelClient::customAndParse, restHighLevelClient::customAndParseAsync, header); + response = restHighLevelClient.customAndParse(request, header); assertEquals(header.getValue(), response.getNodeName()); } + public void testCustomEndpointAsync() throws Exception { + final MainRequest request = new MainRequest(); + final Header header = new BasicHeader("node_name", randomAlphaOfLengthBetween(1, 10)); + + PlainActionFuture future = PlainActionFuture.newFuture(); + restHighLevelClient.customAsync(request, future, header); + assertEquals(header.getValue(), future.get().getNodeName()); + + future = PlainActionFuture.newFuture(); + restHighLevelClient.customAndParseAsync(request, future, header); + assertEquals(header.getValue(), future.get().getNodeName()); + } + /** * The {@link RestHighLevelClient} must declare the following execution methods using the protected modifier * so that they can be used by subclasses to implement custom logic. 
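The requirement spelled out above, that the execution methods stay `protected`, is what makes subclassing practical; a hedged sketch of a plugin-oriented subclass follows, mirroring the test's own `CustomRestClient` (the `_custom` endpoint and the class name are invented for illustration):

-------------------------------------------------
import org.apache.http.Header;
import org.elasticsearch.action.main.MainRequest;
import org.elasticsearch.action.main.MainResponse;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

import java.io.IOException;
import java.util.Collections;

import static java.util.Collections.emptyMap;
import static java.util.Collections.emptySet;

public class MyPluginClient extends RestHighLevelClient {

    public MyPluginClient(RestClient restClient) {
        // No-op close consumer: the externally provided low-level client stays
        // under the caller's control instead of being closed by close().
        super(restClient, client -> {}, Collections.emptyList());
    }

    public MainResponse custom(MainRequest request, Header... headers) throws IOException {
        // Hypothetical endpoint exposed by a plugin; only the protected
        // performRequestAndParseEntity machinery is real.
        return performRequestAndParseEntity(request,
                r -> new Request("GET", "/_custom", emptyMap(), null),
                MainResponse::fromXContent, emptySet(), headers);
    }
}
-------------------------------------------------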
*/ @SuppressForbidden(reason = "We're forced to uses Class#getDeclaredMethods() here because this test checks protected methods") public void testMethodsVisibility() throws ClassNotFoundException { - String[] methodNames = new String[]{"performRequest", "performRequestAndParseEntity", "performRequestAsync", - "performRequestAsyncAndParseEntity"}; - for (String methodName : methodNames) { - boolean found = false; - for (Method method : RestHighLevelClient.class.getDeclaredMethods()) { - if (method.getName().equals(methodName)) { - assertTrue("Method " + methodName + " must be protected", Modifier.isProtected(method.getModifiers())); - found = true; - } - } - assertTrue("Failed to find method " + methodName, found); - } + final String[] methodNames = new String[]{"performRequest", + "performRequestAsync", + "performRequestAndParseEntity", + "performRequestAsyncAndParseEntity", + "parseEntity", + "parseResponseException"}; + + final List protectedMethods = Arrays.stream(RestHighLevelClient.class.getDeclaredMethods()) + .filter(method -> Modifier.isProtected(method.getModifiers())) + .map(Method::getName) + .collect(Collectors.toList()); + + assertThat(protectedMethods, containsInAnyOrder(methodNames)); } /** @@ -134,15 +157,20 @@ private Void mockPerformRequestAsync(Header httpHeader, ResponseListener respons * Mocks the synchronous request execution like if it was executed by Elasticsearch. */ private Response mockPerformRequest(Header httpHeader) throws IOException { + final Response mockResponse = mock(Response.class); + when(mockResponse.getHost()).thenReturn(new HttpHost("localhost", 9200)); + ProtocolVersion protocol = new ProtocolVersion("HTTP", 1, 1); - HttpResponse httpResponse = new BasicHttpResponse(new BasicStatusLine(protocol, 200, "OK")); + when(mockResponse.getStatusLine()).thenReturn(new BasicStatusLine(protocol, 200, "OK")); MainResponse response = new MainResponse(httpHeader.getValue(), Version.CURRENT, ClusterName.DEFAULT, "_na", Build.CURRENT, true); BytesRef bytesRef = XContentHelper.toXContent(response, XContentType.JSON, false).toBytesRef(); - httpResponse.setEntity(new ByteArrayEntity(bytesRef.bytes, ContentType.APPLICATION_JSON)); + when(mockResponse.getEntity()).thenReturn(new ByteArrayEntity(bytesRef.bytes, ContentType.APPLICATION_JSON)); RequestLine requestLine = new BasicRequestLine(HttpGet.METHOD_NAME, ENDPOINT, protocol); - return new Response(requestLine, new HttpHost("localhost", 9200), httpResponse); + when(mockResponse.getRequestLine()).thenReturn(requestLine); + + return mockResponse; } /** @@ -151,7 +179,7 @@ private Response mockPerformRequest(Header httpHeader) throws IOException { static class CustomRestClient extends RestHighLevelClient { private CustomRestClient(RestClient restClient) { - super(restClient); + super(restClient, RestClient::close, Collections.emptyList()); } MainResponse custom(MainRequest mainRequest, Header... 
headers) throws IOException { diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/ESRestHighLevelClientTestCase.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/ESRestHighLevelClientTestCase.java index c6413a3e00fda..aabe2c4d1e270 100644 --- a/client/rest-high-level/src/test/java/org/elasticsearch/client/ESRestHighLevelClientTestCase.java +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/ESRestHighLevelClientTestCase.java @@ -19,7 +19,7 @@ package org.elasticsearch.client; -import org.elasticsearch.client.http.Header; +import org.apache.http.Header; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.support.PlainActionFuture; import org.elasticsearch.test.rest.ESRestTestCase; @@ -27,6 +27,7 @@ import org.junit.Before; import java.io.IOException; +import java.util.Collections; public abstract class ESRestHighLevelClientTestCase extends ESRestTestCase { @@ -36,12 +37,13 @@ public abstract class ESRestHighLevelClientTestCase extends ESRestTestCase { public void initHighLevelClient() throws IOException { super.initClient(); if (restHighLevelClient == null) { - restHighLevelClient = new RestHighLevelClient(client()); + restHighLevelClient = new HighLevelClient(client()); } } @AfterClass - public static void cleanupClient() { + public static void cleanupClient() throws IOException { + restHighLevelClient.close(); restHighLevelClient = null; } @@ -72,4 +74,10 @@ protected interface SyncMethod { protected interface AsyncMethod { void execute(Request request, ActionListener listener, Header... headers); } + + private static class HighLevelClient extends RestHighLevelClient { + private HighLevelClient(RestClient restClient) { + super(restClient, (client) -> {}, Collections.emptyList()); + } + } } diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/IndicesClientIT.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/IndicesClientIT.java new file mode 100755 index 0000000000000..4045e565288e5 --- /dev/null +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/IndicesClientIT.java @@ -0,0 +1,68 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.client; + +import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest; +import org.elasticsearch.action.admin.indices.delete.DeleteIndexResponse; +import org.elasticsearch.rest.RestStatus; + +import java.io.IOException; + +public class IndicesClientIT extends ESRestHighLevelClientTestCase { + + public void testDeleteIndex() throws IOException { + { + // Delete index if exists + String indexName = "test_index"; + createIndex(indexName); + + DeleteIndexRequest deleteIndexRequest = new DeleteIndexRequest(indexName); + DeleteIndexResponse deleteIndexResponse = + execute(deleteIndexRequest, highLevelClient().indices()::deleteIndex, highLevelClient().indices()::deleteIndexAsync); + assertTrue(deleteIndexResponse.isAcknowledged()); + + assertFalse(indexExists(indexName)); + } + { + // Return 404 if index doesn't exist + String nonExistentIndex = "non_existent_index"; + assertFalse(indexExists(nonExistentIndex)); + + DeleteIndexRequest deleteIndexRequest = new DeleteIndexRequest(nonExistentIndex); + + ElasticsearchException exception = expectThrows(ElasticsearchException.class, + () -> execute(deleteIndexRequest, highLevelClient().indices()::deleteIndex, highLevelClient().indices()::deleteIndexAsync)); + assertEquals(RestStatus.NOT_FOUND, exception.status()); + } + } + + private static void createIndex(String index) throws IOException { + Response response = client().performRequest("PUT", index); + + assertEquals(200, response.getStatusLine().getStatusCode()); + } + + private static boolean indexExists(String index) throws IOException { + Response response = client().performRequest("HEAD", index); + + return response.getStatusLine().getStatusCode() == 200; + } +} diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/RequestTests.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/RequestTests.java old mode 100644 new mode 100755 index 634ef1bb30a71..3be250d513d21 --- a/client/rest-high-level/src/test/java/org/elasticsearch/client/RequestTests.java +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/RequestTests.java @@ -19,10 +19,13 @@ package org.elasticsearch.client; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.entity.ByteArrayEntity; -import org.elasticsearch.client.http.util.EntityUtils; +import org.apache.http.HttpEntity; +import org.apache.http.entity.ByteArrayEntity; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; +import org.apache.http.util.EntityUtils; import org.elasticsearch.action.DocWriteRequest; +import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest; import org.elasticsearch.action.bulk.BulkRequest; import org.elasticsearch.action.bulk.BulkShardRequest; import org.elasticsearch.action.delete.DeleteRequest; @@ -34,6 +37,8 @@ import org.elasticsearch.action.search.SearchType; import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.action.support.WriteRequest; +import org.elasticsearch.action.support.master.AcknowledgedRequest; +import org.elasticsearch.action.support.master.MasterNodeRequest; import org.elasticsearch.action.support.replication.ReplicatedWriteRequest; import org.elasticsearch.action.support.replication.ReplicationRequest; import org.elasticsearch.action.update.UpdateRequest; @@ -42,6 +47,7 @@ import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.Streams; import 
org.elasticsearch.common.lucene.uid.Versions; +import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentHelper; @@ -64,12 +70,15 @@ import java.io.IOException; import java.io.InputStream; +import java.lang.reflect.Constructor; +import java.lang.reflect.Modifier; import java.util.HashMap; import java.util.Locale; import java.util.Map; import java.util.StringJoiner; import java.util.function.Consumer; import java.util.function.Function; +import java.util.function.Supplier; import static java.util.Collections.singletonMap; import static org.elasticsearch.client.Request.enforceSameContentType; @@ -77,20 +86,50 @@ public class RequestTests extends ESTestCase { + public void testConstructor() throws Exception { + final String method = randomFrom("GET", "PUT", "POST", "HEAD", "DELETE"); + final String endpoint = randomAlphaOfLengthBetween(1, 10); + final Map parameters = singletonMap(randomAlphaOfLength(5), randomAlphaOfLength(5)); + final HttpEntity entity = randomBoolean() ? new StringEntity(randomAlphaOfLengthBetween(1, 100), ContentType.TEXT_PLAIN) : null; + + NullPointerException e = expectThrows(NullPointerException.class, () -> new Request(null, endpoint, parameters, entity)); + assertEquals("method cannot be null", e.getMessage()); + + e = expectThrows(NullPointerException.class, () -> new Request(method, null, parameters, entity)); + assertEquals("endpoint cannot be null", e.getMessage()); + + e = expectThrows(NullPointerException.class, () -> new Request(method, endpoint, null, entity)); + assertEquals("parameters cannot be null", e.getMessage()); + + final Request request = new Request(method, endpoint, parameters, entity); + assertEquals(method, request.getMethod()); + assertEquals(endpoint, request.getEndpoint()); + assertEquals(parameters, request.getParameters()); + assertEquals(entity, request.getEntity()); + + final Constructor[] constructors = Request.class.getConstructors(); + assertEquals("Expected only 1 constructor", 1, constructors.length); + assertTrue("Request constructor is not public", Modifier.isPublic(constructors[0].getModifiers())); + } + + public void testClassVisibility() throws Exception { + assertTrue("Request class is not public", Modifier.isPublic(Request.class.getModifiers())); + } + public void testPing() { Request request = Request.ping(); - assertEquals("/", request.endpoint); - assertEquals(0, request.params.size()); - assertNull(request.entity); - assertEquals("HEAD", request.method); + assertEquals("/", request.getEndpoint()); + assertEquals(0, request.getParameters().size()); + assertNull(request.getEntity()); + assertEquals("HEAD", request.getMethod()); } public void testInfo() { Request request = Request.info(); - assertEquals("/", request.endpoint); - assertEquals(0, request.params.size()); - assertNull(request.entity); - assertEquals("GET", request.method); + assertEquals("/", request.getEndpoint()); + assertEquals(0, request.getParameters().size()); + assertNull(request.getEntity()); + assertEquals("GET", request.getMethod()); } public void testGet() { @@ -105,7 +144,7 @@ public void testDelete() throws IOException { Map expectedParams = new HashMap<>(); - setRandomTimeout(deleteRequest, expectedParams); + setRandomTimeout(deleteRequest::timeout, ReplicationRequest.DEFAULT_TIMEOUT, expectedParams); setRandomRefreshPolicy(deleteRequest, expectedParams); setRandomVersion(deleteRequest, 
expectedParams); setRandomVersionType(deleteRequest, expectedParams); @@ -124,10 +163,10 @@ public void testDelete() throws IOException { } Request request = Request.delete(deleteRequest); - assertEquals("/" + index + "/" + type + "/" + id, request.endpoint); - assertEquals(expectedParams, request.params); - assertEquals("DELETE", request.method); - assertNull(request.entity); + assertEquals("/" + index + "/" + type + "/" + id, request.getEndpoint()); + assertEquals(expectedParams, request.getParameters()); + assertEquals("DELETE", request.getMethod()); + assertNull(request.getEntity()); } public void testExists() { @@ -200,10 +239,34 @@ private static void getAndExistsTest(Function requestConver } } Request request = requestConverter.apply(getRequest); - assertEquals("/" + index + "/" + type + "/" + id, request.endpoint); - assertEquals(expectedParams, request.params); - assertNull(request.entity); - assertEquals(method, request.method); + assertEquals("/" + index + "/" + type + "/" + id, request.getEndpoint()); + assertEquals(expectedParams, request.getParameters()); + assertNull(request.getEntity()); + assertEquals(method, request.getMethod()); + } + + public void testDeleteIndex() throws IOException { + DeleteIndexRequest deleteIndexRequest = new DeleteIndexRequest(); + + int numIndices = randomIntBetween(0, 5); + String[] indices = new String[numIndices]; + for (int i = 0; i < numIndices; i++) { + indices[i] = "index-" + randomAlphaOfLengthBetween(2, 5); + } + deleteIndexRequest.indices(indices); + + Map expectedParams = new HashMap<>(); + + setRandomTimeout(deleteIndexRequest::timeout, AcknowledgedRequest.DEFAULT_ACK_TIMEOUT, expectedParams); + setRandomMasterTimeout(deleteIndexRequest, expectedParams); + + setRandomIndicesOptions(deleteIndexRequest::indicesOptions, deleteIndexRequest::indicesOptions, expectedParams); + + Request request = Request.deleteIndex(deleteIndexRequest); + assertEquals("/" + String.join(",", indices), request.getEndpoint()); + assertEquals(expectedParams, request.getParameters()); + assertEquals("DELETE", request.getMethod()); + assertNull(request.getEntity()); } public void testIndex() throws IOException { @@ -224,7 +287,7 @@ public void testIndex() throws IOException { } } - setRandomTimeout(indexRequest, expectedParams); + setRandomTimeout(indexRequest::timeout, ReplicationRequest.DEFAULT_TIMEOUT, expectedParams); setRandomRefreshPolicy(indexRequest, expectedParams); // There is some logic around _create endpoint and version/version type @@ -267,18 +330,18 @@ public void testIndex() throws IOException { Request request = Request.index(indexRequest); if (indexRequest.opType() == DocWriteRequest.OpType.CREATE) { - assertEquals("/" + index + "/" + type + "/" + id + "/_create", request.endpoint); + assertEquals("/" + index + "/" + type + "/" + id + "/_create", request.getEndpoint()); } else if (id != null) { - assertEquals("/" + index + "/" + type + "/" + id, request.endpoint); + assertEquals("/" + index + "/" + type + "/" + id, request.getEndpoint()); } else { - assertEquals("/" + index + "/" + type, request.endpoint); + assertEquals("/" + index + "/" + type, request.getEndpoint()); } - assertEquals(expectedParams, request.params); - assertEquals(method, request.method); + assertEquals(expectedParams, request.getParameters()); + assertEquals(method, request.getMethod()); - HttpEntity entity = request.entity; + HttpEntity entity = request.getEntity(); assertTrue(entity instanceof ByteArrayEntity); - assertEquals(indexRequest.getContentType().mediaType(), 
entity.getContentType().getValue()); + assertEquals(indexRequest.getContentType().mediaTypeWithoutParameters(), entity.getContentType().getValue()); try (XContentParser parser = createParser(xContentType.xContent(), entity.getContent())) { assertEquals(nbFields, parser.map().size()); } @@ -367,11 +430,11 @@ public void testUpdate() throws IOException { } Request request = Request.update(updateRequest); - assertEquals("/" + index + "/" + type + "/" + id + "/_update", request.endpoint); - assertEquals(expectedParams, request.params); - assertEquals("POST", request.method); + assertEquals("/" + index + "/" + type + "/" + id + "/_update", request.getEndpoint()); + assertEquals(expectedParams, request.getParameters()); + assertEquals("POST", request.getMethod()); - HttpEntity entity = request.entity; + HttpEntity entity = request.getEntity(); assertTrue(entity instanceof ByteArrayEntity); UpdateRequest parsedUpdateRequest = new UpdateRequest(); @@ -485,12 +548,12 @@ public void testBulk() throws IOException { } Request request = Request.bulk(bulkRequest); - assertEquals("/_bulk", request.endpoint); - assertEquals(expectedParams, request.params); - assertEquals("POST", request.method); - assertEquals(xContentType.mediaType(), request.entity.getContentType().getValue()); - byte[] content = new byte[(int) request.entity.getContentLength()]; - try (InputStream inputStream = request.entity.getContent()) { + assertEquals("/_bulk", request.getEndpoint()); + assertEquals(expectedParams, request.getParameters()); + assertEquals("POST", request.getMethod()); + assertEquals(xContentType.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue()); + byte[] content = new byte[(int) request.getEntity().getContentLength()]; + try (InputStream inputStream = request.getEntity().getContent()) { Streams.readFully(inputStream, content); } @@ -541,7 +604,7 @@ public void testBulkWithDifferentContentTypes() throws IOException { bulkRequest.add(new DeleteRequest("index", "type", "2")); Request request = Request.bulk(bulkRequest); - assertEquals(XContentType.JSON.mediaType(), request.entity.getContentType().getValue()); + assertEquals(XContentType.JSON.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue()); } { XContentType xContentType = randomFrom(XContentType.JSON, XContentType.SMILE); @@ -551,7 +614,7 @@ public void testBulkWithDifferentContentTypes() throws IOException { bulkRequest.add(new DeleteRequest("index", "type", "2")); Request request = Request.bulk(bulkRequest); - assertEquals(xContentType.mediaType(), request.entity.getContentType().getValue()); + assertEquals(xContentType.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue()); } { XContentType xContentType = randomFrom(XContentType.JSON, XContentType.SMILE); @@ -563,7 +626,7 @@ public void testBulkWithDifferentContentTypes() throws IOException { } Request request = Request.bulk(new BulkRequest().add(updateRequest)); - assertEquals(xContentType.mediaType(), request.entity.getContentType().getValue()); + assertEquals(xContentType.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue()); } { BulkRequest bulkRequest = new BulkRequest(); @@ -644,20 +707,7 @@ public void testSearch() throws Exception { expectedParams.put("scroll", searchRequest.scroll().keepAlive().getStringRep()); } - if (randomBoolean()) { - searchRequest.indicesOptions(IndicesOptions.fromOptions(randomBoolean(), randomBoolean(), randomBoolean(), randomBoolean())); - } - 
expectedParams.put("ignore_unavailable", Boolean.toString(searchRequest.indicesOptions().ignoreUnavailable())); - expectedParams.put("allow_no_indices", Boolean.toString(searchRequest.indicesOptions().allowNoIndices())); - if (searchRequest.indicesOptions().expandWildcardsOpen() && searchRequest.indicesOptions().expandWildcardsClosed()) { - expectedParams.put("expand_wildcards", "open,closed"); - } else if (searchRequest.indicesOptions().expandWildcardsOpen()) { - expectedParams.put("expand_wildcards", "open"); - } else if (searchRequest.indicesOptions().expandWildcardsClosed()) { - expectedParams.put("expand_wildcards", "closed"); - } else { - expectedParams.put("expand_wildcards", "none"); - } + setRandomIndicesOptions(searchRequest::indicesOptions, searchRequest::indicesOptions, expectedParams); SearchSourceBuilder searchSourceBuilder = null; if (frequently()) { @@ -712,12 +762,12 @@ public void testSearch() throws Exception { endpoint.add(type); } endpoint.add("_search"); - assertEquals(endpoint.toString(), request.endpoint); - assertEquals(expectedParams, request.params); + assertEquals(endpoint.toString(), request.getEndpoint()); + assertEquals(expectedParams, request.getParameters()); if (searchSourceBuilder == null) { - assertNull(request.entity); + assertNull(request.getEntity()); } else { - assertToXContentBody(searchSourceBuilder, request.entity); + assertToXContentBody(searchSourceBuilder, request.getEntity()); } } @@ -728,11 +778,11 @@ public void testSearchScroll() throws IOException { searchScrollRequest.scroll(randomPositiveTimeValue()); } Request request = Request.searchScroll(searchScrollRequest); - assertEquals("GET", request.method); - assertEquals("/_search/scroll", request.endpoint); - assertEquals(0, request.params.size()); - assertToXContentBody(searchScrollRequest, request.entity); - assertEquals(Request.REQUEST_BODY_CONTENT_TYPE.mediaType(), request.entity.getContentType().getValue()); + assertEquals("GET", request.getMethod()); + assertEquals("/_search/scroll", request.getEndpoint()); + assertEquals(0, request.getParameters().size()); + assertToXContentBody(searchScrollRequest, request.getEntity()); + assertEquals(Request.REQUEST_BODY_CONTENT_TYPE.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue()); } public void testClearScroll() throws IOException { @@ -742,16 +792,16 @@ public void testClearScroll() throws IOException { clearScrollRequest.addScrollId(randomAlphaOfLengthBetween(5, 10)); } Request request = Request.clearScroll(clearScrollRequest); - assertEquals("DELETE", request.method); - assertEquals("/_search/scroll", request.endpoint); - assertEquals(0, request.params.size()); - assertToXContentBody(clearScrollRequest, request.entity); - assertEquals(Request.REQUEST_BODY_CONTENT_TYPE.mediaType(), request.entity.getContentType().getValue()); + assertEquals("DELETE", request.getMethod()); + assertEquals("/_search/scroll", request.getEndpoint()); + assertEquals(0, request.getParameters().size()); + assertToXContentBody(clearScrollRequest, request.getEntity()); + assertEquals(Request.REQUEST_BODY_CONTENT_TYPE.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue()); } private static void assertToXContentBody(ToXContent expectedBody, HttpEntity actualEntity) throws IOException { BytesReference expectedBytes = XContentHelper.toXContent(expectedBody, Request.REQUEST_BODY_CONTENT_TYPE, false); - assertEquals(XContentType.JSON.mediaType(), actualEntity.getContentType().getValue()); + 
assertEquals(XContentType.JSON.mediaTypeWithoutParameters(), actualEntity.getContentType().getValue()); assertEquals(expectedBytes, new BytesArray(EntityUtils.toByteArray(actualEntity))); } @@ -793,6 +843,11 @@ public void testEndpoint() { assertEquals("/a/_create", Request.endpoint("a", null, null, "_create")); } + public void testCreateContentType() { + final XContentType xContentType = randomFrom(XContentType.values()); + assertEquals(xContentType.mediaTypeWithoutParameters(), Request.createContentType(xContentType).getMimeType()); + } + public void testEnforceSameContentType() { XContentType xContentType = randomFrom(XContentType.JSON, XContentType.SMILE); IndexRequest indexRequest = new IndexRequest().source(singletonMap("field", "value"), xContentType); @@ -864,13 +919,43 @@ private static void randomizeFetchSourceContextParams(Consumer request, Map expectedParams) { + private static void setRandomIndicesOptions(Consumer setter, Supplier getter, + Map expectedParams) { + + if (randomBoolean()) { + setter.accept(IndicesOptions.fromOptions(randomBoolean(), randomBoolean(), randomBoolean(), + randomBoolean())); + } + expectedParams.put("ignore_unavailable", Boolean.toString(getter.get().ignoreUnavailable())); + expectedParams.put("allow_no_indices", Boolean.toString(getter.get().allowNoIndices())); + if (getter.get().expandWildcardsOpen() && getter.get().expandWildcardsClosed()) { + expectedParams.put("expand_wildcards", "open,closed"); + } else if (getter.get().expandWildcardsOpen()) { + expectedParams.put("expand_wildcards", "open"); + } else if (getter.get().expandWildcardsClosed()) { + expectedParams.put("expand_wildcards", "closed"); + } else { + expectedParams.put("expand_wildcards", "none"); + } + } + + private static void setRandomTimeout(Consumer setter, TimeValue defaultTimeout, Map expectedParams) { if (randomBoolean()) { String timeout = randomTimeValue(); - request.timeout(timeout); + setter.accept(timeout); expectedParams.put("timeout", timeout); } else { - expectedParams.put("timeout", ReplicationRequest.DEFAULT_TIMEOUT.getStringRep()); + expectedParams.put("timeout", defaultTimeout.getStringRep()); + } + } + + private static void setRandomMasterTimeout(MasterNodeRequest request, Map expectedParams) { + if (randomBoolean()) { + String masterTimeout = randomTimeValue(); + request.masterNodeTimeout(masterTimeout); + expectedParams.put("master_timeout", masterTimeout); + } else { + expectedParams.put("master_timeout", MasterNodeRequest.DEFAULT_MASTER_NODE_TIMEOUT.getStringRep()); } } diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientExtTests.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientExtTests.java index aea3653c0fff0..b5fb98a3bdf5e 100644 --- a/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientExtTests.java +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientExtTests.java @@ -19,9 +19,9 @@ package org.elasticsearch.client; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.entity.StringEntity; +import org.apache.http.HttpEntity; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentParser; @@ -36,7 +36,7 @@ import static 
org.mockito.Mockito.mock; /** - * This test works against a {@link RestHighLevelClient} subclass that simulats how custom response sections returned by + * This test works against a {@link RestHighLevelClient} subclass that simulates how custom response sections returned by * Elasticsearch plugins can be parsed using the high level client. */ public class RestHighLevelClientExtTests extends ESTestCase { @@ -69,7 +69,7 @@ public void testParseEntityCustomResponseSection() throws IOException { private static class RestHighLevelClientExt extends RestHighLevelClient { private RestHighLevelClientExt(RestClient restClient) { - super(restClient, getNamedXContentsExt()); + super(restClient, RestClient::close, getNamedXContentsExt()); } private static List getNamedXContentsExt() { diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientTests.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientTests.java index aa914e0ccc166..1e6559cb880c9 100644 --- a/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientTests.java +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientTests.java @@ -20,20 +20,7 @@ package org.elasticsearch.client; import com.fasterxml.jackson.core.JsonParseException; -import org.elasticsearch.client.http.Header; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.HttpHost; -import org.elasticsearch.client.http.HttpResponse; -import org.elasticsearch.client.http.ProtocolVersion; -import org.elasticsearch.client.http.RequestLine; -import org.elasticsearch.client.http.StatusLine; -import org.elasticsearch.client.http.entity.ByteArrayEntity; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.entity.StringEntity; -import org.elasticsearch.client.http.message.BasicHttpResponse; -import org.elasticsearch.client.http.message.BasicRequestLine; -import org.elasticsearch.client.http.message.BasicStatusLine; -import org.elasticsearch.client.http.nio.entity.NStringEntity; + import org.elasticsearch.Build; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.Version; @@ -48,6 +35,20 @@ import org.elasticsearch.action.search.SearchResponseSections; import org.elasticsearch.action.search.SearchScrollRequest; import org.elasticsearch.action.search.ShardSearchFailure; +import org.apache.http.Header; +import org.apache.http.HttpEntity; +import org.apache.http.HttpHost; +import org.apache.http.HttpResponse; +import org.apache.http.ProtocolVersion; +import org.apache.http.RequestLine; +import org.apache.http.StatusLine; +import org.apache.http.entity.ByteArrayEntity; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; +import org.apache.http.message.BasicHttpResponse; +import org.apache.http.message.BasicRequestLine; +import org.apache.http.message.BasicStatusLine; +import org.apache.http.nio.entity.NStringEntity; import org.elasticsearch.cluster.ClusterName; import org.elasticsearch.common.CheckedFunction; import org.elasticsearch.common.xcontent.NamedXContentRegistry; @@ -64,6 +65,7 @@ import org.elasticsearch.search.aggregations.matrix.stats.MatrixStatsAggregationBuilder; import org.elasticsearch.search.suggest.Suggest; import org.elasticsearch.test.ESTestCase; +import org.elasticsearch.test.InternalAggregationTestCase; import org.junit.Before; import org.mockito.ArgumentMatcher; import org.mockito.internal.matchers.ArrayEquals; @@ 
-91,6 +93,7 @@ import static org.mockito.Matchers.isNotNull; import static org.mockito.Matchers.isNull; import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.times; import static org.mockito.Mockito.verify; import static org.mockito.Mockito.when; @@ -105,7 +108,16 @@ public class RestHighLevelClientTests extends ESTestCase { @Before public void initClient() { restClient = mock(RestClient.class); - restHighLevelClient = new RestHighLevelClient(restClient); + restHighLevelClient = new RestHighLevelClient(restClient, RestClient::close, Collections.emptyList()); + } + + public void testCloseIsIdempotent() throws IOException { + restHighLevelClient.close(); + verify(restClient, times(1)).close(); + restHighLevelClient.close(); + verify(restClient, times(2)).close(); + restHighLevelClient.close(); + verify(restClient, times(3)).close(); } public void testPingSuccessful() throws IOException { @@ -618,7 +630,9 @@ public void testWrapResponseListenerOnResponseExceptionWithIgnoresErrorValidBody public void testDefaultNamedXContents() { List namedXContents = RestHighLevelClient.getDefaultNamedXContents(); - assertEquals(43, namedXContents.size()); + int expectedInternalAggregations = InternalAggregationTestCase.getDefaultNamedXContents().size(); + int expectedSuggestions = 3; + assertEquals(expectedInternalAggregations + expectedSuggestions, namedXContents.size()); Map, Integer> categories = new HashMap<>(); for (NamedXContentRegistry.Entry namedXContent : namedXContents) { Integer counter = categories.putIfAbsent(namedXContent.categoryClass, 1); @@ -627,8 +641,8 @@ public void testDefaultNamedXContents() { } } assertEquals(2, categories.size()); - assertEquals(Integer.valueOf(40), categories.get(Aggregation.class)); - assertEquals(Integer.valueOf(3), categories.get(Suggest.Suggestion.class)); + assertEquals(expectedInternalAggregations, categories.get(Aggregation.class).intValue()); + assertEquals(expectedSuggestions, categories.get(Suggest.Suggestion.class).intValue()); } public void testProvidedNamedXContents() { diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/SearchIT.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/SearchIT.java index dd1227aa5d9ab..d73e746528dc6 100644 --- a/client/rest-high-level/src/test/java/org/elasticsearch/client/SearchIT.java +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/SearchIT.java @@ -19,10 +19,10 @@ package org.elasticsearch.client; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.entity.StringEntity; -import org.elasticsearch.client.http.nio.entity.NStringEntity; +import org.apache.http.HttpEntity; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; +import org.apache.http.nio.entity.NStringEntity; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.ElasticsearchStatusException; import org.elasticsearch.action.search.ClearScrollRequest; diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/CRUDDocumentationIT.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/CRUDDocumentationIT.java index f69742f78cfd3..b32351656cab3 100644 --- a/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/CRUDDocumentationIT.java +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/CRUDDocumentationIT.java @@ -19,9 +19,11 @@ 
package org.elasticsearch.client.documentation; -import org.elasticsearch.Build; +import org.apache.http.HttpEntity; +import org.apache.http.client.methods.HttpPost; +import org.apache.http.entity.ContentType; +import org.apache.http.nio.entity.NStringEntity; import org.elasticsearch.ElasticsearchException; -import org.elasticsearch.Version; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.DocWriteRequest; import org.elasticsearch.action.DocWriteResponse; @@ -36,7 +38,6 @@ import org.elasticsearch.action.get.GetResponse; import org.elasticsearch.action.index.IndexRequest; import org.elasticsearch.action.index.IndexResponse; -import org.elasticsearch.action.main.MainResponse; import org.elasticsearch.action.support.ActiveShardCount; import org.elasticsearch.action.support.WriteRequest; import org.elasticsearch.action.support.replication.ReplicationResponse; @@ -45,13 +46,7 @@ import org.elasticsearch.client.ESRestHighLevelClientTestCase; import org.elasticsearch.client.Response; import org.elasticsearch.client.RestHighLevelClient; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.client.methods.HttpPost; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.nio.entity.NStringEntity; -import org.elasticsearch.cluster.ClusterName; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.ByteSizeUnit; import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.unit.TimeValue; @@ -64,7 +59,7 @@ import org.elasticsearch.script.Script; import org.elasticsearch.script.ScriptType; import org.elasticsearch.search.fetch.subphase.FetchSourceContext; -import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.threadpool.Scheduler; import java.io.IOException; import java.util.Collections; @@ -868,31 +863,27 @@ public void onFailure(Exception e) { } public void testBulkProcessor() throws InterruptedException, IOException { - Settings settings = Settings.builder().put("node.name", "my-application").build(); RestHighLevelClient client = highLevelClient(); { // tag::bulk-processor-init - ThreadPool threadPool = new ThreadPool(settings); // <1> - - BulkProcessor.Listener listener = new BulkProcessor.Listener() { // <2> + BulkProcessor.Listener listener = new BulkProcessor.Listener() { // <1> @Override public void beforeBulk(long executionId, BulkRequest request) { - // <3> + // <2> } @Override public void afterBulk(long executionId, BulkRequest request, BulkResponse response) { - // <4> + // <3> } @Override public void afterBulk(long executionId, BulkRequest request, Throwable failure) { - // <5> + // <4> } }; - BulkProcessor bulkProcessor = new BulkProcessor.Builder(client::bulkAsync, listener, threadPool) - .build(); // <6> + BulkProcessor bulkProcessor = BulkProcessor.builder(client::bulkAsync, listener).build(); // <5> // end::bulk-processor-init assertNotNull(bulkProcessor); @@ -917,7 +908,6 @@ public void afterBulk(long executionId, BulkRequest request, Throwable failure) // tag::bulk-processor-close bulkProcessor.close(); // end::bulk-processor-close - terminate(threadPool); } { // tag::bulk-processor-listener @@ -944,19 +934,14 @@ public void afterBulk(long executionId, BulkRequest request, Throwable failure) }; // end::bulk-processor-listener - ThreadPool threadPool = new ThreadPool(settings); - try { - // tag::bulk-processor-options - BulkProcessor.Builder builder = new 
BulkProcessor.Builder(client::bulkAsync, listener, threadPool); - builder.setBulkActions(500); // <1> - builder.setBulkSize(new ByteSizeValue(1L, ByteSizeUnit.MB)); // <2> - builder.setConcurrentRequests(0); // <3> - builder.setFlushInterval(TimeValue.timeValueSeconds(10L)); // <4> - builder.setBackoffPolicy(BackoffPolicy.constantBackoff(TimeValue.timeValueSeconds(1L), 3)); // <5> - // end::bulk-processor-options - } finally { - terminate(threadPool); - } + // tag::bulk-processor-options + BulkProcessor.Builder builder = BulkProcessor.builder(client::bulkAsync, listener); + builder.setBulkActions(500); // <1> + builder.setBulkSize(new ByteSizeValue(1L, ByteSizeUnit.MB)); // <2> + builder.setConcurrentRequests(0); // <3> + builder.setFlushInterval(TimeValue.timeValueSeconds(10L)); // <4> + builder.setBackoffPolicy(BackoffPolicy.constantBackoff(TimeValue.timeValueSeconds(1L), 3)); // <5> + // end::bulk-processor-options } } } diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/IndicesClientDocumentationIT.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/IndicesClientDocumentationIT.java new file mode 100644 index 0000000000000..e866fb92aae67 --- /dev/null +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/IndicesClientDocumentationIT.java @@ -0,0 +1,116 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.client.documentation; + +import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest; +import org.elasticsearch.action.admin.indices.delete.DeleteIndexResponse; +import org.elasticsearch.action.support.IndicesOptions; +import org.elasticsearch.client.ESRestHighLevelClientTestCase; +import org.elasticsearch.client.Response; +import org.elasticsearch.client.RestHighLevelClient; +import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.rest.RestStatus; + +import java.io.IOException; + +/** + * This class is used to generate the Java Indices API documentation. + * You need to wrap your code between two tags like: + * // tag::example[] + * // end::example[] + * + * Where example is your tag name. 
+ * + * Then in the documentation, you can extract what is between tag and end tags with + * ["source","java",subs="attributes,callouts,macros"] + * -------------------------------------------------- + * include-tagged::{doc-tests}/CRUDDocumentationIT.java[example] + * -------------------------------------------------- + */ +public class IndicesClientDocumentationIT extends ESRestHighLevelClientTestCase { + + public void testDeleteIndex() throws IOException { + RestHighLevelClient client = highLevelClient(); + + { + Response createIndexResponse = client().performRequest("PUT", "/posts"); + assertEquals(200, createIndexResponse.getStatusLine().getStatusCode()); + } + + { + // tag::delete-index-request + DeleteIndexRequest request = new DeleteIndexRequest("posts"); // <1> + // end::delete-index-request + + // tag::delete-index-execute + DeleteIndexResponse deleteIndexResponse = client.indices().deleteIndex(request); + // end::delete-index-execute + assertTrue(deleteIndexResponse.isAcknowledged()); + + // tag::delete-index-response + boolean acknowledged = deleteIndexResponse.isAcknowledged(); // <1> + // end::delete-index-response + + // tag::delete-index-execute-async + client.indices().deleteIndexAsync(request, new ActionListener() { + @Override + public void onResponse(DeleteIndexResponse deleteIndexResponse) { + // <1> + } + + @Override + public void onFailure(Exception e) { + // <2> + } + }); + // end::delete-index-execute-async + } + + { + DeleteIndexRequest request = new DeleteIndexRequest("posts"); + // tag::delete-index-request-timeout + request.timeout(TimeValue.timeValueMinutes(2)); // <1> + request.timeout("2m"); // <2> + // end::delete-index-request-timeout + // tag::delete-index-request-masterTimeout + request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1> + request.timeout("1m"); // <2> + // end::delete-index-request-masterTimeout + // tag::delete-index-request-indicesOptions + request.indicesOptions(IndicesOptions.lenientExpandOpen()); // <1> + // end::delete-index-request-indicesOptions + } + + { + // tag::delete-index-notfound + try { + DeleteIndexRequest request = new DeleteIndexRequest("does_not_exist"); + DeleteIndexResponse deleteIndexResponse = client.indices().deleteIndex(request); + } catch (ElasticsearchException exception) { + if (exception.status() == RestStatus.NOT_FOUND) { + // <1> + } + } + // end::delete-index-notfound + } + } +} diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/MainDocumentationIT.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/MainDocumentationIT.java index 877ae910e83fa..0558091a76cb4 100644 --- a/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/MainDocumentationIT.java +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/MainDocumentationIT.java @@ -19,10 +19,12 @@ package org.elasticsearch.client.documentation; +import org.apache.http.HttpHost; import org.elasticsearch.Build; import org.elasticsearch.Version; import org.elasticsearch.action.main.MainResponse; import org.elasticsearch.client.ESRestHighLevelClientTestCase; +import org.elasticsearch.client.RestClient; import org.elasticsearch.client.RestHighLevelClient; import org.elasticsearch.cluster.ClusterName; @@ -65,4 +67,17 @@ public void testMain() throws IOException { assertNotNull(build); } } + + public void testInitializationFromClientBuilder() throws IOException { + //tag::rest-high-level-client-init + RestHighLevelClient client = new 
RestHighLevelClient( + RestClient.builder( + new HttpHost("localhost", 9200, "http"), + new HttpHost("localhost", 9201, "http"))); + //end::rest-high-level-client-init + + //tag::rest-high-level-client-close + client.close(); + //end::rest-high-level-client-close + } } diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/MigrationDocumentationIT.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/MigrationDocumentationIT.java index d0f191c4ca348..21fa1152ccd72 100644 --- a/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/MigrationDocumentationIT.java +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/MigrationDocumentationIT.java @@ -19,20 +19,18 @@ package org.elasticsearch.client.documentation; +import org.apache.http.HttpEntity; +import org.apache.http.HttpStatus; +import org.apache.http.entity.ContentType; +import org.apache.http.nio.entity.NStringEntity; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.delete.DeleteRequest; import org.elasticsearch.action.delete.DeleteResponse; import org.elasticsearch.action.index.IndexRequest; -import org.elasticsearch.action.index.IndexRequestBuilder; import org.elasticsearch.action.index.IndexResponse; import org.elasticsearch.client.ESRestHighLevelClientTestCase; import org.elasticsearch.client.Response; -import org.elasticsearch.client.RestClient; import org.elasticsearch.client.RestHighLevelClient; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.HttpStatus; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.nio.entity.NStringEntity; import org.elasticsearch.cluster.health.ClusterHealthStatus; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentFactory; @@ -67,7 +65,7 @@ public class MigrationDocumentationIT extends ESRestHighLevelClientTestCase { public void testCreateIndex() throws IOException { - RestClient restClient = client(); + RestHighLevelClient client = highLevelClient(); { //tag::migration-create-inded Settings indexSettings = Settings.builder() // <1> @@ -93,7 +91,7 @@ public void testCreateIndex() throws IOException { HttpEntity entity = new NStringEntity(payload, ContentType.APPLICATION_JSON); // <5> - Response response = restClient.performRequest("PUT", "my-index", emptyMap(), entity); // <6> + Response response = client.getLowLevelClient().performRequest("PUT", "my-index", emptyMap(), entity); // <6> if (response.getStatusLine().getStatusCode() != HttpStatus.SC_OK) { // <7> } @@ -103,10 +101,10 @@ public void testCreateIndex() throws IOException { } public void testClusterHealth() throws IOException { - RestClient restClient = client(); + RestHighLevelClient client = highLevelClient(); { //tag::migration-cluster-health - Response response = restClient.performRequest("GET", "/_cluster/health"); // <1> + Response response = client.getLowLevelClient().performRequest("GET", "/_cluster/health"); // <1> ClusterHealthStatus healthStatus; try (InputStream is = response.getEntity().getContent()) { // <2> diff --git a/client/rest/build.gradle b/client/rest/build.gradle index 3622cf55b60f9..ac489f5871116 100644 --- a/client/rest/build.gradle +++ b/client/rest/build.gradle @@ -19,31 +19,6 @@ import org.elasticsearch.gradle.precommit.PrecommitTasks -/** - * The rest client is a shaded jar. 
It contains the source of the rest client, as well as all the dependencies, - * shaded to the `org.elasticsearch.client` package. 2 artifacts come out of this build process. The shading process - * only modifies the imports and class names and locations. It does not do any processing on the files. The classes used - * to interact with the rest client are no different from the dependencies in the shade configuration, besides in name. - * - * IDEs do not like removing artifacts and changing configurations on the fly, so the bits that make the build use the - * actual shaded jar (2) are only executed on the cli. Tests run in an IDE rely on the deps (1) jar. - * - * 1) A jar that contains *only* the `org.elasticsearch.client` shaded dependencies. This is a jar that is built before - * the src is compiled. This jar is only used by the rest client so will compile. There exists a chicken-egg - * situation where the src needs compilation and depends on `org.elasticsearch.client` shaded classes, so an - * intermediary jar needs to exist to satisfy the compile. The `deps` classifier is added to this jar. - * 2) The *actual* jar that will be used by clients. This has no classifier, contains the rest client src and - * `org.elasticsearch.client`. This jar is the only actual output artifact of this job. - */ -buildscript { - repositories { - jcenter() - } - dependencies { - classpath 'com.github.jengelman.gradle.plugins:shadow:2.0.1' - } -} - apply plugin: 'elasticsearch.build' apply plugin: 'ru.vyarus.animalsniffer' apply plugin: 'nebula.maven-base-publish' @@ -63,74 +38,15 @@ publishing { } } -configurations { - shade { - transitive = false - } -} - -// Useful for build time dependencies, as it is generated before compilation of the source in the rest client. -// This cannot be used as the final shaded jar, as it will contain the compiled source and dependencies -File shadedDir = file("${buildDir}/shaded") -// This directory exists so that the shadeDeps task would produce an output, so we can add it (below) to the source set. 
-File shadedSrcDir = file("${buildDir}/generated-dummy-shaded") -task shadeDeps(type: com.github.jengelman.gradle.plugins.shadow.tasks.ShadowJar) { - destinationDir = shadedDir - configurations = [project.configurations.shade] - classifier = 'deps' - relocate 'org.apache', 'org.elasticsearch.client' - - doLast { - shadedSrcDir.mkdir() - } -} - -jar { - from zipTree(shadeDeps.outputs.files.singleFile) - dependsOn shadeDeps -} - -// remove the deps jar from the classpath to avoid jarHell -if (isIdea == false && isEclipse == false) { - // cleanup to remove the deps jar from the classpath - if (gradle.gradleVersion == "3.3") { - configurations.runtime.extendsFrom -= [configurations.compile] - } else if (gradle.gradleVersion > "3.3") { - configurations.runtimeElements.extendsFrom = [] - } -} - -if (isEclipse) { - // in eclipse the project is under a fake root, we need to change around the source sets - sourceSets { - if (project.path == ":client:rest") { - main.java.srcDirs = ['java'] - //main.resources.srcDirs = ['resources'] - } else { - test.java.srcDirs = ['java'] - test.resources.srcDirs = ['resources'] - } - } -} -// adds a dependency to compile, so the -deps jar is built first -sourceSets.main.output.dir(shadedSrcDir, builtBy: 'shadeDeps') - dependencies { - shade "org.apache.httpcomponents:httpclient:${versions.httpclient}" - shade "org.apache.httpcomponents:httpcore:${versions.httpcore}" - shade "org.apache.httpcomponents:httpasyncclient:${versions.httpasyncclient}" - shade "org.apache.httpcomponents:httpcore-nio:${versions.httpcore}" - shade "commons-codec:commons-codec:${versions.commonscodec}" - shade "commons-logging:commons-logging:${versions.commonslogging}" - - compile shadeDeps.outputs.files - - if (isEclipse == false || project.path == ":client:rest-tests") { - testCompile("org.elasticsearch.client:test:${version}") { - // tests use the locally compiled version of core - exclude group: 'org.elasticsearch', module: 'elasticsearch' - } - } + compile "org.apache.httpcomponents:httpclient:${versions.httpclient}" + compile "org.apache.httpcomponents:httpcore:${versions.httpcore}" + compile "org.apache.httpcomponents:httpasyncclient:${versions.httpasyncclient}" + compile "org.apache.httpcomponents:httpcore-nio:${versions.httpcore}" + compile "commons-codec:commons-codec:${versions.commonscodec}" + compile "commons-logging:commons-logging:${versions.commonslogging}" + + testCompile "org.elasticsearch.client:test:${version}" testCompile "com.carrotsearch.randomizedtesting:randomizedtesting-runner:${versions.randomizedrunner}" testCompile "junit:junit:${versions.junit}" testCompile "org.hamcrest:hamcrest-all:${versions.hamcrest}" @@ -140,16 +56,6 @@ dependencies { signature "org.codehaus.mojo.signature:java17:1.0@signature" } -// Set the exported=true for the generated rest client deps since it is used by other projects in eclipse. 
-// https://docs.gradle.org/3.3/userguide/eclipse_plugin.html#sec:eclipse_modify_domain_objects -eclipse.classpath.file { - whenMerged { classpath -> - classpath.entries.findAll { entry -> entry.path.contains("elasticsearch-rest-client") }*.exported = true - } -} - -dependencyLicenses.dependencies = project.configurations.shade - forbiddenApisMain { //client does not depend on core, so only jdk and http signatures should be checked signaturesURLs = [PrecommitTasks.getResource('/forbidden/jdk-signatures.txt'), @@ -166,7 +72,7 @@ forbiddenApisTest { } //JarHell is part of es core, which we don't want to pull in -jarHell.enabled = false +jarHell.enabled=false namingConventions { testClass = 'org.elasticsearch.client.RestClientTestCase' @@ -176,13 +82,13 @@ namingConventions { thirdPartyAudit.excludes = [ //commons-logging optional dependencies - 'org.elasticsearch.client.avalon.framework.logger.Logger', - 'org.elasticsearch.client.log.Hierarchy', - 'org.elasticsearch.client.log.Logger', - 'org.elasticsearch.client.log4j.Category', - 'org.elasticsearch.client.log4j.Level', - 'org.elasticsearch.client.log4j.Logger', - 'org.elasticsearch.client.log4j.Priority', + 'org.apache.avalon.framework.logger.Logger', + 'org.apache.log.Hierarchy', + 'org.apache.log.Logger', + 'org.apache.log4j.Category', + 'org.apache.log4j.Level', + 'org.apache.log4j.Logger', + 'org.apache.log4j.Priority', //commons-logging provided dependencies 'javax.servlet.ServletContextEvent', 'javax.servlet.ServletContextListener' diff --git a/client/rest/src/main/eclipse-build.gradle b/client/rest/src/main/eclipse-build.gradle deleted file mode 100644 index 5f678803ad856..0000000000000 --- a/client/rest/src/main/eclipse-build.gradle +++ /dev/null @@ -1,2 +0,0 @@ -// this is just shell gradle file for eclipse to have separate projects for src and tests -apply from: '../../build.gradle' diff --git a/client/rest/src/main/java/org/elasticsearch/client/HeapBufferedAsyncResponseConsumer.java b/client/rest/src/main/java/org/elasticsearch/client/HeapBufferedAsyncResponseConsumer.java index 8873db60a83e2..84753e6f75c8d 100644 --- a/client/rest/src/main/java/org/elasticsearch/client/HeapBufferedAsyncResponseConsumer.java +++ b/client/rest/src/main/java/org/elasticsearch/client/HeapBufferedAsyncResponseConsumer.java @@ -19,24 +19,24 @@ package org.elasticsearch.client; -import org.elasticsearch.client.http.ContentTooLongException; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.HttpException; -import org.elasticsearch.client.http.HttpResponse; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.nio.ContentDecoder; -import org.elasticsearch.client.http.nio.IOControl; -import org.elasticsearch.client.http.nio.entity.ContentBufferEntity; -import org.elasticsearch.client.http.nio.protocol.AbstractAsyncResponseConsumer; -import org.elasticsearch.client.http.nio.util.ByteBufferAllocator; -import org.elasticsearch.client.http.nio.util.HeapByteBufferAllocator; -import org.elasticsearch.client.http.nio.util.SimpleInputBuffer; -import org.elasticsearch.client.http.protocol.HttpContext; +import org.apache.http.ContentTooLongException; +import org.apache.http.HttpEntity; +import org.apache.http.HttpException; +import org.apache.http.HttpResponse; +import org.apache.http.entity.ContentType; +import org.apache.http.nio.ContentDecoder; +import org.apache.http.nio.IOControl; +import org.apache.http.nio.entity.ContentBufferEntity; +import 
org.apache.http.nio.protocol.AbstractAsyncResponseConsumer; +import org.apache.http.nio.util.ByteBufferAllocator; +import org.apache.http.nio.util.HeapByteBufferAllocator; +import org.apache.http.nio.util.SimpleInputBuffer; +import org.apache.http.protocol.HttpContext; import java.io.IOException; /** - * Default implementation of {@link org.elasticsearch.client.http.nio.protocol.HttpAsyncResponseConsumer}. Buffers the whole + * Default implementation of {@link org.apache.http.nio.protocol.HttpAsyncResponseConsumer}. Buffers the whole * response content in heap memory, meaning that the size of the buffer is equal to the content-length of the response. * Limits the size of responses that can be read based on a configurable argument. Throws an exception in case the entity is longer * than the configured buffer limit. diff --git a/client/rest/src/main/java/org/elasticsearch/client/HttpAsyncResponseConsumerFactory.java b/client/rest/src/main/java/org/elasticsearch/client/HttpAsyncResponseConsumerFactory.java index 7ea7cfbf3ac67..1af9e0dcf0fa4 100644 --- a/client/rest/src/main/java/org/elasticsearch/client/HttpAsyncResponseConsumerFactory.java +++ b/client/rest/src/main/java/org/elasticsearch/client/HttpAsyncResponseConsumerFactory.java @@ -19,8 +19,8 @@ package org.elasticsearch.client; -import org.elasticsearch.client.http.HttpResponse; -import org.elasticsearch.client.http.nio.protocol.HttpAsyncResponseConsumer; +import org.apache.http.HttpResponse; +import org.apache.http.nio.protocol.HttpAsyncResponseConsumer; import static org.elasticsearch.client.HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory.DEFAULT_BUFFER_LIMIT; diff --git a/client/rest/src/main/java/org/elasticsearch/client/HttpDeleteWithEntity.java b/client/rest/src/main/java/org/elasticsearch/client/HttpDeleteWithEntity.java index 9106426878f7d..df08ae5a8d101 100644 --- a/client/rest/src/main/java/org/elasticsearch/client/HttpDeleteWithEntity.java +++ b/client/rest/src/main/java/org/elasticsearch/client/HttpDeleteWithEntity.java @@ -18,8 +18,8 @@ */ package org.elasticsearch.client; -import org.elasticsearch.client.http.client.methods.HttpDelete; -import org.elasticsearch.client.http.client.methods.HttpEntityEnclosingRequestBase; +import org.apache.http.client.methods.HttpDelete; +import org.apache.http.client.methods.HttpEntityEnclosingRequestBase; import java.net.URI; diff --git a/client/rest/src/main/java/org/elasticsearch/client/HttpGetWithEntity.java b/client/rest/src/main/java/org/elasticsearch/client/HttpGetWithEntity.java index 13dddfbb2008b..a3846beefe440 100644 --- a/client/rest/src/main/java/org/elasticsearch/client/HttpGetWithEntity.java +++ b/client/rest/src/main/java/org/elasticsearch/client/HttpGetWithEntity.java @@ -18,8 +18,8 @@ */ package org.elasticsearch.client; -import org.elasticsearch.client.http.client.methods.HttpEntityEnclosingRequestBase; -import org.elasticsearch.client.http.client.methods.HttpGet; +import org.apache.http.client.methods.HttpEntityEnclosingRequestBase; +import org.apache.http.client.methods.HttpGet; import java.net.URI; @@ -38,4 +38,4 @@ final class HttpGetWithEntity extends HttpEntityEnclosingRequestBase { public String getMethod() { return METHOD_NAME; } -} +} \ No newline at end of file diff --git a/client/rest/src/main/java/org/elasticsearch/client/RequestLogger.java b/client/rest/src/main/java/org/elasticsearch/client/RequestLogger.java index 5434176d6547d..07ff89b7e3fb0 100644 --- a/client/rest/src/main/java/org/elasticsearch/client/RequestLogger.java +++ 
b/client/rest/src/main/java/org/elasticsearch/client/RequestLogger.java @@ -19,18 +19,18 @@ package org.elasticsearch.client; -import org.elasticsearch.client.commons.logging.Log; -import org.elasticsearch.client.commons.logging.LogFactory; -import org.elasticsearch.client.http.Header; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.HttpEntityEnclosingRequest; -import org.elasticsearch.client.http.HttpHost; -import org.elasticsearch.client.http.HttpResponse; -import org.elasticsearch.client.http.RequestLine; -import org.elasticsearch.client.http.client.methods.HttpUriRequest; -import org.elasticsearch.client.http.entity.BufferedHttpEntity; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.util.EntityUtils; +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.http.Header; +import org.apache.http.HttpEntity; +import org.apache.http.HttpEntityEnclosingRequest; +import org.apache.http.HttpHost; +import org.apache.http.HttpResponse; +import org.apache.http.RequestLine; +import org.apache.http.client.methods.HttpUriRequest; +import org.apache.http.entity.BufferedHttpEntity; +import org.apache.http.entity.ContentType; +import org.apache.http.util.EntityUtils; import java.io.BufferedReader; import java.io.IOException; diff --git a/client/rest/src/main/java/org/elasticsearch/client/Response.java b/client/rest/src/main/java/org/elasticsearch/client/Response.java index fa8b8c849bd98..02aedb4765abe 100644 --- a/client/rest/src/main/java/org/elasticsearch/client/Response.java +++ b/client/rest/src/main/java/org/elasticsearch/client/Response.java @@ -19,12 +19,12 @@ package org.elasticsearch.client; -import org.elasticsearch.client.http.Header; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.HttpHost; -import org.elasticsearch.client.http.HttpResponse; -import org.elasticsearch.client.http.RequestLine; -import org.elasticsearch.client.http.StatusLine; +import org.apache.http.Header; +import org.apache.http.HttpEntity; +import org.apache.http.HttpHost; +import org.apache.http.HttpResponse; +import org.apache.http.RequestLine; +import org.apache.http.StatusLine; import java.util.Objects; diff --git a/client/rest/src/main/java/org/elasticsearch/client/ResponseException.java b/client/rest/src/main/java/org/elasticsearch/client/ResponseException.java index 1c1470ec8bebe..072e45ffb0e97 100644 --- a/client/rest/src/main/java/org/elasticsearch/client/ResponseException.java +++ b/client/rest/src/main/java/org/elasticsearch/client/ResponseException.java @@ -19,11 +19,12 @@ package org.elasticsearch.client; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.entity.BufferedHttpEntity; -import org.elasticsearch.client.http.util.EntityUtils; +import org.apache.http.HttpEntity; +import org.apache.http.entity.BufferedHttpEntity; +import org.apache.http.util.EntityUtils; import java.io.IOException; +import java.util.Locale; /** * Exception thrown when an elasticsearch node responds to a request with a status code that indicates an error. 
@@ -39,8 +40,13 @@ public ResponseException(Response response) throws IOException { } private static String buildMessage(Response response) throws IOException { - String message = response.getRequestLine().getMethod() + " " + response.getHost() + response.getRequestLine().getUri() - + ": " + response.getStatusLine().toString(); + String message = String.format(Locale.ROOT, + "method [%s], host [%s], URI [%s], status line [%s]", + response.getRequestLine().getMethod(), + response.getHost(), + response.getRequestLine().getUri(), + response.getStatusLine().toString() + ); HttpEntity entity = response.getEntity(); if (entity != null) { diff --git a/client/rest/src/main/java/org/elasticsearch/client/RestClient.java b/client/rest/src/main/java/org/elasticsearch/client/RestClient.java index f8e815fa878d6..e221ed081a597 100644 --- a/client/rest/src/main/java/org/elasticsearch/client/RestClient.java +++ b/client/rest/src/main/java/org/elasticsearch/client/RestClient.java @@ -18,38 +18,39 @@ */ package org.elasticsearch.client; -import org.elasticsearch.client.commons.logging.Log; -import org.elasticsearch.client.commons.logging.LogFactory; -import org.elasticsearch.client.http.Header; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.HttpHost; -import org.elasticsearch.client.http.HttpRequest; -import org.elasticsearch.client.http.HttpResponse; -import org.elasticsearch.client.http.client.AuthCache; -import org.elasticsearch.client.http.client.ClientProtocolException; -import org.elasticsearch.client.http.client.methods.HttpEntityEnclosingRequestBase; -import org.elasticsearch.client.http.client.methods.HttpHead; -import org.elasticsearch.client.http.client.methods.HttpOptions; -import org.elasticsearch.client.http.client.methods.HttpPatch; -import org.elasticsearch.client.http.client.methods.HttpPost; -import org.elasticsearch.client.http.client.methods.HttpPut; -import org.elasticsearch.client.http.client.methods.HttpRequestBase; -import org.elasticsearch.client.http.client.methods.HttpTrace; -import org.elasticsearch.client.http.client.protocol.HttpClientContext; -import org.elasticsearch.client.http.client.utils.URIBuilder; -import org.elasticsearch.client.http.concurrent.FutureCallback; -import org.elasticsearch.client.http.impl.auth.BasicScheme; -import org.elasticsearch.client.http.impl.client.BasicAuthCache; -import org.elasticsearch.client.http.impl.nio.client.CloseableHttpAsyncClient; -import org.elasticsearch.client.http.nio.client.methods.HttpAsyncMethods; -import org.elasticsearch.client.http.nio.protocol.HttpAsyncRequestProducer; -import org.elasticsearch.client.http.nio.protocol.HttpAsyncResponseConsumer; +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.http.Header; +import org.apache.http.HttpEntity; +import org.apache.http.HttpHost; +import org.apache.http.HttpRequest; +import org.apache.http.HttpResponse; +import org.apache.http.client.AuthCache; +import org.apache.http.client.ClientProtocolException; +import org.apache.http.client.methods.HttpEntityEnclosingRequestBase; +import org.apache.http.client.methods.HttpHead; +import org.apache.http.client.methods.HttpOptions; +import org.apache.http.client.methods.HttpPatch; +import org.apache.http.client.methods.HttpPost; +import org.apache.http.client.methods.HttpPut; +import org.apache.http.client.methods.HttpRequestBase; +import org.apache.http.client.methods.HttpTrace; +import org.apache.http.client.protocol.HttpClientContext; +import 
org.apache.http.client.utils.URIBuilder; +import org.apache.http.concurrent.FutureCallback; +import org.apache.http.impl.auth.BasicScheme; +import org.apache.http.impl.client.BasicAuthCache; +import org.apache.http.impl.nio.client.CloseableHttpAsyncClient; +import org.apache.http.nio.client.methods.HttpAsyncMethods; +import org.apache.http.nio.protocol.HttpAsyncRequestProducer; +import org.apache.http.nio.protocol.HttpAsyncResponseConsumer; import java.io.Closeable; import java.io.IOException; import java.net.URI; import java.net.URISyntaxException; import java.util.ArrayList; +import java.util.Arrays; import java.util.Collection; import java.util.Collections; import java.util.Comparator; @@ -91,8 +92,9 @@ public class RestClient implements Closeable { private static final Log logger = LogFactory.getLog(RestClient.class); private final CloseableHttpAsyncClient client; - //we don't rely on default headers supported by HttpAsyncClient as those cannot be replaced - private final Header[] defaultHeaders; + // We don't rely on default headers supported by HttpAsyncClient as those cannot be replaced. + // These are package private for tests. + final List
<Header>
defaultHeaders; private final long maxRetryTimeoutMillis; private final String pathPrefix; private final AtomicInteger lastHostIndex = new AtomicInteger(0); @@ -104,7 +106,7 @@ public class RestClient implements Closeable { HttpHost[] hosts, String pathPrefix, FailureListener failureListener) { this.client = client; this.maxRetryTimeoutMillis = maxRetryTimeoutMillis; - this.defaultHeaders = defaultHeaders; + this.defaultHeaders = Collections.unmodifiableList(Arrays.asList(defaultHeaders)); this.failureListener = failureListener; this.pathPrefix = pathPrefix; setHosts(hosts); @@ -112,6 +114,7 @@ public class RestClient implements Closeable { /** * Returns a new {@link RestClientBuilder} to help with {@link RestClient} creation. + * Creates a new builder instance and sets the hosts that the client will send requests to. */ public static RestClientBuilder builder(HttpHost... hosts) { return new RestClientBuilder(hosts); @@ -706,8 +709,8 @@ public void onFailure(HttpHost host) { * safe, volatile way. */ private static class HostTuple { - public final T hosts; - public final AuthCache authCache; + final T hosts; + final AuthCache authCache; HostTuple(final T hosts, final AuthCache authCache) { this.hosts = hosts; diff --git a/client/rest/src/main/java/org/elasticsearch/client/RestClientBuilder.java b/client/rest/src/main/java/org/elasticsearch/client/RestClientBuilder.java index bf645d4f1eb4d..38c9cdbe6e665 100644 --- a/client/rest/src/main/java/org/elasticsearch/client/RestClientBuilder.java +++ b/client/rest/src/main/java/org/elasticsearch/client/RestClientBuilder.java @@ -19,14 +19,14 @@ package org.elasticsearch.client; -import org.elasticsearch.client.http.Header; -import org.elasticsearch.client.http.HttpHost; -import org.elasticsearch.client.http.client.config.RequestConfig; -import org.elasticsearch.client.http.impl.client.CloseableHttpClient; -import org.elasticsearch.client.http.impl.client.HttpClientBuilder; -import org.elasticsearch.client.http.impl.nio.client.CloseableHttpAsyncClient; -import org.elasticsearch.client.http.impl.nio.client.HttpAsyncClientBuilder; -import org.elasticsearch.client.http.nio.conn.SchemeIOSessionStrategy; +import org.apache.http.Header; +import org.apache.http.HttpHost; +import org.apache.http.client.config.RequestConfig; +import org.apache.http.impl.client.CloseableHttpClient; +import org.apache.http.impl.client.HttpClientBuilder; +import org.apache.http.impl.nio.client.CloseableHttpAsyncClient; +import org.apache.http.impl.nio.client.HttpAsyncClientBuilder; +import org.apache.http.nio.conn.SchemeIOSessionStrategy; import java.security.AccessController; import java.security.PrivilegedAction; @@ -34,8 +34,8 @@ /** * Helps creating a new {@link RestClient}. Allows to set the most common http client configuration options when internally - * creating the underlying {@link org.elasticsearch.client.http.nio.client.HttpAsyncClient}. Also allows to provide an externally created - * {@link org.elasticsearch.client.http.nio.client.HttpAsyncClient} in case additional customization is needed. + * creating the underlying {@link org.apache.http.nio.client.HttpAsyncClient}. Also allows to provide an externally created + * {@link org.apache.http.nio.client.HttpAsyncClient} in case additional customization is needed. 
*/ public final class RestClientBuilder { public static final int DEFAULT_CONNECT_TIMEOUT_MILLIS = 1000; @@ -237,7 +237,7 @@ public interface RequestConfigCallback { public interface HttpClientConfigCallback { /** * Allows to customize the {@link CloseableHttpAsyncClient} being created and used by the {@link RestClient}. - * Commonly used to customize the default {@link org.elasticsearch.client.http.client.CredentialsProvider} for authentication + * Commonly used to customize the default {@link org.apache.http.client.CredentialsProvider} for authentication * or the {@link SchemeIOSessionStrategy} for communication through ssl without losing any other useful default * value that the {@link RestClientBuilder} internally sets, like connection pooling. */ diff --git a/client/rest/src/test/eclipse-build.gradle b/client/rest/src/test/eclipse-build.gradle deleted file mode 100644 index 044b208b883f1..0000000000000 --- a/client/rest/src/test/eclipse-build.gradle +++ /dev/null @@ -1,6 +0,0 @@ -// this is just shell gradle file for eclipse to have separate projects for src and tests -apply from: '../../build.gradle' - -dependencies { - testCompile project(':client:rest') -} diff --git a/client/rest/src/test/java/org/elasticsearch/client/FailureTrackingResponseListenerTests.java b/client/rest/src/test/java/org/elasticsearch/client/FailureTrackingResponseListenerTests.java index ce3b2a31520b4..f6ec388d09d94 100644 --- a/client/rest/src/test/java/org/elasticsearch/client/FailureTrackingResponseListenerTests.java +++ b/client/rest/src/test/java/org/elasticsearch/client/FailureTrackingResponseListenerTests.java @@ -19,14 +19,14 @@ package org.elasticsearch.client; -import org.elasticsearch.client.http.HttpHost; -import org.elasticsearch.client.http.HttpResponse; -import org.elasticsearch.client.http.ProtocolVersion; -import org.elasticsearch.client.http.RequestLine; -import org.elasticsearch.client.http.StatusLine; -import org.elasticsearch.client.http.message.BasicHttpResponse; -import org.elasticsearch.client.http.message.BasicRequestLine; -import org.elasticsearch.client.http.message.BasicStatusLine; +import org.apache.http.HttpHost; +import org.apache.http.HttpResponse; +import org.apache.http.ProtocolVersion; +import org.apache.http.RequestLine; +import org.apache.http.StatusLine; +import org.apache.http.message.BasicHttpResponse; +import org.apache.http.message.BasicRequestLine; +import org.apache.http.message.BasicStatusLine; import java.util.concurrent.atomic.AtomicReference; diff --git a/client/rest/src/test/java/org/elasticsearch/client/HeapBufferedAsyncResponseConsumerTests.java b/client/rest/src/test/java/org/elasticsearch/client/HeapBufferedAsyncResponseConsumerTests.java index b2b21557f4d02..fe82d5367e51a 100644 --- a/client/rest/src/test/java/org/elasticsearch/client/HeapBufferedAsyncResponseConsumerTests.java +++ b/client/rest/src/test/java/org/elasticsearch/client/HeapBufferedAsyncResponseConsumerTests.java @@ -19,19 +19,19 @@ package org.elasticsearch.client; -import org.elasticsearch.client.http.ContentTooLongException; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.HttpResponse; -import org.elasticsearch.client.http.ProtocolVersion; -import org.elasticsearch.client.http.StatusLine; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.entity.StringEntity; -import org.elasticsearch.client.http.message.BasicHttpResponse; -import org.elasticsearch.client.http.message.BasicStatusLine; -import 
org.elasticsearch.client.http.nio.ContentDecoder; -import org.elasticsearch.client.http.nio.IOControl; -import org.elasticsearch.client.http.nio.protocol.HttpAsyncResponseConsumer; -import org.elasticsearch.client.http.protocol.HttpContext; +import org.apache.http.ContentTooLongException; +import org.apache.http.HttpEntity; +import org.apache.http.HttpResponse; +import org.apache.http.ProtocolVersion; +import org.apache.http.StatusLine; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; +import org.apache.http.message.BasicHttpResponse; +import org.apache.http.message.BasicStatusLine; +import org.apache.http.nio.ContentDecoder; +import org.apache.http.nio.IOControl; +import org.apache.http.nio.protocol.HttpAsyncResponseConsumer; +import org.apache.http.protocol.HttpContext; import java.lang.reflect.Constructor; import java.lang.reflect.InvocationTargetException; diff --git a/client/rest/src/test/java/org/elasticsearch/client/HostsTrackingFailureListener.java b/client/rest/src/test/java/org/elasticsearch/client/HostsTrackingFailureListener.java index d09a8ae734a09..e2f0ba81f6ed7 100644 --- a/client/rest/src/test/java/org/elasticsearch/client/HostsTrackingFailureListener.java +++ b/client/rest/src/test/java/org/elasticsearch/client/HostsTrackingFailureListener.java @@ -19,7 +19,7 @@ package org.elasticsearch.client; -import org.elasticsearch.client.http.HttpHost; +import org.apache.http.HttpHost; import java.util.HashSet; import java.util.Set; @@ -48,4 +48,4 @@ void assertCalled(HttpHost... hosts) { void assertNotCalled() { assertEquals(0, hosts.size()); } -} +} \ No newline at end of file diff --git a/client/rest/src/test/java/org/elasticsearch/client/RequestLoggerTests.java b/client/rest/src/test/java/org/elasticsearch/client/RequestLoggerTests.java index 4164bd0cd58ac..637e1807d2536 100644 --- a/client/rest/src/test/java/org/elasticsearch/client/RequestLoggerTests.java +++ b/client/rest/src/test/java/org/elasticsearch/client/RequestLoggerTests.java @@ -19,27 +19,27 @@ package org.elasticsearch.client; -import org.elasticsearch.client.http.Header; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.HttpEntityEnclosingRequest; -import org.elasticsearch.client.http.HttpHost; -import org.elasticsearch.client.http.ProtocolVersion; -import org.elasticsearch.client.http.client.methods.HttpHead; -import org.elasticsearch.client.http.client.methods.HttpOptions; -import org.elasticsearch.client.http.client.methods.HttpPatch; -import org.elasticsearch.client.http.client.methods.HttpPost; -import org.elasticsearch.client.http.client.methods.HttpPut; -import org.elasticsearch.client.http.client.methods.HttpTrace; -import org.elasticsearch.client.http.client.methods.HttpUriRequest; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.entity.InputStreamEntity; -import org.elasticsearch.client.http.entity.StringEntity; -import org.elasticsearch.client.http.message.BasicHeader; -import org.elasticsearch.client.http.message.BasicHttpResponse; -import org.elasticsearch.client.http.message.BasicStatusLine; -import org.elasticsearch.client.http.nio.entity.NByteArrayEntity; -import org.elasticsearch.client.http.nio.entity.NStringEntity; -import org.elasticsearch.client.http.util.EntityUtils; +import org.apache.http.Header; +import org.apache.http.HttpEntity; +import org.apache.http.HttpEntityEnclosingRequest; +import org.apache.http.HttpHost; +import org.apache.http.ProtocolVersion; +import 
org.apache.http.client.methods.HttpHead; +import org.apache.http.client.methods.HttpOptions; +import org.apache.http.client.methods.HttpPatch; +import org.apache.http.client.methods.HttpPost; +import org.apache.http.client.methods.HttpPut; +import org.apache.http.client.methods.HttpTrace; +import org.apache.http.client.methods.HttpUriRequest; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.InputStreamEntity; +import org.apache.http.entity.StringEntity; +import org.apache.http.message.BasicHeader; +import org.apache.http.message.BasicHttpResponse; +import org.apache.http.message.BasicStatusLine; +import org.apache.http.nio.entity.NByteArrayEntity; +import org.apache.http.nio.entity.NStringEntity; +import org.apache.http.util.EntityUtils; import java.io.ByteArrayInputStream; import java.io.IOException; diff --git a/client/rest/src/test/java/org/elasticsearch/client/ResponseExceptionTests.java b/client/rest/src/test/java/org/elasticsearch/client/ResponseExceptionTests.java index 1a632c96b9b81..6cf7e68b98800 100644 --- a/client/rest/src/test/java/org/elasticsearch/client/ResponseExceptionTests.java +++ b/client/rest/src/test/java/org/elasticsearch/client/ResponseExceptionTests.java @@ -19,23 +19,24 @@ package org.elasticsearch.client; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.HttpHost; -import org.elasticsearch.client.http.HttpResponse; -import org.elasticsearch.client.http.ProtocolVersion; -import org.elasticsearch.client.http.RequestLine; -import org.elasticsearch.client.http.StatusLine; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.entity.InputStreamEntity; -import org.elasticsearch.client.http.entity.StringEntity; -import org.elasticsearch.client.http.message.BasicHttpResponse; -import org.elasticsearch.client.http.message.BasicRequestLine; -import org.elasticsearch.client.http.message.BasicStatusLine; -import org.elasticsearch.client.http.util.EntityUtils; +import org.apache.http.HttpEntity; +import org.apache.http.HttpHost; +import org.apache.http.HttpResponse; +import org.apache.http.ProtocolVersion; +import org.apache.http.RequestLine; +import org.apache.http.StatusLine; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.InputStreamEntity; +import org.apache.http.entity.StringEntity; +import org.apache.http.message.BasicHttpResponse; +import org.apache.http.message.BasicRequestLine; +import org.apache.http.message.BasicStatusLine; +import org.apache.http.util.EntityUtils; import java.io.ByteArrayInputStream; import java.io.IOException; import java.nio.charset.StandardCharsets; +import java.util.Locale; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertNull; @@ -74,8 +75,14 @@ public void testResponseException() throws IOException { assertNull(responseException.getResponse().getEntity()); } - String message = response.getRequestLine().getMethod() + " " + response.getHost() + response.getRequestLine().getUri() - + ": " + response.getStatusLine().toString(); + String message = String.format(Locale.ROOT, + "method [%s], host [%s], URI [%s], status line [%s]", + response.getRequestLine().getMethod(), + response.getHost(), + response.getRequestLine().getUri(), + response.getStatusLine().toString() + ); + if (hasBody) { message += "\n" + responseBody; } diff --git a/client/rest/src/test/java/org/elasticsearch/client/RestClientBuilderIntegTests.java 
b/client/rest/src/test/java/org/elasticsearch/client/RestClientBuilderIntegTests.java index 28b5f5935f761..8142fea6d259b 100644 --- a/client/rest/src/test/java/org/elasticsearch/client/RestClientBuilderIntegTests.java +++ b/client/rest/src/test/java/org/elasticsearch/client/RestClientBuilderIntegTests.java @@ -23,7 +23,7 @@ import com.sun.net.httpserver.HttpHandler; import com.sun.net.httpserver.HttpsConfigurator; import com.sun.net.httpserver.HttpsServer; -import org.elasticsearch.client.http.HttpHost; +import org.apache.http.HttpHost; import org.codehaus.mojo.animal_sniffer.IgnoreJRERequirement; import org.elasticsearch.mocksocket.MockHttpServer; import org.junit.AfterClass; diff --git a/client/rest/src/test/java/org/elasticsearch/client/RestClientBuilderTests.java b/client/rest/src/test/java/org/elasticsearch/client/RestClientBuilderTests.java index 6aea066e0aed5..c9243d3aaf6ce 100644 --- a/client/rest/src/test/java/org/elasticsearch/client/RestClientBuilderTests.java +++ b/client/rest/src/test/java/org/elasticsearch/client/RestClientBuilderTests.java @@ -19,11 +19,11 @@ package org.elasticsearch.client; -import org.elasticsearch.client.http.Header; -import org.elasticsearch.client.http.HttpHost; -import org.elasticsearch.client.http.client.config.RequestConfig; -import org.elasticsearch.client.http.impl.nio.client.HttpAsyncClientBuilder; -import org.elasticsearch.client.http.message.BasicHeader; +import org.apache.http.Header; +import org.apache.http.HttpHost; +import org.apache.http.client.config.RequestConfig; +import org.apache.http.impl.nio.client.HttpAsyncClientBuilder; +import org.apache.http.message.BasicHeader; import java.io.IOException; diff --git a/client/rest/src/test/java/org/elasticsearch/client/RestClientMultipleHostsIntegTests.java b/client/rest/src/test/java/org/elasticsearch/client/RestClientMultipleHostsIntegTests.java index d3b2c1b9b84f3..da5a960c2e84c 100644 --- a/client/rest/src/test/java/org/elasticsearch/client/RestClientMultipleHostsIntegTests.java +++ b/client/rest/src/test/java/org/elasticsearch/client/RestClientMultipleHostsIntegTests.java @@ -22,7 +22,7 @@ import com.sun.net.httpserver.HttpExchange; import com.sun.net.httpserver.HttpHandler; import com.sun.net.httpserver.HttpServer; -import org.elasticsearch.client.http.HttpHost; +import org.apache.http.HttpHost; import org.codehaus.mojo.animal_sniffer.IgnoreJRERequirement; import org.elasticsearch.mocksocket.MockHttpServer; import org.junit.AfterClass; @@ -45,7 +45,7 @@ import static org.junit.Assert.assertTrue; /** - * Integration test to check interaction between {@link RestClient} and {@link org.elasticsearch.client.http.client.HttpClient}. + * Integration test to check interaction between {@link RestClient} and {@link org.apache.http.client.HttpClient}. * Works against real http servers, multiple hosts. Also tests failover by randomly shutting down hosts. 
*/ //animal-sniffer doesn't like our usage of com.sun.net.httpserver.* classes diff --git a/client/rest/src/test/java/org/elasticsearch/client/RestClientMultipleHostsTests.java b/client/rest/src/test/java/org/elasticsearch/client/RestClientMultipleHostsTests.java index 94f5d05cdd8a0..6f87a244ff59f 100644 --- a/client/rest/src/test/java/org/elasticsearch/client/RestClientMultipleHostsTests.java +++ b/client/rest/src/test/java/org/elasticsearch/client/RestClientMultipleHostsTests.java @@ -20,21 +20,21 @@ package org.elasticsearch.client; import com.carrotsearch.randomizedtesting.generators.RandomNumbers; -import org.elasticsearch.client.http.Header; -import org.elasticsearch.client.http.HttpHost; -import org.elasticsearch.client.http.HttpResponse; -import org.elasticsearch.client.http.ProtocolVersion; -import org.elasticsearch.client.http.StatusLine; -import org.elasticsearch.client.http.client.methods.HttpUriRequest; -import org.elasticsearch.client.http.client.protocol.HttpClientContext; -import org.elasticsearch.client.http.concurrent.FutureCallback; -import org.elasticsearch.client.http.conn.ConnectTimeoutException; -import org.elasticsearch.client.http.impl.auth.BasicScheme; -import org.elasticsearch.client.http.impl.nio.client.CloseableHttpAsyncClient; -import org.elasticsearch.client.http.message.BasicHttpResponse; -import org.elasticsearch.client.http.message.BasicStatusLine; -import org.elasticsearch.client.http.nio.protocol.HttpAsyncRequestProducer; -import org.elasticsearch.client.http.nio.protocol.HttpAsyncResponseConsumer; +import org.apache.http.Header; +import org.apache.http.HttpHost; +import org.apache.http.HttpResponse; +import org.apache.http.ProtocolVersion; +import org.apache.http.StatusLine; +import org.apache.http.client.methods.HttpUriRequest; +import org.apache.http.client.protocol.HttpClientContext; +import org.apache.http.concurrent.FutureCallback; +import org.apache.http.conn.ConnectTimeoutException; +import org.apache.http.impl.auth.BasicScheme; +import org.apache.http.impl.nio.client.CloseableHttpAsyncClient; +import org.apache.http.message.BasicHttpResponse; +import org.apache.http.message.BasicStatusLine; +import org.apache.http.nio.protocol.HttpAsyncRequestProducer; +import org.apache.http.nio.protocol.HttpAsyncResponseConsumer; import org.junit.Before; import org.mockito.invocation.InvocationOnMock; import org.mockito.stubbing.Answer; diff --git a/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostIntegTests.java b/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostIntegTests.java index fd27cd1ddb911..6d4e3ba4bc861 100644 --- a/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostIntegTests.java +++ b/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostIntegTests.java @@ -23,16 +23,16 @@ import com.sun.net.httpserver.HttpExchange; import com.sun.net.httpserver.HttpHandler; import com.sun.net.httpserver.HttpServer; -import org.elasticsearch.client.http.Consts; -import org.elasticsearch.client.http.Header; -import org.elasticsearch.client.http.HttpHost; -import org.elasticsearch.client.http.auth.AuthScope; -import org.elasticsearch.client.http.auth.UsernamePasswordCredentials; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.entity.StringEntity; -import org.elasticsearch.client.http.impl.client.BasicCredentialsProvider; -import org.elasticsearch.client.http.impl.nio.client.HttpAsyncClientBuilder; -import 
org.elasticsearch.client.http.util.EntityUtils; +import org.apache.http.Consts; +import org.apache.http.Header; +import org.apache.http.HttpHost; +import org.apache.http.auth.AuthScope; +import org.apache.http.auth.UsernamePasswordCredentials; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; +import org.apache.http.impl.client.BasicCredentialsProvider; +import org.apache.http.impl.nio.client.HttpAsyncClientBuilder; +import org.apache.http.util.EntityUtils; import org.codehaus.mojo.animal_sniffer.IgnoreJRERequirement; import org.elasticsearch.mocksocket.MockHttpServer; import org.junit.AfterClass; @@ -60,7 +60,7 @@ import static org.junit.Assert.assertTrue; /** - * Integration test to check interaction between {@link RestClient} and {@link org.elasticsearch.client.http.client.HttpClient}. + * Integration test to check interaction between {@link RestClient} and {@link org.apache.http.client.HttpClient}. * Works against a real http server, one single host. */ //animal-sniffer doesn't like our usage of com.sun.net.httpserver.* classes @@ -164,7 +164,7 @@ public static void stopHttpServers() throws IOException { /** * End to end test for headers. We test it explicitly against a real http client as there are different ways - * to set/add headers to the {@link org.elasticsearch.client.http.client.HttpClient}. + * to set/add headers to the {@link org.apache.http.client.HttpClient}. * Exercises the test http server ability to send back whatever headers it received. */ public void testHeaders() throws IOException { @@ -198,7 +198,7 @@ public void testHeaders() throws IOException { /** * End to end test for delete with body. We test it explicitly as it is not supported - * out of the box by {@link org.elasticsearch.client.http.client.HttpClient}. + * out of the box by {@link org.apache.http.client.HttpClient}. * Exercises the test http server ability to send back whatever body it received. */ public void testDeleteWithBody() throws IOException { @@ -207,7 +207,7 @@ public void testDeleteWithBody() throws IOException { /** * End to end test for get with body. We test it explicitly as it is not supported - * out of the box by {@link org.elasticsearch.client.http.client.HttpClient}. + * out of the box by {@link org.apache.http.client.HttpClient}. * Exercises the test http server ability to send back whatever body it received. 
*/ public void testGetWithBody() throws IOException { diff --git a/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostTests.java b/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostTests.java index 188e053b538bd..541193c733d56 100644 --- a/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostTests.java +++ b/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostTests.java @@ -19,34 +19,34 @@ package org.elasticsearch.client; -import org.elasticsearch.client.http.Header; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.HttpEntityEnclosingRequest; -import org.elasticsearch.client.http.HttpHost; -import org.elasticsearch.client.http.HttpRequest; -import org.elasticsearch.client.http.HttpResponse; -import org.elasticsearch.client.http.ProtocolVersion; -import org.elasticsearch.client.http.StatusLine; -import org.elasticsearch.client.http.client.methods.HttpHead; -import org.elasticsearch.client.http.client.methods.HttpOptions; -import org.elasticsearch.client.http.client.methods.HttpPatch; -import org.elasticsearch.client.http.client.methods.HttpPost; -import org.elasticsearch.client.http.client.methods.HttpPut; -import org.elasticsearch.client.http.client.methods.HttpTrace; -import org.elasticsearch.client.http.client.methods.HttpUriRequest; -import org.elasticsearch.client.http.client.protocol.HttpClientContext; -import org.elasticsearch.client.http.client.utils.URIBuilder; -import org.elasticsearch.client.http.concurrent.FutureCallback; -import org.elasticsearch.client.http.conn.ConnectTimeoutException; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.entity.StringEntity; -import org.elasticsearch.client.http.impl.auth.BasicScheme; -import org.elasticsearch.client.http.impl.nio.client.CloseableHttpAsyncClient; -import org.elasticsearch.client.http.message.BasicHttpResponse; -import org.elasticsearch.client.http.message.BasicStatusLine; -import org.elasticsearch.client.http.nio.protocol.HttpAsyncRequestProducer; -import org.elasticsearch.client.http.nio.protocol.HttpAsyncResponseConsumer; -import org.elasticsearch.client.http.util.EntityUtils; +import org.apache.http.Header; +import org.apache.http.HttpEntity; +import org.apache.http.HttpEntityEnclosingRequest; +import org.apache.http.HttpHost; +import org.apache.http.HttpRequest; +import org.apache.http.HttpResponse; +import org.apache.http.ProtocolVersion; +import org.apache.http.StatusLine; +import org.apache.http.client.methods.HttpHead; +import org.apache.http.client.methods.HttpOptions; +import org.apache.http.client.methods.HttpPatch; +import org.apache.http.client.methods.HttpPost; +import org.apache.http.client.methods.HttpPut; +import org.apache.http.client.methods.HttpTrace; +import org.apache.http.client.methods.HttpUriRequest; +import org.apache.http.client.protocol.HttpClientContext; +import org.apache.http.client.utils.URIBuilder; +import org.apache.http.concurrent.FutureCallback; +import org.apache.http.conn.ConnectTimeoutException; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; +import org.apache.http.impl.auth.BasicScheme; +import org.apache.http.impl.nio.client.CloseableHttpAsyncClient; +import org.apache.http.message.BasicHttpResponse; +import org.apache.http.message.BasicStatusLine; +import org.apache.http.nio.protocol.HttpAsyncRequestProducer; +import org.apache.http.nio.protocol.HttpAsyncResponseConsumer; +import 
org.apache.http.util.EntityUtils; import org.junit.Before; import org.mockito.ArgumentCaptor; import org.mockito.invocation.InvocationOnMock; diff --git a/client/rest/src/test/java/org/elasticsearch/client/RestClientTests.java b/client/rest/src/test/java/org/elasticsearch/client/RestClientTests.java index 35a0959814b9f..dd3a88f53513b 100644 --- a/client/rest/src/test/java/org/elasticsearch/client/RestClientTests.java +++ b/client/rest/src/test/java/org/elasticsearch/client/RestClientTests.java @@ -19,23 +19,38 @@ package org.elasticsearch.client; -import org.elasticsearch.client.http.Header; -import org.elasticsearch.client.http.HttpHost; -import org.elasticsearch.client.http.impl.nio.client.CloseableHttpAsyncClient; +import org.apache.http.Header; +import org.apache.http.HttpHost; +import org.apache.http.impl.nio.client.CloseableHttpAsyncClient; +import java.io.IOException; import java.net.URI; import java.util.Collections; import static org.junit.Assert.assertEquals; import static org.junit.Assert.fail; import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; public class RestClientTests extends RestClientTestCase { + public void testCloseIsIdempotent() throws IOException { + HttpHost[] hosts = new HttpHost[]{new HttpHost("localhost", 9200)}; + CloseableHttpAsyncClient closeableHttpAsyncClient = mock(CloseableHttpAsyncClient.class); + RestClient restClient = new RestClient(closeableHttpAsyncClient, 1_000, new Header[0], hosts, null, null); + restClient.close(); + verify(closeableHttpAsyncClient, times(1)).close(); + restClient.close(); + verify(closeableHttpAsyncClient, times(2)).close(); + restClient.close(); + verify(closeableHttpAsyncClient, times(3)).close(); + } + public void testPerformAsyncWithUnsupportedMethod() throws Exception { RestClient.SyncResponseListener listener = new RestClient.SyncResponseListener(10000); try (RestClient restClient = createRestClient()) { - restClient.performRequestAsync("unsupported", randomAsciiOfLength(5), listener); + restClient.performRequestAsync("unsupported", randomAsciiLettersOfLength(5), listener); listener.get(); fail("should have failed because of unsupported method"); @@ -47,7 +62,7 @@ public void testPerformAsyncWithUnsupportedMethod() throws Exception { public void testPerformAsyncWithNullParams() throws Exception { RestClient.SyncResponseListener listener = new RestClient.SyncResponseListener(10000); try (RestClient restClient = createRestClient()) { - restClient.performRequestAsync(randomAsciiOfLength(5), randomAsciiOfLength(5), null, listener); + restClient.performRequestAsync(randomAsciiLettersOfLength(5), randomAsciiLettersOfLength(5), null, listener); listener.get(); fail("should have failed because of null parameters"); @@ -59,7 +74,7 @@ public void testPerformAsyncWithNullParams() throws Exception { public void testPerformAsyncWithNullHeaders() throws Exception { RestClient.SyncResponseListener listener = new RestClient.SyncResponseListener(10000); try (RestClient restClient = createRestClient()) { - restClient.performRequestAsync("GET", randomAsciiOfLength(5), listener, (Header) null); + restClient.performRequestAsync("GET", randomAsciiLettersOfLength(5), listener, (Header) null); listener.get(); fail("should have failed because of null headers"); diff --git a/client/rest/src/test/java/org/elasticsearch/client/SyncResponseListenerTests.java b/client/rest/src/test/java/org/elasticsearch/client/SyncResponseListenerTests.java index d4c15d97fe951..154efb4cac34b 
100644 --- a/client/rest/src/test/java/org/elasticsearch/client/SyncResponseListenerTests.java +++ b/client/rest/src/test/java/org/elasticsearch/client/SyncResponseListenerTests.java @@ -19,14 +19,14 @@ package org.elasticsearch.client; -import org.elasticsearch.client.http.HttpHost; -import org.elasticsearch.client.http.HttpResponse; -import org.elasticsearch.client.http.ProtocolVersion; -import org.elasticsearch.client.http.RequestLine; -import org.elasticsearch.client.http.StatusLine; -import org.elasticsearch.client.http.message.BasicHttpResponse; -import org.elasticsearch.client.http.message.BasicRequestLine; -import org.elasticsearch.client.http.message.BasicStatusLine; +import org.apache.http.HttpHost; +import org.apache.http.HttpResponse; +import org.apache.http.ProtocolVersion; +import org.apache.http.RequestLine; +import org.apache.http.StatusLine; +import org.apache.http.message.BasicHttpResponse; +import org.apache.http.message.BasicRequestLine; +import org.apache.http.message.BasicStatusLine; import java.io.IOException; import java.net.URISyntaxException; diff --git a/client/rest/src/test/java/org/elasticsearch/client/documentation/RestClientDocumentation.java b/client/rest/src/test/java/org/elasticsearch/client/documentation/RestClientDocumentation.java index 461a24a6c1b7c..1bad6b5f6d6fd 100644 --- a/client/rest/src/test/java/org/elasticsearch/client/documentation/RestClientDocumentation.java +++ b/client/rest/src/test/java/org/elasticsearch/client/documentation/RestClientDocumentation.java @@ -19,21 +19,23 @@ package org.elasticsearch.client.documentation; -import org.elasticsearch.client.http.Header; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.HttpHost; -import org.elasticsearch.client.http.RequestLine; -import org.elasticsearch.client.http.auth.AuthScope; -import org.elasticsearch.client.http.auth.UsernamePasswordCredentials; -import org.elasticsearch.client.http.client.CredentialsProvider; -import org.elasticsearch.client.http.client.config.RequestConfig; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.impl.client.BasicCredentialsProvider; -import org.elasticsearch.client.http.impl.nio.client.HttpAsyncClientBuilder; -import org.elasticsearch.client.http.impl.nio.reactor.IOReactorConfig; -import org.elasticsearch.client.http.message.BasicHeader; -import org.elasticsearch.client.http.nio.entity.NStringEntity; -import org.elasticsearch.client.http.util.EntityUtils; +import org.apache.http.Header; +import org.apache.http.HttpEntity; +import org.apache.http.HttpHost; +import org.apache.http.RequestLine; +import org.apache.http.auth.AuthScope; +import org.apache.http.auth.UsernamePasswordCredentials; +import org.apache.http.client.CredentialsProvider; +import org.apache.http.client.config.RequestConfig; +import org.apache.http.entity.ContentType; +import org.apache.http.impl.client.BasicCredentialsProvider; +import org.apache.http.impl.nio.client.HttpAsyncClientBuilder; +import org.apache.http.impl.nio.reactor.IOReactorConfig; +import org.apache.http.message.BasicHeader; +import org.apache.http.nio.entity.NStringEntity; +import org.apache.http.ssl.SSLContextBuilder; +import org.apache.http.ssl.SSLContexts; +import org.apache.http.util.EntityUtils; import org.elasticsearch.client.HttpAsyncResponseConsumerFactory; import org.elasticsearch.client.Response; import org.elasticsearch.client.ResponseListener; @@ -47,9 +49,6 @@ import java.nio.file.Path; import java.nio.file.Paths; import 
java.security.KeyStore; -import java.security.KeyStoreException; -import java.security.NoSuchAlgorithmException; -import java.security.cert.CertificateException; import java.util.Collections; import java.util.Map; import java.util.concurrent.CountDownLatch; @@ -258,7 +257,7 @@ public void onFailure(Exception exception) { } @SuppressWarnings("unused") - public void testCommonConfiguration() throws IOException, KeyStoreException, CertificateException, NoSuchAlgorithmException { + public void testCommonConfiguration() throws Exception { { //tag::rest-client-config-timeouts RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200)) @@ -318,13 +317,14 @@ public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpCli { Path keyStorePath = Paths.get(""); String keyStorePass = ""; - final SSLContext sslContext = null; //tag::rest-client-config-encrypted-communication - KeyStore keystore = KeyStore.getInstance("jks"); + KeyStore truststore = KeyStore.getInstance("jks"); try (InputStream is = Files.newInputStream(keyStorePath)) { - keystore.load(is, keyStorePass.toCharArray()); + truststore.load(is, keyStorePass.toCharArray()); } - RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200)) + SSLContextBuilder sslBuilder = SSLContexts.custom().loadTrustMaterial(truststore, null); + final SSLContext sslContext = sslBuilder.build(); + RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200, "https")) .setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() { @Override public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) { diff --git a/client/sniffer/build.gradle b/client/sniffer/build.gradle index a212bdd0f2184..fdf624e1daa7f 100644 --- a/client/sniffer/build.gradle +++ b/client/sniffer/build.gradle @@ -40,6 +40,8 @@ publishing { dependencies { compile "org.elasticsearch.client:elasticsearch-rest-client:${version}" + compile "org.apache.httpcomponents:httpclient:${versions.httpclient}" + compile "org.apache.httpcomponents:httpcore:${versions.httpcore}" compile "commons-codec:commons-codec:${versions.commonscodec}" compile "commons-logging:commons-logging:${versions.commonslogging}" compile "com.fasterxml.jackson.core:jackson-core:${versions.jackson}" @@ -99,4 +101,4 @@ thirdPartyAudit.excludes = [ //commons-logging provided dependencies 'javax.servlet.ServletContextEvent', 'javax.servlet.ServletContextListener' -] +] \ No newline at end of file diff --git a/client/sniffer/licenses/httpclient-4.5.2.jar.sha1 b/client/sniffer/licenses/httpclient-4.5.2.jar.sha1 new file mode 100644 index 0000000000000..6937112a09fb6 --- /dev/null +++ b/client/sniffer/licenses/httpclient-4.5.2.jar.sha1 @@ -0,0 +1 @@ +733db77aa8d9b2d68015189df76ab06304406e50 \ No newline at end of file diff --git a/client/sniffer/licenses/httpclient-LICENSE.txt b/client/sniffer/licenses/httpclient-LICENSE.txt new file mode 100644 index 0000000000000..32f01eda18fe9 --- /dev/null +++ b/client/sniffer/licenses/httpclient-LICENSE.txt @@ -0,0 +1,558 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. 
+ + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. 
This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + +========================================================================= + +This project includes Public Suffix List copied from + +licensed under the terms of the Mozilla Public License, v. 2.0 + +Full license text: + +Mozilla Public License Version 2.0 +================================== + +1. Definitions +-------------- + +1.1. "Contributor" + means each individual or legal entity that creates, contributes to + the creation of, or owns Covered Software. + +1.2. "Contributor Version" + means the combination of the Contributions of others (if any) used + by a Contributor and that particular Contributor's Contribution. + +1.3. "Contribution" + means Covered Software of a particular Contributor. + +1.4. "Covered Software" + means Source Code Form to which the initial Contributor has attached + the notice in Exhibit A, the Executable Form of such Source Code + Form, and Modifications of such Source Code Form, in each case + including portions thereof. + +1.5. 
"Incompatible With Secondary Licenses" + means + + (a) that the initial Contributor has attached the notice described + in Exhibit B to the Covered Software; or + + (b) that the Covered Software was made available under the terms of + version 1.1 or earlier of the License, but not also under the + terms of a Secondary License. + +1.6. "Executable Form" + means any form of the work other than Source Code Form. + +1.7. "Larger Work" + means a work that combines Covered Software with other material, in + a separate file or files, that is not Covered Software. + +1.8. "License" + means this document. + +1.9. "Licensable" + means having the right to grant, to the maximum extent possible, + whether at the time of the initial grant or subsequently, any and + all of the rights conveyed by this License. + +1.10. "Modifications" + means any of the following: + + (a) any file in Source Code Form that results from an addition to, + deletion from, or modification of the contents of Covered + Software; or + + (b) any new file in Source Code Form that contains any Covered + Software. + +1.11. "Patent Claims" of a Contributor + means any patent claim(s), including without limitation, method, + process, and apparatus claims, in any patent Licensable by such + Contributor that would be infringed, but for the grant of the + License, by the making, using, selling, offering for sale, having + made, import, or transfer of either its Contributions or its + Contributor Version. + +1.12. "Secondary License" + means either the GNU General Public License, Version 2.0, the GNU + Lesser General Public License, Version 2.1, the GNU Affero General + Public License, Version 3.0, or any later versions of those + licenses. + +1.13. "Source Code Form" + means the form of the work preferred for making modifications. + +1.14. "You" (or "Your") + means an individual or a legal entity exercising rights under this + License. For legal entities, "You" includes any entity that + controls, is controlled by, or is under common control with You. For + purposes of this definition, "control" means (a) the power, direct + or indirect, to cause the direction or management of such entity, + whether by contract or otherwise, or (b) ownership of more than + fifty percent (50%) of the outstanding shares or beneficial + ownership of such entity. + +2. License Grants and Conditions +-------------------------------- + +2.1. Grants + +Each Contributor hereby grants You a world-wide, royalty-free, +non-exclusive license: + +(a) under intellectual property rights (other than patent or trademark) + Licensable by such Contributor to use, reproduce, make available, + modify, display, perform, distribute, and otherwise exploit its + Contributions, either on an unmodified basis, with Modifications, or + as part of a Larger Work; and + +(b) under Patent Claims of such Contributor to make, use, sell, offer + for sale, have made, import, and otherwise transfer either its + Contributions or its Contributor Version. + +2.2. Effective Date + +The licenses granted in Section 2.1 with respect to any Contribution +become effective for each Contribution on the date the Contributor first +distributes such Contribution. + +2.3. Limitations on Grant Scope + +The licenses granted in this Section 2 are the only rights granted under +this License. No additional rights or licenses will be implied from the +distribution or licensing of Covered Software under this License. 
+Notwithstanding Section 2.1(b) above, no patent license is granted by a +Contributor: + +(a) for any code that a Contributor has removed from Covered Software; + or + +(b) for infringements caused by: (i) Your and any other third party's + modifications of Covered Software, or (ii) the combination of its + Contributions with other software (except as part of its Contributor + Version); or + +(c) under Patent Claims infringed by Covered Software in the absence of + its Contributions. + +This License does not grant any rights in the trademarks, service marks, +or logos of any Contributor (except as may be necessary to comply with +the notice requirements in Section 3.4). + +2.4. Subsequent Licenses + +No Contributor makes additional grants as a result of Your choice to +distribute the Covered Software under a subsequent version of this +License (see Section 10.2) or under the terms of a Secondary License (if +permitted under the terms of Section 3.3). + +2.5. Representation + +Each Contributor represents that the Contributor believes its +Contributions are its original creation(s) or it has sufficient rights +to grant the rights to its Contributions conveyed by this License. + +2.6. Fair Use + +This License is not intended to limit any rights You have under +applicable copyright doctrines of fair use, fair dealing, or other +equivalents. + +2.7. Conditions + +Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted +in Section 2.1. + +3. Responsibilities +------------------- + +3.1. Distribution of Source Form + +All distribution of Covered Software in Source Code Form, including any +Modifications that You create or to which You contribute, must be under +the terms of this License. You must inform recipients that the Source +Code Form of the Covered Software is governed by the terms of this +License, and how they can obtain a copy of this License. You may not +attempt to alter or restrict the recipients' rights in the Source Code +Form. + +3.2. Distribution of Executable Form + +If You distribute Covered Software in Executable Form then: + +(a) such Covered Software must also be made available in Source Code + Form, as described in Section 3.1, and You must inform recipients of + the Executable Form how they can obtain a copy of such Source Code + Form by reasonable means in a timely manner, at a charge no more + than the cost of distribution to the recipient; and + +(b) You may distribute such Executable Form under the terms of this + License, or sublicense it under different terms, provided that the + license for the Executable Form does not attempt to limit or alter + the recipients' rights in the Source Code Form under this License. + +3.3. Distribution of a Larger Work + +You may create and distribute a Larger Work under terms of Your choice, +provided that You also comply with the requirements of this License for +the Covered Software. If the Larger Work is a combination of Covered +Software with a work governed by one or more Secondary Licenses, and the +Covered Software is not Incompatible With Secondary Licenses, this +License permits You to additionally distribute such Covered Software +under the terms of such Secondary License(s), so that the recipient of +the Larger Work may, at their option, further distribute the Covered +Software under the terms of either this License or such Secondary +License(s). + +3.4. 
Notices + +You may not remove or alter the substance of any license notices +(including copyright notices, patent notices, disclaimers of warranty, +or limitations of liability) contained within the Source Code Form of +the Covered Software, except that You may alter any license notices to +the extent required to remedy known factual inaccuracies. + +3.5. Application of Additional Terms + +You may choose to offer, and to charge a fee for, warranty, support, +indemnity or liability obligations to one or more recipients of Covered +Software. However, You may do so only on Your own behalf, and not on +behalf of any Contributor. You must make it absolutely clear that any +such warranty, support, indemnity, or liability obligation is offered by +You alone, and You hereby agree to indemnify every Contributor for any +liability incurred by such Contributor as a result of warranty, support, +indemnity or liability terms You offer. You may include additional +disclaimers of warranty and limitations of liability specific to any +jurisdiction. + +4. Inability to Comply Due to Statute or Regulation +--------------------------------------------------- + +If it is impossible for You to comply with any of the terms of this +License with respect to some or all of the Covered Software due to +statute, judicial order, or regulation then You must: (a) comply with +the terms of this License to the maximum extent possible; and (b) +describe the limitations and the code they affect. Such description must +be placed in a text file included with all distributions of the Covered +Software under this License. Except to the extent prohibited by statute +or regulation, such description must be sufficiently detailed for a +recipient of ordinary skill to be able to understand it. + +5. Termination +-------------- + +5.1. The rights granted under this License will terminate automatically +if You fail to comply with any of its terms. However, if You become +compliant, then the rights granted under this License from a particular +Contributor are reinstated (a) provisionally, unless and until such +Contributor explicitly and finally terminates Your grants, and (b) on an +ongoing basis, if such Contributor fails to notify You of the +non-compliance by some reasonable means prior to 60 days after You have +come back into compliance. Moreover, Your grants from a particular +Contributor are reinstated on an ongoing basis if such Contributor +notifies You of the non-compliance by some reasonable means, this is the +first time You have received notice of non-compliance with this License +from such Contributor, and You become compliant prior to 30 days after +Your receipt of the notice. + +5.2. If You initiate litigation against any entity by asserting a patent +infringement claim (excluding declaratory judgment actions, +counter-claims, and cross-claims) alleging that a Contributor Version +directly or indirectly infringes any patent, then the rights granted to +You by any and all Contributors for the Covered Software under Section +2.1 of this License shall terminate. + +5.3. In the event of termination under Sections 5.1 or 5.2 above, all +end user license agreements (excluding distributors and resellers) which +have been validly granted by You or Your distributors under this License +prior to termination shall survive termination. + +************************************************************************ +* * +* 6. 
Disclaimer of Warranty * +* ------------------------- * +* * +* Covered Software is provided under this License on an "as is" * +* basis, without warranty of any kind, either expressed, implied, or * +* statutory, including, without limitation, warranties that the * +* Covered Software is free of defects, merchantable, fit for a * +* particular purpose or non-infringing. The entire risk as to the * +* quality and performance of the Covered Software is with You. * +* Should any Covered Software prove defective in any respect, You * +* (not any Contributor) assume the cost of any necessary servicing, * +* repair, or correction. This disclaimer of warranty constitutes an * +* essential part of this License. No use of any Covered Software is * +* authorized under this License except under this disclaimer. * +* * +************************************************************************ + +************************************************************************ +* * +* 7. Limitation of Liability * +* -------------------------- * +* * +* Under no circumstances and under no legal theory, whether tort * +* (including negligence), contract, or otherwise, shall any * +* Contributor, or anyone who distributes Covered Software as * +* permitted above, be liable to You for any direct, indirect, * +* special, incidental, or consequential damages of any character * +* including, without limitation, damages for lost profits, loss of * +* goodwill, work stoppage, computer failure or malfunction, or any * +* and all other commercial damages or losses, even if such party * +* shall have been informed of the possibility of such damages. This * +* limitation of liability shall not apply to liability for death or * +* personal injury resulting from such party's negligence to the * +* extent applicable law prohibits such limitation. Some * +* jurisdictions do not allow the exclusion or limitation of * +* incidental or consequential damages, so this exclusion and * +* limitation may not apply to You. * +* * +************************************************************************ + +8. Litigation +------------- + +Any litigation relating to this License may be brought only in the +courts of a jurisdiction where the defendant maintains its principal +place of business and such litigation shall be governed by laws of that +jurisdiction, without reference to its conflict-of-law provisions. +Nothing in this Section shall prevent a party's ability to bring +cross-claims or counter-claims. + +9. Miscellaneous +---------------- + +This License represents the complete agreement concerning the subject +matter hereof. If any provision of this License is held to be +unenforceable, such provision shall be reformed only to the extent +necessary to make it enforceable. Any law or regulation which provides +that the language of a contract shall be construed against the drafter +shall not be used to construe this License against a Contributor. + +10. Versions of the License +--------------------------- + +10.1. New Versions + +Mozilla Foundation is the license steward. Except as provided in Section +10.3, no one other than the license steward has the right to modify or +publish new versions of this License. Each version will be given a +distinguishing version number. + +10.2. Effect of New Versions + +You may distribute the Covered Software under the terms of the version +of the License under which You originally received the Covered Software, +or under the terms of any subsequent version published by the license +steward. 
+ +10.3. Modified Versions + +If you create software not governed by this License, and you want to +create a new license for such software, you may create and use a +modified version of this License if you rename the license and remove +any references to the name of the license steward (except to note that +such modified license differs from this License). + +10.4. Distributing Source Code Form that is Incompatible With Secondary +Licenses + +If You choose to distribute Source Code Form that is Incompatible With +Secondary Licenses under the terms of this version of the License, the +notice described in Exhibit B of this License must be attached. + +Exhibit A - Source Code Form License Notice +------------------------------------------- + + This Source Code Form is subject to the terms of the Mozilla Public + License, v. 2.0. If a copy of the MPL was not distributed with this + file, You can obtain one at http://mozilla.org/MPL/2.0/. + +If it is not possible or desirable to put the notice in a particular +file, then You may include the notice in a location (such as a LICENSE +file in a relevant directory) where a recipient would be likely to look +for such a notice. + +You may add additional accurate notices of copyright ownership. + +Exhibit B - "Incompatible With Secondary Licenses" Notice +--------------------------------------------------------- + + This Source Code Form is "Incompatible With Secondary Licenses", as + defined by the Mozilla Public License, v. 2.0. diff --git a/client/sniffer/licenses/httpclient-NOTICE.txt b/client/sniffer/licenses/httpclient-NOTICE.txt new file mode 100644 index 0000000000000..91e5c40c4c6d3 --- /dev/null +++ b/client/sniffer/licenses/httpclient-NOTICE.txt @@ -0,0 +1,6 @@ +Apache HttpComponents Client +Copyright 1999-2016 The Apache Software Foundation + +This product includes software developed at +The Apache Software Foundation (http://www.apache.org/). 
+ diff --git a/client/sniffer/licenses/httpcore-4.4.5.jar.sha1 b/client/sniffer/licenses/httpcore-4.4.5.jar.sha1 new file mode 100644 index 0000000000000..581726601745b --- /dev/null +++ b/client/sniffer/licenses/httpcore-4.4.5.jar.sha1 @@ -0,0 +1 @@ +e7501a1b34325abb00d17dde96150604a0658b54 \ No newline at end of file diff --git a/client/sniffer/licenses/jackson-core-2.8.10.jar.sha1 b/client/sniffer/licenses/jackson-core-2.8.10.jar.sha1 new file mode 100644 index 0000000000000..a322d371e265e --- /dev/null +++ b/client/sniffer/licenses/jackson-core-2.8.10.jar.sha1 @@ -0,0 +1 @@ +eb21a035c66ad307e66ec8fce37f5d50fd62d039 \ No newline at end of file diff --git a/client/sniffer/licenses/jackson-core-2.8.6.jar.sha1 b/client/sniffer/licenses/jackson-core-2.8.6.jar.sha1 deleted file mode 100644 index af7677d13c28c..0000000000000 --- a/client/sniffer/licenses/jackson-core-2.8.6.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -2ef7b1cc34de149600f5e75bc2d5bf40de894e60 \ No newline at end of file diff --git a/client/sniffer/src/main/java/org/elasticsearch/client/sniff/ElasticsearchHostsSniffer.java b/client/sniffer/src/main/java/org/elasticsearch/client/sniff/ElasticsearchHostsSniffer.java index e924449d64d41..34a4988358653 100644 --- a/client/sniffer/src/main/java/org/elasticsearch/client/sniff/ElasticsearchHostsSniffer.java +++ b/client/sniffer/src/main/java/org/elasticsearch/client/sniff/ElasticsearchHostsSniffer.java @@ -24,8 +24,8 @@ import com.fasterxml.jackson.core.JsonToken; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.HttpHost; +import org.apache.http.HttpEntity; +import org.apache.http.HttpHost; import org.elasticsearch.client.Response; import org.elasticsearch.client.RestClient; diff --git a/client/sniffer/src/main/java/org/elasticsearch/client/sniff/HostsSniffer.java b/client/sniffer/src/main/java/org/elasticsearch/client/sniff/HostsSniffer.java index 02c727a7b898d..9eb7b34425944 100644 --- a/client/sniffer/src/main/java/org/elasticsearch/client/sniff/HostsSniffer.java +++ b/client/sniffer/src/main/java/org/elasticsearch/client/sniff/HostsSniffer.java @@ -19,7 +19,7 @@ package org.elasticsearch.client.sniff; -import org.elasticsearch.client.http.HttpHost; +import org.apache.http.HttpHost; import java.io.IOException; import java.util.List; diff --git a/client/sniffer/src/main/java/org/elasticsearch/client/sniff/SniffOnFailureListener.java b/client/sniffer/src/main/java/org/elasticsearch/client/sniff/SniffOnFailureListener.java index 8510ddd2947a7..cbc77351de98b 100644 --- a/client/sniffer/src/main/java/org/elasticsearch/client/sniff/SniffOnFailureListener.java +++ b/client/sniffer/src/main/java/org/elasticsearch/client/sniff/SniffOnFailureListener.java @@ -19,7 +19,7 @@ package org.elasticsearch.client.sniff; -import org.elasticsearch.client.http.HttpHost; +import org.apache.http.HttpHost; import org.elasticsearch.client.RestClient; import java.util.Objects; diff --git a/client/sniffer/src/main/java/org/elasticsearch/client/sniff/Sniffer.java b/client/sniffer/src/main/java/org/elasticsearch/client/sniff/Sniffer.java index c887be53ec42b..c655babd9ed3d 100644 --- a/client/sniffer/src/main/java/org/elasticsearch/client/sniff/Sniffer.java +++ b/client/sniffer/src/main/java/org/elasticsearch/client/sniff/Sniffer.java @@ -21,18 +21,22 @@ import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.elasticsearch.client.http.HttpHost; +import 
org.apache.http.HttpHost; import org.elasticsearch.client.RestClient; import org.elasticsearch.client.RestClientBuilder; import java.io.Closeable; import java.io.IOException; +import java.security.AccessController; +import java.security.PrivilegedAction; import java.util.List; import java.util.concurrent.Executors; import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.ScheduledFuture; +import java.util.concurrent.ThreadFactory; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicInteger; /** * Class responsible for sniffing nodes from some source (default is elasticsearch itself) and setting them to a provided instance of @@ -45,6 +49,7 @@ public class Sniffer implements Closeable { private static final Log logger = LogFactory.getLog(Sniffer.class); + private static final String SNIFFER_THREAD_NAME = "es_rest_client_sniffer"; private final Task task; @@ -79,7 +84,8 @@ private Task(HostsSniffer hostsSniffer, RestClient restClient, long sniffInterva this.restClient = restClient; this.sniffIntervalMillis = sniffIntervalMillis; this.sniffAfterFailureDelayMillis = sniffAfterFailureDelayMillis; - this.scheduledExecutorService = Executors.newScheduledThreadPool(1); + SnifferThreadFactory threadFactory = new SnifferThreadFactory(SNIFFER_THREAD_NAME); + this.scheduledExecutorService = Executors.newScheduledThreadPool(1, threadFactory); scheduleNextRun(0); } @@ -151,4 +157,34 @@ synchronized void shutdown() { public static SnifferBuilder builder(RestClient restClient) { return new SnifferBuilder(restClient); } + + private static class SnifferThreadFactory implements ThreadFactory { + + private final AtomicInteger threadNumber = new AtomicInteger(1); + private final String namePrefix; + private final ThreadFactory originalThreadFactory; + + private SnifferThreadFactory(String namePrefix) { + this.namePrefix = namePrefix; + this.originalThreadFactory = AccessController.doPrivileged(new PrivilegedAction<ThreadFactory>() { + @Override + public ThreadFactory run() { + return Executors.defaultThreadFactory(); + } + }); + } + + @Override + public Thread newThread(final Runnable r) { + return AccessController.doPrivileged(new PrivilegedAction<Thread>() { + @Override + public Thread run() { + Thread t = originalThreadFactory.newThread(r); + t.setName(namePrefix + "[T#" + threadNumber.getAndIncrement() + "]"); + t.setDaemon(true); + return t; + } + }); + } + } } diff --git a/client/sniffer/src/test/java/org/elasticsearch/client/sniff/ElasticsearchHostsSnifferTests.java b/client/sniffer/src/test/java/org/elasticsearch/client/sniff/ElasticsearchHostsSnifferTests.java index 531567d79150f..483b7df62f95a 100644 --- a/client/sniffer/src/test/java/org/elasticsearch/client/sniff/ElasticsearchHostsSnifferTests.java +++ b/client/sniffer/src/test/java/org/elasticsearch/client/sniff/ElasticsearchHostsSnifferTests.java @@ -27,9 +27,9 @@ import com.sun.net.httpserver.HttpExchange; import com.sun.net.httpserver.HttpHandler; import com.sun.net.httpserver.HttpServer; -import org.elasticsearch.client.http.Consts; -import org.elasticsearch.client.http.HttpHost; -import org.elasticsearch.client.http.client.methods.HttpGet; +import org.apache.http.Consts; +import org.apache.http.HttpHost; +import org.apache.http.client.methods.HttpGet; import org.codehaus.mojo.animal_sniffer.IgnoreJRERequirement; import org.elasticsearch.client.Response; import org.elasticsearch.client.ResponseException; @@ -55,6 +55,7 @@ import static
org.hamcrest.CoreMatchers.containsString; import static org.hamcrest.CoreMatchers.equalTo; +import static org.hamcrest.CoreMatchers.startsWith; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertThat; import static org.junit.Assert.fail; @@ -128,7 +129,9 @@ public void testSniffNodes() throws IOException { } catch(ResponseException e) { Response response = e.getResponse(); if (sniffResponse.isFailure) { - assertThat(e.getMessage(), containsString("GET " + httpHost + "/_nodes/http?timeout=" + sniffRequestTimeout + "ms")); + final String errorPrefix = "method [GET], host [" + httpHost + "], URI [/_nodes/http?timeout=" + sniffRequestTimeout + + "ms], status line [HTTP/1.1"; + assertThat(e.getMessage(), startsWith(errorPrefix)); assertThat(e.getMessage(), containsString(Integer.toString(sniffResponse.nodesInfoResponseCode))); assertThat(response.getHost(), equalTo(httpHost)); assertThat(response.getStatusLine().getStatusCode(), equalTo(sniffResponse.nodesInfoResponseCode)); diff --git a/client/sniffer/src/test/java/org/elasticsearch/client/sniff/MockHostsSniffer.java b/client/sniffer/src/test/java/org/elasticsearch/client/sniff/MockHostsSniffer.java index 50fd52869dd2d..5a52151d76e01 100644 --- a/client/sniffer/src/test/java/org/elasticsearch/client/sniff/MockHostsSniffer.java +++ b/client/sniffer/src/test/java/org/elasticsearch/client/sniff/MockHostsSniffer.java @@ -19,7 +19,7 @@ package org.elasticsearch.client.sniff; -import org.elasticsearch.client.http.HttpHost; +import org.apache.http.HttpHost; import java.io.IOException; import java.util.Collections; diff --git a/client/sniffer/src/test/java/org/elasticsearch/client/sniff/SniffOnFailureListenerTests.java b/client/sniffer/src/test/java/org/elasticsearch/client/sniff/SniffOnFailureListenerTests.java index 8aee847f911de..1fece270ffe0d 100644 --- a/client/sniffer/src/test/java/org/elasticsearch/client/sniff/SniffOnFailureListenerTests.java +++ b/client/sniffer/src/test/java/org/elasticsearch/client/sniff/SniffOnFailureListenerTests.java @@ -19,7 +19,7 @@ package org.elasticsearch.client.sniff; -import org.elasticsearch.client.http.HttpHost; +import org.apache.http.HttpHost; import org.elasticsearch.client.RestClient; import org.elasticsearch.client.RestClientTestCase; diff --git a/client/sniffer/src/test/java/org/elasticsearch/client/sniff/SnifferBuilderTests.java b/client/sniffer/src/test/java/org/elasticsearch/client/sniff/SnifferBuilderTests.java index ac54c6337211f..9a7359e9c7215 100644 --- a/client/sniffer/src/test/java/org/elasticsearch/client/sniff/SnifferBuilderTests.java +++ b/client/sniffer/src/test/java/org/elasticsearch/client/sniff/SnifferBuilderTests.java @@ -20,7 +20,7 @@ package org.elasticsearch.client.sniff; import com.carrotsearch.randomizedtesting.generators.RandomNumbers; -import org.elasticsearch.client.http.HttpHost; +import org.apache.http.HttpHost; import org.elasticsearch.client.RestClient; import org.elasticsearch.client.RestClientTestCase; diff --git a/client/sniffer/src/test/java/org/elasticsearch/client/sniff/documentation/SnifferDocumentation.java b/client/sniffer/src/test/java/org/elasticsearch/client/sniff/documentation/SnifferDocumentation.java index 790a7006a9708..199632d478f81 100644 --- a/client/sniffer/src/test/java/org/elasticsearch/client/sniff/documentation/SnifferDocumentation.java +++ b/client/sniffer/src/test/java/org/elasticsearch/client/sniff/documentation/SnifferDocumentation.java @@ -19,7 +19,7 @@ package org.elasticsearch.client.sniff.documentation; -import 
org.elasticsearch.client.http.HttpHost; +import org.apache.http.HttpHost; import org.elasticsearch.client.RestClient; import org.elasticsearch.client.sniff.ElasticsearchHostsSniffer; import org.elasticsearch.client.sniff.HostsSniffer; diff --git a/client/test/build.gradle b/client/test/build.gradle index 771901b090d8f..e57d415e9eaab 100644 --- a/client/test/build.gradle +++ b/client/test/build.gradle @@ -18,6 +18,7 @@ */ import org.elasticsearch.gradle.precommit.PrecommitTasks +import org.gradle.api.JavaVersion apply plugin: 'elasticsearch.build' apply plugin: 'ru.vyarus.animalsniffer' @@ -26,7 +27,7 @@ targetCompatibility = JavaVersion.VERSION_1_7 sourceCompatibility = JavaVersion.VERSION_1_7 dependencies { - compile "org.elasticsearch.client:elasticsearch-rest-client:${version}" + compile "org.apache.httpcomponents:httpcore:${versions.httpcore}" compile "com.carrotsearch.randomizedtesting:randomizedtesting-runner:${versions.randomizedrunner}" compile "junit:junit:${versions.junit}" compile "org.hamcrest:hamcrest-all:${versions.hamcrest}" diff --git a/client/test/src/main/java/org/elasticsearch/client/RestClientTestCase.java b/client/test/src/main/java/org/elasticsearch/client/RestClientTestCase.java index 76581ff252184..6a2a45ef2813c 100644 --- a/client/test/src/main/java/org/elasticsearch/client/RestClientTestCase.java +++ b/client/test/src/main/java/org/elasticsearch/client/RestClientTestCase.java @@ -30,7 +30,7 @@ import com.carrotsearch.randomizedtesting.annotations.ThreadLeakScope; import com.carrotsearch.randomizedtesting.annotations.ThreadLeakZombies; import com.carrotsearch.randomizedtesting.annotations.TimeoutSuite; -import org.elasticsearch.client.http.Header; +import org.apache.http.Header; import java.util.ArrayList; import java.util.HashMap; diff --git a/client/test/src/main/java/org/elasticsearch/client/RestClientTestUtil.java b/client/test/src/main/java/org/elasticsearch/client/RestClientTestUtil.java index d9953b15e6dbb..a0a6641abbc5f 100644 --- a/client/test/src/main/java/org/elasticsearch/client/RestClientTestUtil.java +++ b/client/test/src/main/java/org/elasticsearch/client/RestClientTestUtil.java @@ -22,8 +22,8 @@ import com.carrotsearch.randomizedtesting.generators.RandomNumbers; import com.carrotsearch.randomizedtesting.generators.RandomPicks; import com.carrotsearch.randomizedtesting.generators.RandomStrings; -import org.elasticsearch.client.http.Header; -import org.elasticsearch.client.http.message.BasicHeader; +import org.apache.http.Header; +import org.apache.http.message.BasicHeader; import java.util.ArrayList; import java.util.Arrays; diff --git a/client/transport/build.gradle b/client/transport/build.gradle index b2edc9c8fcd8f..f09668ac6acfc 100644 --- a/client/transport/build.gradle +++ b/client/transport/build.gradle @@ -54,4 +54,4 @@ namingConventions { testClass = 'com.carrotsearch.randomizedtesting.RandomizedTest' //we don't have integration tests skipIntegTestInDisguise = true -} +} \ No newline at end of file diff --git a/core/build.gradle b/core/build.gradle index 5791631df72b6..fe60cd8b1cf6b 100644 --- a/core/build.gradle +++ b/core/build.gradle @@ -157,12 +157,11 @@ thirdPartyAudit.excludes = [ 'com.fasterxml.jackson.databind.ObjectMapper', // from log4j - 'com.beust.jcommander.IStringConverter', - 'com.beust.jcommander.JCommander', 'com.conversantmedia.util.concurrent.DisruptorBlockingQueue', 'com.conversantmedia.util.concurrent.SpinPolicy', 'com.fasterxml.jackson.annotation.JsonInclude$Include', 
'com.fasterxml.jackson.databind.DeserializationContext', + 'com.fasterxml.jackson.databind.DeserializationFeature', 'com.fasterxml.jackson.databind.JsonMappingException', 'com.fasterxml.jackson.databind.JsonNode', 'com.fasterxml.jackson.databind.Module$SetupContext', @@ -203,11 +202,11 @@ thirdPartyAudit.excludes = [ 'javax.jms.Connection', 'javax.jms.ConnectionFactory', 'javax.jms.Destination', + 'javax.jms.JMSException', + 'javax.jms.MapMessage', 'javax.jms.Message', 'javax.jms.MessageConsumer', - 'javax.jms.MessageListener', 'javax.jms.MessageProducer', - 'javax.jms.ObjectMessage', 'javax.jms.Session', 'javax.mail.Authenticator', 'javax.mail.Message$RecipientType', @@ -247,6 +246,7 @@ thirdPartyAudit.excludes = [ 'org.osgi.framework.BundleEvent', 'org.osgi.framework.BundleReference', 'org.osgi.framework.FrameworkUtil', + 'org.osgi.framework.ServiceRegistration', 'org.osgi.framework.SynchronousBundleListener', 'org.osgi.framework.wiring.BundleWire', 'org.osgi.framework.wiring.BundleWiring', diff --git a/core/licenses/jackson-core-2.8.10.jar.sha1 b/core/licenses/jackson-core-2.8.10.jar.sha1 new file mode 100644 index 0000000000000..a322d371e265e --- /dev/null +++ b/core/licenses/jackson-core-2.8.10.jar.sha1 @@ -0,0 +1 @@ +eb21a035c66ad307e66ec8fce37f5d50fd62d039 \ No newline at end of file diff --git a/core/licenses/jackson-core-2.8.6.jar.sha1 b/core/licenses/jackson-core-2.8.6.jar.sha1 deleted file mode 100644 index af7677d13c28c..0000000000000 --- a/core/licenses/jackson-core-2.8.6.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -2ef7b1cc34de149600f5e75bc2d5bf40de894e60 \ No newline at end of file diff --git a/core/licenses/jackson-dataformat-cbor-2.8.10.jar.sha1 b/core/licenses/jackson-dataformat-cbor-2.8.10.jar.sha1 new file mode 100644 index 0000000000000..1d3e18e21a694 --- /dev/null +++ b/core/licenses/jackson-dataformat-cbor-2.8.10.jar.sha1 @@ -0,0 +1 @@ +1c58cc9313ddf19f0900cd61ed044874278ce320 \ No newline at end of file diff --git a/core/licenses/jackson-dataformat-cbor-2.8.6.jar.sha1 b/core/licenses/jackson-dataformat-cbor-2.8.6.jar.sha1 deleted file mode 100644 index 6a2e980235381..0000000000000 --- a/core/licenses/jackson-dataformat-cbor-2.8.6.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -b88721371cfa2d7242bb5e52fe70861aa061c050 \ No newline at end of file diff --git a/core/licenses/jackson-dataformat-smile-2.8.10.jar.sha1 b/core/licenses/jackson-dataformat-smile-2.8.10.jar.sha1 new file mode 100644 index 0000000000000..4f4cacde22079 --- /dev/null +++ b/core/licenses/jackson-dataformat-smile-2.8.10.jar.sha1 @@ -0,0 +1 @@ +e853081fadaad3e98ed801937acc3d8f77580686 \ No newline at end of file diff --git a/core/licenses/jackson-dataformat-smile-2.8.6.jar.sha1 b/core/licenses/jackson-dataformat-smile-2.8.6.jar.sha1 deleted file mode 100644 index 19be9a2040bed..0000000000000 --- a/core/licenses/jackson-dataformat-smile-2.8.6.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -71590ad45cee21249774e2f93e5eca66e446cef3 \ No newline at end of file diff --git a/core/licenses/jackson-dataformat-yaml-2.8.10.jar.sha1 b/core/licenses/jackson-dataformat-yaml-2.8.10.jar.sha1 new file mode 100644 index 0000000000000..40bcb05f69795 --- /dev/null +++ b/core/licenses/jackson-dataformat-yaml-2.8.10.jar.sha1 @@ -0,0 +1 @@ +1e08caf1d787c825307d8cc6362452086020d853 \ No newline at end of file diff --git a/core/licenses/jackson-dataformat-yaml-2.8.6.jar.sha1 b/core/licenses/jackson-dataformat-yaml-2.8.6.jar.sha1 deleted file mode 100644 index c61dad3bbcdd7..0000000000000 --- a/core/licenses/jackson-dataformat-yaml-2.8.6.jar.sha1 +++ 
/dev/null @@ -1 +0,0 @@ -8bd44d50f9a6cdff9c7578ea39d524eb519e35ab \ No newline at end of file diff --git a/core/licenses/log4j-1.2-api-2.8.2.jar.sha1 b/core/licenses/log4j-1.2-api-2.8.2.jar.sha1 deleted file mode 100644 index 39d09bec71767..0000000000000 --- a/core/licenses/log4j-1.2-api-2.8.2.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -f1543534b8413aac91fa54d1fff65dfff76818cd \ No newline at end of file diff --git a/core/licenses/log4j-1.2-api-2.9.1.jar.sha1 b/core/licenses/log4j-1.2-api-2.9.1.jar.sha1 new file mode 100644 index 0000000000000..0b5acc62b7a13 --- /dev/null +++ b/core/licenses/log4j-1.2-api-2.9.1.jar.sha1 @@ -0,0 +1 @@ +894f96d677880d4ab834a1356f62b875e579caaa \ No newline at end of file diff --git a/core/licenses/log4j-api-2.8.2.jar.sha1 b/core/licenses/log4j-api-2.8.2.jar.sha1 deleted file mode 100644 index 7c7c1da835c92..0000000000000 --- a/core/licenses/log4j-api-2.8.2.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -e590eeb783348ce8ddef205b82127f9084d82bf3 \ No newline at end of file diff --git a/core/licenses/log4j-api-2.9.1.jar.sha1 b/core/licenses/log4j-api-2.9.1.jar.sha1 new file mode 100644 index 0000000000000..e1a89fadfed95 --- /dev/null +++ b/core/licenses/log4j-api-2.9.1.jar.sha1 @@ -0,0 +1 @@ +7a2999229464e7a324aa503c0a52ec0f05efe7bd \ No newline at end of file diff --git a/core/licenses/log4j-core-2.8.2.jar.sha1 b/core/licenses/log4j-core-2.8.2.jar.sha1 deleted file mode 100644 index 4e6c7b4fcc365..0000000000000 --- a/core/licenses/log4j-core-2.8.2.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -979fc0cf8460302e4ffbfe38c1b66a99450b0bb7 \ No newline at end of file diff --git a/core/licenses/log4j-core-2.9.1.jar.sha1 b/core/licenses/log4j-core-2.9.1.jar.sha1 new file mode 100644 index 0000000000000..990ea322a7613 --- /dev/null +++ b/core/licenses/log4j-core-2.9.1.jar.sha1 @@ -0,0 +1 @@ +c041978c686866ee8534f538c6220238db3bb6be \ No newline at end of file diff --git a/core/licenses/lucene-NOTICE.txt b/core/licenses/lucene-NOTICE.txt index ecf08201a5ee6..1a1d51572432a 100644 --- a/core/licenses/lucene-NOTICE.txt +++ b/core/licenses/lucene-NOTICE.txt @@ -54,13 +54,14 @@ The KStem stemmer in was developed by Bob Krovetz and Sergio Guzman-Lara (CIIR-UMass Amherst) under the BSD-license. -The Arabic,Persian,Romanian,Bulgarian, and Hindi analyzers (common) come with a default +The Arabic,Persian,Romanian,Bulgarian, Hindi and Bengali analyzers (common) come with a default stopword list that is BSD-licensed created by Jacques Savoy. These files reside in: analysis/common/src/resources/org/apache/lucene/analysis/ar/stopwords.txt, analysis/common/src/resources/org/apache/lucene/analysis/fa/stopwords.txt, analysis/common/src/resources/org/apache/lucene/analysis/ro/stopwords.txt, analysis/common/src/resources/org/apache/lucene/analysis/bg/stopwords.txt, -analysis/common/src/resources/org/apache/lucene/analysis/hi/stopwords.txt +analysis/common/src/resources/org/apache/lucene/analysis/hi/stopwords.txt, +analysis/common/src/resources/org/apache/lucene/analysis/bn/stopwords.txt See http://members.unine.ch/jacques.savoy/clef/index.html. 
The German,Spanish,Finnish,French,Hungarian,Italian,Portuguese,Russian and Swedish light stemmers diff --git a/core/licenses/lucene-analyzers-common-7.0.0-snapshot-a128fcb.jar.sha1 b/core/licenses/lucene-analyzers-common-7.0.0-snapshot-a128fcb.jar.sha1 deleted file mode 100644 index 564779519e3cc..0000000000000 --- a/core/licenses/lucene-analyzers-common-7.0.0-snapshot-a128fcb.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -6d3d4e90c7ffaf560a4e5c945f41b4c229d357b4 \ No newline at end of file diff --git a/core/licenses/lucene-analyzers-common-7.1.0.jar.sha1 b/core/licenses/lucene-analyzers-common-7.1.0.jar.sha1 new file mode 100644 index 0000000000000..880d261cb89d2 --- /dev/null +++ b/core/licenses/lucene-analyzers-common-7.1.0.jar.sha1 @@ -0,0 +1 @@ +a508bf6b580471ee568dab7d2acfedfa5aadce70 \ No newline at end of file diff --git a/core/licenses/lucene-backward-codecs-7.0.0-snapshot-a128fcb.jar.sha1 b/core/licenses/lucene-backward-codecs-7.0.0-snapshot-a128fcb.jar.sha1 deleted file mode 100644 index ea6eaedf095e3..0000000000000 --- a/core/licenses/lucene-backward-codecs-7.0.0-snapshot-a128fcb.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -e83bbddd0d6c347e0cd8d55549d24859d89de76b \ No newline at end of file diff --git a/core/licenses/lucene-backward-codecs-7.1.0.jar.sha1 b/core/licenses/lucene-backward-codecs-7.1.0.jar.sha1 new file mode 100644 index 0000000000000..ec597be207dd5 --- /dev/null +++ b/core/licenses/lucene-backward-codecs-7.1.0.jar.sha1 @@ -0,0 +1 @@ +804a7ce82bba3d085733486bfde4846ecb77ce01 \ No newline at end of file diff --git a/core/licenses/lucene-core-7.0.0-snapshot-a128fcb.jar.sha1 b/core/licenses/lucene-core-7.0.0-snapshot-a128fcb.jar.sha1 deleted file mode 100644 index 54c8214a14177..0000000000000 --- a/core/licenses/lucene-core-7.0.0-snapshot-a128fcb.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -df94f460d5dc1c49726c8c0d72e40579c998b9ab \ No newline at end of file diff --git a/core/licenses/lucene-core-7.1.0.jar.sha1 b/core/licenses/lucene-core-7.1.0.jar.sha1 new file mode 100644 index 0000000000000..9e8112996604d --- /dev/null +++ b/core/licenses/lucene-core-7.1.0.jar.sha1 @@ -0,0 +1 @@ +dd291b7ebf4845483895724d2562214dc7f40049 \ No newline at end of file diff --git a/core/licenses/lucene-grouping-7.0.0-snapshot-a128fcb.jar.sha1 b/core/licenses/lucene-grouping-7.0.0-snapshot-a128fcb.jar.sha1 deleted file mode 100644 index fb2e6153fa510..0000000000000 --- a/core/licenses/lucene-grouping-7.0.0-snapshot-a128fcb.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -997310d0bf1a38cef7f52c988cb92fbeab0ca916 \ No newline at end of file diff --git a/core/licenses/lucene-grouping-7.1.0.jar.sha1 b/core/licenses/lucene-grouping-7.1.0.jar.sha1 new file mode 100644 index 0000000000000..3c4963f4460e9 --- /dev/null +++ b/core/licenses/lucene-grouping-7.1.0.jar.sha1 @@ -0,0 +1 @@ +0732d16c16421fca058a2a07ca4081ec7696365b \ No newline at end of file diff --git a/core/licenses/lucene-highlighter-7.0.0-snapshot-a128fcb.jar.sha1 b/core/licenses/lucene-highlighter-7.0.0-snapshot-a128fcb.jar.sha1 deleted file mode 100644 index 7837394f00240..0000000000000 --- a/core/licenses/lucene-highlighter-7.0.0-snapshot-a128fcb.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -aa65bc87940864425d3eb92d2cbd0cd18c5e5117 \ No newline at end of file diff --git a/core/licenses/lucene-highlighter-7.1.0.jar.sha1 b/core/licenses/lucene-highlighter-7.1.0.jar.sha1 new file mode 100644 index 0000000000000..87f841e14677b --- /dev/null +++ b/core/licenses/lucene-highlighter-7.1.0.jar.sha1 @@ -0,0 +1 @@ +596550daabae765ad685112e0fe7c4f0fdfccb3f \ No newline at end of 
file diff --git a/core/licenses/lucene-join-7.0.0-snapshot-a128fcb.jar.sha1 b/core/licenses/lucene-join-7.0.0-snapshot-a128fcb.jar.sha1 deleted file mode 100644 index 1e9b098a786e1..0000000000000 --- a/core/licenses/lucene-join-7.0.0-snapshot-a128fcb.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -24934887aef7b938d674afd854a4e8b0dde073bf \ No newline at end of file diff --git a/core/licenses/lucene-join-7.1.0.jar.sha1 b/core/licenses/lucene-join-7.1.0.jar.sha1 new file mode 100644 index 0000000000000..774ec13c61451 --- /dev/null +++ b/core/licenses/lucene-join-7.1.0.jar.sha1 @@ -0,0 +1 @@ +5f26dd64c195258a81175772ef7fe105e7d60a26 \ No newline at end of file diff --git a/core/licenses/lucene-memory-7.0.0-snapshot-a128fcb.jar.sha1 b/core/licenses/lucene-memory-7.0.0-snapshot-a128fcb.jar.sha1 deleted file mode 100644 index 8e76133f7fd8c..0000000000000 --- a/core/licenses/lucene-memory-7.0.0-snapshot-a128fcb.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -359311a2480ebf7d5421e1be71a40e109d6406d4 \ No newline at end of file diff --git a/core/licenses/lucene-memory-7.1.0.jar.sha1 b/core/licenses/lucene-memory-7.1.0.jar.sha1 new file mode 100644 index 0000000000000..17264d5e43df8 --- /dev/null +++ b/core/licenses/lucene-memory-7.1.0.jar.sha1 @@ -0,0 +1 @@ +3ef64c58d0c09ca40d848efa96b585b7476271f2 \ No newline at end of file diff --git a/core/licenses/lucene-misc-7.0.0-snapshot-a128fcb.jar.sha1 b/core/licenses/lucene-misc-7.0.0-snapshot-a128fcb.jar.sha1 deleted file mode 100644 index 856c6dfccc93a..0000000000000 --- a/core/licenses/lucene-misc-7.0.0-snapshot-a128fcb.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -30d8c0071d32515f5786e2ccf1dd556d3cef1d46 \ No newline at end of file diff --git a/core/licenses/lucene-misc-7.1.0.jar.sha1 b/core/licenses/lucene-misc-7.1.0.jar.sha1 new file mode 100644 index 0000000000000..6fb92dee458e0 --- /dev/null +++ b/core/licenses/lucene-misc-7.1.0.jar.sha1 @@ -0,0 +1 @@ +1496ee5fa62206ee5ddf51042a340d6a9ee3b5de \ No newline at end of file diff --git a/core/licenses/lucene-queries-7.0.0-snapshot-a128fcb.jar.sha1 b/core/licenses/lucene-queries-7.0.0-snapshot-a128fcb.jar.sha1 deleted file mode 100644 index 0a66abdef11aa..0000000000000 --- a/core/licenses/lucene-queries-7.0.0-snapshot-a128fcb.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -3e29571a1ba442e54fc23e177744e47642082851 \ No newline at end of file diff --git a/core/licenses/lucene-queries-7.1.0.jar.sha1 b/core/licenses/lucene-queries-7.1.0.jar.sha1 new file mode 100644 index 0000000000000..a4028cc2149cb --- /dev/null +++ b/core/licenses/lucene-queries-7.1.0.jar.sha1 @@ -0,0 +1 @@ +1554920ab207a3245fa408d022a5c90ad3a1fea3 \ No newline at end of file diff --git a/core/licenses/lucene-queryparser-7.0.0-snapshot-a128fcb.jar.sha1 b/core/licenses/lucene-queryparser-7.0.0-snapshot-a128fcb.jar.sha1 deleted file mode 100644 index 91497e7ea8750..0000000000000 --- a/core/licenses/lucene-queryparser-7.0.0-snapshot-a128fcb.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -c868a9383ea53830688378ec52464196afb23dad \ No newline at end of file diff --git a/core/licenses/lucene-queryparser-7.1.0.jar.sha1 b/core/licenses/lucene-queryparser-7.1.0.jar.sha1 new file mode 100644 index 0000000000000..85c745ea911b7 --- /dev/null +++ b/core/licenses/lucene-queryparser-7.1.0.jar.sha1 @@ -0,0 +1 @@ +5767c15c5ee97926829fd8a4337e434fa95f3c08 \ No newline at end of file diff --git a/core/licenses/lucene-sandbox-7.0.0-snapshot-a128fcb.jar.sha1 b/core/licenses/lucene-sandbox-7.0.0-snapshot-a128fcb.jar.sha1 deleted file mode 100644 index 63dbc6ce5f03c..0000000000000 --- 
a/core/licenses/lucene-sandbox-7.0.0-snapshot-a128fcb.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -4df2b6fe845cb8859612960bced15d3496cacaaf \ No newline at end of file diff --git a/core/licenses/lucene-sandbox-7.1.0.jar.sha1 b/core/licenses/lucene-sandbox-7.1.0.jar.sha1 new file mode 100644 index 0000000000000..4fedc42d2f10d --- /dev/null +++ b/core/licenses/lucene-sandbox-7.1.0.jar.sha1 @@ -0,0 +1 @@ +691f7b9ac05f3ad2ac7e80733ef70247904bd3ae \ No newline at end of file diff --git a/core/licenses/lucene-spatial-7.0.0-snapshot-a128fcb.jar.sha1 b/core/licenses/lucene-spatial-7.0.0-snapshot-a128fcb.jar.sha1 deleted file mode 100644 index bc8b20e7ad199..0000000000000 --- a/core/licenses/lucene-spatial-7.0.0-snapshot-a128fcb.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -c34ea2a1f1c15015ff40b31e8a519d2fd3756887 \ No newline at end of file diff --git a/core/licenses/lucene-spatial-7.1.0.jar.sha1 b/core/licenses/lucene-spatial-7.1.0.jar.sha1 new file mode 100644 index 0000000000000..3cc891f4b4d85 --- /dev/null +++ b/core/licenses/lucene-spatial-7.1.0.jar.sha1 @@ -0,0 +1 @@ +6c64c04d802badb800516a8a574cb993929c3805 \ No newline at end of file diff --git a/core/licenses/lucene-spatial-extras-7.0.0-snapshot-a128fcb.jar.sha1 b/core/licenses/lucene-spatial-extras-7.0.0-snapshot-a128fcb.jar.sha1 deleted file mode 100644 index bb6c3619d6c84..0000000000000 --- a/core/licenses/lucene-spatial-extras-7.0.0-snapshot-a128fcb.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -51017934fab316c4558318cf050db06117906dc2 \ No newline at end of file diff --git a/core/licenses/lucene-spatial-extras-7.1.0.jar.sha1 b/core/licenses/lucene-spatial-extras-7.1.0.jar.sha1 new file mode 100644 index 0000000000000..066098d5571f6 --- /dev/null +++ b/core/licenses/lucene-spatial-extras-7.1.0.jar.sha1 @@ -0,0 +1 @@ +3f1bc1aada8f06b176b782da24b9d7ad9641c41a \ No newline at end of file diff --git a/core/licenses/lucene-spatial3d-7.0.0-snapshot-a128fcb.jar.sha1 b/core/licenses/lucene-spatial3d-7.0.0-snapshot-a128fcb.jar.sha1 deleted file mode 100644 index 6e52bfe74f5eb..0000000000000 --- a/core/licenses/lucene-spatial3d-7.0.0-snapshot-a128fcb.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -deb26f1a467502a47dd3de9b5d4f1283115c7c54 \ No newline at end of file diff --git a/core/licenses/lucene-spatial3d-7.1.0.jar.sha1 b/core/licenses/lucene-spatial3d-7.1.0.jar.sha1 new file mode 100644 index 0000000000000..32277c393c94e --- /dev/null +++ b/core/licenses/lucene-spatial3d-7.1.0.jar.sha1 @@ -0,0 +1 @@ +8ded650aed23efb775f17be496e3e3870214e23b \ No newline at end of file diff --git a/core/licenses/lucene-suggest-7.0.0-snapshot-a128fcb.jar.sha1 b/core/licenses/lucene-suggest-7.0.0-snapshot-a128fcb.jar.sha1 deleted file mode 100644 index e2ef3d9ae28ea..0000000000000 --- a/core/licenses/lucene-suggest-7.0.0-snapshot-a128fcb.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -5a264ddca7a58fe495aeda9c0e01288bf4b13c96 \ No newline at end of file diff --git a/core/licenses/lucene-suggest-7.1.0.jar.sha1 b/core/licenses/lucene-suggest-7.1.0.jar.sha1 new file mode 100644 index 0000000000000..1d2d0585c63c1 --- /dev/null +++ b/core/licenses/lucene-suggest-7.1.0.jar.sha1 @@ -0,0 +1 @@ +8d0ed1589ebdccf34e888c6efc0134a13a238c85 \ No newline at end of file diff --git a/core/licenses/snakeyaml-1.15.jar.sha1 b/core/licenses/snakeyaml-1.15.jar.sha1 deleted file mode 100644 index 48391d6d9e1a7..0000000000000 --- a/core/licenses/snakeyaml-1.15.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -3b132bea69e8ee099f416044970997bde80f4ea6 \ No newline at end of file diff --git a/core/licenses/snakeyaml-1.17.jar.sha1 
b/core/licenses/snakeyaml-1.17.jar.sha1 new file mode 100644 index 0000000000000..9ac6e87f2244a --- /dev/null +++ b/core/licenses/snakeyaml-1.17.jar.sha1 @@ -0,0 +1 @@ +7a27ea250c5130b2922b86dea63cbb1cc10a660c \ No newline at end of file diff --git a/core/src/main/java/org/apache/lucene/search/XFilteredDocIdSetIterator.java b/core/src/main/java/org/apache/lucene/search/XFilteredDocIdSetIterator.java deleted file mode 100644 index 8d1617d3ab4da..0000000000000 --- a/core/src/main/java/org/apache/lucene/search/XFilteredDocIdSetIterator.java +++ /dev/null @@ -1,107 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.apache.lucene.search; - -import java.io.IOException; - -// this is just one possible solution for "early termination" ! - -/** - * Abstract decorator class of a DocIdSetIterator - * implementation that provides on-demand filter/validation - * mechanism on an underlying DocIdSetIterator. - */ -public abstract class XFilteredDocIdSetIterator extends DocIdSetIterator { - protected DocIdSetIterator _innerIter; - private int doc; - - /** - * Constructor. - * @param innerIter Underlying DocIdSetIterator. - */ - public XFilteredDocIdSetIterator(DocIdSetIterator innerIter) { - if (innerIter == null) { - throw new IllegalArgumentException("null iterator"); - } - _innerIter = innerIter; - doc = -1; - } - - /** Return the wrapped {@link DocIdSetIterator}. */ - public DocIdSetIterator getDelegate() { - return _innerIter; - } - - /** - * Validation method to determine whether a docid should be in the result set. - * @param doc docid to be tested - * @return true if input docid should be in the result set, false otherwise. - * @see #XFilteredDocIdSetIterator(DocIdSetIterator) - * @throws CollectionTerminatedException if the underlying iterator is exhausted. 
- */ - protected abstract boolean match(int doc); - - @Override - public int docID() { - return doc; - } - - @Override - public int nextDoc() throws IOException { - try { - while ((doc = _innerIter.nextDoc()) != NO_MORE_DOCS) { - if (match(doc)) { - return doc; - } - } - } catch (CollectionTerminatedException e) { - return doc = NO_MORE_DOCS; - } - return doc; - } - - @Override - public int advance(int target) throws IOException { - doc = _innerIter.advance(target); - try { - if (doc != NO_MORE_DOCS) { - if (match(doc)) { - return doc; - } else { - while ((doc = _innerIter.nextDoc()) != NO_MORE_DOCS) { - if (match(doc)) { - return doc; - } - } - return doc; - } - } - } catch (CollectionTerminatedException e) { - return doc = NO_MORE_DOCS; - } - return doc; - } - - @Override - public long cost() { - return _innerIter.cost(); - } -} - diff --git a/core/src/main/java/org/apache/lucene/search/suggest/analyzing/XAnalyzingSuggester.java b/core/src/main/java/org/apache/lucene/search/suggest/analyzing/XAnalyzingSuggester.java deleted file mode 100644 index 312b4b3dd0b34..0000000000000 --- a/core/src/main/java/org/apache/lucene/search/suggest/analyzing/XAnalyzingSuggester.java +++ /dev/null @@ -1,1215 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ -package org.apache.lucene.search.suggest.analyzing; - -import com.carrotsearch.hppc.ObjectIntHashMap; -import org.apache.lucene.analysis.Analyzer; -import org.apache.lucene.analysis.TokenStream; -import org.apache.lucene.analysis.TokenStreamToAutomaton; -import org.apache.lucene.search.suggest.InputIterator; -import org.apache.lucene.search.suggest.Lookup; -import org.apache.lucene.store.ByteArrayDataInput; -import org.apache.lucene.store.ByteArrayDataOutput; -import org.apache.lucene.store.DataInput; -import org.apache.lucene.store.DataOutput; -import org.apache.lucene.store.Directory; -import org.apache.lucene.store.FSDirectory; -import org.apache.lucene.store.IOContext; -import org.apache.lucene.store.IndexOutput; -import org.apache.lucene.store.InputStreamDataInput; -import org.apache.lucene.store.OutputStreamDataOutput; -import org.apache.lucene.util.ArrayUtil; -import org.apache.lucene.util.BytesRef; -import org.apache.lucene.util.BytesRefBuilder; -import org.apache.lucene.util.CharsRefBuilder; -import org.apache.lucene.util.IOUtils; -import org.apache.lucene.util.IntsRef; -import org.apache.lucene.util.IntsRefBuilder; -import org.apache.lucene.util.OfflineSorter; -import org.apache.lucene.util.automaton.Automaton; -import org.apache.lucene.util.automaton.LimitedFiniteStringsIterator; -import org.apache.lucene.util.automaton.Operations; -import org.apache.lucene.util.automaton.Transition; -import org.apache.lucene.util.fst.Builder; -import org.apache.lucene.util.fst.ByteSequenceOutputs; -import org.apache.lucene.util.fst.FST; -import org.apache.lucene.util.fst.FST.BytesReader; -import org.apache.lucene.util.fst.PairOutputs; -import org.apache.lucene.util.fst.PairOutputs.Pair; -import org.apache.lucene.util.fst.PositiveIntOutputs; -import org.apache.lucene.util.fst.Util; -import org.apache.lucene.util.fst.Util.Result; -import org.apache.lucene.util.fst.Util.TopResults; -import org.elasticsearch.common.SuppressForbidden; -import org.elasticsearch.common.collect.HppcMaps; -import org.elasticsearch.common.io.PathUtils; - -import java.io.IOException; -import java.io.InputStream; -import java.io.OutputStream; -import java.util.ArrayList; -import java.util.Arrays; -import java.util.Collections; -import java.util.Comparator; -import java.util.HashSet; -import java.util.LinkedList; -import java.util.List; -import java.util.Set; - -/** - * Suggester that first analyzes the surface form, adds the - * analyzed form to a weighted FST, and then does the same - * thing at lookup time. This means lookup is based on the - * analyzed form while suggestions are still the surface - * form(s). - * - *
<p>
- * This can result in powerful suggester functionality. For - * example, if you use an analyzer removing stop words, - * then the partial text "ghost chr..." could see the - * suggestion "The Ghost of Christmas Past". Note that - * position increments MUST NOT be preserved for this example - * to work, so you should call the constructor with - * preservePositionIncrements parameter set to - * false - * - *
<p>
- * If SynonymFilter is used to map wifi and wireless network to - * hotspot then the partial text "wirele..." could suggest - * "wifi router". Token normalization like stemmers, accent - * removal, etc., would allow suggestions to ignore such - * variations. - * - *
<p>
- * When two matching suggestions have the same weight, they - * are tie-broken by the analyzed form. If their analyzed - * form is the same then the order is undefined. - * - *
<p>
- * There are some limitations: - *
<ul>
- * - *
  <li> A lookup from a query like "net" in English won't - * be any different than "net " (ie, user added a - * trailing space) because analyzers don't reflect - * when they've seen a token separator and when they - * haven't. - * - *
  <li> If you're using {@code StopFilter}, and the user will - * type "fast apple", but so far all they've typed is - * "fast a", again because the analyzer doesn't convey whether - * it's seen a token separator after the "a", - * {@code StopFilter} will remove that "a" causing - * far more matches than you'd expect. - * - *
  <li> Lookups with the empty string return no results - * instead of all results. - * </ul>
- */ -public class XAnalyzingSuggester extends Lookup { - - /** - * FST<Weight,Surface>: - * input is the analyzed form, with a null byte between terms - * weights are encoded as costs: (Integer.MAX_VALUE-weight) - * surface is the original, unanalyzed form. - */ - private FST> fst = null; - - /** - * Analyzer that will be used for analyzing suggestions at - * index time. - */ - private final Analyzer indexAnalyzer; - - /** - * Analyzer that will be used for analyzing suggestions at - * query time. - */ - private final Analyzer queryAnalyzer; - - /** - * True if exact match suggestions should always be returned first. - */ - private final boolean exactFirst; - - /** - * True if separator between tokens should be preserved. - */ - private final boolean preserveSep; - - /** Include this flag in the options parameter to {@code - * #XAnalyzingSuggester(Analyzer,Analyzer,int,int,int,boolean,FST,boolean,int,int,int,int,int)} to always - * return the exact match first, regardless of score. This - * has no performance impact but could result in - * low-quality suggestions. */ - public static final int EXACT_FIRST = 1; - - /** Include this flag in the options parameter to {@code - * #XAnalyzingSuggester(Analyzer,Analyzer,int,int,int,boolean,FST,boolean,int,int,int,int,int)} to preserve - * token separators when matching. */ - public static final int PRESERVE_SEP = 2; - - /** Represents the separation between tokens, if - * PRESERVE_SEP was specified */ - public static final int SEP_LABEL = '\u001F'; - - /** Marks end of the analyzed input and start of dedup - * byte. */ - public static final int END_BYTE = 0x0; - - /** Maximum number of dup surface forms (different surface - * forms for the same analyzed form). */ - private final int maxSurfaceFormsPerAnalyzedForm; - - /** Maximum graph paths to index for a single analyzed - * surface form. This only matters if your analyzer - * makes lots of alternate paths (e.g. contains - * SynonymFilter). */ - private final int maxGraphExpansions; - - /** Highest number of analyzed paths we saw for any single - * input surface form. For analyzers that never create - * graphs this will always be 1. */ - private int maxAnalyzedPathsForOneInput; - - private boolean hasPayloads; - - private final int sepLabel; - private final int payloadSep; - private final int endByte; - private final int holeCharacter; - - public static final int PAYLOAD_SEP = '\u001F'; - public static final int HOLE_CHARACTER = '\u001E'; - - private final Automaton queryPrefix; - - /** Whether position holes should appear in the automaton. */ - private boolean preservePositionIncrements; - - /** Number of entries the lookup was built with */ - private long count = 0; - - /** - * Calls {@code #XAnalyzingSuggester(Analyzer,Analyzer,int,int,int,boolean,FST,boolean,int,int,int,int,int) - * AnalyzingSuggester(analyzer, analyzer, EXACT_FIRST | - * PRESERVE_SEP, 256, -1)} - * - * @param analyzer Analyzer that will be used for analyzing suggestions while building the index. - */ - public XAnalyzingSuggester(Analyzer analyzer) { - this(analyzer, null, analyzer, EXACT_FIRST | PRESERVE_SEP, 256, -1, true, null, false, 0, - SEP_LABEL, PAYLOAD_SEP, END_BYTE, HOLE_CHARACTER); - } - - /** - * Calls {@code #XAnalyzingSuggester(Analyzer,Analyzer,int,int,int,boolean,FST,boolean,int,int,int,int,int) - * AnalyzingSuggester(indexAnalyzer, queryAnalyzer, EXACT_FIRST | - * PRESERVE_SEP, 256, -1)} - * - * @param indexAnalyzer Analyzer that will be used for analyzing suggestions while building the index. 
- * @param queryAnalyzer Analyzer that will be used for analyzing query text during lookup - */ - public XAnalyzingSuggester(Analyzer indexAnalyzer, Analyzer queryAnalyzer) { - this(indexAnalyzer, null, queryAnalyzer, EXACT_FIRST | PRESERVE_SEP, 256, -1, true, null, false, 0, - SEP_LABEL, PAYLOAD_SEP, END_BYTE, HOLE_CHARACTER); - } - - /** - * Creates a new suggester. - * - * @param indexAnalyzer Analyzer that will be used for - * analyzing suggestions while building the index. - * @param queryAnalyzer Analyzer that will be used for - * analyzing query text during lookup - * @param options see {@link #EXACT_FIRST}, {@link #PRESERVE_SEP} - * @param maxSurfaceFormsPerAnalyzedForm Maximum number of - * surface forms to keep for a single analyzed form. - * When there are too many surface forms we discard the - * lowest weighted ones. - * @param maxGraphExpansions Maximum number of graph paths - * to expand from the analyzed form. Set this to -1 for - * no limit. - */ - public XAnalyzingSuggester(Analyzer indexAnalyzer, Automaton queryPrefix, Analyzer queryAnalyzer, - int options, int maxSurfaceFormsPerAnalyzedForm, int maxGraphExpansions, - boolean preservePositionIncrements, FST> fst, - boolean hasPayloads, int maxAnalyzedPathsForOneInput, - int sepLabel, int payloadSep, int endByte, int holeCharacter) { - // SIMON EDIT: I added fst, hasPayloads and maxAnalyzedPathsForOneInput - this.indexAnalyzer = indexAnalyzer; - this.queryAnalyzer = queryAnalyzer; - this.fst = fst; - this.hasPayloads = hasPayloads; - if ((options & ~(EXACT_FIRST | PRESERVE_SEP)) != 0) { - throw new IllegalArgumentException("options should only contain EXACT_FIRST and PRESERVE_SEP; got " + options); - } - this.exactFirst = (options & EXACT_FIRST) != 0; - this.preserveSep = (options & PRESERVE_SEP) != 0; - - // FLORIAN EDIT: I added queryPrefix for context dependent suggestions - this.queryPrefix = queryPrefix; - - // NOTE: this is just an implementation limitation; if - // somehow this is a problem we could fix it by using - // more than one byte to disambiguate ... but 256 seems - // like it should be way more then enough. - if (maxSurfaceFormsPerAnalyzedForm <= 0 || maxSurfaceFormsPerAnalyzedForm > 256) { - throw new IllegalArgumentException( - "maxSurfaceFormsPerAnalyzedForm must be > 0 and < 256 (got: " + maxSurfaceFormsPerAnalyzedForm + ")"); - } - this.maxSurfaceFormsPerAnalyzedForm = maxSurfaceFormsPerAnalyzedForm; - - if (maxGraphExpansions < 1 && maxGraphExpansions != -1) { - throw new IllegalArgumentException( - "maxGraphExpansions must -1 (no limit) or > 0 (got: " + maxGraphExpansions + ")"); - } - this.maxGraphExpansions = maxGraphExpansions; - this.maxAnalyzedPathsForOneInput = maxAnalyzedPathsForOneInput; - this.preservePositionIncrements = preservePositionIncrements; - this.sepLabel = sepLabel; - this.payloadSep = payloadSep; - this.endByte = endByte; - this.holeCharacter = holeCharacter; - } - - /** Returns byte size of the underlying FST. */ - @Override -public long ramBytesUsed() { - return fst == null ? 
0 : fst.ramBytesUsed(); - } - - public int getMaxAnalyzedPathsForOneInput() { - return maxAnalyzedPathsForOneInput; - } - - // Replaces SEP with epsilon or remaps them if - // we were asked to preserve them: - private Automaton replaceSep(Automaton a) { - - Automaton result = new Automaton(); - - // Copy all states over - int numStates = a.getNumStates(); - for(int s=0;s visited = new HashSet<>(); - final LinkedList worklist = new LinkedList<>(); - worklist.add(0); - visited.add(0); - int upto = 0; - states[upto] = 0; - upto++; - Transition t = new Transition(); - while (worklist.size() > 0) { - int s = worklist.removeFirst(); - int count = a.initTransition(s, t); - for (int i=0;i { - - private final boolean hasPayloads; - - AnalyzingComparator(boolean hasPayloads) { - this.hasPayloads = hasPayloads; - } - - private final ByteArrayDataInput readerA = new ByteArrayDataInput(); - private final ByteArrayDataInput readerB = new ByteArrayDataInput(); - private final BytesRef scratchA = new BytesRef(); - private final BytesRef scratchB = new BytesRef(); - - @Override - public int compare(BytesRef a, BytesRef b) { - - // First by analyzed form: - readerA.reset(a.bytes, a.offset, a.length); - scratchA.length = readerA.readShort(); - scratchA.bytes = a.bytes; - scratchA.offset = readerA.getPosition(); - - readerB.reset(b.bytes, b.offset, b.length); - scratchB.bytes = b.bytes; - scratchB.length = readerB.readShort(); - scratchB.offset = readerB.getPosition(); - - int cmp = scratchA.compareTo(scratchB); - if (cmp != 0) { - return cmp; - } - readerA.skipBytes(scratchA.length); - readerB.skipBytes(scratchB.length); - // Next by cost: - long aCost = readerA.readInt(); - long bCost = readerB.readInt(); - if (aCost < bCost) { - return -1; - } else if (aCost > bCost) { - return 1; - } - - // Finally by surface form: - if (hasPayloads) { - scratchA.length = readerA.readShort(); - scratchA.offset = readerA.getPosition(); - scratchB.length = readerB.readShort(); - scratchB.offset = readerB.getPosition(); - } else { - scratchA.offset = readerA.getPosition(); - scratchA.length = a.length - scratchA.offset; - scratchB.offset = readerB.getPosition(); - scratchB.length = b.length - scratchB.offset; - } - return scratchA.compareTo(scratchB); - } - } - - /** Non-null if this suggester created a temp dir, needed only during build */ - private static FSDirectory tmpBuildDir; - - @SuppressForbidden(reason = "access temp directory for building index") - protected static synchronized FSDirectory getTempDir() { - if (tmpBuildDir == null) { - // Lazy init - String tempDirPath = System.getProperty("java.io.tmpdir"); - if (tempDirPath == null) { - throw new RuntimeException("Java has no temporary folder property (java.io.tmpdir)?"); - } - try { - tmpBuildDir = FSDirectory.open(PathUtils.get(tempDirPath)); - } catch (IOException ioe) { - throw new RuntimeException(ioe); - } - } - return tmpBuildDir; - } - - @Override - public void build(InputIterator iterator) throws IOException { - String prefix = getClass().getSimpleName(); - Directory tempDir = getTempDir(); - OfflineSorter sorter = new OfflineSorter(tempDir, prefix, new AnalyzingComparator(hasPayloads)); - - IndexOutput tempInput = tempDir.createTempOutput(prefix, "input", IOContext.DEFAULT); - - OfflineSorter.ByteSequencesWriter writer = new OfflineSorter.ByteSequencesWriter(tempInput); - OfflineSorter.ByteSequencesReader reader = null; - - hasPayloads = iterator.hasPayloads(); - - BytesRefBuilder scratch = new BytesRefBuilder(); - - TokenStreamToAutomaton ts2a = 
getTokenStreamToAutomaton(); - String tempSortedFileName = null; - - count = 0; - byte buffer[] = new byte[8]; - try { - ByteArrayDataOutput output = new ByteArrayDataOutput(buffer); - - for (BytesRef surfaceForm; (surfaceForm = iterator.next()) != null;) { - LimitedFiniteStringsIterator finiteStrings = - new LimitedFiniteStringsIterator(toAutomaton(surfaceForm, ts2a), maxGraphExpansions); - for (IntsRef string; (string = finiteStrings.next()) != null; count++) { - Util.toBytesRef(string, scratch); - - // length of the analyzed text (FST input) - if (scratch.length() > Short.MAX_VALUE-2) { - throw new IllegalArgumentException( - "cannot handle analyzed forms > " + (Short.MAX_VALUE-2) + " in length (got " + scratch.length() + ")"); - } - short analyzedLength = (short) scratch.length(); - - // compute the required length: - // analyzed sequence + weight (4) + surface + analyzedLength (short) - int requiredLength = analyzedLength + 4 + surfaceForm.length + 2; - - BytesRef payload; - - if (hasPayloads) { - if (surfaceForm.length > (Short.MAX_VALUE-2)) { - throw new IllegalArgumentException( - "cannot handle surface form > " + (Short.MAX_VALUE-2) + " in length (got " + surfaceForm.length + ")"); - } - payload = iterator.payload(); - // payload + surfaceLength (short) - requiredLength += payload.length + 2; - } else { - payload = null; - } - - buffer = ArrayUtil.grow(buffer, requiredLength); - - output.reset(buffer); - - output.writeShort(analyzedLength); - - output.writeBytes(scratch.bytes(), 0, scratch.length()); - - output.writeInt(encodeWeight(iterator.weight())); - - if (hasPayloads) { - for(int i=0;i outputs = new PairOutputs<>( - PositiveIntOutputs.getSingleton(), ByteSequenceOutputs.getSingleton()); - Builder> builder = new Builder<>(FST.INPUT_TYPE.BYTE1, outputs); - - // Build FST: - BytesRefBuilder previousAnalyzed = null; - BytesRefBuilder analyzed = new BytesRefBuilder(); - BytesRef surface = new BytesRef(); - IntsRefBuilder scratchInts = new IntsRefBuilder(); - ByteArrayDataInput input = new ByteArrayDataInput(); - - // Used to remove duplicate surface forms (but we - // still index the hightest-weight one). 
We clear - // this when we see a new analyzed form, so it cannot - // grow unbounded (at most 256 entries): - Set seenSurfaceForms = new HashSet<>(); - - int dedup = 0; - while (true) { - BytesRef bytes = reader.next(); - if (bytes == null) { - break; - } - input.reset(bytes.bytes, bytes.offset, bytes.length); - short analyzedLength = input.readShort(); - analyzed.grow(analyzedLength+2); - input.readBytes(analyzed.bytes(), 0, analyzedLength); - analyzed.setLength(analyzedLength); - - long cost = input.readInt(); - - surface.bytes = bytes.bytes; - if (hasPayloads) { - surface.length = input.readShort(); - surface.offset = input.getPosition(); - } else { - surface.offset = input.getPosition(); - surface.length = bytes.length - surface.offset; - } - - if (previousAnalyzed == null) { - previousAnalyzed = new BytesRefBuilder(); - previousAnalyzed.copyBytes(analyzed); - seenSurfaceForms.add(BytesRef.deepCopyOf(surface)); - } else if (analyzed.get().equals(previousAnalyzed.get())) { - dedup++; - if (dedup >= maxSurfaceFormsPerAnalyzedForm) { - // More than maxSurfaceFormsPerAnalyzedForm - // dups: skip the rest: - continue; - } - if (seenSurfaceForms.contains(surface)) { - continue; - } - seenSurfaceForms.add(BytesRef.deepCopyOf(surface)); - } else { - dedup = 0; - previousAnalyzed.copyBytes(analyzed); - seenSurfaceForms.clear(); - seenSurfaceForms.add(BytesRef.deepCopyOf(surface)); - } - - // TODO: I think we can avoid the extra 2 bytes when - // there is no dup (dedup==0), but we'd have to fix - // the exactFirst logic ... which would be sort of - // hairy because we'd need to special case the two - // (dup/not dup)... - - // NOTE: must be byte 0 so we sort before whatever - // is next - analyzed.append((byte) 0); - analyzed.append((byte) dedup); - - Util.toIntsRef(analyzed.get(), scratchInts); - //System.out.println("ADD: " + scratchInts + " -> " + cost + ": " + surface.utf8ToString()); - if (!hasPayloads) { - builder.add(scratchInts.get(), outputs.newPair(cost, BytesRef.deepCopyOf(surface))); - } else { - int payloadOffset = input.getPosition() + surface.length; - int payloadLength = bytes.length - payloadOffset; - BytesRef br = new BytesRef(surface.length + 1 + payloadLength); - System.arraycopy(surface.bytes, surface.offset, br.bytes, 0, surface.length); - br.bytes[surface.length] = (byte) payloadSep; - System.arraycopy(bytes.bytes, payloadOffset, br.bytes, surface.length+1, payloadLength); - br.length = br.bytes.length; - builder.add(scratchInts.get(), outputs.newPair(cost, br)); - } - } - fst = builder.finish(); - - //PrintWriter pw = new PrintWriter("/tmp/out.dot"); - //Util.toDot(fst, pw, true, true); - //pw.close(); - - } finally { - IOUtils.closeWhileHandlingException(reader, writer); - IOUtils.deleteFilesIgnoringExceptions(tempDir, tempInput.getName(), tempSortedFileName); - } - } - - @Override - public boolean store(OutputStream output) throws IOException { - DataOutput dataOut = new OutputStreamDataOutput(output); - try { - if (fst == null) { - return false; - } - - fst.save(dataOut); - dataOut.writeVInt(maxAnalyzedPathsForOneInput); - dataOut.writeByte((byte) (hasPayloads ? 
1 : 0)); - } finally { - IOUtils.close(output); - } - return true; - } - - @Override - public long getCount() { - return count; - } - - @Override - public boolean load(InputStream input) throws IOException { - DataInput dataIn = new InputStreamDataInput(input); - try { - this.fst = new FST<>(dataIn, new PairOutputs<>( - PositiveIntOutputs.getSingleton(), ByteSequenceOutputs.getSingleton())); - maxAnalyzedPathsForOneInput = dataIn.readVInt(); - hasPayloads = dataIn.readByte() == 1; - } finally { - IOUtils.close(input); - } - return true; - } - - private LookupResult getLookupResult(Long output1, BytesRef output2, CharsRefBuilder spare) { - LookupResult result; - if (hasPayloads) { - int sepIndex = -1; - for(int i=0;i= output2.length) { - return false; - } - for(int i=0;i lookup(final CharSequence key, Set contexts, boolean onlyMorePopular, int num) { - assert num > 0; - - if (onlyMorePopular) { - throw new IllegalArgumentException("this suggester only works with onlyMorePopular=false"); - } - if (fst == null) { - return Collections.emptyList(); - } - - //System.out.println("lookup key=" + key + " num=" + num); - for (int i = 0; i < key.length(); i++) { - if (key.charAt(i) == holeCharacter) { - throw new IllegalArgumentException( - "lookup key cannot contain HOLE character U+001E; this character is reserved"); - } - if (key.charAt(i) == sepLabel) { - throw new IllegalArgumentException( - "lookup key cannot contain unit separator character U+001F; this character is reserved"); - } - } - final BytesRef utf8Key = new BytesRef(key); - try { - - Automaton lookupAutomaton = toLookupAutomaton(key); - - final CharsRefBuilder spare = new CharsRefBuilder(); - - //System.out.println(" now intersect exactFirst=" + exactFirst); - - // Intersect automaton w/ suggest wFST and get all - // prefix starting nodes & their outputs: - //final PathIntersector intersector = getPathIntersector(lookupAutomaton, fst); - - //System.out.println(" prefixPaths: " + prefixPaths.size()); - - BytesReader bytesReader = fst.getBytesReader(); - - FST.Arc> scratchArc = new FST.Arc<>(); - - final List results = new ArrayList<>(); - - List>> prefixPaths = FSTUtil.intersectPrefixPaths(convertAutomaton(lookupAutomaton), fst); - - if (exactFirst) { - - int count = 0; - for (FSTUtil.Path> path : prefixPaths) { - if (fst.findTargetArc(endByte, path.fstNode, scratchArc, bytesReader) != null) { - // This node has END_BYTE arc leaving, meaning it's an - // "exact" match: - count++; - } - } - - // Searcher just to find the single exact only - // match, if present: - Util.TopNSearcher> searcher; - searcher = new Util.TopNSearcher<>( - fst, count * maxSurfaceFormsPerAnalyzedForm, count * maxSurfaceFormsPerAnalyzedForm, weightComparator); - - // NOTE: we could almost get away with only using - // the first start node. The only catch is if - // maxSurfaceFormsPerAnalyzedForm had kicked in and - // pruned our exact match from one of these nodes - // ...: - for (FSTUtil.Path> path : prefixPaths) { - if (fst.findTargetArc(endByte, path.fstNode, scratchArc, bytesReader) != null) { - // This node has END_BYTE arc leaving, meaning it's an - // "exact" match: - searcher.addStartPaths(scratchArc, fst.outputs.add(path.output, scratchArc.output), false, path.input); - } - } - - Util.TopResults> completions = searcher.search(); - - // NOTE: this is rather inefficient: we enumerate - // every matching "exactly the same analyzed form" - // path, and then do linear scan to see if one of - // these exactly matches the input. 
It should be - // possible (though hairy) to do something similar - // to getByOutput, since the surface form is encoded - // into the FST output, so we more efficiently hone - // in on the exact surface-form match. Still, I - // suspect very little time is spent in this linear - // seach: it's bounded by how many prefix start - // nodes we have and the - // maxSurfaceFormsPerAnalyzedForm: - for(Result> completion : completions) { - BytesRef output2 = completion.output.output2; - if (sameSurfaceForm(utf8Key, output2)) { - results.add(getLookupResult(completion.output.output1, output2, spare)); - break; - } - } - - if (results.size() == num) { - // That was quick: - return results; - } - } - - Util.TopNSearcher> searcher; - searcher = new Util.TopNSearcher>(fst, - num - results.size(), - num * maxAnalyzedPathsForOneInput, - weightComparator) { - private final Set seen = new HashSet<>(); - - @Override - protected boolean acceptResult(IntsRef input, Pair output) { - - // Dedup: when the input analyzes to a graph we - // can get duplicate surface forms: - if (seen.contains(output.output2)) { - return false; - } - seen.add(output.output2); - - if (!exactFirst) { - return true; - } else { - // In exactFirst mode, don't accept any paths - // matching the surface form since that will - // create duplicate results: - if (sameSurfaceForm(utf8Key, output.output2)) { - // We found exact match, which means we should - // have already found it in the first search: - assert results.size() == 1; - return false; - } else { - return true; - } - } - } - }; - - prefixPaths = getFullPrefixPaths(prefixPaths, lookupAutomaton, fst); - - for (FSTUtil.Path> path : prefixPaths) { - searcher.addStartPaths(path.fstNode, path.output, true, path.input); - } - - TopResults> completions = searcher.search(); - - for(Result> completion : completions) { - - LookupResult result = getLookupResult(completion.output.output1, completion.output.output2, spare); - - // TODO: for fuzzy case would be nice to return - // how many edits were required - - //System.out.println(" result=" + result); - results.add(result); - - if (results.size() == num) { - // In the exactFirst=true case the search may - // produce one extra path - break; - } - } - - return results; - } catch (IOException bogus) { - throw new RuntimeException(bogus); - } - } - - @Override - public boolean store(DataOutput output) throws IOException { - output.writeVLong(count); - if (fst == null) { - return false; - } - - fst.save(output); - output.writeVInt(maxAnalyzedPathsForOneInput); - output.writeByte((byte) (hasPayloads ? 1 : 0)); - return true; - } - - @Override - public boolean load(DataInput input) throws IOException { - count = input.readVLong(); - this.fst = new FST<>(input, new PairOutputs<>(PositiveIntOutputs.getSingleton(), ByteSequenceOutputs.getSingleton())); - maxAnalyzedPathsForOneInput = input.readVInt(); - hasPayloads = input.readByte() == 1; - return true; - } - - /** Returns all completion paths to initialize the search. 
*/ - protected List>> getFullPrefixPaths(List>> prefixPaths, - Automaton lookupAutomaton, - FST> fst) - throws IOException { - return prefixPaths; - } - - final Automaton toAutomaton(final BytesRef surfaceForm, final TokenStreamToAutomaton ts2a) throws IOException { - try (TokenStream ts = indexAnalyzer.tokenStream("", surfaceForm.utf8ToString())) { - return toAutomaton(ts, ts2a); - } - } - - final Automaton toAutomaton(TokenStream ts, final TokenStreamToAutomaton ts2a) throws IOException { - // Create corresponding automaton: labels are bytes - // from each analyzed token, with byte 0 used as - // separator between tokens: - Automaton automaton = ts2a.toAutomaton(ts); - - automaton = replaceSep(automaton); - automaton = convertAutomaton(automaton); - - // TODO: LUCENE-5660 re-enable this once we disallow massive suggestion strings - // assert SpecialOperations.isFinite(automaton); - - // Get all paths from the automaton (there can be - // more than one path, eg if the analyzer created a - // graph using SynFilter or WDF): - - return automaton; - } - - // EDIT: Adrien, needed by lookup providers - // NOTE: these XForks are unmaintainable, we need to get rid of them... - public Set toFiniteStrings(TokenStream stream) throws IOException { - final TokenStreamToAutomaton ts2a = getTokenStreamToAutomaton(); - Automaton automaton; - try (TokenStream ts = stream) { - automaton = toAutomaton(ts, ts2a); - } - LimitedFiniteStringsIterator finiteStrings = - new LimitedFiniteStringsIterator(automaton, maxGraphExpansions); - Set set = new HashSet<>(); - for (IntsRef string = finiteStrings.next(); string != null; string = finiteStrings.next()) { - set.add(IntsRef.deepCopyOf(string)); - } - return Collections.unmodifiableSet(set); - } - - final Automaton toLookupAutomaton(final CharSequence key) throws IOException { - // TODO: is there a Reader from a CharSequence? - // Turn tokenstream into automaton: - Automaton automaton = null; - - try (TokenStream ts = queryAnalyzer.tokenStream("", key.toString())) { - automaton = getTokenStreamToAutomaton().toAutomaton(ts); - } - - automaton = replaceSep(automaton); - - // TODO: we can optimize this somewhat by determinizing - // while we convert - - // This automaton should not blow up during determinize: - automaton = Operations.determinize(automaton, Integer.MAX_VALUE); - return automaton; - } - - - - /** - * Returns the weight associated with an input string, or null if it does not exist. - * - * Unsupported in this implementation (and will throw an {@link UnsupportedOperationException}). - * - * @param key input string - * @return the weight associated with the input string, or {@code null} if it does not exist. 
- */ - public Object get(CharSequence key) { - throw new UnsupportedOperationException(); - } - - /** - * cost -> weight - * - * @param encoded Cost - * @return Weight - */ - public static int decodeWeight(long encoded) { - return (int)(Integer.MAX_VALUE - encoded); - } - - /** - * weight -> cost - * - * @param value Weight - * @return Cost - */ - public static int encodeWeight(long value) { - if (value < 0 || value > Integer.MAX_VALUE) { - throw new UnsupportedOperationException("cannot encode value: " + value); - } - return Integer.MAX_VALUE - (int)value; - } - - static final Comparator> weightComparator = new Comparator> () { - @Override - public int compare(Pair left, Pair right) { - return left.output1.compareTo(right.output1); - } - }; - - - public static class XBuilder { - private Builder> builder; - private int maxSurfaceFormsPerAnalyzedForm; - private IntsRefBuilder scratchInts = new IntsRefBuilder(); - private final PairOutputs outputs; - private boolean hasPayloads; - private BytesRefBuilder analyzed = new BytesRefBuilder(); - private final SurfaceFormAndPayload[] surfaceFormsAndPayload; - private int count; - private ObjectIntHashMap seenSurfaceForms = HppcMaps.Object.Integer.ensureNoNullKeys(256, 0.75f); - private int payloadSep; - - public XBuilder(int maxSurfaceFormsPerAnalyzedForm, boolean hasPayloads, int payloadSep) { - this.payloadSep = payloadSep; - this.outputs = new PairOutputs<>(PositiveIntOutputs.getSingleton(), ByteSequenceOutputs.getSingleton()); - this.builder = new Builder<>(FST.INPUT_TYPE.BYTE1, outputs); - this.maxSurfaceFormsPerAnalyzedForm = maxSurfaceFormsPerAnalyzedForm; - this.hasPayloads = hasPayloads; - surfaceFormsAndPayload = new SurfaceFormAndPayload[maxSurfaceFormsPerAnalyzedForm]; - - } - public void startTerm(BytesRef analyzed) { - this.analyzed.grow(analyzed.length+2); - this.analyzed.copyBytes(analyzed); - } - - private static final class SurfaceFormAndPayload implements Comparable { - BytesRef payload; - long weight; - - SurfaceFormAndPayload(BytesRef payload, long cost) { - super(); - this.payload = payload; - this.weight = cost; - } - - @Override - public int compareTo(SurfaceFormAndPayload o) { - int res = compare(weight, o.weight); - if (res == 0 ){ - return payload.compareTo(o.payload); - } - return res; - } - public static int compare(long x, long y) { - return (x < y) ? -1 : ((x == y) ? 0 : 1); - } - } - - public void addSurface(BytesRef surface, BytesRef payload, long cost) throws IOException { - int surfaceIndex = -1; - long encodedWeight = cost == -1 ? cost : encodeWeight(cost); - /* - * we need to check if we have seen this surface form, if so only use the - * the surface form with the highest weight and drop the rest no matter if - * the payload differs. 
- */ - if (count >= maxSurfaceFormsPerAnalyzedForm) { - // More than maxSurfaceFormsPerAnalyzedForm - // dups: skip the rest: - return; - } - - BytesRef surfaceCopy; - final int keySlot; - if (count > 0 && (keySlot = seenSurfaceForms.indexOf(surface)) >= 0) { - surfaceIndex = seenSurfaceForms.indexGet(keySlot); - SurfaceFormAndPayload surfaceFormAndPayload = surfaceFormsAndPayload[surfaceIndex]; - if (encodedWeight >= surfaceFormAndPayload.weight) { - return; - } - surfaceCopy = BytesRef.deepCopyOf(surface); - } else { - surfaceIndex = count++; - surfaceCopy = BytesRef.deepCopyOf(surface); - seenSurfaceForms.put(surfaceCopy, surfaceIndex); - } - - BytesRef payloadRef; - if (!hasPayloads) { - payloadRef = surfaceCopy; - } else { - int len = surface.length + 1 + payload.length; - final BytesRef br = new BytesRef(len); - System.arraycopy(surface.bytes, surface.offset, br.bytes, 0, surface.length); - br.bytes[surface.length] = (byte) payloadSep; - System.arraycopy(payload.bytes, payload.offset, br.bytes, surface.length + 1, payload.length); - br.length = len; - payloadRef = br; - } - if (surfaceFormsAndPayload[surfaceIndex] == null) { - surfaceFormsAndPayload[surfaceIndex] = new SurfaceFormAndPayload(payloadRef, encodedWeight); - } else { - surfaceFormsAndPayload[surfaceIndex].payload = payloadRef; - surfaceFormsAndPayload[surfaceIndex].weight = encodedWeight; - } - } - - public void finishTerm(long defaultWeight) throws IOException { - ArrayUtil.timSort(surfaceFormsAndPayload, 0, count); - int deduplicator = 0; - analyzed.append((byte) 0); - analyzed.setLength(analyzed.length() + 1); - analyzed.grow(analyzed.length()); - for (int i = 0; i < count; i++) { - analyzed.setByteAt(analyzed.length() - 1, (byte) deduplicator++); - Util.toIntsRef(analyzed.get(), scratchInts); - SurfaceFormAndPayload candiate = surfaceFormsAndPayload[i]; - long cost = candiate.weight == -1 ? encodeWeight(Math.min(Integer.MAX_VALUE, defaultWeight)) : candiate.weight; - builder.add(scratchInts.get(), outputs.newPair(cost, candiate.payload)); - } - seenSurfaceForms.clear(); - count = 0; - } - - public FST> build() throws IOException { - return builder.finish(); - } - - public boolean hasPayloads() { - return hasPayloads; - } - - public int maxSurfaceFormsPerAnalyzedForm() { - return maxSurfaceFormsPerAnalyzedForm; - } - - } -} diff --git a/core/src/main/java/org/apache/lucene/search/suggest/analyzing/XFuzzySuggester.java b/core/src/main/java/org/apache/lucene/search/suggest/analyzing/XFuzzySuggester.java deleted file mode 100644 index 31a04b6fa68b5..0000000000000 --- a/core/src/main/java/org/apache/lucene/search/suggest/analyzing/XFuzzySuggester.java +++ /dev/null @@ -1,267 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */
-package org.apache.lucene.search.suggest.analyzing;
-
-import org.apache.lucene.analysis.Analyzer;
-import org.apache.lucene.analysis.TokenStreamToAutomaton;
-import org.apache.lucene.util.BytesRef;
-import org.apache.lucene.util.IntsRef;
-import org.apache.lucene.util.UnicodeUtil;
-import org.apache.lucene.util.automaton.Automata;
-import org.apache.lucene.util.automaton.Automaton;
-import org.apache.lucene.util.automaton.FiniteStringsIterator;
-import org.apache.lucene.util.automaton.LevenshteinAutomata;
-import org.apache.lucene.util.automaton.Operations;
-import org.apache.lucene.util.automaton.UTF32ToUTF8;
-import org.apache.lucene.util.fst.FST;
-import org.apache.lucene.util.fst.PairOutputs;
-
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.List;
-
-import static org.apache.lucene.util.automaton.Operations.DEFAULT_MAX_DETERMINIZED_STATES;
-
-/**
- * Implements a fuzzy {@link AnalyzingSuggester}. The similarity measurement is
- * based on the Damerau-Levenshtein (optimal string alignment) algorithm, though
- * you can explicitly choose classic Levenshtein by passing false
- * for the transpositions parameter.
- *
- * At most, this query will match terms up to
- * {@value org.apache.lucene.util.automaton.LevenshteinAutomata#MAXIMUM_SUPPORTED_DISTANCE}
- * edits. Higher distances are not supported. Note that the
- * fuzzy distance is measured in "byte space" on the bytes
- * returned by the {@link org.apache.lucene.analysis.TokenStream}'s {@link
- * org.apache.lucene.analysis.tokenattributes.TermToBytesRefAttribute}, usually UTF8. By default
- * the analyzed bytes must be at least 3 {@link
- * #DEFAULT_MIN_FUZZY_LENGTH} bytes before any edits are
- * considered. Furthermore, the first 1 {@link
- * #DEFAULT_NON_FUZZY_PREFIX} byte is not allowed to be
- * edited. We allow up to 1 {@link
- * #DEFAULT_MAX_EDITS} edit.
- * If the {@link #unicodeAware} parameter in the constructor is set to true, maxEdits,
- * minFuzzyLength, transpositions and nonFuzzyPrefix are measured in Unicode code
- * points (actual letters) instead of bytes.
- *
- *
- * NOTE: This suggester does not boost suggestions that
- * required no edits over suggestions that did require
- * edits. This is a known limitation.
- *
- *
- * Note: complex query analyzers can have a significant impact on lookup
- * performance. It is recommended not to use analyzers that drop or inject terms,
- * such as synonyms, in order to keep the complexity of the prefix intersection
- * low for good lookup performance. At index time, complex analyzers can safely be used.
- *
- */ -public final class XFuzzySuggester extends XAnalyzingSuggester { - private final int maxEdits; - private final boolean transpositions; - private final int nonFuzzyPrefix; - private final int minFuzzyLength; - private final boolean unicodeAware; - - /** - * Measure maxEdits, minFuzzyLength, transpositions and nonFuzzyPrefix - * parameters in Unicode code points (actual letters) - * instead of bytes. - */ - public static final boolean DEFAULT_UNICODE_AWARE = false; - - /** - * The default minimum length of the key passed to {@link - * #lookup} before any edits are allowed. - */ - public static final int DEFAULT_MIN_FUZZY_LENGTH = 3; - - /** - * The default prefix length where edits are not allowed. - */ - public static final int DEFAULT_NON_FUZZY_PREFIX = 1; - - /** - * The default maximum number of edits for fuzzy - * suggestions. - */ - public static final int DEFAULT_MAX_EDITS = 1; - - /** - * The default transposition value passed to {@link org.apache.lucene.util.automaton.LevenshteinAutomata} - */ - public static final boolean DEFAULT_TRANSPOSITIONS = true; - - /** - * Creates a {@link FuzzySuggester} instance initialized with default values. - * - * @param analyzer the analyzer used for this suggester - */ - public XFuzzySuggester(Analyzer analyzer) { - this(analyzer, analyzer); - } - - /** - * Creates a {@link FuzzySuggester} instance with an index & a query analyzer initialized with default values. - * - * @param indexAnalyzer - * Analyzer that will be used for analyzing suggestions while building the index. - * @param queryAnalyzer - * Analyzer that will be used for analyzing query text during lookup - */ - public XFuzzySuggester(Analyzer indexAnalyzer, Analyzer queryAnalyzer) { - this(indexAnalyzer, null, queryAnalyzer, EXACT_FIRST | PRESERVE_SEP, 256, -1, - DEFAULT_MAX_EDITS, DEFAULT_TRANSPOSITIONS, - DEFAULT_NON_FUZZY_PREFIX, DEFAULT_MIN_FUZZY_LENGTH, DEFAULT_UNICODE_AWARE, - null, false, 0, SEP_LABEL, PAYLOAD_SEP, END_BYTE, HOLE_CHARACTER); - - } - - /** - * Creates a {@link FuzzySuggester} instance. - * - * @param indexAnalyzer Analyzer that will be used for - * analyzing suggestions while building the index. - * @param queryAnalyzer Analyzer that will be used for - * analyzing query text during lookup - * @param options see {@link #EXACT_FIRST}, {@link #PRESERVE_SEP} - * @param maxSurfaceFormsPerAnalyzedForm Maximum number of - * surface forms to keep for a single analyzed form. - * When there are too many surface forms we discard the - * lowest weighted ones. - * @param maxGraphExpansions Maximum number of graph paths - * to expand from the analyzed form. Set this to -1 for - * no limit. - * @param maxEdits must be >= 0 and <= {@link org.apache.lucene.util.automaton.LevenshteinAutomata#MAXIMUM_SUPPORTED_DISTANCE} . - * @param transpositions true if transpositions should be treated as a primitive - * edit operation. If this is false, comparisons will implement the classic - * Levenshtein algorithm. 
- * @param nonFuzzyPrefix length of common (non-fuzzy) prefix (see default {@link #DEFAULT_NON_FUZZY_PREFIX})
- * @param minFuzzyLength minimum length of lookup key before any edits are allowed (see default {@link #DEFAULT_MIN_FUZZY_LENGTH})
- * @param sepLabel separation label
- * @param payloadSep payload separator byte
- * @param endByte end byte marker byte
- */
- public XFuzzySuggester(Analyzer indexAnalyzer, Automaton queryPrefix, Analyzer queryAnalyzer,
- int options, int maxSurfaceFormsPerAnalyzedForm, int maxGraphExpansions,
- int maxEdits, boolean transpositions, int nonFuzzyPrefix, int minFuzzyLength,
- boolean unicodeAware, FST> fst, boolean hasPayloads,
- int maxAnalyzedPathsForOneInput, int sepLabel, int payloadSep, int endByte, int holeCharacter) {
- super(indexAnalyzer, queryPrefix, queryAnalyzer, options, maxSurfaceFormsPerAnalyzedForm, maxGraphExpansions,
- true, fst, hasPayloads, maxAnalyzedPathsForOneInput, sepLabel, payloadSep, endByte, holeCharacter);
- if (maxEdits < 0 || maxEdits > LevenshteinAutomata.MAXIMUM_SUPPORTED_DISTANCE) {
- throw new IllegalArgumentException(
- "maxEdits must be between 0 and " + LevenshteinAutomata.MAXIMUM_SUPPORTED_DISTANCE);
- }
- if (nonFuzzyPrefix < 0) {
- throw new IllegalArgumentException("nonFuzzyPrefix must not be < 0 (got " + nonFuzzyPrefix + ")");
- }
- if (minFuzzyLength < 0) {
- throw new IllegalArgumentException("minFuzzyLength must not be < 0 (got " + minFuzzyLength + ")");
- }
-
- this.maxEdits = maxEdits;
- this.transpositions = transpositions;
- this.nonFuzzyPrefix = nonFuzzyPrefix;
- this.minFuzzyLength = minFuzzyLength;
- this.unicodeAware = unicodeAware;
- }
-
- @Override
- protected List>> getFullPrefixPaths(
- List>> prefixPaths, Automaton lookupAutomaton,
- FST> fst)
- throws IOException {
-
- // TODO: right now there's no penalty for fuzzy/edits,
- // ie a completion whose prefix matched exactly what the
- // user typed gets no boost over completions that
- // required an edit, which get no boost over completions
- // requiring two edits. I suspect a multiplicative
- // factor is appropriate (eg, say a fuzzy match must be at
- // least 2X better weight than the non-fuzzy match to
- // "compete") ... in which case I think the wFST needs
- // to be log weights or something ...
- - Automaton levA = convertAutomaton(toLevenshteinAutomata(lookupAutomaton)); - /* - Writer w = new OutputStreamWriter(new FileOutputStream("out.dot"), "UTF-8"); - w.write(levA.toDot()); - w.close(); - System.out.println("Wrote LevA to out.dot"); - */ - return FSTUtil.intersectPrefixPaths(levA, fst); - } - - @Override - protected Automaton convertAutomaton(Automaton a) { - if (unicodeAware) { - // FLORIAN EDIT: get converted Automaton from superclass - Automaton utf8automaton = new UTF32ToUTF8().convert(super.convertAutomaton(a)); - // This automaton should not blow up during determinize: - utf8automaton = Operations.determinize(utf8automaton, Integer.MAX_VALUE); - return utf8automaton; - } else { - return super.convertAutomaton(a); - } - } - - @Override - public TokenStreamToAutomaton getTokenStreamToAutomaton() { - final TokenStreamToAutomaton tsta = super.getTokenStreamToAutomaton(); - tsta.setUnicodeArcs(unicodeAware); - return tsta; - } - - Automaton toLevenshteinAutomata(Automaton automaton) { - List subs = new ArrayList<>(); - FiniteStringsIterator finiteStrings = new FiniteStringsIterator(automaton); - for (IntsRef string; (string = finiteStrings.next()) != null;) { - if (string.length <= nonFuzzyPrefix || string.length < minFuzzyLength) { - subs.add(Automata.makeString(string.ints, string.offset, string.length)); - } else { - int ints[] = new int[string.length-nonFuzzyPrefix]; - System.arraycopy(string.ints, string.offset+nonFuzzyPrefix, ints, 0, ints.length); - // TODO: maybe add alphaMin to LevenshteinAutomata, - // and pass 1 instead of 0? We probably don't want - // to allow the trailing dedup bytes to be - // edited... but then 0 byte is "in general" allowed - // on input (but not in UTF8). - LevenshteinAutomata lev = new LevenshteinAutomata( - ints, unicodeAware ? Character.MAX_CODE_POINT : 255, transpositions); - subs.add(lev.toAutomaton(maxEdits, UnicodeUtil.newString(string.ints, string.offset, nonFuzzyPrefix))); - } - } - - if (subs.isEmpty()) { - // automaton is empty, there is no accepted paths through it - return Automata.makeEmpty(); // matches nothing - } else if (subs.size() == 1) { - // no synonyms or anything: just a single path through the tokenstream - return subs.get(0); - } else { - // multiple paths: this is really scary! is it slow? - // maybe we should not do this and throw UOE? - Automaton a = Operations.union(subs); - // TODO: we could call toLevenshteinAutomata() before det? - // this only happens if you have multiple paths anyway (e.g. 
synonyms) - return Operations.determinize(a, DEFAULT_MAX_DETERMINIZED_STATES); - } - } -} diff --git a/core/src/main/java/org/apache/lucene/search/uhighlight/CustomUnifiedHighlighter.java b/core/src/main/java/org/apache/lucene/search/uhighlight/CustomUnifiedHighlighter.java index dc33556fa5ff2..2c8169c3ac41f 100644 --- a/core/src/main/java/org/apache/lucene/search/uhighlight/CustomUnifiedHighlighter.java +++ b/core/src/main/java/org/apache/lucene/search/uhighlight/CustomUnifiedHighlighter.java @@ -35,9 +35,9 @@ import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.automaton.CharacterRunAutomaton; import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.lucene.all.AllTermQuery; import org.elasticsearch.common.lucene.search.MultiPhrasePrefixQuery; import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery; +import org.elasticsearch.index.search.ESToParentBlockJoinQuery; import java.io.IOException; import java.text.BreakIterator; @@ -205,11 +205,10 @@ private Collection rewriteCustomQuery(Query query) { tqs.add(new TermQuery(term)); } return tqs; - } else if (query instanceof AllTermQuery) { - AllTermQuery atq = (AllTermQuery) query; - return Collections.singletonList(new TermQuery(atq.getTerm())); } else if (query instanceof FunctionScoreQuery) { return Collections.singletonList(((FunctionScoreQuery) query).getSubQuery()); + } else if (query instanceof ESToParentBlockJoinQuery) { + return Collections.singletonList(((ESToParentBlockJoinQuery) query).getChildQuery()); } else { return null; } diff --git a/core/src/main/java/org/apache/lucene/search/vectorhighlight/CustomFieldQuery.java b/core/src/main/java/org/apache/lucene/search/vectorhighlight/CustomFieldQuery.java index 25329424833e8..5e877fbce4096 100644 --- a/core/src/main/java/org/apache/lucene/search/vectorhighlight/CustomFieldQuery.java +++ b/core/src/main/java/org/apache/lucene/search/vectorhighlight/CustomFieldQuery.java @@ -30,6 +30,7 @@ import org.apache.lucene.search.Query; import org.apache.lucene.search.SynonymQuery; import org.apache.lucene.search.TermQuery; +import org.apache.lucene.search.join.ToParentBlockJoinQuery; import org.apache.lucene.search.spans.SpanTermQuery; import org.elasticsearch.common.lucene.search.MultiPhrasePrefixQuery; import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery; @@ -90,6 +91,11 @@ void flatten(Query sourceQuery, IndexReader reader, Collection flatQuerie for (Term term : synQuery.getTerms()) { flatten(new TermQuery(term), reader, flatQueries, boost); } + } else if (sourceQuery instanceof ESToParentBlockJoinQuery) { + Query childQuery = ((ESToParentBlockJoinQuery) sourceQuery).getChildQuery(); + if (childQuery != null) { + flatten(childQuery, reader, flatQueries, boost); + } } else { super.flatten(sourceQuery, reader, flatQueries, boost); } diff --git a/core/src/main/java/org/elasticsearch/Build.java b/core/src/main/java/org/elasticsearch/Build.java index bef9fafe3ca70..7e46b340dfc01 100644 --- a/core/src/main/java/org/elasticsearch/Build.java +++ b/core/src/main/java/org/elasticsearch/Build.java @@ -19,6 +19,7 @@ package org.elasticsearch; +import org.elasticsearch.common.Booleans; import org.elasticsearch.common.io.FileSystemUtils; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -59,7 +60,18 @@ public class Build { // not running from the official elasticsearch jar file (unit tests, IDE, uber client jar, shadiness) shortHash = "Unknown"; date = "Unknown"; - 
isSnapshot = true; + final String buildSnapshot = System.getProperty("build.snapshot"); + if (buildSnapshot != null) { + try { + Class.forName("com.carrotsearch.randomizedtesting.RandomizedContext"); + } catch (final ClassNotFoundException e) { + // we are not in tests but build.snapshot is set, bail hard + throw new IllegalStateException("build.snapshot set to [" + buildSnapshot + "] but not running tests"); + } + isSnapshot = Booleans.parseBoolean(buildSnapshot); + } else { + isSnapshot = true; + } } if (shortHash == null) { throw new IllegalStateException("Error finding the build shortHash. " + diff --git a/core/src/main/java/org/elasticsearch/Version.java b/core/src/main/java/org/elasticsearch/Version.java index e3d0546e60df3..687061c3e7fa1 100644 --- a/core/src/main/java/org/elasticsearch/Version.java +++ b/core/src/main/java/org/elasticsearch/Version.java @@ -88,8 +88,20 @@ public class Version implements Comparable { public static final Version V_5_5_1 = new Version(V_5_5_1_ID, org.apache.lucene.util.Version.LUCENE_6_6_0); public static final int V_5_5_2_ID = 5050299; public static final Version V_5_5_2 = new Version(V_5_5_2_ID, org.apache.lucene.util.Version.LUCENE_6_6_0); + public static final int V_5_5_3_ID = 5050399; + public static final Version V_5_5_3 = new Version(V_5_5_3_ID, org.apache.lucene.util.Version.LUCENE_6_6_0); public static final int V_5_6_0_ID = 5060099; public static final Version V_5_6_0 = new Version(V_5_6_0_ID, org.apache.lucene.util.Version.LUCENE_6_6_0); + public static final int V_5_6_1_ID = 5060199; + public static final Version V_5_6_1 = new Version(V_5_6_1_ID, org.apache.lucene.util.Version.LUCENE_6_6_1); + public static final int V_5_6_2_ID = 5060299; + public static final Version V_5_6_2 = new Version(V_5_6_2_ID, org.apache.lucene.util.Version.LUCENE_6_6_1); + public static final int V_5_6_3_ID = 5060399; + public static final Version V_5_6_3 = new Version(V_5_6_3_ID, org.apache.lucene.util.Version.LUCENE_6_6_1); + public static final int V_5_6_4_ID = 5060499; + public static final Version V_5_6_4 = new Version(V_5_6_4_ID, org.apache.lucene.util.Version.LUCENE_6_6_1); + public static final int V_5_6_5_ID = 5060599; + public static final Version V_5_6_5 = new Version(V_5_6_5_ID, org.apache.lucene.util.Version.LUCENE_6_6_1); public static final int V_6_0_0_alpha1_ID = 6000001; public static final Version V_6_0_0_alpha1 = new Version(V_6_0_0_alpha1_ID, org.apache.lucene.util.Version.LUCENE_7_0_0); @@ -102,12 +114,21 @@ public class Version implements Comparable { public static final int V_6_0_0_beta2_ID = 6000027; public static final Version V_6_0_0_beta2 = new Version(V_6_0_0_beta2_ID, org.apache.lucene.util.Version.LUCENE_7_0_0); + public static final int V_6_0_0_rc1_ID = 6000051; + public static final Version V_6_0_0_rc1 = + new Version(V_6_0_0_rc1_ID, org.apache.lucene.util.Version.LUCENE_7_0_0); + public static final int V_6_0_0_rc2_ID = 6000052; + public static final Version V_6_0_0_rc2 = + new Version(V_6_0_0_rc2_ID, org.apache.lucene.util.Version.LUCENE_7_0_1); + public static final int V_6_0_0_ID = 6000099; + public static final Version V_6_0_0 = + new Version(V_6_0_0_ID, org.apache.lucene.util.Version.LUCENE_7_0_1); public static final int V_6_1_0_ID = 6010099; public static final Version V_6_1_0 = - new Version(V_6_1_0_ID, org.apache.lucene.util.Version.LUCENE_7_0_0); + new Version(V_6_1_0_ID, org.apache.lucene.util.Version.LUCENE_7_1_0); public static final int V_7_0_0_alpha1_ID = 7000001; public static final Version V_7_0_0_alpha1 = - new 
Version(V_7_0_0_alpha1_ID, org.apache.lucene.util.Version.LUCENE_7_0_0); + new Version(V_7_0_0_alpha1_ID, org.apache.lucene.util.Version.LUCENE_7_1_0); public static final Version CURRENT = V_7_0_0_alpha1; // unreleased versions must be added to the above list with the suffix _UNRELEASED (with the exception of CURRENT) @@ -127,16 +148,34 @@ public static Version fromId(int id) { return V_7_0_0_alpha1; case V_6_1_0_ID: return V_6_1_0; + case V_6_0_0_ID: + return V_6_0_0; + case V_6_0_0_rc2_ID: + return V_6_0_0_rc2; case V_6_0_0_beta2_ID: return V_6_0_0_beta2; + case V_6_0_0_rc1_ID: + return V_6_0_0_rc1; case V_6_0_0_beta1_ID: return V_6_0_0_beta1; case V_6_0_0_alpha2_ID: return V_6_0_0_alpha2; case V_6_0_0_alpha1_ID: return V_6_0_0_alpha1; + case V_5_6_5_ID: + return V_5_6_5; + case V_5_6_4_ID: + return V_5_6_4; + case V_5_6_3_ID: + return V_5_6_3; + case V_5_6_2_ID: + return V_5_6_2; + case V_5_6_1_ID: + return V_5_6_1; case V_5_6_0_ID: return V_5_6_0; + case V_5_5_3_ID: + return V_5_5_3; case V_5_5_2_ID: return V_5_5_2; case V_5_5_1_ID: diff --git a/core/src/main/java/org/elasticsearch/action/ActionListener.java b/core/src/main/java/org/elasticsearch/action/ActionListener.java index f9fafa9f95a2e..fa32ab417737c 100644 --- a/core/src/main/java/org/elasticsearch/action/ActionListener.java +++ b/core/src/main/java/org/elasticsearch/action/ActionListener.java @@ -45,7 +45,7 @@ public interface ActionListener { * Creates a listener that listens for a response (or failure) and executes the * corresponding consumer when the response (or failure) is received. * - * @param onResponse the consumer of the response, when the listener receives one + * @param onResponse the checked consumer of the response, when the listener receives one * @param onFailure the consumer of the failure, when the listener receives one * @param the type of the response * @return a listener that listens for responses and invokes the consumer when received diff --git a/core/src/main/java/org/elasticsearch/action/ActionModule.java b/core/src/main/java/org/elasticsearch/action/ActionModule.java index 6cfd89d2d26bd..28fd3458b902a 100644 --- a/core/src/main/java/org/elasticsearch/action/ActionModule.java +++ b/core/src/main/java/org/elasticsearch/action/ActionModule.java @@ -128,7 +128,9 @@ import org.elasticsearch.action.admin.indices.settings.put.UpdateSettingsAction; import org.elasticsearch.action.admin.indices.shards.IndicesShardStoresAction; import org.elasticsearch.action.admin.indices.shards.TransportIndicesShardStoresAction; +import org.elasticsearch.action.admin.indices.shrink.ResizeAction; import org.elasticsearch.action.admin.indices.shrink.ShrinkAction; +import org.elasticsearch.action.admin.indices.shrink.TransportResizeAction; import org.elasticsearch.action.admin.indices.shrink.TransportShrinkAction; import org.elasticsearch.action.admin.indices.stats.IndicesStatsAction; import org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction; @@ -181,7 +183,6 @@ import org.elasticsearch.action.search.TransportMultiSearchAction; import org.elasticsearch.action.search.TransportSearchAction; import org.elasticsearch.action.search.TransportSearchScrollAction; -import org.elasticsearch.action.support.ActionFilter; import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.action.support.AutoCreateIndex; import org.elasticsearch.action.support.DestructiveOperations; @@ -199,7 +200,6 @@ import org.elasticsearch.common.NamedRegistry; import org.elasticsearch.common.inject.AbstractModule; 
import org.elasticsearch.common.inject.multibindings.MapBinder; -import org.elasticsearch.common.inject.multibindings.Multibinder; import org.elasticsearch.common.logging.ESLoggerFactory; import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.IndexScopedSettings; @@ -271,6 +271,7 @@ import org.elasticsearch.rest.action.admin.indices.RestRefreshAction; import org.elasticsearch.rest.action.admin.indices.RestRolloverIndexAction; import org.elasticsearch.rest.action.admin.indices.RestShrinkIndexAction; +import org.elasticsearch.rest.action.admin.indices.RestSplitIndexAction; import org.elasticsearch.rest.action.admin.indices.RestSyncedFlushAction; import org.elasticsearch.rest.action.admin.indices.RestUpdateSettingsAction; import org.elasticsearch.rest.action.admin.indices.RestUpgradeAction; @@ -315,6 +316,7 @@ import org.elasticsearch.usage.UsageService; import java.util.ArrayList; +import java.util.Collections; import java.util.List; import java.util.Map; import java.util.Set; @@ -323,7 +325,6 @@ import java.util.function.UnaryOperator; import java.util.stream.Collectors; -import static java.util.Collections.unmodifiableList; import static java.util.Collections.unmodifiableMap; /** @@ -341,7 +342,7 @@ public class ActionModule extends AbstractModule { private final SettingsFilter settingsFilter; private final List actionPlugins; private final Map> actions; - private final List> actionFilters; + private final ActionFilters actionFilters; private final AutoCreateIndex autoCreateIndex; private final DestructiveOperations destructiveOperations; private final RestController restController; @@ -437,6 +438,7 @@ public void reg actions.register(IndicesShardStoresAction.INSTANCE, TransportIndicesShardStoresAction.class); actions.register(CreateIndexAction.INSTANCE, TransportCreateIndexAction.class); actions.register(ShrinkAction.INSTANCE, TransportShrinkAction.class); + actions.register(ResizeAction.INSTANCE, TransportResizeAction.class); actions.register(RolloverAction.INSTANCE, TransportRolloverAction.class); actions.register(DeleteIndexAction.INSTANCE, TransportDeleteIndexAction.class); actions.register(GetIndexAction.INSTANCE, TransportGetIndexAction.class); @@ -503,8 +505,9 @@ public void reg return unmodifiableMap(actions.getRegistry()); } - private List> setupActionFilters(List actionPlugins) { - return unmodifiableList(actionPlugins.stream().flatMap(p -> p.getActionFilters().stream()).collect(Collectors.toList())); + private ActionFilters setupActionFilters(List actionPlugins) { + return new ActionFilters( + Collections.unmodifiableSet(actionPlugins.stream().flatMap(p -> p.getActionFilters().stream()).collect(Collectors.toSet()))); } public void initRestHandlers(Supplier nodesInCluster) { @@ -552,6 +555,7 @@ public void initRestHandlers(Supplier nodesInCluster) { registerHandler.accept(new RestIndicesAliasesAction(settings, restController)); registerHandler.accept(new RestCreateIndexAction(settings, restController)); registerHandler.accept(new RestShrinkIndexAction(settings, restController)); + registerHandler.accept(new RestSplitIndexAction(settings, restController)); registerHandler.accept(new RestRolloverIndexAction(settings, restController)); registerHandler.accept(new RestDeleteIndexAction(settings, restController)); registerHandler.accept(new RestCloseIndexAction(settings, restController)); @@ -649,11 +653,7 @@ public void initRestHandlers(Supplier nodesInCluster) { @Override protected void configure() { - Multibinder actionFilterMultibinder = 
Multibinder.newSetBinder(binder(), ActionFilter.class); - for (Class actionFilter : actionFilters) { - actionFilterMultibinder.addBinding().to(actionFilter); - } - bind(ActionFilters.class).asEagerSingleton(); + bind(ActionFilters.class).toInstance(actionFilters); bind(DestructiveOperations.class).toInstance(destructiveOperations); if (false == transportClient) { @@ -676,6 +676,10 @@ protected void configure() { } } + public ActionFilters getActionFilters() { + return actionFilters; + } + public RestController getRestController() { return restController; } diff --git a/core/src/main/java/org/elasticsearch/action/ActionRequest.java b/core/src/main/java/org/elasticsearch/action/ActionRequest.java index 769b2e7b5738c..f5f10c7bcfa9d 100644 --- a/core/src/main/java/org/elasticsearch/action/ActionRequest.java +++ b/core/src/main/java/org/elasticsearch/action/ActionRequest.java @@ -34,6 +34,10 @@ public ActionRequest() { // this.listenerThreaded = request.listenerThreaded(); } + public ActionRequest(StreamInput in) throws IOException { + super(in); + } + public abstract ActionRequestValidationException validate(); /** diff --git a/core/src/main/java/org/elasticsearch/action/DocWriteResponse.java b/core/src/main/java/org/elasticsearch/action/DocWriteResponse.java index 6b1cf09bd736a..69ba6db63ef07 100644 --- a/core/src/main/java/org/elasticsearch/action/DocWriteResponse.java +++ b/core/src/main/java/org/elasticsearch/action/DocWriteResponse.java @@ -33,7 +33,7 @@ import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexSettings; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.rest.RestStatus; @@ -60,7 +60,7 @@ public abstract class DocWriteResponse extends ReplicationResponse implements Wr private static final String FORCED_REFRESH = "forced_refresh"; /** - * An enum that represents the the results of CRUD operations, primarily used to communicate the type of + * An enum that represents the results of CRUD operations, primarily used to communicate the type of * operation that occurred. */ public enum Result implements Writeable { @@ -176,7 +176,7 @@ public long getVersion() { } /** - * Returns the sequence number assigned for this change. Returns {@link SequenceNumbersService#UNASSIGNED_SEQ_NO} if the operation + * Returns the sequence number assigned for this change. Returns {@link SequenceNumbers#UNASSIGNED_SEQ_NO} if the operation * wasn't performed (i.e., an update operation that resulted in a NOOP). 
*/ public long getSeqNo() { @@ -263,7 +263,7 @@ public void readFrom(StreamInput in) throws IOException { seqNo = in.readZLong(); primaryTerm = in.readVLong(); } else { - seqNo = SequenceNumbersService.UNASSIGNED_SEQ_NO; + seqNo = SequenceNumbers.UNASSIGNED_SEQ_NO; primaryTerm = 0; } forcedRefresh = in.readBoolean(); @@ -375,7 +375,7 @@ public abstract static class Builder { protected Result result = null; protected boolean forcedRefresh; protected ShardInfo shardInfo = null; - protected Long seqNo = SequenceNumbersService.UNASSIGNED_SEQ_NO; + protected Long seqNo = SequenceNumbers.UNASSIGNED_SEQ_NO; protected Long primaryTerm = 0L; public ShardId getShardId() { diff --git a/core/src/main/java/org/elasticsearch/action/TaskOperationFailure.java b/core/src/main/java/org/elasticsearch/action/TaskOperationFailure.java index 8c8f263c34db0..885647441d01f 100644 --- a/core/src/main/java/org/elasticsearch/action/TaskOperationFailure.java +++ b/core/src/main/java/org/elasticsearch/action/TaskOperationFailure.java @@ -24,7 +24,8 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.RestStatus; @@ -37,7 +38,7 @@ * * The class is final due to serialization limitations */ -public final class TaskOperationFailure implements Writeable, ToXContent { +public final class TaskOperationFailure implements Writeable, ToXContentFragment { private final String nodeId; diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainRequest.java index aea1ee57dca87..40960c3362086 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainRequest.java @@ -67,6 +67,17 @@ public ClusterAllocationExplainRequest() { this.currentNode = null; } + public ClusterAllocationExplainRequest(StreamInput in) throws IOException { + super(in); + checkVersion(in.getVersion()); + this.index = in.readOptionalString(); + this.shard = in.readOptionalVInt(); + this.primary = in.readOptionalBoolean(); + this.currentNode = in.readOptionalString(); + this.includeYesDecisions = in.readBoolean(); + this.includeDiskInfo = in.readBoolean(); + } + /** * Create a new allocation explain request. If {@code primary} is false, the first unassigned replica * will be picked for explanation. 
If no replicas are unassigned, the first assigned replica will @@ -81,6 +92,18 @@ public ClusterAllocationExplainRequest() { this.currentNode = currentNode; } + @Override + public void writeTo(StreamOutput out) throws IOException { + checkVersion(out.getVersion()); + super.writeTo(out); + out.writeOptionalString(index); + out.writeOptionalVInt(shard); + out.writeOptionalBoolean(primary); + out.writeOptionalString(currentNode); + out.writeBoolean(includeYesDecisions); + out.writeBoolean(includeDiskInfo); + } + @Override public ActionRequestValidationException validate() { ActionRequestValidationException validationException = null; @@ -226,26 +249,7 @@ public static ClusterAllocationExplainRequest parse(XContentParser parser) throw @Override public void readFrom(StreamInput in) throws IOException { - checkVersion(in.getVersion()); - super.readFrom(in); - this.index = in.readOptionalString(); - this.shard = in.readOptionalVInt(); - this.primary = in.readOptionalBoolean(); - this.currentNode = in.readOptionalString(); - this.includeYesDecisions = in.readBoolean(); - this.includeDiskInfo = in.readBoolean(); - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - checkVersion(out.getVersion()); - super.writeTo(out); - out.writeOptionalString(index); - out.writeOptionalVInt(shard); - out.writeOptionalBoolean(primary); - out.writeOptionalString(currentNode); - out.writeBoolean(includeYesDecisions); - out.writeBoolean(includeDiskInfo); + throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable"); } private void checkVersion(Version version) { diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/TransportClusterAllocationExplainAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/TransportClusterAllocationExplainAction.java index 77d4b24d8cc23..6aaece4c986f2 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/TransportClusterAllocationExplainAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/TransportClusterAllocationExplainAction.java @@ -66,7 +66,7 @@ public TransportClusterAllocationExplainAction(Settings settings, TransportServi ClusterInfoService clusterInfoService, AllocationDeciders allocationDeciders, ShardsAllocator shardAllocator, GatewayAllocator gatewayAllocator) { super(settings, ClusterAllocationExplainAction.NAME, transportService, clusterService, threadPool, actionFilters, - indexNameExpressionResolver, ClusterAllocationExplainRequest::new); + ClusterAllocationExplainRequest::new, indexNameExpressionResolver); this.clusterInfoService = clusterInfoService; this.allocationDeciders = allocationDeciders; this.shardAllocator = shardAllocator; @@ -94,7 +94,7 @@ protected void masterOperation(final ClusterAllocationExplainRequest request, fi final RoutingNodes routingNodes = state.getRoutingNodes(); final ClusterInfo clusterInfo = clusterInfoService.getClusterInfo(); final RoutingAllocation allocation = new RoutingAllocation(allocationDeciders, routingNodes, state, - clusterInfo, System.nanoTime(), false); + clusterInfo, System.nanoTime()); ShardRouting shardRouting = findShardToExplain(request, allocation); logger.debug("explaining the allocation for [{}], found shard [{}]", request, shardRouting); diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/health/ClusterHealthRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/health/ClusterHealthRequest.java index 
ef206d0183b99..fc597a5aa7750 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/health/ClusterHealthRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/health/ClusterHealthRequest.java @@ -51,6 +51,58 @@ public ClusterHealthRequest(String... indices) { this.indices = indices; } + public ClusterHealthRequest(StreamInput in) throws IOException { + super(in); + int size = in.readVInt(); + if (size == 0) { + indices = Strings.EMPTY_ARRAY; + } else { + indices = new String[size]; + for (int i = 0; i < indices.length; i++) { + indices[i] = in.readString(); + } + } + timeout = new TimeValue(in); + if (in.readBoolean()) { + waitForStatus = ClusterHealthStatus.fromValue(in.readByte()); + } + waitForNoRelocatingShards = in.readBoolean(); + waitForActiveShards = ActiveShardCount.readFrom(in); + waitForNodes = in.readString(); + if (in.readBoolean()) { + waitForEvents = Priority.readFrom(in); + } + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + if (indices == null) { + out.writeVInt(0); + } else { + out.writeVInt(indices.length); + for (String index : indices) { + out.writeString(index); + } + } + timeout.writeTo(out); + if (waitForStatus == null) { + out.writeBoolean(false); + } else { + out.writeBoolean(true); + out.writeByte(waitForStatus.value()); + } + out.writeBoolean(waitForNoRelocatingShards); + waitForActiveShards.writeTo(out); + out.writeString(waitForNodes); + if (waitForEvents == null) { + out.writeBoolean(false); + } else { + out.writeBoolean(true); + Priority.writeTo(waitForEvents, out); + } + } + @Override public String[] indices() { return indices; @@ -174,54 +226,6 @@ public ActionRequestValidationException validate() { @Override public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - int size = in.readVInt(); - if (size == 0) { - indices = Strings.EMPTY_ARRAY; - } else { - indices = new String[size]; - for (int i = 0; i < indices.length; i++) { - indices[i] = in.readString(); - } - } - timeout = new TimeValue(in); - if (in.readBoolean()) { - waitForStatus = ClusterHealthStatus.fromValue(in.readByte()); - } - waitForNoRelocatingShards = in.readBoolean(); - waitForActiveShards = ActiveShardCount.readFrom(in); - waitForNodes = in.readString(); - if (in.readBoolean()) { - waitForEvents = Priority.readFrom(in); - } - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - if (indices == null) { - out.writeVInt(0); - } else { - out.writeVInt(indices.length); - for (String index : indices) { - out.writeString(index); - } - } - timeout.writeTo(out); - if (waitForStatus == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - out.writeByte(waitForStatus.value()); - } - out.writeBoolean(waitForNoRelocatingShards); - waitForActiveShards.writeTo(out); - out.writeString(waitForNodes); - if (waitForEvents == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - Priority.writeTo(waitForEvents, out); - } + throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable"); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/health/TransportClusterHealthAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/health/TransportClusterHealthAction.java index 8924f81a86cea..6b3ab05a18383 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/health/TransportClusterHealthAction.java +++ 
b/core/src/main/java/org/elasticsearch/action/admin/cluster/health/TransportClusterHealthAction.java @@ -56,7 +56,7 @@ public TransportClusterHealthAction(Settings settings, TransportService transpor ThreadPool threadPool, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, GatewayAllocator gatewayAllocator) { super(settings, ClusterHealthAction.NAME, false, transportService, clusterService, threadPool, actionFilters, - indexNameExpressionResolver, ClusterHealthRequest::new); + ClusterHealthRequest::new, indexNameExpressionResolver); this.gatewayAllocator = gatewayAllocator; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/NodesInfoResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/NodesInfoResponse.java index f233494e1c60c..a7f4ea25fdbee 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/NodesInfoResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/NodesInfoResponse.java @@ -26,7 +26,8 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; @@ -34,7 +35,7 @@ import java.util.List; import java.util.Map; -public class NodesInfoResponse extends BaseNodesResponse implements ToXContent { +public class NodesInfoResponse extends BaseNodesResponse implements ToXContentFragment { public NodesInfoResponse() { } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/PluginsAndModules.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/PluginsAndModules.java index 206dd262ed8a6..e562adf2602c1 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/PluginsAndModules.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/PluginsAndModules.java @@ -22,7 +22,8 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.plugins.PluginInfo; @@ -34,7 +35,7 @@ /** * Information about plugins and modules */ -public class PluginsAndModules implements Writeable, ToXContent { +public class PluginsAndModules implements Writeable, ToXContentFragment { private final List plugins; private final List modules; diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodeStats.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodeStats.java index 9e301368453eb..750cf609dc67d 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodeStats.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodeStats.java @@ -19,12 +19,14 @@ package org.elasticsearch.action.admin.cluster.node.stats; +import org.elasticsearch.Version; import org.elasticsearch.action.support.nodes.BaseNodeResponse; import 
org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.discovery.DiscoveryStats; import org.elasticsearch.http.HttpStats; @@ -35,6 +37,7 @@ import org.elasticsearch.monitor.jvm.JvmStats; import org.elasticsearch.monitor.os.OsStats; import org.elasticsearch.monitor.process.ProcessStats; +import org.elasticsearch.node.AdaptiveSelectionStats; import org.elasticsearch.script.ScriptStats; import org.elasticsearch.threadpool.ThreadPoolStats; import org.elasticsearch.transport.TransportStats; @@ -45,7 +48,7 @@ /** * Node statistics (dynamic, changes depending on when created). */ -public class NodeStats extends BaseNodeResponse implements ToXContent { +public class NodeStats extends BaseNodeResponse implements ToXContentFragment { private long timestamp; @@ -85,6 +88,9 @@ public class NodeStats extends BaseNodeResponse implements ToXContent { @Nullable private IngestStats ingestStats; + @Nullable + private AdaptiveSelectionStats adaptiveSelectionStats; + NodeStats() { } @@ -94,7 +100,8 @@ public NodeStats(DiscoveryNode node, long timestamp, @Nullable NodeIndicesStats @Nullable AllCircuitBreakerStats breaker, @Nullable ScriptStats scriptStats, @Nullable DiscoveryStats discoveryStats, - @Nullable IngestStats ingestStats) { + @Nullable IngestStats ingestStats, + @Nullable AdaptiveSelectionStats adaptiveSelectionStats) { super(node); this.timestamp = timestamp; this.indices = indices; @@ -109,6 +116,7 @@ public NodeStats(DiscoveryNode node, long timestamp, @Nullable NodeIndicesStats this.scriptStats = scriptStats; this.discoveryStats = discoveryStats; this.ingestStats = ingestStats; + this.adaptiveSelectionStats = adaptiveSelectionStats; } public long getTimestamp() { @@ -198,6 +206,11 @@ public IngestStats getIngestStats() { return ingestStats; } + @Nullable + public AdaptiveSelectionStats getAdaptiveSelectionStats() { + return adaptiveSelectionStats; + } + public static NodeStats readNodeStats(StreamInput in) throws IOException { NodeStats nodeInfo = new NodeStats(); nodeInfo.readFrom(in); @@ -222,6 +235,11 @@ public void readFrom(StreamInput in) throws IOException { scriptStats = in.readOptionalWriteable(ScriptStats::new); discoveryStats = in.readOptionalWriteable(DiscoveryStats::new); ingestStats = in.readOptionalWriteable(IngestStats::new); + if (in.getVersion().onOrAfter(Version.V_6_1_0)) { + adaptiveSelectionStats = in.readOptionalWriteable(AdaptiveSelectionStats::new); + } else { + adaptiveSelectionStats = null; + } } @Override @@ -245,6 +263,9 @@ public void writeTo(StreamOutput out) throws IOException { out.writeOptionalWriteable(scriptStats); out.writeOptionalWriteable(discoveryStats); out.writeOptionalWriteable(ingestStats); + if (out.getVersion().onOrAfter(Version.V_6_1_0)) { + out.writeOptionalWriteable(adaptiveSelectionStats); + } } @Override @@ -305,6 +326,9 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws if (getIngestStats() != null) { getIngestStats().toXContent(builder, params); } + if (getAdaptiveSelectionStats() != null) { + getAdaptiveSelectionStats().toXContent(builder, params); + } return builder; } } diff --git 
a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsRequest.java index a2098d1736113..54ef5b65977a2 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsRequest.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.admin.cluster.node.stats; +import org.elasticsearch.Version; import org.elasticsearch.action.admin.indices.stats.CommonStatsFlags; import org.elasticsearch.action.support.nodes.BaseNodesRequest; import org.elasticsearch.common.io.stream.StreamInput; @@ -43,6 +44,7 @@ public class NodesStatsRequest extends BaseNodesRequest { private boolean script; private boolean discovery; private boolean ingest; + private boolean adaptiveSelection; public NodesStatsRequest() { } @@ -71,6 +73,7 @@ public NodesStatsRequest all() { this.script = true; this.discovery = true; this.ingest = true; + this.adaptiveSelection = true; return this; } @@ -90,6 +93,7 @@ public NodesStatsRequest clear() { this.script = false; this.discovery = false; this.ingest = false; + this.adaptiveSelection = false; return this; } @@ -265,6 +269,18 @@ public NodesStatsRequest ingest(boolean ingest) { return this; } + public boolean adaptiveSelection() { + return adaptiveSelection; + } + + /** + * Should adaptiveSelection statistics be returned. + */ + public NodesStatsRequest adaptiveSelection(boolean adaptiveSelection) { + this.adaptiveSelection = adaptiveSelection; + return this; + } + @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); @@ -280,6 +296,11 @@ public void readFrom(StreamInput in) throws IOException { script = in.readBoolean(); discovery = in.readBoolean(); ingest = in.readBoolean(); + if (in.getVersion().onOrAfter(Version.V_6_1_0)) { + adaptiveSelection = in.readBoolean(); + } else { + adaptiveSelection = false; + } } @Override @@ -297,5 +318,8 @@ public void writeTo(StreamOutput out) throws IOException { out.writeBoolean(script); out.writeBoolean(discovery); out.writeBoolean(ingest); + if (out.getVersion().onOrAfter(Version.V_6_1_0)) { + out.writeBoolean(adaptiveSelection); + } } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsResponse.java index c4553304f415f..a9ff7a4c67b9c 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsResponse.java @@ -24,14 +24,14 @@ import org.elasticsearch.cluster.ClusterName; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; import java.io.IOException; import java.util.List; -public class NodesStatsResponse extends BaseNodesResponse implements ToXContent { +public class NodesStatsResponse extends BaseNodesResponse implements ToXContentFragment { NodesStatsResponse() { } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/TransportNodesStatsAction.java 
b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/TransportNodesStatsAction.java index 56c98ed7db02e..f9f3b0826f930 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/TransportNodesStatsAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/TransportNodesStatsAction.java @@ -73,7 +73,7 @@ protected NodeStats nodeOperation(NodeStatsRequest nodeStatsRequest) { NodesStatsRequest request = nodeStatsRequest.request; return nodeService.stats(request.indices(), request.os(), request.process(), request.jvm(), request.threadPool(), request.fs(), request.transport(), request.http(), request.breaker(), request.script(), request.discovery(), - request.ingest()); + request.ingest(), request.adaptiveSelection()); } public static class NodeStatsRequest extends BaseNodeRequest { diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/ListTasksResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/ListTasksResponse.java index a203dd35b47ff..de5fcf9345d23 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/ListTasksResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/ListTasksResponse.java @@ -110,7 +110,7 @@ private void buildTaskGroups() { // we found parent in the list of tasks - add it to the parent list parentTask.addGroup(taskGroup); } else { - // we got zombie or the parent was filtered out - add it to the the top task list + // we got zombie or the parent was filtered out - add it to the top task list topLevelTasks.add(taskGroup); } } else { diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/TaskGroup.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/TaskGroup.java index 87bf70acede44..c0a0930aaaf34 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/TaskGroup.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/TaskGroup.java @@ -19,7 +19,8 @@ package org.elasticsearch.action.admin.cluster.node.tasks.list; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.tasks.TaskInfo; @@ -32,7 +33,7 @@ /** * Information about a currently running task and all its subtasks. 
*/ -public class TaskGroup implements ToXContent { +public class TaskGroup implements ToXContentObject { private final TaskInfo task; diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/usage/NodeUsage.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/usage/NodeUsage.java index 954e64e8caf33..d963a6d5e3989 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/usage/NodeUsage.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/usage/NodeUsage.java @@ -23,13 +23,14 @@ import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; import java.util.Map; -public class NodeUsage extends BaseNodeResponse implements ToXContent { +public class NodeUsage extends BaseNodeResponse implements ToXContentFragment { private long timestamp; private long sinceTime; diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/usage/NodesUsageResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/usage/NodesUsageResponse.java index ff88145021c73..24fa2817b1e3b 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/usage/NodesUsageResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/usage/NodesUsageResponse.java @@ -24,7 +24,8 @@ import org.elasticsearch.cluster.ClusterName; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; @@ -35,7 +36,7 @@ * The response for the nodes usage api which contains the individual usage * statistics for all nodes queried. 
*/ -public class NodesUsageResponse extends BaseNodesResponse implements ToXContent { +public class NodesUsageResponse extends BaseNodesResponse implements ToXContentFragment { NodesUsageResponse() { } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoRequest.java index 6e41f145b65e7..e13c7fc9146a5 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoRequest.java @@ -21,9 +21,20 @@ import org.elasticsearch.action.ActionRequest; import org.elasticsearch.action.ActionRequestValidationException; +import org.elasticsearch.common.io.stream.StreamInput; + +import java.io.IOException; public final class RemoteInfoRequest extends ActionRequest { + public RemoteInfoRequest() { + + } + + public RemoteInfoRequest(StreamInput in) throws IOException { + super(in); + } + @Override public ActionRequestValidationException validate() { return null; diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/TransportRemoteInfoAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/TransportRemoteInfoAction.java index 33254a9aed9ab..0410f920c8a9a 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/TransportRemoteInfoAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/TransportRemoteInfoAction.java @@ -38,8 +38,8 @@ public final class TransportRemoteInfoAction extends HandledTransportAction listener) { + ActionListener logWrapper = ActionListener.wrap( + response -> { + if (request.dryRun() == false) { + response.getExplanations().getYesDecisionMessages().forEach(logger::info); + } + listener.onResponse(response); + }, + listener::onFailure + ); + clusterService.submitStateUpdateTask("cluster_reroute (api)", new ClusterRerouteResponseAckedClusterStateUpdateTask(logger, - allocationService, request, listener)); + allocationService, request, logWrapper)); } static class ClusterRerouteResponseAckedClusterStateUpdateTask extends AckedClusterStateUpdateTask { diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/SettingsUpdater.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/SettingsUpdater.java index e9fec716a90c7..5d1990a48d06b 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/SettingsUpdater.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/SettingsUpdater.java @@ -58,35 +58,40 @@ synchronized ClusterState updateSettings(final ClusterState currentState, Settin persistentSettings.put(currentState.metaData().persistentSettings()); changed |= clusterSettings.updateDynamicSettings(persistentToApply, persistentSettings, persistentUpdates, "persistent"); - if (!changed) { - return currentState; - } - - MetaData.Builder metaData = MetaData.builder(currentState.metaData()) - .persistentSettings(persistentSettings.build()) - .transientSettings(transientSettings.build()); + final ClusterState clusterState; + if (changed) { + MetaData.Builder metaData = MetaData.builder(currentState.metaData()) + .persistentSettings(persistentSettings.build()) + .transientSettings(transientSettings.build()); - ClusterBlocks.Builder blocks = ClusterBlocks.builder().blocks(currentState.blocks()); - boolean updatedReadOnly = 
MetaData.SETTING_READ_ONLY_SETTING.get(metaData.persistentSettings()) - || MetaData.SETTING_READ_ONLY_SETTING.get(metaData.transientSettings()); - if (updatedReadOnly) { - blocks.addGlobalBlock(MetaData.CLUSTER_READ_ONLY_BLOCK); - } else { - blocks.removeGlobalBlock(MetaData.CLUSTER_READ_ONLY_BLOCK); - } - boolean updatedReadOnlyAllowDelete = MetaData.SETTING_READ_ONLY_ALLOW_DELETE_SETTING.get(metaData.persistentSettings()) - || MetaData.SETTING_READ_ONLY_ALLOW_DELETE_SETTING.get(metaData.transientSettings()); - if (updatedReadOnlyAllowDelete) { - blocks.addGlobalBlock(MetaData.CLUSTER_READ_ONLY_ALLOW_DELETE_BLOCK); + ClusterBlocks.Builder blocks = ClusterBlocks.builder().blocks(currentState.blocks()); + boolean updatedReadOnly = MetaData.SETTING_READ_ONLY_SETTING.get(metaData.persistentSettings()) + || MetaData.SETTING_READ_ONLY_SETTING.get(metaData.transientSettings()); + if (updatedReadOnly) { + blocks.addGlobalBlock(MetaData.CLUSTER_READ_ONLY_BLOCK); + } else { + blocks.removeGlobalBlock(MetaData.CLUSTER_READ_ONLY_BLOCK); + } + boolean updatedReadOnlyAllowDelete = MetaData.SETTING_READ_ONLY_ALLOW_DELETE_SETTING.get(metaData.persistentSettings()) + || MetaData.SETTING_READ_ONLY_ALLOW_DELETE_SETTING.get(metaData.transientSettings()); + if (updatedReadOnlyAllowDelete) { + blocks.addGlobalBlock(MetaData.CLUSTER_READ_ONLY_ALLOW_DELETE_BLOCK); + } else { + blocks.removeGlobalBlock(MetaData.CLUSTER_READ_ONLY_ALLOW_DELETE_BLOCK); + } + clusterState = builder(currentState).metaData(metaData).blocks(blocks).build(); } else { - blocks.removeGlobalBlock(MetaData.CLUSTER_READ_ONLY_ALLOW_DELETE_BLOCK); + clusterState = currentState; } - ClusterState build = builder(currentState).metaData(metaData).blocks(blocks).build(); - Settings settings = build.metaData().settings(); - // now we try to apply things and if they are invalid we fail - // this dryRun will validate & parse settings but won't actually apply them. + + /* + * Now we try to apply things and if they are invalid we fail. This dry run will validate, parse settings, and trigger deprecation + * logging, but will not actually apply them. + */ + final Settings settings = clusterState.metaData().settings(); clusterSettings.validateUpdate(settings); - return build; + + return clusterState; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsRequest.java index df38690b790a4..d127829fa3584 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsRequest.java @@ -49,6 +49,42 @@ public ClusterSearchShardsRequest(String... 
indices) { indices(indices); } + public ClusterSearchShardsRequest(StreamInput in) throws IOException { + super(in); + indices = new String[in.readVInt()]; + for (int i = 0; i < indices.length; i++) { + indices[i] = in.readString(); + } + + routing = in.readOptionalString(); + preference = in.readOptionalString(); + + if (in.getVersion().onOrBefore(Version.V_5_1_1)) { + //types + in.readStringArray(); + } + indicesOptions = IndicesOptions.readIndicesOptions(in); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + + out.writeVInt(indices.length); + for (String index : indices) { + out.writeString(index); + } + + out.writeOptionalString(routing); + out.writeOptionalString(preference); + + if (out.getVersion().onOrBefore(Version.V_5_1_1)) { + //types + out.writeStringArray(Strings.EMPTY_ARRAY); + } + indicesOptions.writeIndicesOptions(out); + } + @Override public ActionRequestValidationException validate() { return null; @@ -110,8 +146,8 @@ public ClusterSearchShardsRequest routing(String... routings) { /** * Sets the preference to execute the search. Defaults to randomize across shards. Can be set to - * _local to prefer local shards, _primary to execute only on primary shards, or - * a custom value, which guarantees that the same order will be used across different requests. + * _local to prefer local shards or a custom value, which guarantees that the same order + * will be used across different requests. */ public ClusterSearchShardsRequest preference(String preference) { this.preference = preference; @@ -124,40 +160,6 @@ public String preference() { @Override public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - - indices = new String[in.readVInt()]; - for (int i = 0; i < indices.length; i++) { - indices[i] = in.readString(); - } - - routing = in.readOptionalString(); - preference = in.readOptionalString(); - - if (in.getVersion().onOrBefore(Version.V_5_1_1)) { - //types - in.readStringArray(); - } - indicesOptions = IndicesOptions.readIndicesOptions(in); - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - - out.writeVInt(indices.length); - for (String index : indices) { - out.writeString(index); - } - - out.writeOptionalString(routing); - out.writeOptionalString(preference); - - if (out.getVersion().onOrBefore(Version.V_5_1_1)) { - //types - out.writeStringArray(Strings.EMPTY_ARRAY); - } - indicesOptions.writeIndicesOptions(out); + throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable"); } - } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsRequestBuilder.java index 7cb7ac1254c60..da31a79fc9bf0 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsRequestBuilder.java @@ -55,8 +55,8 @@ public ClusterSearchShardsRequestBuilder setRouting(String... routing) { /** * Sets the preference to execute the search. Defaults to randomize across shards. Can be set to - * _local to prefer local shards, _primary to execute only on primary shards, or - * a custom value, which guarantees that the same order will be used across different requests. 
+ * _local to prefer local shards or a custom value, which guarantees that the same order + * will be used across different requests. */ public ClusterSearchShardsRequestBuilder setPreference(String preference) { request.preference(preference); diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java index 20ed69ae5a92f..9774ecdffba17 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java @@ -54,7 +54,7 @@ public TransportClusterSearchShardsAction(Settings settings, TransportService tr IndicesService indicesService, ThreadPool threadPool, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) { super(settings, ClusterSearchShardsAction.NAME, transportService, clusterService, threadPool, actionFilters, - indexNameExpressionResolver, ClusterSearchShardsRequest::new); + ClusterSearchShardsRequest::new, indexNameExpressionResolver); this.indicesService = indicesService; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotRequest.java index ae7647af577e3..8b3ca8bdfb208 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotRequest.java @@ -90,6 +90,31 @@ public CreateSnapshotRequest(String repository, String snapshot) { this.repository = repository; } + public CreateSnapshotRequest(StreamInput in) throws IOException { + super(in); + snapshot = in.readString(); + repository = in.readString(); + indices = in.readStringArray(); + indicesOptions = IndicesOptions.readIndicesOptions(in); + settings = readSettingsFromStream(in); + includeGlobalState = in.readBoolean(); + waitForCompletion = in.readBoolean(); + partial = in.readBoolean(); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeString(snapshot); + out.writeString(repository); + out.writeStringArray(indices); + indicesOptions.writeIndicesOptions(out); + writeSettingsToStream(settings, out); + out.writeBoolean(includeGlobalState); + out.writeBoolean(waitForCompletion); + out.writeBoolean(partial); + } + @Override public ActionRequestValidationException validate() { ActionRequestValidationException validationException = null; @@ -383,28 +408,7 @@ public CreateSnapshotRequest source(Map source) { @Override public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - snapshot = in.readString(); - repository = in.readString(); - indices = in.readStringArray(); - indicesOptions = IndicesOptions.readIndicesOptions(in); - settings = readSettingsFromStream(in); - includeGlobalState = in.readBoolean(); - waitForCompletion = in.readBoolean(); - partial = in.readBoolean(); - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - out.writeString(snapshot); - out.writeString(repository); - out.writeStringArray(indices); - indicesOptions.writeIndicesOptions(out); - writeSettingsToStream(settings, out); - out.writeBoolean(includeGlobalState); - 
out.writeBoolean(waitForCompletion); - out.writeBoolean(partial); + throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable"); } @Override diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/TransportCreateSnapshotAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/TransportCreateSnapshotAction.java index 269edfc401b7a..52fe03f58c28d 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/TransportCreateSnapshotAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/TransportCreateSnapshotAction.java @@ -45,7 +45,7 @@ public class TransportCreateSnapshotAction extends TransportMasterNodeAction source) { @Override public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - snapshot = in.readString(); - repository = in.readString(); - indices = in.readStringArray(); - indicesOptions = IndicesOptions.readIndicesOptions(in); - renamePattern = in.readOptionalString(); - renameReplacement = in.readOptionalString(); - waitForCompletion = in.readBoolean(); - includeGlobalState = in.readBoolean(); - partial = in.readBoolean(); - includeAliases = in.readBoolean(); - settings = readSettingsFromStream(in); - indexSettings = readSettingsFromStream(in); - ignoreIndexSettings = in.readStringArray(); - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - out.writeString(snapshot); - out.writeString(repository); - out.writeStringArray(indices); - indicesOptions.writeIndicesOptions(out); - out.writeOptionalString(renamePattern); - out.writeOptionalString(renameReplacement); - out.writeBoolean(waitForCompletion); - out.writeBoolean(includeGlobalState); - out.writeBoolean(partial); - out.writeBoolean(includeAliases); - writeSettingsToStream(settings, out); - writeSettingsToStream(indexSettings, out); - out.writeStringArray(ignoreIndexSettings); + throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable"); } @Override diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/TransportRestoreSnapshotAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/TransportRestoreSnapshotAction.java index 5ff5bd17fe5e6..32d4800676295 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/TransportRestoreSnapshotAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/TransportRestoreSnapshotAction.java @@ -53,7 +53,7 @@ public class TransportRestoreSnapshotAction extends TransportMasterNodeAction, ToXContent { +public class SnapshotIndexStatus implements Iterable, ToXContentFragment { private final String index; diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotShardsStats.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotShardsStats.java index 63bbed43182c4..c74dd5af1eec9 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotShardsStats.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotShardsStats.java @@ -20,6 +20,7 @@ package org.elasticsearch.action.admin.cluster.snapshots.status; import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentFragment; import 
org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; @@ -28,7 +29,7 @@ /** * Status of a snapshot shards */ -public class SnapshotShardsStats implements ToXContent { +public class SnapshotShardsStats implements ToXContentFragment { private int initializingShards; private int startedShards; diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotStats.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotStats.java index a1eaaf9560ab0..ba11e51d56f87 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotStats.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotStats.java @@ -23,12 +23,13 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.index.snapshots.IndexShardSnapshotStatus; import java.io.IOException; -public class SnapshotStats implements Streamable, ToXContent { +public class SnapshotStats implements Streamable, ToXContentFragment { private long startTime; private long time; diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotStatus.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotStatus.java index 1a5ef9ab933d0..e8c797e45a09c 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotStatus.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotStatus.java @@ -20,13 +20,13 @@ package org.elasticsearch.action.admin.cluster.snapshots.status; import org.elasticsearch.cluster.SnapshotsInProgress.State; -import org.elasticsearch.snapshots.Snapshot; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.snapshots.Snapshot; import java.io.IOException; import java.util.ArrayList; @@ -43,7 +43,7 @@ /** * Status of a snapshot */ -public class SnapshotStatus implements ToXContent, Streamable { +public class SnapshotStatus implements ToXContentObject, Streamable { private Snapshot snapshot; @@ -159,15 +159,7 @@ public static SnapshotStatus readSnapshotStatus(StreamInput in) throws IOExcepti @Override public String toString() { - try { - XContentBuilder builder = XContentFactory.jsonBuilder().prettyPrint(); - builder.startObject(); - toXContent(builder, EMPTY_PARAMS); - builder.endObject(); - return builder.string(); - } catch (IOException e) { - return "{ \"error\" : \"" + e.getMessage() + "\"}"; - } + return Strings.toString(this, true, false); } /** diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotsStatusRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotsStatusRequest.java index cc6ca276721fb..89a96648871a3 100644 --- 
a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotsStatusRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotsStatusRequest.java @@ -54,6 +54,21 @@ public SnapshotsStatusRequest(String repository, String[] snapshots) { this.snapshots = snapshots; } + public SnapshotsStatusRequest(StreamInput in) throws IOException { + super(in); + repository = in.readString(); + snapshots = in.readStringArray(); + ignoreUnavailable = in.readBoolean(); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeString(repository); + out.writeStringArray(snapshots); + out.writeBoolean(ignoreUnavailable); + } + /** * Constructs a new get snapshots request with given repository name * @@ -137,17 +152,6 @@ public boolean ignoreUnavailable() { @Override public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - repository = in.readString(); - snapshots = in.readStringArray(); - ignoreUnavailable = in.readBoolean(); - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - out.writeString(repository); - out.writeStringArray(snapshots); - out.writeBoolean(ignoreUnavailable); + throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable"); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportSnapshotsStatusAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportSnapshotsStatusAction.java index 7406b0fea4af0..88884fd439514 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportSnapshotsStatusAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportSnapshotsStatusAction.java @@ -69,7 +69,7 @@ public TransportSnapshotsStatusAction(Settings settings, TransportService transp ThreadPool threadPool, SnapshotsService snapshotsService, TransportNodesSnapshotsStatus transportNodesSnapshotsStatus, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) { - super(settings, SnapshotsStatusAction.NAME, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver, SnapshotsStatusRequest::new); + super(settings, SnapshotsStatusAction.NAME, transportService, clusterService, threadPool, actionFilters, SnapshotsStatusRequest::new, indexNameExpressionResolver); this.snapshotsService = snapshotsService; this.transportNodesSnapshotsStatus = transportNodesSnapshotsStatus; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/state/ClusterStateRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/state/ClusterStateRequest.java index e6b468b804b0d..33a20332526bf 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/state/ClusterStateRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/state/ClusterStateRequest.java @@ -42,6 +42,29 @@ public class ClusterStateRequest extends MasterNodeReadRequest names; @@ -302,7 +301,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) } } - public static class ProcessStats implements ToXContent { + public static class ProcessStats implements ToXContentFragment { final int count; final int cpuPercent; @@ -544,7 +543,7 @@ public int hashCode() { } } - static class NetworkTypes implements ToXContent { + static class NetworkTypes implements 
ToXContentFragment { private final Map transportTypes; private final Map httpTypes; diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/TransportClusterStatsAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/TransportClusterStatsAction.java index 57eeb2d5eb4f2..c87b55b0bbd7d 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/TransportClusterStatsAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/TransportClusterStatsAction.java @@ -92,7 +92,8 @@ protected ClusterStatsNodeResponse newNodeResponse() { @Override protected ClusterStatsNodeResponse nodeOperation(ClusterStatsNodeRequest nodeRequest) { NodeInfo nodeInfo = nodeService.info(true, true, false, true, false, true, false, true, false, false); - NodeStats nodeStats = nodeService.stats(CommonStatsFlags.NONE, true, true, true, false, true, false, false, false, false, false, false); + NodeStats nodeStats = nodeService.stats(CommonStatsFlags.NONE, + true, true, true, false, true, false, false, false, false, false, false, false); List shardsStats = new ArrayList<>(); for (IndexService indexService : indicesService) { for (IndexShard indexShard : indexService) { diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/GetStoredScriptRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/GetStoredScriptRequest.java index b961dc82dc6c2..57f30699cc0f0 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/GetStoredScriptRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/GetStoredScriptRequest.java @@ -43,6 +43,26 @@ public GetStoredScriptRequest(String id) { this.id = id; } + public GetStoredScriptRequest(StreamInput in) throws IOException { + super(in); + if (in.getVersion().before(Version.V_6_0_0_alpha2)) { + in.readString(); // read lang from previous versions + } + + id = in.readString(); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + + if (out.getVersion().before(Version.V_6_0_0_alpha2)) { + out.writeString(""); // write an empty lang to previous versions + } + + out.writeString(id); + } + @Override public ActionRequestValidationException validate() { ActionRequestValidationException validationException = null; @@ -68,24 +88,7 @@ public GetStoredScriptRequest id(String id) { @Override public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - - if (in.getVersion().before(Version.V_6_0_0_alpha2)) { - in.readString(); // read lang from previous versions - } - - id = in.readString(); - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - - if (out.getVersion().before(Version.V_6_0_0_alpha2)) { - out.writeString(""); // write an empty lang to previous versions - } - - out.writeString(id); + throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable"); } @Override diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/TransportGetStoredScriptAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/TransportGetStoredScriptAction.java index ab5a3d9953a51..63f24f31f59bd 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/TransportGetStoredScriptAction.java +++ 
b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/TransportGetStoredScriptAction.java @@ -43,7 +43,7 @@ public TransportGetStoredScriptAction(Settings settings, TransportService transp ThreadPool threadPool, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, ScriptService scriptService) { super(settings, GetStoredScriptAction.NAME, transportService, clusterService, threadPool, actionFilters, - indexNameExpressionResolver, GetStoredScriptRequest::new); + GetStoredScriptRequest::new, indexNameExpressionResolver); this.scriptService = scriptService; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/tasks/PendingClusterTasksRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/tasks/PendingClusterTasksRequest.java index 738276a990704..74f60307a144d 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/tasks/PendingClusterTasksRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/tasks/PendingClusterTasksRequest.java @@ -21,9 +21,19 @@ import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.support.master.MasterNodeReadRequest; +import org.elasticsearch.common.io.stream.StreamInput; + +import java.io.IOException; public class PendingClusterTasksRequest extends MasterNodeReadRequest { + public PendingClusterTasksRequest() { + } + + public PendingClusterTasksRequest(StreamInput in) throws IOException { + super(in); + } + @Override public ActionRequestValidationException validate() { return null; diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/tasks/TransportPendingClusterTasksAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/tasks/TransportPendingClusterTasksAction.java index cd58bb8d6d43e..542b2dd8badc4 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/tasks/TransportPendingClusterTasksAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/tasks/TransportPendingClusterTasksAction.java @@ -42,7 +42,7 @@ public class TransportPendingClusterTasksAction extends TransportMasterNodeReadA @Inject public TransportPendingClusterTasksAction(Settings settings, TransportService transportService, ClusterService clusterService, ThreadPool threadPool, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) { - super(settings, PendingClusterTasksAction.NAME, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver, PendingClusterTasksRequest::new); + super(settings, PendingClusterTasksAction.NAME, transportService, clusterService, threadPool, actionFilters, PendingClusterTasksRequest::new, indexNameExpressionResolver); this.clusterService = clusterService; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/alias/IndicesAliasesRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/alias/IndicesAliasesRequest.java index 45fc63de89254..2798359d4a848 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/alias/IndicesAliasesRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/alias/IndicesAliasesRequest.java @@ -176,7 +176,7 @@ private static ObjectParser parser(String name, Supplier blocks() { return blocks; } - public Index shrinkFrom() { - return shrinkFrom; + public Index recoverFrom() { + return recoverFrom; } /** True if all fields that span multiple types should be updated, false otherwise */ @@ -165,4 
+171,11 @@ public String getProvidedName() { public ActiveShardCount waitForActiveShards() { return waitForActiveShards; } + + /** + * Returns the resize type or null if this is an ordinary create index request + */ + public ResizeType resizeType() { + return resizeType; + } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequest.java index 68274a8c40804..2d320b094b22d 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequest.java @@ -147,10 +147,10 @@ public String cause() { } /** - * A simplified version of settings that takes key value pairs settings. + * The settings to create the index with. */ - public CreateIndexRequest settings(Object... settings) { - this.settings = Settings.builder().put(settings).build(); + public CreateIndexRequest settings(Settings.Builder settings) { + this.settings = settings.build(); return this; } @@ -162,14 +162,6 @@ public CreateIndexRequest settings(Settings settings) { return this; } - /** - * The settings to create the index with. - */ - public CreateIndexRequest settings(Settings.Builder settings) { - this.settings = settings.build(); - return this; - } - /** * The settings to create the index with (either json or yaml format) */ @@ -382,38 +374,32 @@ public CreateIndexRequest source(BytesReference source, XContentType xContentTyp */ @SuppressWarnings("unchecked") public CreateIndexRequest source(Map source) { - boolean found = false; for (Map.Entry entry : source.entrySet()) { String name = entry.getKey(); if (name.equals("settings")) { - found = true; settings((Map) entry.getValue()); } else if (name.equals("mappings")) { - found = true; Map mappings = (Map) entry.getValue(); for (Map.Entry entry1 : mappings.entrySet()) { mapping(entry1.getKey(), (Map) entry1.getValue()); } } else if (name.equals("aliases")) { - found = true; aliases((Map) entry.getValue()); } else { // maybe custom? 
IndexMetaData.Custom proto = IndexMetaData.lookupPrototype(name); if (proto != null) { - found = true; try { customs.put(name, proto.fromMap((Map) entry.getValue())); } catch (IOException e) { throw new ElasticsearchParseException("failed to parse custom metadata for [{}]", name); } + } else { + // found a key which is neither custom defined nor one of the supported ones + throw new ElasticsearchParseException("unknown key [{}] for create index", name); } } } - if (!found) { - // the top level are settings, use them - settings(source); - } return this; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequestBuilder.java index f7cc45511e0bb..d5ad01da645d9 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequestBuilder.java @@ -76,16 +76,6 @@ public CreateIndexRequestBuilder setSettings(XContentBuilder builder) { return this; } - /** - * The settings to create the index with (either json or yaml format) - * @deprecated use {@link #setSettings(String, XContentType)} to avoid content type detection - */ - @Deprecated - public CreateIndexRequestBuilder setSettings(String source) { - request.settings(source); - return this; - } - /** * The settings to create the index with (either json or yaml format) */ @@ -94,14 +84,6 @@ public CreateIndexRequestBuilder setSettings(String source, XContentType xConten return this; } - /** - * A simplified version of settings that takes key value pairs settings. - */ - public CreateIndexRequestBuilder setSettings(Object... settings) { - request.settings(settings); - return this; - } - /** * The settings to create the index with (either json/yaml/properties format) */ diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexResponse.java index 7d948e7137ebf..b770c11c6ab03 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexResponse.java @@ -21,16 +21,39 @@ import org.elasticsearch.Version; import org.elasticsearch.action.support.master.AcknowledgedResponse; +import org.elasticsearch.common.ParseField; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ConstructingObjectParser; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import java.io.IOException; +import static org.elasticsearch.common.xcontent.ConstructingObjectParser.constructorArg; + /** * A response for a create index action. 
*/ -public class CreateIndexResponse extends AcknowledgedResponse { +public class CreateIndexResponse extends AcknowledgedResponse implements ToXContentObject { + + private static final String SHARDS_ACKNOWLEDGED = "shards_acknowledged"; + private static final String INDEX = "index"; + + private static final ParseField SHARDS_ACKNOWLEDGED_PARSER = new ParseField(SHARDS_ACKNOWLEDGED); + private static final ParseField INDEX_PARSER = new ParseField(INDEX); + + private static final ConstructingObjectParser PARSER = new ConstructingObjectParser<>("create_index", + true, args -> new CreateIndexResponse((boolean) args[0], (boolean) args[1], (String) args[2])); + + static { + declareAcknowledgedField(PARSER); + PARSER.declareField(constructorArg(), (parser, context) -> parser.booleanValue(), SHARDS_ACKNOWLEDGED_PARSER, + ObjectParser.ValueType.BOOLEAN); + PARSER.declareField(constructorArg(), (parser, context) -> parser.text(), INDEX_PARSER, ObjectParser.ValueType.STRING); + } private boolean shardsAcked; private String index; @@ -79,7 +102,20 @@ public String index() { } public void addCustomFields(XContentBuilder builder) throws IOException { - builder.field("shards_acknowledged", isShardsAcked()); - builder.field("index", index()); + builder.field(SHARDS_ACKNOWLEDGED, isShardsAcked()); + builder.field(INDEX, index()); + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + addAcknowledgedField(builder); + addCustomFields(builder); + builder.endObject(); + return builder; + } + + public static CreateIndexResponse fromXContent(XContentParser parser) throws IOException { + return PARSER.apply(parser, null); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/delete/DeleteIndexResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/delete/DeleteIndexResponse.java old mode 100644 new mode 100755 index 509686d364902..8217668e2177d --- a/core/src/main/java/org/elasticsearch/action/admin/indices/delete/DeleteIndexResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/delete/DeleteIndexResponse.java @@ -22,13 +22,24 @@ import org.elasticsearch.action.support.master.AcknowledgedResponse; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ConstructingObjectParser; +import org.elasticsearch.common.xcontent.ToXContentObject; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import java.io.IOException; /** * A response for a delete index action. 
*/ -public class DeleteIndexResponse extends AcknowledgedResponse { +public class DeleteIndexResponse extends AcknowledgedResponse implements ToXContentObject { + + private static final ConstructingObjectParser PARSER = new ConstructingObjectParser<>("delete_index", + true, args -> new DeleteIndexResponse((boolean) args[0])); + + static { + declareAcknowledgedField(PARSER); + } DeleteIndexResponse() { } @@ -48,4 +59,16 @@ public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); writeAcknowledged(out); } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + addAcknowledgedField(builder); + builder.endObject(); + return builder; + } + + public static DeleteIndexResponse fromXContent(XContentParser parser) throws IOException { + return PARSER.apply(parser, null); + } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/exists/indices/IndicesExistsRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/exists/indices/IndicesExistsRequest.java index 2574719aa61bb..0c99175387db0 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/exists/indices/IndicesExistsRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/exists/indices/IndicesExistsRequest.java @@ -45,6 +45,19 @@ public IndicesExistsRequest(String... indices) { this.indices = indices; } + public IndicesExistsRequest(StreamInput in) throws IOException { + super(in); + indices = in.readStringArray(); + indicesOptions = IndicesOptions.readIndicesOptions(in); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeStringArray(indices); + indicesOptions.writeIndicesOptions(out); + } + @Override public String[] indices() { return indices; @@ -84,15 +97,6 @@ public ActionRequestValidationException validate() { @Override public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - indices = in.readStringArray(); - indicesOptions = IndicesOptions.readIndicesOptions(in); - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - out.writeStringArray(indices); - indicesOptions.writeIndicesOptions(out); + throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable"); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/exists/indices/TransportIndicesExistsAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/exists/indices/TransportIndicesExistsAction.java index c451e50b77cfc..2310c463581a0 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/exists/indices/TransportIndicesExistsAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/exists/indices/TransportIndicesExistsAction.java @@ -42,7 +42,7 @@ public class TransportIndicesExistsAction extends TransportMasterNodeReadAction< @Inject public TransportIndicesExistsAction(Settings settings, TransportService transportService, ClusterService clusterService, ThreadPool threadPool, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) { - super(settings, IndicesExistsAction.NAME, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver, IndicesExistsRequest::new); + super(settings, IndicesExistsAction.NAME, transportService, clusterService, threadPool, actionFilters, IndicesExistsRequest::new, indexNameExpressionResolver); } @Override 
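
Nearly every request class touched in this patch follows the same serialization migration that IndicesExistsRequest and TransportIndicesExistsAction show just above: deserialization moves into a constructor that takes a StreamInput, writeTo(StreamOutput) becomes the only write path, the old Streamable readFrom is stubbed to throw UnsupportedOperationException, and the transport action passes the Request::new reader reference to its super constructor ahead of the IndexNameExpressionResolver. The sketch below only illustrates the shape of that migration outside the Elasticsearch codebase; SketchStreamInput, SketchStreamOutput, and SketchRequest are simplified hypothetical stand-ins, not the real StreamInput/StreamOutput API.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public final class WriteableMigrationSketch {

    /** Simplified stand-in for org.elasticsearch.common.io.stream.StreamInput. */
    static final class SketchStreamInput {
        private final DataInputStream in;
        SketchStreamInput(DataInputStream in) { this.in = in; }
        String readString() throws IOException { return in.readUTF(); }
        boolean readBoolean() throws IOException { return in.readBoolean(); }
    }

    /** Simplified stand-in for org.elasticsearch.common.io.stream.StreamOutput. */
    static final class SketchStreamOutput {
        private final DataOutputStream out;
        SketchStreamOutput(DataOutputStream out) { this.out = out; }
        void writeString(String value) throws IOException { out.writeUTF(value); }
        void writeBoolean(boolean value) throws IOException { out.writeBoolean(value); }
    }

    /**
     * A hypothetical request after the migration: the wire format is read in a constructor,
     * so the transport layer can register {@code SketchRequest::new} as the reader instead of
     * creating an empty instance and populating it through {@code readFrom}.
     */
    static final class SketchRequest {
        private final String repository;
        private final boolean ignoreUnavailable;

        SketchRequest(String repository, boolean ignoreUnavailable) {
            this.repository = repository;
            this.ignoreUnavailable = ignoreUnavailable;
        }

        SketchRequest(SketchStreamInput in) throws IOException {
            this.repository = in.readString();
            this.ignoreUnavailable = in.readBoolean();
        }

        void writeTo(SketchStreamOutput out) throws IOException {
            out.writeString(repository);
            out.writeBoolean(ignoreUnavailable);
        }

        // The legacy Streamable entry point is kept only to fail loudly if something still calls it.
        void readFrom(SketchStreamInput in) {
            throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable");
        }
    }

    public static void main(String[] args) throws IOException {
        // Round-trip a request through the sketch streams to show the constructor and writeTo agree.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        new SketchRequest("my_repository", true).writeTo(new SketchStreamOutput(new DataOutputStream(bytes)));
        SketchRequest copy = new SketchRequest(
                new SketchStreamInput(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()))));
        System.out.println(copy.repository + " " + copy.ignoreUnavailable); // my_repository true
    }
}

Reading the stream in a constructor is what lets request fields become final and removes the mutable no-arg construction step on the receiving side.
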
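The same hunks also show how wire fields are gated on the stream version when the two nodes may disagree about the format, as GetStoredScriptRequest does above: the lang field removed in 6.0.0-alpha2 is still read and discarded from older senders and written as an empty placeholder to older receivers. The sketch below is a small hypothetical illustration of that idea using plain java.io streams and an integer version id; the names and version constants are stand-ins, not Elasticsearch's actual Version API.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public final class VersionGatedWireFieldSketch {

    // Illustrative integer stand-in for org.elasticsearch.Version ids.
    static final int V_6_0_0_ALPHA2 = 6_000_002;

    // Read side: senders before 6.0.0-alpha2 still include a lang string ahead of the id.
    static String readId(DataInputStream in, int senderVersion) throws IOException {
        if (senderVersion < V_6_0_0_ALPHA2) {
            in.readUTF(); // read and ignore the legacy lang field from older versions
        }
        return in.readUTF(); // the script id is the only field kept on current versions
    }

    // Write side: receivers before 6.0.0-alpha2 still expect a lang slot, so write an empty one.
    static void writeId(DataOutputStream out, int destinationVersion, String id) throws IOException {
        if (destinationVersion < V_6_0_0_ALPHA2) {
            out.writeUTF("");
        }
        out.writeUTF(id);
    }

    public static void main(String[] args) throws IOException {
        // Round-trip against an "old" peer to show the read and write sides stay in sync.
        int oldVersion = 5_060_099;
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        writeId(new DataOutputStream(bytes), oldVersion, "my_script");
        String id = readId(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())), oldVersion);
        System.out.println(id); // my_script
    }
}
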
diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/exists/types/TransportTypesExistsAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/exists/types/TransportTypesExistsAction.java index e1cf5be1acafe..e63a27bef1818 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/exists/types/TransportTypesExistsAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/exists/types/TransportTypesExistsAction.java @@ -41,7 +41,7 @@ public class TransportTypesExistsAction extends TransportMasterNodeReadAction> shardsResultPerIndex; ShardCounts shardCounts; @@ -140,7 +140,7 @@ static ShardCounts calculateShardCounts(Iterable result return new ShardCounts(total, successful, failed); } - static final class ShardCounts implements ToXContent, Streamable { + static final class ShardCounts implements ToXContentFragment, Streamable { public int total; public int successful; diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/get/GetIndexRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/get/GetIndexRequest.java index 48886c38aa4bc..0177799c8fd4b 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/get/GetIndexRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/get/GetIndexRequest.java @@ -100,6 +100,20 @@ public static Feature[] convertToFeatures(String... featureNames) { private Feature[] features = DEFAULT_FEATURES; private boolean humanReadable = false; + public GetIndexRequest() { + + } + + public GetIndexRequest(StreamInput in) throws IOException { + super(in); + int size = in.readVInt(); + features = new Feature[size]; + for (int i = 0; i < size; i++) { + features[i] = Feature.fromId(in.readByte()); + } + humanReadable = in.readBoolean(); + } + public GetIndexRequest features(Feature... 
features) { if (features == null) { throw new IllegalArgumentException("features cannot be null"); @@ -137,13 +151,7 @@ public boolean humanReadable() { @Override public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - int size = in.readVInt(); - features = new Feature[size]; - for (int i = 0; i < size; i++) { - features[i] = Feature.fromId(in.readByte()); - } - humanReadable = in.readBoolean(); + throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable"); } @Override diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/get/TransportGetIndexAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/get/TransportGetIndexAction.java index 097444bcb68a9..c9cf3257c7637 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/get/TransportGetIndexAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/get/TransportGetIndexAction.java @@ -49,7 +49,7 @@ public class TransportGetIndexAction extends TransportClusterInfoAction { + public GetMappingsRequest() { + } + + public GetMappingsRequest(StreamInput in) throws IOException { + super(in); + } + @Override public ActionRequestValidationException validate() { return null; diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetMappingsAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetMappingsAction.java index 363e935ca56f0..3189a5a15c24f 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetMappingsAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetMappingsAction.java @@ -39,7 +39,7 @@ public class TransportGetMappingsAction extends TransportClusterInfoAction { + private ActiveShardCount waitForActiveShards = ActiveShardCount.DEFAULT; + OpenIndexClusterStateUpdateRequest() { } + + public ActiveShardCount waitForActiveShards() { + return waitForActiveShards; + } + + public OpenIndexClusterStateUpdateRequest waitForActiveShards(ActiveShardCount waitForActiveShards) { + this.waitForActiveShards = waitForActiveShards; + return this; + } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/open/OpenIndexRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/open/OpenIndexRequest.java index 0b16da44cf690..0cace4e83bf86 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/open/OpenIndexRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/open/OpenIndexRequest.java @@ -19,8 +19,10 @@ package org.elasticsearch.action.admin.indices.open; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.IndicesRequest; +import org.elasticsearch.action.support.ActiveShardCount; import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.action.support.master.AcknowledgedRequest; import org.elasticsearch.common.io.stream.StreamInput; @@ -38,6 +40,7 @@ public class OpenIndexRequest extends AcknowledgedRequest impl private String[] indices; private IndicesOptions indicesOptions = IndicesOptions.fromOptions(false, true, false, true); + private ActiveShardCount waitForActiveShards = ActiveShardCount.DEFAULT; public OpenIndexRequest() { } @@ -101,11 +104,46 @@ public OpenIndexRequest indicesOptions(IndicesOptions indicesOptions) { return this; } + public ActiveShardCount waitForActiveShards() { + return 
waitForActiveShards; + } + + /** + * Sets the number of shard copies that should be active for indices opening to return. + * Defaults to {@link ActiveShardCount#DEFAULT}, which will wait for one shard copy + * (the primary) to become active. Set this value to {@link ActiveShardCount#ALL} to + * wait for all shards (primary and all replicas) to be active before returning. + * Otherwise, use {@link ActiveShardCount#from(int)} to set this value to any + * non-negative integer, up to the number of copies per shard (number of replicas + 1), + * to wait for the desired amount of shard copies to become active before returning. + * Indices opening will only wait up until the timeout value for the number of shard copies + * to be active before returning. Check {@link OpenIndexResponse#isShardsAcknowledged()} to + * determine if the requisite shard copies were all started before returning or timing out. + * + * @param waitForActiveShards number of active shard copies to wait on + */ + public OpenIndexRequest waitForActiveShards(ActiveShardCount waitForActiveShards) { + this.waitForActiveShards = waitForActiveShards; + return this; + } + + /** + * A shortcut for {@link #waitForActiveShards(ActiveShardCount)} where the numerical + * shard count is passed in, instead of having to first call {@link ActiveShardCount#from(int)} + * to get the ActiveShardCount. + */ + public OpenIndexRequest waitForActiveShards(final int waitForActiveShards) { + return waitForActiveShards(ActiveShardCount.from(waitForActiveShards)); + } + @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); indices = in.readStringArray(); indicesOptions = IndicesOptions.readIndicesOptions(in); + if (in.getVersion().onOrAfter(Version.V_6_1_0)) { + waitForActiveShards = ActiveShardCount.readFrom(in); + } } @Override @@ -113,5 +151,8 @@ public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); out.writeStringArray(indices); indicesOptions.writeIndicesOptions(out); + if (out.getVersion().onOrAfter(Version.V_6_1_0)) { + waitForActiveShards.writeTo(out); + } } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/open/OpenIndexRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/indices/open/OpenIndexRequestBuilder.java index d34393a8e7c86..ab962fe34663b 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/open/OpenIndexRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/open/OpenIndexRequestBuilder.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.admin.indices.open; +import org.elasticsearch.action.support.ActiveShardCount; import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.action.support.master.AcknowledgedRequestBuilder; import org.elasticsearch.client.ElasticsearchClient; @@ -58,4 +59,32 @@ public OpenIndexRequestBuilder setIndicesOptions(IndicesOptions indicesOptions) request.indicesOptions(indicesOptions); return this; } + + /** + * Sets the number of shard copies that should be active for indices opening to return. + * Defaults to {@link ActiveShardCount#DEFAULT}, which will wait for one shard copy + * (the primary) to become active. Set this value to {@link ActiveShardCount#ALL} to + * wait for all shards (primary and all replicas) to be active before returning. 
+ * Otherwise, use {@link ActiveShardCount#from(int)} to set this value to any + * non-negative integer, up to the number of copies per shard (number of replicas + 1), + * to wait for the desired amount of shard copies to become active before returning. + * Indices opening will only wait up until the timeout value for the number of shard copies + * to be active before returning. Check {@link OpenIndexResponse#isShardsAcknowledged()} to + * determine if the requisite shard copies were all started before returning or timing out. + * + * @param waitForActiveShards number of active shard copies to wait on + */ + public OpenIndexRequestBuilder setWaitForActiveShards(ActiveShardCount waitForActiveShards) { + request.waitForActiveShards(waitForActiveShards); + return this; + } + + /** + * A shortcut for {@link #setWaitForActiveShards(ActiveShardCount)} where the numerical + * shard count is passed in, instead of having to first call {@link ActiveShardCount#from(int)} + * to get the ActiveShardCount. + */ + public OpenIndexRequestBuilder setWaitForActiveShards(final int waitForActiveShards) { + return setWaitForActiveShards(ActiveShardCount.from(waitForActiveShards)); + } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/open/OpenIndexResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/open/OpenIndexResponse.java index f1efb3074cd1b..fe9343f363ff3 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/open/OpenIndexResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/open/OpenIndexResponse.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.admin.indices.open; +import org.elasticsearch.Version; import org.elasticsearch.action.support.master.AcknowledgedResponse; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -30,22 +31,41 @@ */ public class OpenIndexResponse extends AcknowledgedResponse { + private boolean shardsAcknowledged; + OpenIndexResponse() { } - OpenIndexResponse(boolean acknowledged) { + OpenIndexResponse(boolean acknowledged, boolean shardsAcknowledged) { super(acknowledged); + assert acknowledged || shardsAcknowledged == false; // if its not acknowledged, then shards acked should be false too + this.shardsAcknowledged = shardsAcknowledged; + } + + /** + * Returns true if the requisite number of shards were started before + * returning from the indices opening operation. If {@link #isAcknowledged()} + * is false, then this also returns false. 
+ */ + public boolean isShardsAcknowledged() { + return shardsAcknowledged; } @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); readAcknowledged(in); + if (in.getVersion().onOrAfter(Version.V_6_1_0)) { + shardsAcknowledged = in.readBoolean(); + } } @Override public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); writeAcknowledged(out); + if (out.getVersion().onOrAfter(Version.V_6_1_0)) { + out.writeBoolean(shardsAcknowledged); + } } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/open/TransportOpenIndexAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/open/TransportOpenIndexAction.java index 451b9a280be68..795e11c228839 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/open/TransportOpenIndexAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/open/TransportOpenIndexAction.java @@ -22,12 +22,11 @@ import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; import org.elasticsearch.action.ActionListener; -import org.elasticsearch.action.admin.indices.delete.DeleteIndexResponse; import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.action.support.DestructiveOperations; import org.elasticsearch.action.support.master.TransportMasterNodeAction; import org.elasticsearch.cluster.ClusterState; -import org.elasticsearch.cluster.ack.ClusterStateUpdateResponse; +import org.elasticsearch.cluster.ack.OpenIndexClusterStateUpdateResponse; import org.elasticsearch.cluster.block.ClusterBlockException; import org.elasticsearch.cluster.block.ClusterBlockLevel; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; @@ -84,18 +83,18 @@ protected ClusterBlockException checkBlock(OpenIndexRequest request, ClusterStat protected void masterOperation(final OpenIndexRequest request, final ClusterState state, final ActionListener listener) { final Index[] concreteIndices = indexNameExpressionResolver.concreteIndices(state, request); if (concreteIndices == null || concreteIndices.length == 0) { - listener.onResponse(new OpenIndexResponse(true)); + listener.onResponse(new OpenIndexResponse(true, true)); return; } OpenIndexClusterStateUpdateRequest updateRequest = new OpenIndexClusterStateUpdateRequest() .ackTimeout(request.timeout()).masterNodeTimeout(request.masterNodeTimeout()) - .indices(concreteIndices); + .indices(concreteIndices).waitForActiveShards(request.waitForActiveShards()); - indexStateService.openIndex(updateRequest, new ActionListener() { + indexStateService.openIndex(updateRequest, new ActionListener() { @Override - public void onResponse(ClusterStateUpdateResponse response) { - listener.onResponse(new OpenIndexResponse(response.isAcknowledged())); + public void onResponse(OpenIndexClusterStateUpdateResponse response) { + listener.onResponse(new OpenIndexResponse(response.isAcknowledged(), response.isShardsAcknowledged())); } @Override diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/recovery/RecoveryResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/recovery/RecoveryResponse.java index 0e0881d17293c..a19393ebd5beb 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/recovery/RecoveryResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/recovery/RecoveryResponse.java @@ -21,9 +21,10 @@ import org.elasticsearch.action.ShardOperationFailedException; import 
org.elasticsearch.action.support.broadcast.BroadcastResponse; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.indices.recovery.RecoveryState; @@ -36,7 +37,7 @@ /** * Information regarding the recovery state of indices and their associated shards. */ -public class RecoveryResponse extends BroadcastResponse implements ToXContent { +public class RecoveryResponse extends BroadcastResponse implements ToXContentFragment { private boolean detailed = false; private Map> shardRecoveryStates = new HashMap<>(); @@ -126,4 +127,9 @@ public void readFrom(StreamInput in) throws IOException { shardRecoveryStates.put(s, list); } } + + @Override + public String toString() { + return Strings.toString(this, true, true); + } } \ No newline at end of file diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/Condition.java b/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/Condition.java index d6bfaf0a48cec..83dc73f9e94b3 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/Condition.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/Condition.java @@ -19,8 +19,10 @@ package org.elasticsearch.action.admin.indices.rollover; +import org.elasticsearch.Version; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.io.stream.NamedWriteable; +import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.xcontent.ObjectParser; @@ -38,6 +40,9 @@ public abstract class Condition implements NamedWriteable { new ParseField(MaxAgeCondition.NAME)); PARSER.declareLong((conditions, value) -> conditions.add(new MaxDocsCondition(value)), new ParseField(MaxDocsCondition.NAME)); + PARSER.declareString((conditions, s) -> + conditions.add(new MaxSizeCondition(ByteSizeValue.parseBytesSizeValue(s, MaxSizeCondition.NAME))), + new ParseField(MaxSizeCondition.NAME)); } protected T value; @@ -49,6 +54,14 @@ protected Condition(String name) { public abstract Result evaluate(Stats stats); + /** + * Checks if this condition is available in a specific version. + * This makes sure BWC when introducing a new condition which is not recognized by older versions. + */ + boolean includedInVersion(Version version) { + return true; + } + @Override public final String toString() { return "[" + name + ": " + value + "]"; @@ -60,10 +73,12 @@ public final String toString() { public static class Stats { public final long numDocs; public final long indexCreated; + public final ByteSizeValue indexSize; - public Stats(long numDocs, long indexCreated) { + public Stats(long numDocs, long indexCreated, ByteSizeValue indexSize) { this.numDocs = numDocs; this.indexCreated = indexCreated; + this.indexSize = indexSize; } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/MaxSizeCondition.java b/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/MaxSizeCondition.java new file mode 100644 index 0000000000000..91b18bc050623 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/MaxSizeCondition.java @@ -0,0 +1,66 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.admin.indices.rollover; + +import org.elasticsearch.Version; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.unit.ByteSizeUnit; +import org.elasticsearch.common.unit.ByteSizeValue; + +import java.io.IOException; + +/** + * A size-based condition for an index size. + * Evaluates to true if the index size is at least {@link #value}. + */ +public class MaxSizeCondition extends Condition { + public static final String NAME = "max_size"; + + public MaxSizeCondition(ByteSizeValue value) { + super(NAME); + this.value = value; + } + + public MaxSizeCondition(StreamInput in) throws IOException { + super(NAME); + this.value = new ByteSizeValue(in.readVLong(), ByteSizeUnit.BYTES); + } + + @Override + public Result evaluate(Stats stats) { + return new Result(this, stats.indexSize.getBytes() >= value.getBytes()); + } + + @Override + boolean includedInVersion(Version version) { + return version.onOrAfter(Version.V_6_1_0); + } + + @Override + public String getWriteableName() { + return NAME; + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeVLong(value.getBytes()); + } +} diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/RolloverRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/RolloverRequest.java index 4804bc577fc58..c25fc7eb537d3 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/RolloverRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/RolloverRequest.java @@ -27,6 +27,7 @@ import org.elasticsearch.common.ParseField; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.xcontent.ObjectParser; @@ -106,7 +107,9 @@ public void writeTo(StreamOutput out) throws IOException { out.writeBoolean(dryRun); out.writeVInt(conditions.size()); for (Condition condition : conditions) { - out.writeNamedWriteable(condition); + if (condition.includedInVersion(out.getVersion())) { + out.writeNamedWriteable(condition); + } } createIndexRequest.writeTo(out); } @@ -155,6 +158,13 @@ public void addMaxIndexDocsCondition(long numDocs) { this.conditions.add(new MaxDocsCondition(numDocs)); } + /** + * Adds a size-based condition to check if the index size is at least size. 
+ */ + public void addMaxIndexSizeCondition(ByteSizeValue size) { + this.conditions.add(new MaxSizeCondition(size)); + } + /** * Sets rollover index creation request to override index settings when * the rolled over index has to be created diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/RolloverRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/RolloverRequestBuilder.java index 35890d1d3a6fd..55df220ec0700 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/RolloverRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/RolloverRequestBuilder.java @@ -23,6 +23,7 @@ import org.elasticsearch.action.support.master.MasterNodeOperationRequestBuilder; import org.elasticsearch.client.ElasticsearchClient; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.unit.TimeValue; @@ -52,6 +53,11 @@ public RolloverRequestBuilder addMaxIndexDocsCondition(long docs) { return this; } + public RolloverRequestBuilder addMaxIndexSizeCondition(ByteSizeValue size){ + this.request.addMaxIndexSizeCondition(size); + return this; + } + public RolloverRequestBuilder dryRun(boolean dryRun) { this.request.dryRun(dryRun); return this; diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/TransportRolloverAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/TransportRolloverAction.java index 2abe0dad74ee3..c66f534bd8130 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/TransportRolloverAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/TransportRolloverAction.java @@ -43,6 +43,7 @@ import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.index.shard.DocsStats; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; @@ -136,7 +137,7 @@ public void onResponse(IndicesStatsResponse statsResponse) { rolloverRequest), ActionListener.wrap(aliasClusterStateUpdateResponse -> { if (aliasClusterStateUpdateResponse.isAcknowledged()) { - activeShardsObserver.waitForActiveShards(rolloverIndexName, + activeShardsObserver.waitForActiveShards(new String[]{rolloverIndexName}, rolloverRequest.getCreateIndexRequest().waitForActiveShards(), rolloverRequest.masterNodeTimeout(), isShardsAcked -> listener.onResponse(new RolloverResponse(sourceIndexName, rolloverIndexName, @@ -195,7 +196,8 @@ static String generateRolloverIndexName(String sourceIndexName, IndexNameExpress static Set evaluateConditions(final Set conditions, final DocsStats docsStats, final IndexMetaData metaData) { final long numDocs = docsStats == null ? 0 : docsStats.getCount(); - final Condition.Stats stats = new Condition.Stats(numDocs, metaData.getCreationDate()); + final long indexSize = docsStats == null ? 
0 : docsStats.getTotalSizeInBytes(); + final Condition.Stats stats = new Condition.Stats(numDocs, metaData.getCreationDate(), new ByteSizeValue(indexSize)); return conditions.stream() .map(condition -> condition.evaluate(stats)) .collect(Collectors.toSet()); diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/segments/IndicesSegmentResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/segments/IndicesSegmentResponse.java index 19b32d25a0671..2e241ef1614b9 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/segments/IndicesSegmentResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/segments/IndicesSegmentResponse.java @@ -155,6 +155,9 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws } builder.endArray(); } + if (segment.attributes != null && segment.attributes.isEmpty() == false) { + builder.field("attributes", segment.attributes); + } builder.endObject(); } builder.endObject(); diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/settings/get/GetSettingsRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/settings/get/GetSettingsRequest.java index d15da04acabf5..3a84543f34017 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/settings/get/GetSettingsRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/settings/get/GetSettingsRequest.java @@ -48,6 +48,26 @@ public GetSettingsRequest indicesOptions(IndicesOptions indicesOptions) { return this; } + public GetSettingsRequest() { + } + + public GetSettingsRequest(StreamInput in) throws IOException { + super(in); + indices = in.readStringArray(); + indicesOptions = IndicesOptions.readIndicesOptions(in); + names = in.readStringArray(); + humanReadable = in.readBoolean(); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeStringArray(indices); + indicesOptions.writeIndicesOptions(out); + out.writeStringArray(names); + out.writeBoolean(humanReadable); + } + @Override public String[] indices() { return indices; @@ -87,19 +107,6 @@ public ActionRequestValidationException validate() { @Override public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - indices = in.readStringArray(); - indicesOptions = IndicesOptions.readIndicesOptions(in); - names = in.readStringArray(); - humanReadable = in.readBoolean(); - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - out.writeStringArray(indices); - indicesOptions.writeIndicesOptions(out); - out.writeStringArray(names); - out.writeBoolean(humanReadable); + throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable"); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/settings/get/TransportGetSettingsAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/settings/get/TransportGetSettingsAction.java index 6e6d3eaee98b2..3109fa4d405ac 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/settings/get/TransportGetSettingsAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/settings/get/TransportGetSettingsAction.java @@ -48,7 +48,7 @@ public class TransportGetSettingsAction extends TransportMasterNodeReadAction entry : settings.getAsMap().entrySet()) { - if (Regex.simpleMatch(request.names(), entry.getKey())) { - settingsBuilder.put(entry.getKey(), entry.getValue()); - } - 
} - settings = settingsBuilder.build(); + if (CollectionUtils.isEmpty(request.names()) == false) { + settings = settings.filter(k -> Regex.simpleMatch(request.names(), k)); } indexToSettingsBuilder.put(concreteIndex.getName(), settings); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/shards/IndicesShardStoresRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/shards/IndicesShardStoresRequest.java index 9b129e0d7295b..18e4083095df5 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/shards/IndicesShardStoresRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/shards/IndicesShardStoresRequest.java @@ -49,6 +49,28 @@ public IndicesShardStoresRequest(String... indices) { public IndicesShardStoresRequest() { } + public IndicesShardStoresRequest(StreamInput in) throws IOException { + super(in); + indices = in.readStringArray(); + int nStatus = in.readVInt(); + statuses = EnumSet.noneOf(ClusterHealthStatus.class); + for (int i = 0; i < nStatus; i++) { + statuses.add(ClusterHealthStatus.fromValue(in.readByte())); + } + indicesOptions = IndicesOptions.readIndicesOptions(in); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeStringArrayNullable(indices); + out.writeVInt(statuses.size()); + for (ClusterHealthStatus status : statuses) { + out.writeByte(status.value()); + } + indicesOptions.writeIndicesOptions(out); + } + /** * Set statuses to filter shards to get stores info on. * see {@link ClusterHealthStatus} for details. @@ -107,26 +129,8 @@ public ActionRequestValidationException validate() { return null; } - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - out.writeStringArrayNullable(indices); - out.writeVInt(statuses.size()); - for (ClusterHealthStatus status : statuses) { - out.writeByte(status.value()); - } - indicesOptions.writeIndicesOptions(out); - } - @Override public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - indices = in.readStringArray(); - int nStatus = in.readVInt(); - statuses = EnumSet.noneOf(ClusterHealthStatus.class); - for (int i = 0; i < nStatus; i++) { - statuses.add(ClusterHealthStatus.fromValue(in.readByte())); - } - indicesOptions = IndicesOptions.readIndicesOptions(in); + throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable"); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/shards/IndicesShardStoresResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/shards/IndicesShardStoresResponse.java index b3494094182d1..70624380e8611 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/shards/IndicesShardStoresResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/shards/IndicesShardStoresResponse.java @@ -33,7 +33,6 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; -import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -55,7 +54,7 @@ public class IndicesShardStoresResponse extends ActionResponse implements ToXCon /** * Shard store information from a node */ - public static class StoreStatus implements Streamable, ToXContent, Comparable { + public static class StoreStatus implements Streamable, 
ToXContentFragment, Comparable { private DiscoveryNode node; private String allocationId; private Exception storeException; diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/shards/TransportIndicesShardStoresAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/shards/TransportIndicesShardStoresAction.java index c11a2ded83d4b..0741965f5e5c9 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/shards/TransportIndicesShardStoresAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/shards/TransportIndicesShardStoresAction.java @@ -69,7 +69,7 @@ public class TransportIndicesShardStoresAction extends TransportMasterNodeReadAc @Inject public TransportIndicesShardStoresAction(Settings settings, TransportService transportService, ClusterService clusterService, ThreadPool threadPool, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, TransportNodesListGatewayStartedShards listShardStoresInfo) { - super(settings, IndicesShardStoresAction.NAME, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver, IndicesShardStoresRequest::new); + super(settings, IndicesShardStoresAction.NAME, transportService, clusterService, threadPool, actionFilters, IndicesShardStoresRequest::new, indexNameExpressionResolver); this.listShardStoresInfo = listShardStoresInfo; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ResizeAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ResizeAction.java new file mode 100644 index 0000000000000..b92631dd09917 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ResizeAction.java @@ -0,0 +1,45 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.action.admin.indices.shrink; + +import org.elasticsearch.Version; +import org.elasticsearch.action.Action; +import org.elasticsearch.client.ElasticsearchClient; + +public class ResizeAction extends Action { + + public static final ResizeAction INSTANCE = new ResizeAction(); + public static final String NAME = "indices:admin/resize"; + public static final Version COMPATIBILITY_VERSION = Version.V_6_1_0; // TODO remove this once it's backported + + private ResizeAction() { + super(NAME); + } + + @Override + public ResizeResponse newResponse() { + return new ResizeResponse(); + } + + @Override + public ResizeRequestBuilder newRequestBuilder(ElasticsearchClient client) { + return new ResizeRequestBuilder(client, this); + } +} diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ResizeRequest.java similarity index 65% rename from core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkRequest.java rename to core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ResizeRequest.java index 6ea58200a4500..f2f648f70ffa9 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ResizeRequest.java @@ -18,12 +18,14 @@ */ package org.elasticsearch.action.admin.indices.shrink; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.IndicesRequest; import org.elasticsearch.action.admin.indices.create.CreateIndexRequest; import org.elasticsearch.action.support.ActiveShardCount; import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.action.support.master.AcknowledgedRequest; +import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -37,37 +39,41 @@ /** * Request class to shrink an index into a single shard */ -public class ShrinkRequest extends AcknowledgedRequest implements IndicesRequest { +public class ResizeRequest extends AcknowledgedRequest implements IndicesRequest { - public static final ObjectParser PARSER = new ObjectParser<>("shrink_request", null); + public static final ObjectParser PARSER = new ObjectParser<>("resize_request", null); static { - PARSER.declareField((parser, request, context) -> request.getShrinkIndexRequest().settings(parser.map()), + PARSER.declareField((parser, request, context) -> request.getTargetIndexRequest().settings(parser.map()), new ParseField("settings"), ObjectParser.ValueType.OBJECT); - PARSER.declareField((parser, request, context) -> request.getShrinkIndexRequest().aliases(parser.map()), + PARSER.declareField((parser, request, context) -> request.getTargetIndexRequest().aliases(parser.map()), new ParseField("aliases"), ObjectParser.ValueType.OBJECT); } - private CreateIndexRequest shrinkIndexRequest; + private CreateIndexRequest targetIndexRequest; private String sourceIndex; + private ResizeType type = ResizeType.SHRINK; - ShrinkRequest() {} + ResizeRequest() {} - public ShrinkRequest(String targetIndex, String sourceindex) { - this.shrinkIndexRequest = new CreateIndexRequest(targetIndex); - this.sourceIndex = sourceindex; + public ResizeRequest(String targetIndex, String sourceIndex) { + this.targetIndexRequest = new 
CreateIndexRequest(targetIndex); + this.sourceIndex = sourceIndex; } @Override public ActionRequestValidationException validate() { - ActionRequestValidationException validationException = shrinkIndexRequest == null ? null : shrinkIndexRequest.validate(); + ActionRequestValidationException validationException = targetIndexRequest == null ? null : targetIndexRequest.validate(); if (sourceIndex == null) { validationException = addValidationError("source index is missing", validationException); } - if (shrinkIndexRequest == null) { - validationException = addValidationError("shrink index request is missing", validationException); + if (targetIndexRequest == null) { + validationException = addValidationError("target index request is missing", validationException); } - if (shrinkIndexRequest.settings().getByPrefix("index.sort.").isEmpty() == false) { - validationException = addValidationError("can't override index sort when shrinking index", validationException); + if (targetIndexRequest.settings().getByPrefix("index.sort.").isEmpty() == false) { + validationException = addValidationError("can't override index sort when resizing an index", validationException); + } + if (type == ResizeType.SPLIT && IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.exists(targetIndexRequest.settings()) == false) { + validationException = addValidationError("index.number_of_shards is required for split operations", validationException); } return validationException; } @@ -79,16 +85,24 @@ public void setSourceIndex(String index) { @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); - shrinkIndexRequest = new CreateIndexRequest(); - shrinkIndexRequest.readFrom(in); + targetIndexRequest = new CreateIndexRequest(); + targetIndexRequest.readFrom(in); sourceIndex = in.readString(); + if (in.getVersion().onOrAfter(ResizeAction.COMPATIBILITY_VERSION)) { + type = in.readEnum(ResizeType.class); + } else { + type = ResizeType.SHRINK; // BWC this used to be shrink only + } } @Override public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); - shrinkIndexRequest.writeTo(out); + targetIndexRequest.writeTo(out); out.writeString(sourceIndex); + if (out.getVersion().onOrAfter(ResizeAction.COMPATIBILITY_VERSION)) { + out.writeEnum(type); + } } @Override @@ -101,15 +115,15 @@ public IndicesOptions indicesOptions() { return IndicesOptions.lenientExpandOpen(); } - public void setShrinkIndex(CreateIndexRequest shrinkIndexRequest) { - this.shrinkIndexRequest = Objects.requireNonNull(shrinkIndexRequest, "shrink index request must not be null"); + public void setTargetIndex(CreateIndexRequest targetIndexRequest) { + this.targetIndexRequest = Objects.requireNonNull(targetIndexRequest, "target index request must not be null"); } /** * Returns the {@link CreateIndexRequest} for the shrink index */ - public CreateIndexRequest getShrinkIndexRequest() { - return shrinkIndexRequest; + public CreateIndexRequest getTargetIndexRequest() { + return targetIndexRequest; } /** @@ -128,13 +142,13 @@ public String getSourceIndex() { * non-negative integer, up to the number of copies per shard (number of replicas + 1), * to wait for the desired amount of shard copies to become active before returning. * Index creation will only wait up until the timeout value for the number of shard copies - * to be active before returning. Check {@link ShrinkResponse#isShardsAcked()} to + * to be active before returning. 
Check {@link ResizeResponse#isShardsAcked()} to * determine if the requisite shard copies were all started before returning or timing out. * * @param waitForActiveShards number of active shard copies to wait on */ public void setWaitForActiveShards(ActiveShardCount waitForActiveShards) { - this.getShrinkIndexRequest().waitForActiveShards(waitForActiveShards); + this.getTargetIndexRequest().waitForActiveShards(waitForActiveShards); } /** @@ -145,4 +159,18 @@ public void setWaitForActiveShards(ActiveShardCount waitForActiveShards) { public void setWaitForActiveShards(final int waitForActiveShards) { setWaitForActiveShards(ActiveShardCount.from(waitForActiveShards)); } + + /** + * The type of the resize operation + */ + public void setResizeType(ResizeType type) { + this.type = Objects.requireNonNull(type); + } + + /** + * Returns the type of the resize operation + */ + public ResizeType getResizeType() { + return type; + } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ResizeRequestBuilder.java similarity index 73% rename from core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkRequestBuilder.java rename to core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ResizeRequestBuilder.java index 2bd10397193d5..6d8d98c0d75f0 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ResizeRequestBuilder.java @@ -18,31 +18,32 @@ */ package org.elasticsearch.action.admin.indices.shrink; +import org.elasticsearch.action.Action; import org.elasticsearch.action.admin.indices.create.CreateIndexRequest; import org.elasticsearch.action.support.ActiveShardCount; import org.elasticsearch.action.support.master.AcknowledgedRequestBuilder; import org.elasticsearch.client.ElasticsearchClient; import org.elasticsearch.common.settings.Settings; -public class ShrinkRequestBuilder extends AcknowledgedRequestBuilder { - public ShrinkRequestBuilder(ElasticsearchClient client, ShrinkAction action) { - super(client, action, new ShrinkRequest()); +public class ResizeRequestBuilder extends AcknowledgedRequestBuilder { + public ResizeRequestBuilder(ElasticsearchClient client, Action action) { + super(client, action, new ResizeRequest()); } - public ShrinkRequestBuilder setTargetIndex(CreateIndexRequest request) { - this.request.setShrinkIndex(request); + public ResizeRequestBuilder setTargetIndex(CreateIndexRequest request) { + this.request.setTargetIndex(request); return this; } - public ShrinkRequestBuilder setSourceIndex(String index) { + public ResizeRequestBuilder setSourceIndex(String index) { this.request.setSourceIndex(index); return this; } - public ShrinkRequestBuilder setSettings(Settings settings) { - this.request.getShrinkIndexRequest().settings(settings); + public ResizeRequestBuilder setSettings(Settings settings) { + this.request.getTargetIndexRequest().settings(settings); return this; } @@ -55,12 +56,12 @@ public ShrinkRequestBuilder setSettings(Settings settings) { * non-negative integer, up to the number of copies per shard (number of replicas + 1), * to wait for the desired amount of shard copies to become active before returning. * Index creation will only wait up until the timeout value for the number of shard copies - * to be active before returning. Check {@link ShrinkResponse#isShardsAcked()} to + * to be active before returning. 
Check {@link ResizeResponse#isShardsAcked()} to * determine if the requisite shard copies were all started before returning or timing out. * * @param waitForActiveShards number of active shard copies to wait on */ - public ShrinkRequestBuilder setWaitForActiveShards(ActiveShardCount waitForActiveShards) { + public ResizeRequestBuilder setWaitForActiveShards(ActiveShardCount waitForActiveShards) { this.request.setWaitForActiveShards(waitForActiveShards); return this; } @@ -70,7 +71,12 @@ public ShrinkRequestBuilder setWaitForActiveShards(ActiveShardCount waitForActiv * shard count is passed in, instead of having to first call {@link ActiveShardCount#from(int)} * to get the ActiveShardCount. */ - public ShrinkRequestBuilder setWaitForActiveShards(final int waitForActiveShards) { + public ResizeRequestBuilder setWaitForActiveShards(final int waitForActiveShards) { return setWaitForActiveShards(ActiveShardCount.from(waitForActiveShards)); } + + public ResizeRequestBuilder setResizeType(ResizeType type) { + this.request.setResizeType(type); + return this; + } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ResizeResponse.java similarity index 86% rename from core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkResponse.java rename to core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ResizeResponse.java index 0c5149f6bf353..cea74ced69cfc 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ResizeResponse.java @@ -21,11 +21,11 @@ import org.elasticsearch.action.admin.indices.create.CreateIndexResponse; -public final class ShrinkResponse extends CreateIndexResponse { - ShrinkResponse() { +public final class ResizeResponse extends CreateIndexResponse { + ResizeResponse() { } - ShrinkResponse(boolean acknowledged, boolean shardsAcked, String index) { + ResizeResponse(boolean acknowledged, boolean shardsAcked, String index) { super(acknowledged, shardsAcked, index); } } diff --git a/core/src/main/java/org/elasticsearch/common/settings/loader/package-info.java b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ResizeType.java similarity index 81% rename from core/src/main/java/org/elasticsearch/common/settings/loader/package-info.java rename to core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ResizeType.java index c367b498c15b2..bca386a9567d6 100644 --- a/core/src/main/java/org/elasticsearch/common/settings/loader/package-info.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ResizeType.java @@ -17,8 +17,11 @@ * under the License. */ +package org.elasticsearch.action.admin.indices.shrink; + /** - * Settings loader (parser) allowing to parse different "source" formats into - * a {@link org.elasticsearch.common.settings.Settings}. 
+ * The type of the resize operation */ -package org.elasticsearch.common.settings.loader; \ No newline at end of file +public enum ResizeType { + SHRINK, SPLIT; +} diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkAction.java index 8b5b4670e3c4d..48c23d643ba4c 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkAction.java @@ -22,7 +22,7 @@ import org.elasticsearch.action.Action; import org.elasticsearch.client.ElasticsearchClient; -public class ShrinkAction extends Action { +public class ShrinkAction extends Action { public static final ShrinkAction INSTANCE = new ShrinkAction(); public static final String NAME = "indices:admin/shrink"; @@ -32,12 +32,12 @@ private ShrinkAction() { } @Override - public ShrinkResponse newResponse() { - return new ShrinkResponse(); + public ResizeResponse newResponse() { + return new ResizeResponse(); } @Override - public ShrinkRequestBuilder newRequestBuilder(ElasticsearchClient client) { - return new ShrinkRequestBuilder(client, this); + public ResizeRequestBuilder newRequestBuilder(ElasticsearchClient client) { + return new ResizeRequestBuilder(client, this); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/TransportResizeAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/TransportResizeAction.java new file mode 100644 index 0000000000000..87dd9f9fa2d21 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/TransportResizeAction.java @@ -0,0 +1,201 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.action.admin.indices.shrink; + +import org.apache.lucene.index.IndexWriter; +import org.elasticsearch.Version; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.admin.indices.create.CreateIndexClusterStateUpdateRequest; +import org.elasticsearch.action.admin.indices.create.CreateIndexRequest; +import org.elasticsearch.action.admin.indices.stats.IndexShardStats; +import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse; +import org.elasticsearch.action.support.ActionFilters; +import org.elasticsearch.action.support.master.TransportMasterNodeAction; +import org.elasticsearch.client.Client; +import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.block.ClusterBlockException; +import org.elasticsearch.cluster.block.ClusterBlockLevel; +import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; +import org.elasticsearch.cluster.metadata.MetaDataCreateIndexService; +import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.IndexNotFoundException; +import org.elasticsearch.index.shard.DocsStats; +import org.elasticsearch.index.shard.ShardId; +import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.transport.TransportService; + +import java.util.Locale; +import java.util.Objects; +import java.util.Set; +import java.util.function.IntFunction; + +/** + * Main class to initiate resizing (shrink / split) an index into a new index + */ +public class TransportResizeAction extends TransportMasterNodeAction { + private final MetaDataCreateIndexService createIndexService; + private final Client client; + + @Inject + public TransportResizeAction(Settings settings, TransportService transportService, ClusterService clusterService, + ThreadPool threadPool, MetaDataCreateIndexService createIndexService, + ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, Client client) { + this(settings, ResizeAction.NAME, transportService, clusterService, threadPool, createIndexService, actionFilters, + indexNameExpressionResolver, client); + } + + protected TransportResizeAction(Settings settings, String actionName, TransportService transportService, ClusterService clusterService, + ThreadPool threadPool, MetaDataCreateIndexService createIndexService, + ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, Client client) { + super(settings, actionName, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver, + ResizeRequest::new); + this.createIndexService = createIndexService; + this.client = client; + } + + + @Override + protected String executor() { + // we go async right away + return ThreadPool.Names.SAME; + } + + @Override + protected ResizeResponse newResponse() { + return new ResizeResponse(); + } + + @Override + protected ClusterBlockException checkBlock(ResizeRequest request, ClusterState state) { + return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, request.getTargetIndexRequest().index()); + } + + @Override + protected void masterOperation(final ResizeRequest resizeRequest, final ClusterState state, + final ActionListener listener) { + + // there is no need to fetch docs stats for split but we keep it simple and do it 
anyway for simplicity of the code + final String sourceIndex = indexNameExpressionResolver.resolveDateMathExpression(resizeRequest.getSourceIndex()); + final String targetIndex = indexNameExpressionResolver.resolveDateMathExpression(resizeRequest.getTargetIndexRequest().index()); + client.admin().indices().prepareStats(sourceIndex).clear().setDocs(true).execute(new ActionListener() { + @Override + public void onResponse(IndicesStatsResponse indicesStatsResponse) { + CreateIndexClusterStateUpdateRequest updateRequest = prepareCreateIndexRequest(resizeRequest, state, + (i) -> { + IndexShardStats shard = indicesStatsResponse.getIndex(sourceIndex).getIndexShards().get(i); + return shard == null ? null : shard.getPrimary().getDocs(); + }, sourceIndex, targetIndex); + createIndexService.createIndex( + updateRequest, + ActionListener.wrap(response -> + listener.onResponse(new ResizeResponse(response.isAcknowledged(), response.isShardsAcked(), + updateRequest.index())), listener::onFailure + ) + ); + } + + @Override + public void onFailure(Exception e) { + listener.onFailure(e); + } + }); + + } + + // static for unittesting this method + static CreateIndexClusterStateUpdateRequest prepareCreateIndexRequest(final ResizeRequest resizeRequest, final ClusterState state + , final IntFunction perShardDocStats, String sourceIndexName, String targetIndexName) { + final CreateIndexRequest targetIndex = resizeRequest.getTargetIndexRequest(); + final IndexMetaData metaData = state.metaData().index(sourceIndexName); + if (metaData == null) { + throw new IndexNotFoundException(sourceIndexName); + } + final Settings targetIndexSettings = Settings.builder().put(targetIndex.settings()) + .normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX).build(); + final int numShards; + if (IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.exists(targetIndexSettings)) { + numShards = IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.get(targetIndexSettings); + } else { + assert resizeRequest.getResizeType() == ResizeType.SHRINK : "split must specify the number of shards explicitly"; + numShards = 1; + } + + for (int i = 0; i < numShards; i++) { + if (resizeRequest.getResizeType() == ResizeType.SHRINK) { + Set shardIds = IndexMetaData.selectShrinkShards(i, metaData, numShards); + long count = 0; + for (ShardId id : shardIds) { + DocsStats docsStats = perShardDocStats.apply(id.id()); + if (docsStats != null) { + count += docsStats.getCount(); + } + if (count > IndexWriter.MAX_DOCS) { + throw new IllegalStateException("Can't merge index with more than [" + IndexWriter.MAX_DOCS + + "] docs - too many documents in shards " + shardIds); + } + } + } else { + Objects.requireNonNull(IndexMetaData.selectSplitShard(i, metaData, numShards)); + // we just execute this to ensure we get the right exceptions if the number of shards is wrong or less then etc. 
+ } + } + + if (IndexMetaData.INDEX_ROUTING_PARTITION_SIZE_SETTING.exists(targetIndexSettings)) { + throw new IllegalArgumentException("cannot provide a routing partition size value when resizing an index"); + } + if (IndexMetaData.INDEX_NUMBER_OF_ROUTING_SHARDS_SETTING.exists(targetIndexSettings)) { + throw new IllegalArgumentException("cannot provide index.number_of_routing_shards on resize"); + } + String cause = resizeRequest.getResizeType().name().toLowerCase(Locale.ROOT) + "_index"; + targetIndex.cause(cause); + Settings.Builder settingsBuilder = Settings.builder().put(targetIndexSettings); + settingsBuilder.put("index.number_of_shards", numShards); + targetIndex.settings(settingsBuilder); + + return new CreateIndexClusterStateUpdateRequest(targetIndex, + cause, targetIndex.index(), targetIndexName, true) + // mappings are updated on the node when creating in the shards, this prevents race-conditions since all mapping must be + // applied once we took the snapshot and if somebody messes things up and switches the index read/write and adds docs we miss + // the mappings for everything is corrupted and hard to debug + .ackTimeout(targetIndex.timeout()) + .masterNodeTimeout(targetIndex.masterNodeTimeout()) + .settings(targetIndex.settings()) + .aliases(targetIndex.aliases()) + .customs(targetIndex.customs()) + .waitForActiveShards(targetIndex.waitForActiveShards()) + .recoverFrom(metaData.getIndex()) + .resizeType(resizeRequest.getResizeType()); + } + + @Override + protected String getMasterActionName(DiscoveryNode node) { + if (node.getVersion().onOrAfter(ResizeAction.COMPATIBILITY_VERSION)){ + return super.getMasterActionName(node); + } else { + // this is for BWC - when we send this to version that doesn't have ResizeAction.NAME registered + // we have to send to shrink instead. 
+ return ShrinkAction.NAME; + } + } +} diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/TransportShrinkAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/TransportShrinkAction.java index 2555299709cda..acc88251970f3 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/TransportShrinkAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/TransportShrinkAction.java @@ -19,143 +19,28 @@ package org.elasticsearch.action.admin.indices.shrink; -import org.apache.lucene.index.IndexWriter; -import org.elasticsearch.action.ActionListener; -import org.elasticsearch.action.admin.indices.create.CreateIndexClusterStateUpdateRequest; -import org.elasticsearch.action.admin.indices.create.CreateIndexRequest; -import org.elasticsearch.action.admin.indices.stats.IndexShardStats; -import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse; import org.elasticsearch.action.support.ActionFilters; -import org.elasticsearch.action.support.master.TransportMasterNodeAction; import org.elasticsearch.client.Client; -import org.elasticsearch.cluster.ClusterState; -import org.elasticsearch.cluster.block.ClusterBlockException; -import org.elasticsearch.cluster.block.ClusterBlockLevel; -import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.cluster.metadata.MetaDataCreateIndexService; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.index.shard.DocsStats; -import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; -import java.util.Set; -import java.util.function.IntFunction; - /** - * Main class to initiate shrinking an index into a new index with a single shard + * Main class to initiate shrinking an index into a new index + * This class is only here for backwards compatibility. 
It will be replaced by + * TransportResizeAction in 7.x once this is backported */ -public class TransportShrinkAction extends TransportMasterNodeAction { - - private final MetaDataCreateIndexService createIndexService; - private final Client client; +public class TransportShrinkAction extends TransportResizeAction { @Inject public TransportShrinkAction(Settings settings, TransportService transportService, ClusterService clusterService, ThreadPool threadPool, MetaDataCreateIndexService createIndexService, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, Client client) { - super(settings, ShrinkAction.NAME, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver, - ShrinkRequest::new); - this.createIndexService = createIndexService; - this.client = client; + super(settings, ShrinkAction.NAME, transportService, clusterService, threadPool, createIndexService, actionFilters, + indexNameExpressionResolver, client); } - - @Override - protected String executor() { - // we go async right away - return ThreadPool.Names.SAME; - } - - @Override - protected ShrinkResponse newResponse() { - return new ShrinkResponse(); - } - - @Override - protected ClusterBlockException checkBlock(ShrinkRequest request, ClusterState state) { - return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, request.getShrinkIndexRequest().index()); - } - - @Override - protected void masterOperation(final ShrinkRequest shrinkRequest, final ClusterState state, - final ActionListener listener) { - final String sourceIndex = indexNameExpressionResolver.resolveDateMathExpression(shrinkRequest.getSourceIndex()); - client.admin().indices().prepareStats(sourceIndex).clear().setDocs(true).execute(new ActionListener() { - @Override - public void onResponse(IndicesStatsResponse indicesStatsResponse) { - CreateIndexClusterStateUpdateRequest updateRequest = prepareCreateIndexRequest(shrinkRequest, state, - (i) -> { - IndexShardStats shard = indicesStatsResponse.getIndex(sourceIndex).getIndexShards().get(i); - return shard == null ? 
null : shard.getPrimary().getDocs(); - }, indexNameExpressionResolver); - createIndexService.createIndex( - updateRequest, - ActionListener.wrap(response -> - listener.onResponse(new ShrinkResponse(response.isAcknowledged(), response.isShardsAcked(), updateRequest.index())), - listener::onFailure - ) - ); - } - - @Override - public void onFailure(Exception e) { - listener.onFailure(e); - } - }); - - } - - // static for unittesting this method - static CreateIndexClusterStateUpdateRequest prepareCreateIndexRequest(final ShrinkRequest shrinkRequest, final ClusterState state - , final IntFunction perShardDocStats, IndexNameExpressionResolver indexNameExpressionResolver) { - final String sourceIndex = indexNameExpressionResolver.resolveDateMathExpression(shrinkRequest.getSourceIndex()); - final CreateIndexRequest targetIndex = shrinkRequest.getShrinkIndexRequest(); - final String targetIndexName = indexNameExpressionResolver.resolveDateMathExpression(targetIndex.index()); - final IndexMetaData metaData = state.metaData().index(sourceIndex); - final Settings targetIndexSettings = Settings.builder().put(targetIndex.settings()) - .normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX).build(); - int numShards = 1; - if (IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.exists(targetIndexSettings)) { - numShards = IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.get(targetIndexSettings); - } - for (int i = 0; i < numShards; i++) { - Set shardIds = IndexMetaData.selectShrinkShards(i, metaData, numShards); - long count = 0; - for (ShardId id : shardIds) { - DocsStats docsStats = perShardDocStats.apply(id.id()); - if (docsStats != null) { - count += docsStats.getCount(); - } - if (count > IndexWriter.MAX_DOCS) { - throw new IllegalStateException("Can't merge index with more than [" + IndexWriter.MAX_DOCS - + "] docs - too many documents in shards " + shardIds); - } - } - - } - if (IndexMetaData.INDEX_ROUTING_PARTITION_SIZE_SETTING.exists(targetIndexSettings)) { - throw new IllegalArgumentException("cannot provide a routing partition size value when shrinking an index"); - } - targetIndex.cause("shrink_index"); - Settings.Builder settingsBuilder = Settings.builder().put(targetIndexSettings); - settingsBuilder.put("index.number_of_shards", numShards); - targetIndex.settings(settingsBuilder); - - return new CreateIndexClusterStateUpdateRequest(targetIndex, - "shrink_index", targetIndex.index(), targetIndexName, true) - // mappings are updated on the node when merging in the shards, this prevents race-conditions since all mapping must be - // applied once we took the snapshot and if somebody fucks things up and switches the index read/write and adds docs we miss - // the mappings for everything is corrupted and hard to debug - .ackTimeout(targetIndex.timeout()) - .masterNodeTimeout(targetIndex.masterNodeTimeout()) - .settings(targetIndex.settings()) - .aliases(targetIndex.aliases()) - .customs(targetIndex.customs()) - .waitForActiveShards(targetIndex.waitForActiveShards()) - .shrinkFrom(metaData.getIndex()); - } - } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/stats/ShardStats.java b/core/src/main/java/org/elasticsearch/action/admin/indices/stats/ShardStats.java index 3d1e567fa1cea..8b41c4bf90c99 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/stats/ShardStats.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/stats/ShardStats.java @@ -26,7 +26,8 @@ import org.elasticsearch.common.io.stream.StreamOutput; import 
org.elasticsearch.common.io.stream.Streamable; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.index.engine.CommitStats; import org.elasticsearch.index.seqno.SeqNoStats; @@ -34,7 +35,7 @@ import java.io.IOException; -public class ShardStats implements Streamable, Writeable, ToXContent { +public class ShardStats implements Streamable, Writeable, ToXContentFragment { private ShardRouting shardRouting; private CommonStats commonStats; @Nullable diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/template/delete/TransportDeleteIndexTemplateAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/template/delete/TransportDeleteIndexTemplateAction.java index bb18b57fa9386..ad9f73b55b0cb 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/template/delete/TransportDeleteIndexTemplateAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/template/delete/TransportDeleteIndexTemplateAction.java @@ -62,7 +62,7 @@ protected DeleteIndexTemplateResponse newResponse() { @Override protected ClusterBlockException checkBlock(DeleteIndexTemplateRequest request, ClusterState state) { - return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, ""); + return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE); } @Override diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/template/get/GetIndexTemplatesRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/template/get/GetIndexTemplatesRequest.java index aeefc63bfa0c4..cfaa9408da1e5 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/template/get/GetIndexTemplatesRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/template/get/GetIndexTemplatesRequest.java @@ -42,6 +42,17 @@ public GetIndexTemplatesRequest(String... 
names) { this.names = names; } + public GetIndexTemplatesRequest(StreamInput in) throws IOException { + super(in); + names = in.readStringArray(); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeStringArray(names); + } + @Override public ActionRequestValidationException validate() { ActionRequestValidationException validationException = null; @@ -74,13 +85,6 @@ public String[] names() { @Override public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - names = in.readStringArray(); - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - out.writeStringArray(names); + throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable"); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/template/get/TransportGetIndexTemplatesAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/template/get/TransportGetIndexTemplatesAction.java index 294550c9a62e5..82c8bcec9b020 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/template/get/TransportGetIndexTemplatesAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/template/get/TransportGetIndexTemplatesAction.java @@ -43,7 +43,7 @@ public class TransportGetIndexTemplatesAction extends TransportMasterNodeReadAct @Inject public TransportGetIndexTemplatesAction(Settings settings, TransportService transportService, ClusterService clusterService, ThreadPool threadPool, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) { - super(settings, GetIndexTemplatesAction.NAME, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver, GetIndexTemplatesRequest::new); + super(settings, GetIndexTemplatesAction.NAME, transportService, clusterService, threadPool, actionFilters, GetIndexTemplatesRequest::new, indexNameExpressionResolver); } @Override diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/PutIndexTemplateRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/PutIndexTemplateRequest.java index 99ad163f48dd2..1553a528c57f9 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/PutIndexTemplateRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/PutIndexTemplateRequest.java @@ -104,7 +104,7 @@ public ActionRequestValidationException validate() { validationException = addValidationError("name is missing", validationException); } if (indexPatterns == null || indexPatterns.size() == 0) { - validationException = addValidationError("pattern is missing", validationException); + validationException = addValidationError("index patterns are missing", validationException); } return validationException; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/TransportPutIndexTemplateAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/TransportPutIndexTemplateAction.java index 342b239777329..7d9897b112eae 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/TransportPutIndexTemplateAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/TransportPutIndexTemplateAction.java @@ -66,7 +66,7 @@ protected PutIndexTemplateResponse newResponse() { @Override protected ClusterBlockException checkBlock(PutIndexTemplateRequest 
request, ClusterState state) { - return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, ""); + return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE); } @Override diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/upgrade/get/UpgradeStatusResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/upgrade/get/UpgradeStatusResponse.java index 6b1d02b6a4b02..565348f5ac22b 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/upgrade/get/UpgradeStatusResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/upgrade/get/UpgradeStatusResponse.java @@ -23,7 +23,8 @@ import org.elasticsearch.action.support.broadcast.BroadcastResponse; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; @@ -34,7 +35,7 @@ import java.util.Map; import java.util.Set; -public class UpgradeStatusResponse extends BroadcastResponse implements ToXContent { +public class UpgradeStatusResponse extends BroadcastResponse implements ToXContentFragment { private ShardUpgradeStatus[] shards; private Map indicesUpgradeStatus; diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/TransportValidateQueryAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/TransportValidateQueryAction.java index 3a13915b3aaea..c4369a30586d0 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/TransportValidateQueryAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/TransportValidateQueryAction.java @@ -155,7 +155,7 @@ protected ShardValidateQueryResponse shardOperation(ShardValidateQueryRequest re String error = null; ShardSearchLocalRequest shardSearchLocalRequest = new ShardSearchLocalRequest(request.shardId(), request.types(), request.nowInMillis(), request.filteringAliases()); - SearchContext searchContext = searchService.createSearchContext(shardSearchLocalRequest, SearchService.NO_TIMEOUT, null); + SearchContext searchContext = searchService.createSearchContext(shardSearchLocalRequest, SearchService.NO_TIMEOUT); try { ParsedQuery parsedQuery = searchContext.getQueryShardContext().toQuery(request.query()); searchContext.parsedQuery(parsedQuery); diff --git a/core/src/main/java/org/elasticsearch/action/bulk/BulkItemRequest.java b/core/src/main/java/org/elasticsearch/action/bulk/BulkItemRequest.java index cac2e171b61e0..3180f57d20409 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/BulkItemRequest.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/BulkItemRequest.java @@ -20,11 +20,13 @@ package org.elasticsearch.action.bulk; import org.elasticsearch.action.DocWriteRequest; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; import java.io.IOException; +import java.util.Objects; public class BulkItemRequest implements Streamable { @@ -63,6 +65,30 @@ void setPrimaryResponse(BulkItemResponse primaryResponse) { this.primaryResponse = primaryResponse; } + /** + * Abort this request, and store a {@link 
org.elasticsearch.action.bulk.BulkItemResponse.Failure} response. + * + * @param index The concrete index that was resolved for this request + * @param cause The cause of the rejection (may not be null) + * @throws IllegalStateException If a response already exists for this request + */ + public void abort(String index, Exception cause) { + if (primaryResponse == null) { + final BulkItemResponse.Failure failure = new BulkItemResponse.Failure(index, request.type(), request.id(), + Objects.requireNonNull(cause), true); + setPrimaryResponse(new BulkItemResponse(id, request.opType(), failure)); + } else { + assert primaryResponse.isFailed() && primaryResponse.getFailure().isAborted() + : "response [" + Strings.toString(primaryResponse) + "]; cause [" + cause + "]"; + if (primaryResponse.isFailed() && primaryResponse.getFailure().isAborted()) { + primaryResponse.getFailure().getCause().addSuppressed(cause); + } else { + throw new IllegalStateException( + "aborting item that with response [" + primaryResponse + "] that was previously processed", cause); + } + } + } + public static BulkItemRequest readBulkItem(StreamInput in) throws IOException { BulkItemRequest item = new BulkItemRequest(); item.readFrom(in); diff --git a/core/src/main/java/org/elasticsearch/action/bulk/BulkItemResponse.java b/core/src/main/java/org/elasticsearch/action/bulk/BulkItemResponse.java index 57e14bcaa2d00..6fd2fd2da848b 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/BulkItemResponse.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/BulkItemResponse.java @@ -37,7 +37,7 @@ import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.rest.RestStatus; import java.io.IOException; @@ -173,6 +173,7 @@ public static class Failure implements Writeable, ToXContentFragment { private final Exception cause; private final RestStatus status; private final long seqNo; + private final boolean aborted; /** * For write failures before operation was assigned a sequence number. @@ -181,25 +182,30 @@ public static class Failure implements Writeable, ToXContentFragment { * to record operation sequence no with failure */ public Failure(String index, String type, String id, Exception cause) { - this(index, type, id, cause, ExceptionsHelper.status(cause), SequenceNumbersService.UNASSIGNED_SEQ_NO); + this(index, type, id, cause, ExceptionsHelper.status(cause), SequenceNumbers.UNASSIGNED_SEQ_NO, false); + } + + public Failure(String index, String type, String id, Exception cause, boolean aborted) { + this(index, type, id, cause, ExceptionsHelper.status(cause), SequenceNumbers.UNASSIGNED_SEQ_NO, aborted); } public Failure(String index, String type, String id, Exception cause, RestStatus status) { - this(index, type, id, cause, status, SequenceNumbersService.UNASSIGNED_SEQ_NO); + this(index, type, id, cause, status, SequenceNumbers.UNASSIGNED_SEQ_NO, false); } /** For write failures after operation was assigned a sequence number. 
*/ public Failure(String index, String type, String id, Exception cause, long seqNo) { - this(index, type, id, cause, ExceptionsHelper.status(cause), seqNo); + this(index, type, id, cause, ExceptionsHelper.status(cause), seqNo, false); } - public Failure(String index, String type, String id, Exception cause, RestStatus status, long seqNo) { + public Failure(String index, String type, String id, Exception cause, RestStatus status, long seqNo, boolean aborted) { this.index = index; this.type = type; this.id = id; this.cause = cause; this.status = status; this.seqNo = seqNo; + this.aborted = aborted; } /** @@ -214,7 +220,12 @@ public Failure(StreamInput in) throws IOException { if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha1)) { seqNo = in.readZLong(); } else { - seqNo = SequenceNumbersService.UNASSIGNED_SEQ_NO; + seqNo = SequenceNumbers.UNASSIGNED_SEQ_NO; + } + if (supportsAbortedFlag(in.getVersion())) { + aborted = in.readBoolean(); + } else { + aborted = false; } } @@ -227,8 +238,15 @@ public void writeTo(StreamOutput out) throws IOException { if (out.getVersion().onOrAfter(Version.V_6_0_0_alpha1)) { out.writeZLong(getSeqNo()); } + if (supportsAbortedFlag(out.getVersion())) { + out.writeBoolean(aborted); + } } + private static boolean supportsAbortedFlag(Version version) { + // The "aborted" flag was added for 5.5.3 and 5.6.0, but was not in 6.0.0-beta2 + return version.after(Version.V_6_0_0_beta2) || (version.major == 5 && version.onOrAfter(Version.V_5_5_3)); + } /** * The index name of the action. @@ -274,13 +292,22 @@ public Exception getCause() { /** * The operation sequence number generated by primary - * NOTE: {@link SequenceNumbersService#UNASSIGNED_SEQ_NO} + * NOTE: {@link SequenceNumbers#UNASSIGNED_SEQ_NO} * indicates sequence number was not generated by primary */ public long getSeqNo() { return seqNo; } + /** + * Whether this failure is the result of an abort. + * If {@code true}, the request to which this failure relates should never be retried, regardless of the {@link #getCause() cause}. 
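The `isAborted()` accessor added below marks failures that must never be retried. A minimal sketch (not part of this change) of how a caller holding an existing `bulkRequest`/`bulkResponse` pair might honour the flag when building a follow-up request; `getItemId()` and `requests()` are pre-existing bulk APIs used here only for illustration:

```java
// Retry only the failed items that were not aborted; aborted failures are final
// regardless of their cause.
BulkRequest retryRequest = new BulkRequest();
for (BulkItemResponse item : bulkResponse.getItems()) {
    if (item.isFailed() && item.getFailure().isAborted() == false) {
        retryRequest.add(bulkRequest.requests().get(item.getItemId()));
    }
}
```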
+ * @see BulkItemRequest#abort(String, Exception) + */ + public boolean isAborted() { + return aborted; + } + @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.field(INDEX_FIELD, index); diff --git a/core/src/main/java/org/elasticsearch/action/bulk/BulkProcessor.java b/core/src/main/java/org/elasticsearch/action/bulk/BulkProcessor.java index 3269fbc95008f..372837521dd7f 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/BulkProcessor.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/BulkProcessor.java @@ -26,14 +26,17 @@ import org.elasticsearch.client.Client; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.ByteSizeUnit; import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.threadpool.Scheduler; import org.elasticsearch.threadpool.ThreadPool; import java.io.Closeable; import java.util.Objects; +import java.util.concurrent.ScheduledThreadPoolExecutor; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicLong; import java.util.function.BiConsumer; @@ -78,22 +81,20 @@ public static class Builder { private final BiConsumer> consumer; private final Listener listener; - private final ThreadPool threadPool; - + private final Scheduler scheduler; + private final Runnable onClose; private int concurrentRequests = 1; private int bulkActions = 1000; private ByteSizeValue bulkSize = new ByteSizeValue(5, ByteSizeUnit.MB); private TimeValue flushInterval = null; private BackoffPolicy backoffPolicy = BackoffPolicy.exponentialBackoff(); - /** - * Creates a builder of bulk processor with the client to use and the listener that will be used - * to be notified on the completion of bulk requests. - */ - public Builder(BiConsumer> consumer, Listener listener, ThreadPool threadPool) { + private Builder(BiConsumer> consumer, Listener listener, + Scheduler scheduler, Runnable onClose) { this.consumer = consumer; this.listener = listener; - this.threadPool = threadPool; + this.scheduler = scheduler; + this.onClose = onClose; } /** @@ -155,39 +156,51 @@ public Builder setBackoffPolicy(BackoffPolicy backoffPolicy) { * Builds a new bulk processor. 
*/ public BulkProcessor build() { - return new BulkProcessor(consumer, backoffPolicy, listener, concurrentRequests, bulkActions, bulkSize, flushInterval, threadPool); + return new BulkProcessor(consumer, backoffPolicy, listener, concurrentRequests, bulkActions, bulkSize, flushInterval, + scheduler, onClose); } } public static Builder builder(Client client, Listener listener) { Objects.requireNonNull(client, "client"); Objects.requireNonNull(listener, "listener"); + return new Builder(client::bulk, listener, client.threadPool(), () -> {}); + } - return new Builder(client::bulk, listener, client.threadPool()); + public static Builder builder(BiConsumer> consumer, Listener listener) { + Objects.requireNonNull(consumer, "consumer"); + Objects.requireNonNull(listener, "listener"); + final ScheduledThreadPoolExecutor scheduledThreadPoolExecutor = Scheduler.initScheduler(Settings.EMPTY); + return new Builder(consumer, listener, + (delay, executor, command) -> scheduledThreadPoolExecutor.schedule(command, delay.millis(), TimeUnit.MILLISECONDS), + () -> Scheduler.terminate(scheduledThreadPoolExecutor, 10, TimeUnit.SECONDS)); } private final int bulkActions; private final long bulkSize; - private final ThreadPool.Cancellable cancellableFlushTask; + private final Scheduler.Cancellable cancellableFlushTask; private final AtomicLong executionIdGen = new AtomicLong(); private BulkRequest bulkRequest; private final BulkRequestHandler bulkRequestHandler; + private final Scheduler scheduler; + private final Runnable onClose; private volatile boolean closed = false; BulkProcessor(BiConsumer> consumer, BackoffPolicy backoffPolicy, Listener listener, int concurrentRequests, int bulkActions, ByteSizeValue bulkSize, @Nullable TimeValue flushInterval, - ThreadPool threadPool) { + Scheduler scheduler, Runnable onClose) { this.bulkActions = bulkActions; this.bulkSize = bulkSize.getBytes(); this.bulkRequest = new BulkRequest(); - this.bulkRequestHandler = new BulkRequestHandler(consumer, backoffPolicy, listener, threadPool, concurrentRequests); - + this.scheduler = scheduler; + this.bulkRequestHandler = new BulkRequestHandler(consumer, backoffPolicy, listener, scheduler, concurrentRequests); // Start period flushing task after everything is setup - this.cancellableFlushTask = startFlushTask(flushInterval, threadPool); + this.cancellableFlushTask = startFlushTask(flushInterval, scheduler); + this.onClose = onClose; } /** @@ -200,6 +213,7 @@ public void close() { } catch (InterruptedException exc) { Thread.currentThread().interrupt(); } + onClose.run(); } /** @@ -289,9 +303,9 @@ public synchronized BulkProcessor add(BytesReference data, @Nullable String defa return this; } - private ThreadPool.Cancellable startFlushTask(TimeValue flushInterval, ThreadPool threadPool) { + private Scheduler.Cancellable startFlushTask(TimeValue flushInterval, Scheduler scheduler) { if (flushInterval == null) { - return new ThreadPool.Cancellable() { + return new Scheduler.Cancellable() { @Override public void cancel() {} @@ -301,8 +315,8 @@ public boolean isCancelled() { } }; } - - return threadPool.scheduleWithFixedDelay(new Flush(), flushInterval, ThreadPool.Names.GENERIC); + final Runnable flushRunnable = scheduler.preserveContext(new Flush()); + return scheduler.scheduleWithFixedDelay(flushRunnable, flushInterval, ThreadPool.Names.GENERIC); } private void executeIfNeeded() { diff --git a/core/src/main/java/org/elasticsearch/action/bulk/BulkRequest.java b/core/src/main/java/org/elasticsearch/action/bulk/BulkRequest.java index 
1b0d89222dda9..fbbe6c1bf8a96 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/BulkRequest.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/BulkRequest.java @@ -429,6 +429,7 @@ public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Null if (upsertRequest != null) { upsertRequest.version(version); upsertRequest.versionType(versionType); + upsertRequest.setPipeline(defaultPipeline); } IndexRequest doc = updateRequest.doc(); if (doc != null) { diff --git a/core/src/main/java/org/elasticsearch/action/bulk/BulkRequestHandler.java b/core/src/main/java/org/elasticsearch/action/bulk/BulkRequestHandler.java index 52a83b00483a4..423648bbb7105 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/BulkRequestHandler.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/BulkRequestHandler.java @@ -25,7 +25,7 @@ import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException; -import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.threadpool.Scheduler; import java.util.concurrent.CountDownLatch; import java.util.concurrent.Semaphore; @@ -44,14 +44,13 @@ public final class BulkRequestHandler { private final int concurrentRequests; BulkRequestHandler(BiConsumer> consumer, BackoffPolicy backoffPolicy, - BulkProcessor.Listener listener, ThreadPool threadPool, - int concurrentRequests) { + BulkProcessor.Listener listener, Scheduler scheduler, int concurrentRequests) { assert concurrentRequests >= 0; this.logger = Loggers.getLogger(getClass()); this.consumer = consumer; this.listener = listener; this.concurrentRequests = concurrentRequests; - this.retry = new Retry(EsRejectedExecutionException.class, backoffPolicy, threadPool); + this.retry = new Retry(EsRejectedExecutionException.class, backoffPolicy, scheduler); this.semaphore = new Semaphore(concurrentRequests > 0 ? concurrentRequests : 1); } diff --git a/core/src/main/java/org/elasticsearch/action/bulk/Retry.java b/core/src/main/java/org/elasticsearch/action/bulk/Retry.java index 8a9ef245f36a6..9985d23b9badb 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/Retry.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/Retry.java @@ -26,6 +26,7 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.util.concurrent.FutureUtils; +import org.elasticsearch.threadpool.Scheduler; import org.elasticsearch.threadpool.ThreadPool; import java.util.ArrayList; @@ -41,13 +42,12 @@ public class Retry { private final Class retryOnThrowable; private final BackoffPolicy backoffPolicy; - private final ThreadPool threadPool; + private final Scheduler scheduler; - - public Retry(Class retryOnThrowable, BackoffPolicy backoffPolicy, ThreadPool threadPool) { + public Retry(Class retryOnThrowable, BackoffPolicy backoffPolicy, Scheduler scheduler) { this.retryOnThrowable = retryOnThrowable; this.backoffPolicy = backoffPolicy; - this.threadPool = threadPool; + this.scheduler = scheduler; } /** @@ -58,8 +58,9 @@ public Retry(Class retryOnThrowable, BackoffPolicy backoffP * @param listener A listener that is invoked when the bulk request finishes or completes with an exception. 
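The new `BulkProcessor.builder(BiConsumer, Listener)` overload above removes the hard dependency on a `ThreadPool`: it creates its own scheduler and shuts it down through the `onClose` hook when the processor is closed. A hedged usage sketch; the consumer shown (a high-level REST client's `bulkAsync`) and the tuning values are illustrative assumptions:

```java
BulkProcessor processor = BulkProcessor.builder(
        (request, bulkListener) -> client.bulkAsync(request, bulkListener), // any BiConsumer<BulkRequest, ActionListener<BulkResponse>>
        new BulkProcessor.Listener() {
            @Override public void beforeBulk(long executionId, BulkRequest request) {}
            @Override public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {}
            @Override public void afterBulk(long executionId, BulkRequest request, Throwable failure) {}
        })
        .setBulkActions(500)
        .setFlushInterval(TimeValue.timeValueSeconds(5))
        .build();

// ... add index/update/delete requests to the processor ...

processor.close(); // also runs onClose, terminating the internally created scheduler
```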
The listener is not * @param settings settings */ - public void withBackoff(BiConsumer> consumer, BulkRequest bulkRequest, ActionListener listener, Settings settings) { - RetryHandler r = new RetryHandler(retryOnThrowable, backoffPolicy, consumer, listener, settings, threadPool); + public void withBackoff(BiConsumer> consumer, BulkRequest bulkRequest, + ActionListener listener, Settings settings) { + RetryHandler r = new RetryHandler(retryOnThrowable, backoffPolicy, consumer, listener, settings, scheduler); r.execute(bulkRequest); } @@ -72,7 +73,8 @@ public void withBackoff(BiConsumer> co * @param settings settings * @return a future representing the bulk response returned by the client. */ - public PlainActionFuture withBackoff(BiConsumer> consumer, BulkRequest bulkRequest, Settings settings) { + public PlainActionFuture withBackoff(BiConsumer> consumer, + BulkRequest bulkRequest, Settings settings) { PlainActionFuture future = PlainActionFuture.newFuture(); withBackoff(consumer, bulkRequest, future, settings); return future; @@ -80,7 +82,7 @@ public PlainActionFuture withBackoff(BiConsumer { private final Logger logger; - private final ThreadPool threadPool; + private final Scheduler scheduler; private final BiConsumer> consumer; private final ActionListener listener; private final Iterator backoff; @@ -95,13 +97,13 @@ static class RetryHandler implements ActionListener { RetryHandler(Class retryOnThrowable, BackoffPolicy backoffPolicy, BiConsumer> consumer, ActionListener listener, - Settings settings, ThreadPool threadPool) { + Settings settings, Scheduler scheduler) { this.retryOnThrowable = retryOnThrowable; this.backoff = backoffPolicy.iterator(); this.consumer = consumer; this.listener = listener; this.logger = Loggers.getLogger(getClass(), settings); - this.threadPool = threadPool; + this.scheduler = scheduler; // in contrast to System.currentTimeMillis(), nanoTime() uses a monotonic clock under the hood this.startTimestampNanos = System.nanoTime(); } @@ -136,8 +138,8 @@ private void retry(BulkRequest bulkRequestForRetry) { assert backoff.hasNext(); TimeValue next = backoff.next(); logger.trace("Retry of bulk request scheduled in {} ms.", next.millis()); - Runnable command = threadPool.getThreadContext().preserveContext(() -> this.execute(bulkRequestForRetry)); - scheduledRequestFuture = threadPool.schedule(next, ThreadPool.Names.SAME, command); + Runnable command = scheduler.preserveContext(() -> this.execute(bulkRequestForRetry)); + scheduledRequestFuture = scheduler.schedule(next, ThreadPool.Names.SAME, command); } private BulkRequest createBulkRequestForRetry(BulkResponse bulkItemResponses) { diff --git a/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java b/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java index 55029d18257db..e66df2b0d9267 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java @@ -54,10 +54,9 @@ import org.elasticsearch.index.engine.Engine; import org.elasticsearch.index.engine.VersionConflictEngineException; import org.elasticsearch.index.get.GetResult; -import org.elasticsearch.index.mapper.MapperParsingException; import org.elasticsearch.index.mapper.Mapping; import org.elasticsearch.index.mapper.SourceToParse; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.index.shard.IndexShard; import 
org.elasticsearch.index.shard.ShardId; import org.elasticsearch.index.translog.Translog; @@ -120,8 +119,10 @@ public static WritePrimaryResult performOnP final IndexMetaData metaData = primary.indexSettings().getIndexMetaData(); Translog.Location location = null; for (int requestIndex = 0; requestIndex < request.items().length; requestIndex++) { - location = executeBulkItemRequest(metaData, primary, request, location, requestIndex, + if (isAborted(request.items()[requestIndex].getPrimaryResponse()) == false) { + location = executeBulkItemRequest(metaData, primary, request, location, requestIndex, updateHelper, nowInMillisSupplier, mappingUpdater); + } } BulkItemResponse[] responses = new BulkItemResponse[request.items().length]; BulkItemRequest[] items = request.items(); @@ -260,6 +261,10 @@ static Translog.Location executeBulkItemRequest(IndexMetaData metaData, IndexSha return calculateTranslogLocation(location, responseHolder); } + private static boolean isAborted(BulkItemResponse response) { + return response != null && response.isFailed() && response.getFailure().isAborted(); + } + private static boolean isConflictException(final Exception e) { return ExceptionsHelper.unwrapCause(e) instanceof VersionConflictEngineException; } @@ -270,7 +275,7 @@ private static boolean isConflictException(final Exception e) { static BulkItemResultHolder processUpdateResponse(final UpdateRequest updateRequest, final String concreteIndex, final Engine.Result result, final UpdateHelper.Result translate, final IndexShard primary, final int bulkReqId) throws Exception { - assert result.getSeqNo() != SequenceNumbersService.UNASSIGNED_SEQ_NO : "failed result should not have a sequence number"; + assert result.getSeqNo() != SequenceNumbers.UNASSIGNED_SEQ_NO : "failed result should not have a sequence number"; Engine.Operation.TYPE opType = result.getOperationType(); @@ -339,7 +344,7 @@ static BulkItemResultHolder executeUpdateRequestOnce(UpdateRequest updateRequest } catch (Exception failure) { // we may fail translating a update to index or delete operation // we use index result to communicate failure while translating update request - final Engine.Result result = new Engine.IndexResult(failure, updateRequest.version(), SequenceNumbersService.UNASSIGNED_SEQ_NO); + final Engine.Result result = new Engine.IndexResult(failure, updateRequest.version(), SequenceNumbers.UNASSIGNED_SEQ_NO); return new BulkItemResultHolder(null, result, primaryItemRequest); } @@ -441,7 +446,7 @@ static ReplicaItemExecutionMode replicaItemExecutionMode(final BulkItemRequest r final BulkItemResponse primaryResponse = request.getPrimaryResponse(); assert primaryResponse != null : "expected primary response to be set for item [" + index + "] request [" + request.request() + "]"; if (primaryResponse.isFailed()) { - return primaryResponse.getFailure().getSeqNo() != SequenceNumbersService.UNASSIGNED_SEQ_NO + return primaryResponse.getFailure().getSeqNo() != SequenceNumbers.UNASSIGNED_SEQ_NO ? 
ReplicaItemExecutionMode.FAILURE // we have a seq no generated with the failure, replicate as no-op : ReplicaItemExecutionMode.NOOP; // no seq no generated, ignore replication } else { @@ -480,7 +485,7 @@ public static Translog.Location performOnReplica(BulkShardRequest request, Index break; case FAILURE: final BulkItemResponse.Failure failure = item.getPrimaryResponse().getFailure(); - assert failure.getSeqNo() != SequenceNumbersService.UNASSIGNED_SEQ_NO : "seq no must be assigned"; + assert failure.getSeqNo() != SequenceNumbers.UNASSIGNED_SEQ_NO : "seq no must be assigned"; operationResult = replica.markSeqNoAsNoop(failure.getSeqNo(), failure.getMessage()); assert operationResult != null : "operation result must never be null when primary response has no failure"; location = syncOperationResultOrThrow(operationResult, location); diff --git a/core/src/main/java/org/elasticsearch/action/explain/TransportExplainAction.java b/core/src/main/java/org/elasticsearch/action/explain/TransportExplainAction.java index 72aaeb9eb371a..c30dfd360a08b 100644 --- a/core/src/main/java/org/elasticsearch/action/explain/TransportExplainAction.java +++ b/core/src/main/java/org/elasticsearch/action/explain/TransportExplainAction.java @@ -40,7 +40,7 @@ import org.elasticsearch.search.internal.AliasFilter; import org.elasticsearch.search.internal.SearchContext; import org.elasticsearch.search.internal.ShardSearchLocalRequest; -import org.elasticsearch.search.rescore.RescoreSearchContext; +import org.elasticsearch.search.rescore.RescoreContext; import org.elasticsearch.search.rescore.Rescorer; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; @@ -90,7 +90,7 @@ protected void resolveRequest(ClusterState state, InternalRequest request) { protected ExplainResponse shardOperation(ExplainRequest request, ShardId shardId) throws IOException { ShardSearchLocalRequest shardSearchLocalRequest = new ShardSearchLocalRequest(shardId, new String[]{request.type()}, request.nowInMillis, request.filteringAlias()); - SearchContext context = searchService.createSearchContext(shardSearchLocalRequest, SearchService.NO_TIMEOUT, null); + SearchContext context = searchService.createSearchContext(shardSearchLocalRequest, SearchService.NO_TIMEOUT); Engine.GetResult result = null; try { Term uidTerm = context.mapperService().createUidTerm(request.type(), request.id()); @@ -105,9 +105,9 @@ protected ExplainResponse shardOperation(ExplainRequest request, ShardId shardId context.preProcess(true); int topLevelDocId = result.docIdAndVersion().docId + result.docIdAndVersion().context.docBase; Explanation explanation = context.searcher().explain(context.query(), topLevelDocId); - for (RescoreSearchContext ctx : context.rescore()) { + for (RescoreContext ctx : context.rescore()) { Rescorer rescorer = ctx.rescorer(); - explanation = rescorer.explain(topLevelDocId, context, ctx, explanation); + explanation = rescorer.explain(topLevelDocId, context.searcher(), ctx, explanation); } if (request.storedFields() != null || (request.fetchSourceContext() != null && request.fetchSourceContext().fetchSource())) { // Advantage is that we're not opening a second searcher to retrieve the _source. 
Also diff --git a/core/src/main/java/org/elasticsearch/action/get/GetRequest.java b/core/src/main/java/org/elasticsearch/action/get/GetRequest.java index 93045182f4c20..ea5dda45279e6 100644 --- a/core/src/main/java/org/elasticsearch/action/get/GetRequest.java +++ b/core/src/main/java/org/elasticsearch/action/get/GetRequest.java @@ -152,8 +152,8 @@ public GetRequest routing(String routing) { /** * Sets the preference to execute the search. Defaults to randomize across shards. Can be set to - * _local to prefer local shards, _primary to execute only on primary shards, or - * a custom value, which guarantees that the same order will be used across different requests. + * _local to prefer local shards or a custom value, which guarantees that the same order + * will be used across different requests. */ public GetRequest preference(String preference) { this.preference = preference; diff --git a/core/src/main/java/org/elasticsearch/action/get/GetRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/get/GetRequestBuilder.java index 973b130bedbd2..1ca8dbde65200 100644 --- a/core/src/main/java/org/elasticsearch/action/get/GetRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/get/GetRequestBuilder.java @@ -76,8 +76,8 @@ public GetRequestBuilder setRouting(String routing) { /** * Sets the preference to execute the search. Defaults to randomize across shards. Can be set to - * _local to prefer local shards, _primary to execute only on primary shards, or - * a custom value, which guarantees that the same order will be used across different requests. + * _local to prefer local shards or a custom value, which guarantees that the same order + * will be used across different requests. */ public GetRequestBuilder setPreference(String preference) { request.preference(preference); diff --git a/core/src/main/java/org/elasticsearch/action/get/MultiGetRequest.java b/core/src/main/java/org/elasticsearch/action/get/MultiGetRequest.java index 20a619cec2c70..420e0b448b052 100644 --- a/core/src/main/java/org/elasticsearch/action/get/MultiGetRequest.java +++ b/core/src/main/java/org/elasticsearch/action/get/MultiGetRequest.java @@ -284,8 +284,8 @@ public ActionRequestValidationException validate() { /** * Sets the preference to execute the search. Defaults to randomize across shards. Can be set to - * _local to prefer local shards, _primary to execute only on primary shards, or - * a custom value, which guarantees that the same order will be used across different requests. + * _local to prefer local shards or a custom value, which guarantees that the same order + * will be used across different requests. */ public MultiGetRequest preference(String preference) { this.preference = preference; diff --git a/core/src/main/java/org/elasticsearch/action/get/MultiGetRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/get/MultiGetRequestBuilder.java index a2cb204d5eabf..fd7a6ac88253e 100644 --- a/core/src/main/java/org/elasticsearch/action/get/MultiGetRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/get/MultiGetRequestBuilder.java @@ -58,8 +58,8 @@ public MultiGetRequestBuilder add(MultiGetRequest.Item item) { /** * Sets the preference to execute the search. Defaults to randomize across shards. Can be set to - * _local to prefer local shards, _primary to execute only on primary shards, or - * a custom value, which guarantees that the same order will be used across different requests. 
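These javadoc updates drop `_primary` from the list of supported preference values; `_local` and custom strings remain. A hedged sketch of the two remaining styles; the index, type, id, and session string are made up:

```java
// Prefer shards allocated on the local node.
client.prepareGet("products", "doc", "1").setPreference("_local").get();

// A custom value keeps shard selection stable across repeated requests.
client.prepareSearch("products").setPreference("session-42").get();
```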
+ * _local to prefer local shards or a custom value, which guarantees that the same order + * will be used across different requests. */ public MultiGetRequestBuilder setPreference(String preference) { request.preference(preference); diff --git a/core/src/main/java/org/elasticsearch/action/get/MultiGetShardRequest.java b/core/src/main/java/org/elasticsearch/action/get/MultiGetShardRequest.java index 25a624b2eb558..fea3cd1043c62 100644 --- a/core/src/main/java/org/elasticsearch/action/get/MultiGetShardRequest.java +++ b/core/src/main/java/org/elasticsearch/action/get/MultiGetShardRequest.java @@ -64,8 +64,8 @@ public int shardId() { /** * Sets the preference to execute the search. Defaults to randomize across shards. Can be set to - * _local to prefer local shards, _primary to execute only on primary shards, or - * a custom value, which guarantees that the same order will be used across different requests. + * _local to prefer local shards or a custom value, which guarantees that the same order + * will be used across different requests. */ public MultiGetShardRequest preference(String preference) { this.preference = preference; diff --git a/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineRequest.java b/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineRequest.java index 1ba22bce805b0..f34f157063cd3 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineRequest.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineRequest.java @@ -42,6 +42,17 @@ public GetPipelineRequest(String... ids) { this.ids = Strings.EMPTY_ARRAY; } + public GetPipelineRequest(StreamInput in) throws IOException { + super(in); + ids = in.readStringArray(); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeStringArray(ids); + } + public String[] getIds() { return ids; } @@ -53,13 +64,6 @@ public ActionRequestValidationException validate() { @Override public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - ids = in.readStringArray(); - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - out.writeStringArray(ids); + throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable"); } } diff --git a/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineTransportAction.java b/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineTransportAction.java index f64b36d47aedb..191ed87a42cde 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineTransportAction.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineTransportAction.java @@ -42,7 +42,7 @@ public class GetPipelineTransportAction extends TransportMasterNodeReadAction implements PrimaryReplicaSyncer.SyncAction { - public static String ACTION_NAME = "indices:admin/seq_no/resync"; + private static String ACTION_NAME = "internal:index/seq_no/resync"; @Inject public TransportResyncReplicationAction(Settings settings, TransportService transportService, @@ -93,7 +93,8 @@ protected void sendReplicaRequest( if (node.getVersion().onOrAfter(Version.V_6_0_0_alpha1)) { super.sendReplicaRequest(replicaRequest, node, listener); } else { - listener.onResponse(new ReplicaResponse(SequenceNumbersService.PRE_60_NODE_LOCAL_CHECKPOINT)); + final long pre60NodeCheckpoint = SequenceNumbers.PRE_60_NODE_CHECKPOINT; + listener.onResponse(new ReplicaResponse(pre60NodeCheckpoint, pre60NodeCheckpoint)); } } diff --git 
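`GetPipelineRequest` above gets the same Streamable-to-Writeable treatment as `GetIndexTemplatesRequest` earlier in this diff: deserialization moves into a `StreamInput` constructor, `writeTo` remains, and the legacy `readFrom` is made to throw. A minimal sketch of the pattern with an illustrative `ExampleRequest`; the `MasterNodeReadRequest` base class (which the requests above extend) is assumed to provide the matching constructors:

```java
import java.io.IOException;

import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.support.master.MasterNodeReadRequest;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;

public class ExampleRequest extends MasterNodeReadRequest<ExampleRequest> {

    private String[] ids;

    public ExampleRequest(String... ids) {
        this.ids = ids;
    }

    // Writeable-style deserialization: read the full state in the constructor.
    public ExampleRequest(StreamInput in) throws IOException {
        super(in);
        ids = in.readStringArray();
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        super.writeTo(out);
        out.writeStringArray(ids);
    }

    @Override
    public void readFrom(StreamInput in) throws IOException {
        // The Streamable path is intentionally disabled once a class is migrated.
        throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable");
    }

    @Override
    public ActionRequestValidationException validate() {
        return null;
    }
}
```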
a/core/src/main/java/org/elasticsearch/action/search/AbstractSearchAsyncAction.java b/core/src/main/java/org/elasticsearch/action/search/AbstractSearchAsyncAction.java index 89be2ecabeb24..1c25cd7ac37a2 100644 --- a/core/src/main/java/org/elasticsearch/action/search/AbstractSearchAsyncAction.java +++ b/core/src/main/java/org/elasticsearch/action/search/AbstractSearchAsyncAction.java @@ -76,8 +76,8 @@ protected AbstractSearchAsyncAction(String name, Logger logger, SearchTransportS Executor executor, SearchRequest request, ActionListener listener, GroupShardsIterator shardsIts, TransportSearchAction.SearchTimeProvider timeProvider, long clusterStateVersion, - SearchTask task, SearchPhaseResults resultConsumer) { - super(name, request, shardsIts, logger); + SearchTask task, SearchPhaseResults resultConsumer, int maxConcurrentShardRequests) { + super(name, request, shardsIts, logger, maxConcurrentShardRequests, executor); this.timeProvider = timeProvider; this.logger = logger; this.searchTransportService = searchTransportService; diff --git a/core/src/main/java/org/elasticsearch/action/search/CanMatchPreFilterSearchPhase.java b/core/src/main/java/org/elasticsearch/action/search/CanMatchPreFilterSearchPhase.java index ea5cf831859de..49575125f68d6 100644 --- a/core/src/main/java/org/elasticsearch/action/search/CanMatchPreFilterSearchPhase.java +++ b/core/src/main/java/org/elasticsearch/action/search/CanMatchPreFilterSearchPhase.java @@ -26,10 +26,6 @@ import org.elasticsearch.search.internal.AliasFilter; import org.elasticsearch.transport.Transport; -import java.util.ArrayList; -import java.util.Collections; -import java.util.Iterator; -import java.util.List; import java.util.Map; import java.util.concurrent.Executor; import java.util.function.BiFunction; @@ -55,9 +51,12 @@ final class CanMatchPreFilterSearchPhase extends AbstractSearchAsyncAction listener, GroupShardsIterator shardsIts, TransportSearchAction.SearchTimeProvider timeProvider, long clusterStateVersion, SearchTask task, Function, SearchPhase> phaseFactory) { + /* + * We set max concurrent shard requests to the number of shards to otherwise avoid deep recursing that would occur if the local node + * is the coordinating node for the query, holds all the shards for the request, and there are a lot of shards. 
+ */ super("can_match", logger, searchTransportService, nodeIdToConnection, aliasFilter, concreteIndexBoosts, executor, request, - listener, - shardsIts, timeProvider, clusterStateVersion, task, new BitSetSearchPhaseResults(shardsIts.size())); + listener, shardsIts, timeProvider, clusterStateVersion, task, new BitSetSearchPhaseResults(shardsIts.size()), shardsIts.size()); this.phaseFactory = phaseFactory; this.shardsIts = shardsIts; } diff --git a/core/src/main/java/org/elasticsearch/action/search/ExpandSearchPhase.java b/core/src/main/java/org/elasticsearch/action/search/ExpandSearchPhase.java index bc673644a0683..53ce4299c546b 100644 --- a/core/src/main/java/org/elasticsearch/action/search/ExpandSearchPhase.java +++ b/core/src/main/java/org/elasticsearch/action/search/ExpandSearchPhase.java @@ -88,10 +88,9 @@ public void run() throws IOException { } for (InnerHitBuilder innerHitBuilder : innerHitBuilders) { SearchSourceBuilder sourceBuilder = buildExpandSearchSourceBuilder(innerHitBuilder) - .query(groupQuery); - SearchRequest groupRequest = new SearchRequest(searchRequest.indices()) - .types(searchRequest.types()) - .source(sourceBuilder); + .query(groupQuery) + .postFilter(searchRequest.source().postFilter()); + SearchRequest groupRequest = buildExpandSearchRequest(searchRequest, sourceBuilder); multiRequest.add(groupRequest); } } @@ -120,6 +119,21 @@ public void run() throws IOException { } } + private SearchRequest buildExpandSearchRequest(SearchRequest orig, SearchSourceBuilder sourceBuilder) { + SearchRequest groupRequest = new SearchRequest(orig.indices()) + .types(orig.types()) + .source(sourceBuilder) + .indicesOptions(orig.indicesOptions()) + .requestCache(orig.requestCache()) + .preference(orig.preference()) + .routing(orig.routing()) + .searchType(orig.searchType()); + if (orig.isMaxConcurrentShardRequestsSet()) { + groupRequest.setMaxConcurrentShardRequests(orig.getMaxConcurrentShardRequests()); + } + return groupRequest; + } + private SearchSourceBuilder buildExpandSearchSourceBuilder(InnerHitBuilder options) { SearchSourceBuilder groupSource = new SearchSourceBuilder(); groupSource.from(options.getFrom()); diff --git a/core/src/main/java/org/elasticsearch/action/search/InitialSearchPhase.java b/core/src/main/java/org/elasticsearch/action/search/InitialSearchPhase.java index fcee980379bf1..0da7424293758 100644 --- a/core/src/main/java/org/elasticsearch/action/search/InitialSearchPhase.java +++ b/core/src/main/java/org/elasticsearch/action/search/InitialSearchPhase.java @@ -26,12 +26,15 @@ import org.elasticsearch.cluster.routing.GroupShardsIterator; import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.util.concurrent.AbstractRunnable; import org.elasticsearch.common.util.concurrent.AtomicArray; import org.elasticsearch.search.SearchPhaseResult; import org.elasticsearch.search.SearchShardTarget; -import org.elasticsearch.transport.ConnectTransportException; import java.io.IOException; +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.Executor; import java.util.concurrent.atomic.AtomicInteger; import java.util.stream.Stream; @@ -45,24 +48,38 @@ */ abstract class InitialSearchPhase extends SearchPhase { private final SearchRequest request; + private final GroupShardsIterator toSkipShardsIts; private final GroupShardsIterator shardsIts; private final Logger logger; private final int expectedTotalOps; private final AtomicInteger totalOps = new AtomicInteger(); private 
final AtomicInteger shardExecutionIndex = new AtomicInteger(0); private final int maxConcurrentShardRequests; + private final Executor executor; - InitialSearchPhase(String name, SearchRequest request, GroupShardsIterator shardsIts, Logger logger) { + InitialSearchPhase(String name, SearchRequest request, GroupShardsIterator shardsIts, Logger logger, + int maxConcurrentShardRequests, Executor executor) { super(name); this.request = request; - this.shardsIts = shardsIts; + final List toSkipIterators = new ArrayList<>(); + final List iterators = new ArrayList<>(); + for (final SearchShardIterator iterator : shardsIts) { + if (iterator.skip()) { + toSkipIterators.add(iterator); + } else { + iterators.add(iterator); + } + } + this.toSkipShardsIts = new GroupShardsIterator<>(toSkipIterators); + this.shardsIts = new GroupShardsIterator<>(iterators); this.logger = logger; // we need to add 1 for non active partition, since we count it in the total. This means for each shard in the iterator we sum up // it's number of active shards but use 1 as the default if no replica of a shard is active at this point. // on a per shards level we use shardIt.remaining() to increment the totalOps pointer but add 1 for the current shard result // we process hence we add one for the non active partition here. this.expectedTotalOps = shardsIts.totalSizeWith1ForEmpty(); - maxConcurrentShardRequests = Math.min(request.getMaxConcurrentShardRequests(), shardsIts.size()); + this.maxConcurrentShardRequests = Math.min(maxConcurrentShardRequests, shardsIts.size()); + this.executor = executor; } private void onShardFailure(final int shardIndex, @Nullable ShardRouting shard, @Nullable String nodeId, @@ -70,19 +87,19 @@ private void onShardFailure(final int shardIndex, @Nullable ShardRouting shard, // we always add the shard failure for a specific shard instance // we do make sure to clean it on a successful response from a shard SearchShardTarget shardTarget = new SearchShardTarget(nodeId, shardIt.shardId(), shardIt.getClusterAlias(), - shardIt.getOriginalIndices()); + shardIt.getOriginalIndices()); onShardFailure(shardIndex, shardTarget, e); if (totalOps.incrementAndGet() == expectedTotalOps) { if (logger.isDebugEnabled()) { if (e != null && !TransportActions.isShardNotAvailableException(e)) { logger.debug( - (Supplier) () -> new ParameterizedMessage( - "{}: Failed to execute [{}]", - shard != null ? shard.shortSummary() : - shardIt.shardId(), - request), - e); + (Supplier) () -> new ParameterizedMessage( + "{}: Failed to execute [{}]", + shard != null ? shard.shortSummary() : + shardIt.shardId(), + request), + e); } else if (logger.isTraceEnabled()) { logger.trace((Supplier) () -> new ParameterizedMessage("{}: Failed to execute [{}]", shard, request), e); } @@ -93,32 +110,27 @@ private void onShardFailure(final int shardIndex, @Nullable ShardRouting shard, final boolean lastShard = nextShard == null; // trace log this exception logger.trace( - (Supplier) () -> new ParameterizedMessage( - "{}: Failed to execute [{}] lastShard [{}]", - shard != null ? shard.shortSummary() : shardIt.shardId(), - request, - lastShard), - e); + (Supplier) () -> new ParameterizedMessage( + "{}: Failed to execute [{}] lastShard [{}]", + shard != null ? 
shard.shortSummary() : shardIt.shardId(), + request, + lastShard), + e); if (!lastShard) { - try { - performPhaseOnShard(shardIndex, shardIt, nextShard); - } catch (Exception inner) { - inner.addSuppressed(e); - onShardFailure(shardIndex, shard, shard.currentNodeId(), shardIt, inner); - } + performPhaseOnShard(shardIndex, shardIt, nextShard); } else { maybeExecuteNext(); // move to the next execution if needed // no more shards active, add a failure if (logger.isDebugEnabled() && !logger.isTraceEnabled()) { // do not double log this exception if (e != null && !TransportActions.isShardNotAvailableException(e)) { logger.debug( - (Supplier) () -> new ParameterizedMessage( - "{}: Failed to execute [{}] lastShard [{}]", - shard != null ? shard.shortSummary() : - shardIt.shardId(), - request, - lastShard), - e); + (Supplier) () -> new ParameterizedMessage( + "{}: Failed to execute [{}] lastShard [{}]", + shard != null ? shard.shortSummary() : + shardIt.shardId(), + request, + lastShard), + e); } } } @@ -127,14 +139,18 @@ private void onShardFailure(final int shardIndex, @Nullable ShardRouting shard, @Override public final void run() throws IOException { - boolean success = shardExecutionIndex.compareAndSet(0, maxConcurrentShardRequests); - assert success; - for (int i = 0; i < maxConcurrentShardRequests; i++) { - SearchShardIterator shardRoutings = shardsIts.get(i); - if (shardRoutings.skip()) { - skipShard(shardRoutings); - } else { - performPhaseOnShard(i, shardRoutings, shardRoutings.nextOrNull()); + for (final SearchShardIterator iterator : toSkipShardsIts) { + assert iterator.skip(); + skipShard(iterator); + } + if (shardsIts.size() > 0) { + int maxConcurrentShardRequests = Math.min(this.maxConcurrentShardRequests, shardsIts.size()); + final boolean success = shardExecutionIndex.compareAndSet(0, maxConcurrentShardRequests); + assert success; + for (int index = 0; index < maxConcurrentShardRequests; index++) { + final SearchShardIterator shardRoutings = shardsIts.get(index); + assert shardRoutings.skip() == false; + performPhaseOnShard(index, shardRoutings, shardRoutings.nextOrNull()); } } } @@ -142,38 +158,71 @@ public final void run() throws IOException { private void maybeExecuteNext() { final int index = shardExecutionIndex.getAndIncrement(); if (index < shardsIts.size()) { - SearchShardIterator shardRoutings = shardsIts.get(index); - if (shardRoutings.skip()) { - skipShard(shardRoutings); - } else { - performPhaseOnShard(index, shardRoutings, shardRoutings.nextOrNull()); - } + final SearchShardIterator shardRoutings = shardsIts.get(index); + performPhaseOnShard(index, shardRoutings, shardRoutings.nextOrNull()); } } + private void maybeFork(final Thread thread, final Runnable runnable) { + if (thread == Thread.currentThread()) { + fork(runnable); + } else { + runnable.run(); + } + } + + private void fork(final Runnable runnable) { + executor.execute(new AbstractRunnable() { + @Override + public void onFailure(Exception e) { + + } + + @Override + protected void doRun() throws Exception { + runnable.run(); + } + + @Override + public boolean isForceExecution() { + // we can not allow a stuffed queue to reject execution here + return true; + } + }); + } + private void performPhaseOnShard(final int shardIndex, final SearchShardIterator shardIt, final ShardRouting shard) { + /* + * We capture the thread that this phase is starting on. 
When we are called back after executing the phase, we are either on the + * same thread (because we never went async, or the same thread was selected from the thread pool) or a different thread. If we + * continue on the same thread in the case that we never went async and this happens repeatedly we will end up recursing deeply and + * could stack overflow. To prevent this, we fork if we are called back on the same thread that execution started on and otherwise + * we can continue (cf. InitialSearchPhase#maybeFork). + */ + final Thread thread = Thread.currentThread(); if (shard == null) { - onShardFailure(shardIndex, null, null, shardIt, new NoShardAvailableActionException(shardIt.shardId())); + fork(() -> onShardFailure(shardIndex, null, null, shardIt, new NoShardAvailableActionException(shardIt.shardId()))); } else { try { executePhaseOnShard(shardIt, shard, new SearchActionListener(new SearchShardTarget(shard.currentNodeId(), shardIt.shardId(), shardIt.getClusterAlias(), shardIt.getOriginalIndices()), shardIndex) { @Override public void innerOnResponse(FirstResult result) { - onShardResult(result, shardIt); + maybeFork(thread, () -> onShardResult(result, shardIt)); } @Override public void onFailure(Exception t) { - onShardFailure(shardIndex, shard, shard.currentNodeId(), shardIt, t); + maybeFork(thread, () -> onShardFailure(shardIndex, shard, shard.currentNodeId(), shardIt, t)); } }); - } catch (ConnectTransportException | IllegalArgumentException ex) { - // we are getting the connection early here so we might run into nodes that are not connected. in that case we move on to - // the next shard. previously when using discovery nodes here we had a special case for null when a node was not connected - // at all which is not not needed anymore. - onShardFailure(shardIndex, shard, shard.currentNodeId(), shardIt, ex); + } catch (final Exception e) { + /* + * It is possible to run into connection exceptions here because we are getting the connection early and might run in to + * nodes that are not connected. In this case, on shard failure will move us to the next shard copy. 
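The comment above explains why callbacks that complete on the thread that started the phase are forked to the executor: repeated synchronous completions would otherwise recurse and could overflow the stack. A standalone sketch of the same idea using only plain JDK types; this is an illustration, not the Elasticsearch implementation:

```java
import java.util.concurrent.Executor;

final class ForkingCallback {

    private final Executor executor;

    ForkingCallback(Executor executor) {
        this.executor = executor;
    }

    /**
     * Run the continuation directly if we were called back from another thread;
     * otherwise fork it to the executor so that repeated synchronous completions
     * do not keep growing the current stack.
     */
    void onCompletion(Thread startingThread, Runnable continuation) {
        if (Thread.currentThread() == startingThread) {
            executor.execute(continuation); // completed synchronously: break the recursion
        } else {
            continuation.run(); // already on a fresh stack
        }
    }
}
```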
+ */ + fork(() -> onShardFailure(shardIndex, shard, shard.currentNodeId(), shardIt, e)); } } } @@ -203,7 +252,7 @@ private void successfulShardExecution(SearchShardIterator shardsIt) { } else if (xTotalOps > expectedTotalOps) { throw new AssertionError("unexpected higher total ops [" + xTotalOps + "] compared to expected [" + expectedTotalOps + "]"); - } else { + } else if (shardsIt.skip() == false) { maybeExecuteNext(); } } diff --git a/core/src/main/java/org/elasticsearch/action/search/MultiSearchRequest.java b/core/src/main/java/org/elasticsearch/action/search/MultiSearchRequest.java index 7ab97f9bc570e..76f73bde4b658 100644 --- a/core/src/main/java/org/elasticsearch/action/search/MultiSearchRequest.java +++ b/core/src/main/java/org/elasticsearch/action/search/MultiSearchRequest.java @@ -117,8 +117,7 @@ public void readFrom(StreamInput in) throws IOException { maxConcurrentSearchRequests = in.readVInt(); int size = in.readVInt(); for (int i = 0; i < size; i++) { - SearchRequest request = new SearchRequest(); - request.readFrom(in); + SearchRequest request = new SearchRequest(in); requests.add(request); } } diff --git a/core/src/main/java/org/elasticsearch/action/search/MultiSearchResponse.java b/core/src/main/java/org/elasticsearch/action/search/MultiSearchResponse.java index 4d42ad334a9f0..560379a6ce2f6 100644 --- a/core/src/main/java/org/elasticsearch/action/search/MultiSearchResponse.java +++ b/core/src/main/java/org/elasticsearch/action/search/MultiSearchResponse.java @@ -21,12 +21,14 @@ import org.elasticsearch.ElasticsearchException; import org.elasticsearch.ExceptionsHelper; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionResponse; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; +import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -112,11 +114,14 @@ public Exception getFailure() { private Item[] items; + private long tookInMillis; + MultiSearchResponse() { } - public MultiSearchResponse(Item[] items) { + public MultiSearchResponse(Item[] items, long tookInMillis) { this.items = items; + this.tookInMillis = tookInMillis; } @Override @@ -131,6 +136,13 @@ public Item[] getResponses() { return this.items; } + /** + * How long the msearch took. 
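The multi-search response changes around this point add a took value that is serialized from 7.0.0-alpha1 onwards and reported in the JSON output. A hedged usage sketch; the indices and query are illustrative:

```java
MultiSearchResponse response = client.prepareMultiSearch()
        .add(client.prepareSearch("logs").setQuery(QueryBuilders.termQuery("level", "error")))
        .add(client.prepareSearch("metrics"))
        .get();

// New in this change: how long the whole msearch took.
long tookInMillis = response.getTook().millis();
```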
+ */ + public TimeValue getTook() { + return new TimeValue(tookInMillis); + } + @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); @@ -138,6 +150,9 @@ public void readFrom(StreamInput in) throws IOException { for (int i = 0; i < items.length; i++) { items[i] = Item.readItem(in); } + if (in.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) { + tookInMillis = in.readVLong(); + } } @Override @@ -147,11 +162,15 @@ public void writeTo(StreamOutput out) throws IOException { for (Item item : items) { item.writeTo(out); } + if (out.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) { + out.writeVLong(tookInMillis); + } } @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(); + builder.field("took", tookInMillis); builder.startArray(Fields.RESPONSES); for (Item item : items) { builder.startObject(); diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchDfsQueryThenFetchAsyncAction.java b/core/src/main/java/org/elasticsearch/action/search/SearchDfsQueryThenFetchAsyncAction.java index a901d71157137..ec055dfec8df6 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchDfsQueryThenFetchAsyncAction.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchDfsQueryThenFetchAsyncAction.java @@ -42,7 +42,8 @@ final class SearchDfsQueryThenFetchAsyncAction extends AbstractSearchAsyncAction final GroupShardsIterator shardsIts, final TransportSearchAction.SearchTimeProvider timeProvider, final long clusterStateVersion, final SearchTask task) { super("dfs", logger, searchTransportService, nodeIdToConnection, aliasFilter, concreteIndexBoosts, executor, request, listener, - shardsIts, timeProvider, clusterStateVersion, task, new ArraySearchPhaseResults<>(shardsIts.size())); + shardsIts, timeProvider, clusterStateVersion, task, new ArraySearchPhaseResults<>(shardsIts.size()), + request.getMaxConcurrentShardRequests()); this.searchPhaseController = searchPhaseController; } diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchExecutionStatsCollector.java b/core/src/main/java/org/elasticsearch/action/search/SearchExecutionStatsCollector.java index 72c3d5eaab6d2..0ffad5aa4065b 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchExecutionStatsCollector.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchExecutionStatsCollector.java @@ -61,7 +61,7 @@ public void onResponse(SearchPhaseResult response) { final int queueSize = queryResult.nodeQueueSize(); final long responseDuration = System.nanoTime() - startNanos; // EWMA/queue size may be -1 if the query node doesn't support capturing it - if (serviceTimeEWMA > 0 && queueSize > 0) { + if (serviceTimeEWMA > 0 && queueSize >= 0) { collector.addNodeStatistics(nodeId, queueSize, responseDuration, serviceTimeEWMA); } } diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchQueryThenFetchAsyncAction.java b/core/src/main/java/org/elasticsearch/action/search/SearchQueryThenFetchAsyncAction.java index de8109aadd8fe..5ddd1df231d17 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchQueryThenFetchAsyncAction.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchQueryThenFetchAsyncAction.java @@ -42,7 +42,8 @@ final class SearchQueryThenFetchAsyncAction extends AbstractSearchAsyncAction shardsIts, final TransportSearchAction.SearchTimeProvider timeProvider, long clusterStateVersion, SearchTask task) { super("query", logger, 
searchTransportService, nodeIdToConnection, aliasFilter, concreteIndexBoosts, executor, request, listener, - shardsIts, timeProvider, clusterStateVersion, task, searchPhaseController.newSearchPhaseResults(request, shardsIts.size())); + shardsIts, timeProvider, clusterStateVersion, task, searchPhaseController.newSearchPhaseResults(request, shardsIts.size()), + request.getMaxConcurrentShardRequests()); this.searchPhaseController = searchPhaseController; } diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchRequest.java b/core/src/main/java/org/elasticsearch/action/search/SearchRequest.java index 3690bff5664e8..7bfa317c72c70 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchRequest.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchRequest.java @@ -109,6 +109,55 @@ public SearchRequest(String[] indices, SearchSourceBuilder source) { this.source = source; } + /** + * Constructs a new search request from reading the specified stream. + * + * @param in The stream the request is read from + * @throws IOException if there is an issue reading the stream + */ + public SearchRequest(StreamInput in) throws IOException { + super(in); + searchType = SearchType.fromId(in.readByte()); + indices = new String[in.readVInt()]; + for (int i = 0; i < indices.length; i++) { + indices[i] = in.readString(); + } + routing = in.readOptionalString(); + preference = in.readOptionalString(); + scroll = in.readOptionalWriteable(Scroll::new); + source = in.readOptionalWriteable(SearchSourceBuilder::new); + types = in.readStringArray(); + indicesOptions = IndicesOptions.readIndicesOptions(in); + requestCache = in.readOptionalBoolean(); + batchedReduceSize = in.readVInt(); + if (in.getVersion().onOrAfter(Version.V_5_6_0)) { + maxConcurrentShardRequests = in.readVInt(); + preFilterShardSize = in.readVInt(); + } + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeByte(searchType.id()); + out.writeVInt(indices.length); + for (String index : indices) { + out.writeString(index); + } + out.writeOptionalString(routing); + out.writeOptionalString(preference); + out.writeOptionalWriteable(scroll); + out.writeOptionalWriteable(source); + out.writeStringArray(types); + indicesOptions.writeIndicesOptions(out); + out.writeOptionalBoolean(requestCache); + out.writeVInt(batchedReduceSize); + if (out.getVersion().onOrAfter(Version.V_5_6_0)) { + out.writeVInt(maxConcurrentShardRequests); + out.writeVInt(preFilterShardSize); + } + } + @Override public ActionRequestValidationException validate() { ActionRequestValidationException validationException = null; @@ -116,6 +165,10 @@ public ActionRequestValidationException validate() { validationException = addValidationError("disabling [track_total_hits] is not allowed in a scroll context", validationException); } + if (source != null && source.from() > 0 && scroll() != null) { + validationException = + addValidationError("using [from] is not allowed in a scroll context", validationException); + } return validationException; } @@ -188,8 +241,8 @@ public SearchRequest routing(String... routings) { /** * Sets the preference to execute the search. Defaults to randomize across shards. Can be set to - * _local to prefer local shards, _primary to execute only on primary shards, or - * a custom value, which guarantees that the same order will be used across different requests. 
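The extra `validate()` check above rejects scrolling searches that also set `from`, since a scroll cannot honour an offset. A hedged illustration of a request that now fails validation; the index name and values are made up:

```java
SearchRequest request = new SearchRequest("logs")
        .scroll(TimeValue.timeValueMinutes(1))
        .source(new SearchSourceBuilder().from(10).size(100));

ActionRequestValidationException e = request.validate();
// e contains "using [from] is not allowed in a scroll context"
```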
+ * _local to prefer local shards or a custom value, which guarantees that the same order + * will be used across different requests. */ public SearchRequest preference(String preference) { this.preference = preference; @@ -381,6 +434,7 @@ public String getDescription() { sb.append("], "); sb.append("search_type[").append(searchType).append("], "); if (source != null) { + sb.append("source[").append(source.toString(FORMAT_PARAMS)).append("]"); } else { sb.append("source[]"); @@ -392,46 +446,7 @@ public String getDescription() { @Override public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - searchType = SearchType.fromId(in.readByte()); - indices = new String[in.readVInt()]; - for (int i = 0; i < indices.length; i++) { - indices[i] = in.readString(); - } - routing = in.readOptionalString(); - preference = in.readOptionalString(); - scroll = in.readOptionalWriteable(Scroll::new); - source = in.readOptionalWriteable(SearchSourceBuilder::new); - types = in.readStringArray(); - indicesOptions = IndicesOptions.readIndicesOptions(in); - requestCache = in.readOptionalBoolean(); - batchedReduceSize = in.readVInt(); - if (in.getVersion().onOrAfter(Version.V_5_6_0)) { - maxConcurrentShardRequests = in.readVInt(); - preFilterShardSize = in.readVInt(); - } - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - out.writeByte(searchType.id()); - out.writeVInt(indices.length); - for (String index : indices) { - out.writeString(index); - } - out.writeOptionalString(routing); - out.writeOptionalString(preference); - out.writeOptionalWriteable(scroll); - out.writeOptionalWriteable(source); - out.writeStringArray(types); - indicesOptions.writeIndicesOptions(out); - out.writeOptionalBoolean(requestCache); - out.writeVInt(batchedReduceSize); - if (out.getVersion().onOrAfter(Version.V_5_6_0)) { - out.writeVInt(maxConcurrentShardRequests); - out.writeVInt(preFilterShardSize); - } + throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable"); } @Override diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/search/SearchRequestBuilder.java index 92c1be01626a9..922e9be5fd75d 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchRequestBuilder.java @@ -32,7 +32,7 @@ import org.elasticsearch.search.aggregations.PipelineAggregationBuilder; import org.elasticsearch.search.builder.SearchSourceBuilder; import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder; -import org.elasticsearch.search.rescore.RescoreBuilder; +import org.elasticsearch.search.rescore.RescorerBuilder; import org.elasticsearch.search.slice.SliceBuilder; import org.elasticsearch.search.sort.SortBuilder; import org.elasticsearch.search.sort.SortOrder; @@ -144,8 +144,8 @@ public SearchRequestBuilder setRouting(String... routing) { /** * Sets the preference to execute the search. Defaults to randomize across shards. Can be set to - * _local to prefer local shards, _primary to execute only on primary shards, or - * a custom value, which guarantees that the same order will be used across different requests. + * _local to prefer local shards or a custom value, which guarantees that the same order + * will be used across different requests. 
*/ public SearchRequestBuilder setPreference(String preference) { request.preference(preference); @@ -415,25 +415,25 @@ public SearchRequestBuilder suggest(SuggestBuilder suggestBuilder) { /** * Clears all rescorers on the builder and sets the first one. To use multiple rescore windows use - * {@link #addRescorer(org.elasticsearch.search.rescore.RescoreBuilder, int)}. + * {@link #addRescorer(org.elasticsearch.search.rescore.RescorerBuilder, int)}. * * @param rescorer rescorer configuration * @return this for chaining */ - public SearchRequestBuilder setRescorer(RescoreBuilder rescorer) { + public SearchRequestBuilder setRescorer(RescorerBuilder rescorer) { sourceBuilder().clearRescorers(); return addRescorer(rescorer); } /** * Clears all rescorers on the builder and sets the first one. To use multiple rescore windows use - * {@link #addRescorer(org.elasticsearch.search.rescore.RescoreBuilder, int)}. + * {@link #addRescorer(org.elasticsearch.search.rescore.RescorerBuilder, int)}. * * @param rescorer rescorer configuration * @param window rescore window * @return this for chaining */ - public SearchRequestBuilder setRescorer(RescoreBuilder rescorer, int window) { + public SearchRequestBuilder setRescorer(RescorerBuilder rescorer, int window) { sourceBuilder().clearRescorers(); return addRescorer(rescorer.windowSize(window)); } @@ -444,7 +444,7 @@ public SearchRequestBuilder setRescorer(RescoreBuilder rescorer, int window) { * @param rescorer rescorer configuration * @return this for chaining */ - public SearchRequestBuilder addRescorer(RescoreBuilder rescorer) { + public SearchRequestBuilder addRescorer(RescorerBuilder rescorer) { sourceBuilder().addRescorer(rescorer); return this; } @@ -456,7 +456,7 @@ public SearchRequestBuilder addRescorer(RescoreBuilder rescorer) { * @param window rescore window * @return this for chaining */ - public SearchRequestBuilder addRescorer(RescoreBuilder rescorer, int window) { + public SearchRequestBuilder addRescorer(RescorerBuilder rescorer, int window) { sourceBuilder().addRescorer(rescorer.windowSize(window)); return this; } diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchScrollRequest.java b/core/src/main/java/org/elasticsearch/action/search/SearchScrollRequest.java index fbe648cceaa80..be83ef6d5839e 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchScrollRequest.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchScrollRequest.java @@ -48,6 +48,19 @@ public SearchScrollRequest(String scrollId) { this.scrollId = scrollId; } + public SearchScrollRequest(StreamInput in) throws IOException { + super(in); + scrollId = in.readString(); + scroll = in.readOptionalWriteable(Scroll::new); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeString(scrollId); + out.writeOptionalWriteable(scroll); + } + @Override public ActionRequestValidationException validate() { ActionRequestValidationException validationException = null; @@ -100,16 +113,7 @@ public SearchScrollRequest scroll(String keepAlive) { @Override public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - scrollId = in.readString(); - scroll = in.readOptionalWriteable(Scroll::new); - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - out.writeString(scrollId); - out.writeOptionalWriteable(scroll); + throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable"); } @Override diff --git 
a/core/src/main/java/org/elasticsearch/action/search/SearchTransportService.java b/core/src/main/java/org/elasticsearch/action/search/SearchTransportService.java index f4816bb2a21f4..509f89a7542fe 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchTransportService.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchTransportService.java @@ -30,6 +30,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.ConcurrentCollections; import org.elasticsearch.search.SearchPhaseResult; import org.elasticsearch.search.SearchService; import org.elasticsearch.search.dfs.DfsSearchResult; @@ -39,6 +40,7 @@ import org.elasticsearch.search.fetch.ShardFetchRequest; import org.elasticsearch.search.fetch.ShardFetchSearchRequest; import org.elasticsearch.search.internal.InternalScrollSearchRequest; +import org.elasticsearch.search.internal.ShardSearchRequest; import org.elasticsearch.search.internal.ShardSearchTransportRequest; import org.elasticsearch.search.query.QuerySearchRequest; import org.elasticsearch.search.query.QuerySearchResult; @@ -46,10 +48,11 @@ import org.elasticsearch.tasks.Task; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.RemoteClusterService; +import org.elasticsearch.transport.TaskAwareTransportRequestHandler; import org.elasticsearch.transport.Transport; import org.elasticsearch.transport.TransportActionProxy; -import org.elasticsearch.transport.TaskAwareTransportRequestHandler; import org.elasticsearch.transport.TransportChannel; +import org.elasticsearch.transport.TransportException; import org.elasticsearch.transport.TransportRequest; import org.elasticsearch.transport.TransportRequestOptions; import org.elasticsearch.transport.TransportResponse; @@ -57,6 +60,9 @@ import java.io.IOException; import java.io.UncheckedIOException; +import java.util.Collections; +import java.util.HashMap; +import java.util.Map; import java.util.function.BiFunction; import java.util.function.Supplier; @@ -80,6 +86,7 @@ public class SearchTransportService extends AbstractComponent { private final TransportService transportService; private final BiFunction responseWrapper; + private final Map clientConnections = ConcurrentCollections.newConcurrentMapWithAggressiveConcurrency(); public SearchTransportService(Settings settings, TransportService transportService, BiFunction responseWrapper) { @@ -88,6 +95,10 @@ public SearchTransportService(Settings settings, TransportService transportServi this.responseWrapper = responseWrapper; } + public Map getClientConnections() { + return Collections.unmodifiableMap(clientConnections); + } + public void sendFreeContext(Transport.Connection connection, final long contextId, OriginalIndices originalIndices) { transportService.sendRequest(connection, FREE_CONTEXT_ACTION_NAME, new SearchFreeContextRequest(originalIndices, contextId), TransportRequestOptions.EMPTY, new ActionListenerResponseHandler<>(new ActionListener() { @@ -131,7 +142,7 @@ public void sendClearAllScrollContexts(Transport.Connection connection, final Ac public void sendExecuteDfs(Transport.Connection connection, final ShardSearchTransportRequest request, SearchTask task, final SearchActionListener listener) { transportService.sendChildRequest(connection, DFS_ACTION_NAME, request, task, - new ActionListenerResponseHandler<>(listener, DfsSearchResult::new)); + new 
ConnectionCountingHandler<>(listener, DfsSearchResult::new, clientConnections, connection.getNode().getId())); } public void sendExecuteQuery(Transport.Connection connection, final ShardSearchTransportRequest request, SearchTask task, @@ -143,25 +154,26 @@ public void sendExecuteQuery(Transport.Connection connection, final ShardSearchT final ActionListener handler = responseWrapper.apply(connection, listener); transportService.sendChildRequest(connection, QUERY_ACTION_NAME, request, task, - new ActionListenerResponseHandler<>(handler, supplier)); + new ConnectionCountingHandler<>(handler, supplier, clientConnections, connection.getNode().getId())); } public void sendExecuteQuery(Transport.Connection connection, final QuerySearchRequest request, SearchTask task, final SearchActionListener listener) { transportService.sendChildRequest(connection, QUERY_ID_ACTION_NAME, request, task, - new ActionListenerResponseHandler<>(listener, QuerySearchResult::new)); + new ConnectionCountingHandler<>(listener, QuerySearchResult::new, clientConnections, connection.getNode().getId())); } public void sendExecuteScrollQuery(Transport.Connection connection, final InternalScrollSearchRequest request, SearchTask task, final SearchActionListener listener) { transportService.sendChildRequest(connection, QUERY_SCROLL_ACTION_NAME, request, task, - new ActionListenerResponseHandler<>(listener, ScrollQuerySearchResult::new)); + new ConnectionCountingHandler<>(listener, ScrollQuerySearchResult::new, clientConnections, connection.getNode().getId())); } public void sendExecuteScrollFetch(Transport.Connection connection, final InternalScrollSearchRequest request, SearchTask task, final SearchActionListener listener) { transportService.sendChildRequest(connection, QUERY_FETCH_SCROLL_ACTION_NAME, request, task, - new ActionListenerResponseHandler<>(listener, ScrollQueryFetchSearchResult::new)); + new ConnectionCountingHandler<>(listener, ScrollQueryFetchSearchResult::new, + clientConnections, connection.getNode().getId())); } public void sendExecuteFetch(Transport.Connection connection, final ShardFetchSearchRequest request, SearchTask task, @@ -177,22 +189,31 @@ public void sendExecuteFetchScroll(Transport.Connection connection, final ShardF private void sendExecuteFetch(Transport.Connection connection, String action, final ShardFetchRequest request, SearchTask task, final SearchActionListener listener) { transportService.sendChildRequest(connection, action, request, task, - new ActionListenerResponseHandler<>(listener, FetchSearchResult::new)); + new ConnectionCountingHandler<>(listener, FetchSearchResult::new, clientConnections, connection.getNode().getId())); } /** * Used by {@link TransportSearchAction} to send the expand queries (field collapsing). 
*/ void sendExecuteMultiSearch(final MultiSearchRequest request, SearchTask task, - final ActionListener listener) { - transportService.sendChildRequest(transportService.getConnection(transportService.getLocalNode()), MultiSearchAction.NAME, request, - task, new ActionListenerResponseHandler<>(listener, MultiSearchResponse::new)); + final ActionListener listener) { + final Transport.Connection connection = transportService.getConnection(transportService.getLocalNode()); + transportService.sendChildRequest(connection, MultiSearchAction.NAME, request, task, + new ConnectionCountingHandler<>(listener, MultiSearchResponse::new, clientConnections, connection.getNode().getId())); } public RemoteClusterService getRemoteClusterService() { return transportService.getRemoteClusterService(); } + /** + * Returns a map of nodeId to the number of pending search requests. + * This is a snapshot of the currently pending searches and not a live map. + */ + public Map getPendingSearchRequests() { + return new HashMap<>(clientConnections); + } + static class ScrollFreeContextRequest extends TransportRequest { private long id; @@ -203,13 +224,8 @@ static class ScrollFreeContextRequest extends TransportRequest { this.id = id; } - public long id() { - return this.id; - } - - @Override - public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); + ScrollFreeContextRequest(StreamInput in) throws IOException { + super(in); id = in.readLong(); } @@ -218,6 +234,15 @@ public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); out.writeLong(id); } + + public long id() { + return this.id; + } + + @Override + public void readFrom(StreamInput in) throws IOException { + throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable"); + } } static class SearchFreeContextRequest extends ScrollFreeContextRequest implements IndicesRequest { @@ -231,6 +256,17 @@ static class SearchFreeContextRequest extends ScrollFreeContextRequest implement this.originalIndices = originalIndices; } + SearchFreeContextRequest(StreamInput in) throws IOException { + super(in); + originalIndices = OriginalIndices.readOriginalIndices(in); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + OriginalIndices.writeOriginalIndices(originalIndices, out); + } + @Override public String[] indices() { if (originalIndices == null) { @@ -249,14 +285,7 @@ public IndicesOptions indicesOptions() { @Override public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - originalIndices = OriginalIndices.readOriginalIndices(in); - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - OriginalIndices.writeOriginalIndices(originalIndices, out); + throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable"); + } } @@ -289,7 +318,7 @@ public void writeTo(StreamOutput out) throws IOException { } public static void registerRequestHandler(TransportService transportService, SearchService searchService) { - transportService.registerRequestHandler(FREE_CONTEXT_SCROLL_ACTION_NAME, ScrollFreeContextRequest::new, ThreadPool.Names.SAME, + transportService.registerRequestHandler(FREE_CONTEXT_SCROLL_ACTION_NAME, ThreadPool.Names.SAME, ScrollFreeContextRequest::new, new TaskAwareTransportRequestHandler() { @Override public void messageReceived(ScrollFreeContextRequest request, TransportChannel channel, Task task) throws Exception { @@ -297,8 +326,9 @@ public void 
messageReceived(ScrollFreeContextRequest request, TransportChannel c channel.sendResponse(new SearchFreeContextResponse(freed)); } }); - TransportActionProxy.registerProxyAction(transportService, FREE_CONTEXT_SCROLL_ACTION_NAME, SearchFreeContextResponse::new); - transportService.registerRequestHandler(FREE_CONTEXT_ACTION_NAME, SearchFreeContextRequest::new, ThreadPool.Names.SAME, + TransportActionProxy.registerProxyAction(transportService, FREE_CONTEXT_SCROLL_ACTION_NAME, + (Supplier) SearchFreeContextResponse::new); + transportService.registerRequestHandler(FREE_CONTEXT_ACTION_NAME, ThreadPool.Names.SAME, SearchFreeContextRequest::new, new TaskAwareTransportRequestHandler() { @Override public void messageReceived(SearchFreeContextRequest request, TransportChannel channel, Task task) throws Exception { @@ -306,7 +336,8 @@ public void messageReceived(SearchFreeContextRequest request, TransportChannel c channel.sendResponse(new SearchFreeContextResponse(freed)); } }); - TransportActionProxy.registerProxyAction(transportService, FREE_CONTEXT_ACTION_NAME, SearchFreeContextResponse::new); + TransportActionProxy.registerProxyAction(transportService, FREE_CONTEXT_ACTION_NAME, + (Supplier) SearchFreeContextResponse::new); transportService.registerRequestHandler(CLEAR_SCROLL_CONTEXTS_ACTION_NAME, () -> TransportRequest.Empty.INSTANCE, ThreadPool.Names.SAME, new TaskAwareTransportRequestHandler() { @Override @@ -316,9 +347,9 @@ public void messageReceived(TransportRequest.Empty request, TransportChannel cha } }); TransportActionProxy.registerProxyAction(transportService, CLEAR_SCROLL_CONTEXTS_ACTION_NAME, - () -> TransportResponse.Empty.INSTANCE); + () -> TransportResponse.Empty.INSTANCE); - transportService.registerRequestHandler(DFS_ACTION_NAME, ShardSearchTransportRequest::new, ThreadPool.Names.SAME, + transportService.registerRequestHandler(DFS_ACTION_NAME, ThreadPool.Names.SAME, ShardSearchTransportRequest::new, new TaskAwareTransportRequestHandler() { @Override public void messageReceived(ShardSearchTransportRequest request, TransportChannel channel, Task task) throws Exception { @@ -346,7 +377,7 @@ public void onFailure(Exception e) { }); TransportActionProxy.registerProxyAction(transportService, DFS_ACTION_NAME, DfsSearchResult::new); - transportService.registerRequestHandler(QUERY_ACTION_NAME, ShardSearchTransportRequest::new, ThreadPool.Names.SAME, + transportService.registerRequestHandler(QUERY_ACTION_NAME, ThreadPool.Names.SAME, ShardSearchTransportRequest::new, new TaskAwareTransportRequestHandler() { @Override public void messageReceived(ShardSearchTransportRequest request, TransportChannel channel, Task task) throws Exception { @@ -371,9 +402,10 @@ public void onFailure(Exception e) { }); } }); - TransportActionProxy.registerProxyAction(transportService, QUERY_ACTION_NAME, QuerySearchResult::new); + TransportActionProxy.registerProxyAction(transportService, QUERY_ACTION_NAME, + (request) -> ((ShardSearchRequest)request).numberOfShards() == 1 ? 
QueryFetchSearchResult::new : QuerySearchResult::new); - transportService.registerRequestHandler(QUERY_ID_ACTION_NAME, QuerySearchRequest::new, ThreadPool.Names.SEARCH, + transportService.registerRequestHandler(QUERY_ID_ACTION_NAME, ThreadPool.Names.SEARCH, QuerySearchRequest::new, new TaskAwareTransportRequestHandler() { @Override public void messageReceived(QuerySearchRequest request, TransportChannel channel, Task task) throws Exception { @@ -383,7 +415,7 @@ public void messageReceived(QuerySearchRequest request, TransportChannel channel }); TransportActionProxy.registerProxyAction(transportService, QUERY_ID_ACTION_NAME, QuerySearchResult::new); - transportService.registerRequestHandler(QUERY_SCROLL_ACTION_NAME, InternalScrollSearchRequest::new, ThreadPool.Names.SEARCH, + transportService.registerRequestHandler(QUERY_SCROLL_ACTION_NAME, ThreadPool.Names.SEARCH, InternalScrollSearchRequest::new, new TaskAwareTransportRequestHandler() { @Override public void messageReceived(InternalScrollSearchRequest request, TransportChannel channel, Task task) throws Exception { @@ -393,7 +425,7 @@ public void messageReceived(InternalScrollSearchRequest request, TransportChanne }); TransportActionProxy.registerProxyAction(transportService, QUERY_SCROLL_ACTION_NAME, ScrollQuerySearchResult::new); - transportService.registerRequestHandler(QUERY_FETCH_SCROLL_ACTION_NAME, InternalScrollSearchRequest::new, ThreadPool.Names.SEARCH, + transportService.registerRequestHandler(QUERY_FETCH_SCROLL_ACTION_NAME, ThreadPool.Names.SEARCH, InternalScrollSearchRequest::new, new TaskAwareTransportRequestHandler() { @Override public void messageReceived(InternalScrollSearchRequest request, TransportChannel channel, Task task) throws Exception { @@ -403,7 +435,7 @@ public void messageReceived(InternalScrollSearchRequest request, TransportChanne }); TransportActionProxy.registerProxyAction(transportService, QUERY_FETCH_SCROLL_ACTION_NAME, ScrollQueryFetchSearchResult::new); - transportService.registerRequestHandler(FETCH_ID_SCROLL_ACTION_NAME, ShardFetchRequest::new, ThreadPool.Names.SEARCH, + transportService.registerRequestHandler(FETCH_ID_SCROLL_ACTION_NAME, ThreadPool.Names.SEARCH, ShardFetchRequest::new, new TaskAwareTransportRequestHandler() { @Override public void messageReceived(ShardFetchRequest request, TransportChannel channel, Task task) throws Exception { @@ -413,7 +445,7 @@ public void messageReceived(ShardFetchRequest request, TransportChannel channel, }); TransportActionProxy.registerProxyAction(transportService, FETCH_ID_SCROLL_ACTION_NAME, FetchSearchResult::new); - transportService.registerRequestHandler(FETCH_ID_ACTION_NAME, ShardFetchSearchRequest::new, ThreadPool.Names.SEARCH, + transportService.registerRequestHandler(FETCH_ID_ACTION_NAME, ThreadPool.Names.SEARCH, ShardFetchSearchRequest::new, new TaskAwareTransportRequestHandler() { @Override public void messageReceived(ShardFetchSearchRequest request, TransportChannel channel, Task task) throws Exception { @@ -423,8 +455,8 @@ public void messageReceived(ShardFetchSearchRequest request, TransportChannel ch }); TransportActionProxy.registerProxyAction(transportService, FETCH_ID_ACTION_NAME, FetchSearchResult::new); - // this is super cheap and should not hit thread-pool rejections - transportService.registerRequestHandler(QUERY_CAN_MATCH_NAME, ShardSearchTransportRequest::new, ThreadPool.Names.SAME, + // this is cheap, it does not fetch during the rewrite phase, so we can let it quickly execute on a networking thread + 
transportService.registerRequestHandler(QUERY_CAN_MATCH_NAME, ThreadPool.Names.SAME, ShardSearchTransportRequest::new, new TaskAwareTransportRequestHandler() { @Override public void messageReceived(ShardSearchTransportRequest request, TransportChannel channel, Task task) throws Exception { @@ -432,7 +464,8 @@ public void messageReceived(ShardSearchTransportRequest request, TransportChanne channel.sendResponse(new CanMatchResponse(canMatch)); } }); - TransportActionProxy.registerProxyAction(transportService, QUERY_CAN_MATCH_NAME, CanMatchResponse::new); + TransportActionProxy.registerProxyAction(transportService, QUERY_CAN_MATCH_NAME, + (Supplier) CanMatchResponse::new); } public static final class CanMatchResponse extends SearchPhaseResult { @@ -478,4 +511,47 @@ Transport.Connection getConnection(String clusterAlias, DiscoveryNode node) { return transportService.getRemoteClusterService().getConnection(node, clusterAlias); } } + + final class ConnectionCountingHandler extends ActionListenerResponseHandler { + private final Map clientConnections; + private final String nodeId; + + ConnectionCountingHandler(final ActionListener listener, final Supplier responseSupplier, + final Map clientConnections, final String nodeId) { + super(listener, responseSupplier); + this.clientConnections = clientConnections; + this.nodeId = nodeId; + // Increment the number of connections for this node by one + clientConnections.compute(nodeId, (id, conns) -> conns == null ? 1 : conns + 1); + } + + @Override + public void handleResponse(Response response) { + super.handleResponse(response); + // Decrement the number of connections or remove it entirely if there are no more connections + // We need to remove the entry here so we don't leak when nodes go away forever + assert assertNodePresent(); + clientConnections.computeIfPresent(nodeId, (id, conns) -> conns.longValue() == 1 ? null : conns - 1); + } + + @Override + public void handleException(TransportException e) { + super.handleException(e); + // Decrement the number of connections or remove it entirely if there are no more connections + // We need to remove the entry here so we don't leak when nodes go away forever + assert assertNodePresent(); + clientConnections.computeIfPresent(nodeId, (id, conns) -> conns.longValue() == 1 ? null : conns - 1); + } + + private boolean assertNodePresent() { + clientConnections.compute(nodeId, (id, conns) -> { + assert conns != null : "number of connections for " + id + " is null, but should be an integer"; + assert conns >= 1 : "number of connections for " + id + " should be >= 1 but was " + conns; + return conns; + }); + // Always return true, there is additional asserting here, the boolean is just so this + // can be skipped when assertions are not enabled + return true; + } + } } diff --git a/core/src/main/java/org/elasticsearch/action/search/ShardSearchFailure.java b/core/src/main/java/org/elasticsearch/action/search/ShardSearchFailure.java index 7eb939ca8274e..f2ba62fefd43f 100644 --- a/core/src/main/java/org/elasticsearch/action/search/ShardSearchFailure.java +++ b/core/src/main/java/org/elasticsearch/action/search/ShardSearchFailure.java @@ -131,7 +131,8 @@ public String reason() { @Override public String toString() { - return "shard [" + (shardTarget == null ? "_na" : shardTarget) + "], reason [" + reason + "], cause [" + (cause == null ? "_na" : ExceptionsHelper.stackTrace(cause)) + "]"; + return "shard [" + (shardTarget == null ? "_na" : shardTarget) + "], reason [" + reason + "], cause [" + + (cause == null ? 
"_na" : ExceptionsHelper.stackTrace(cause)) + "]"; } public static ShardSearchFailure readShardSearchFailure(StreamInput in) throws IOException { @@ -210,9 +211,12 @@ public static ShardSearchFailure fromXContent(XContentParser parser) throws IOEx parser.skipChildren(); } } - return new ShardSearchFailure(exception, - new SearchShardTarget(nodeId, - new ShardId(new Index(indexName, IndexMetaData.INDEX_UUID_NA_VALUE), shardId), null, OriginalIndices.NONE)); + SearchShardTarget searchShardTarget = null; + if (nodeId != null) { + searchShardTarget = new SearchShardTarget(nodeId, + new ShardId(new Index(indexName, IndexMetaData.INDEX_UUID_NA_VALUE), shardId), null, OriginalIndices.NONE); + } + return new ShardSearchFailure(exception, searchShardTarget); } @Override diff --git a/core/src/main/java/org/elasticsearch/action/search/TransportMultiSearchAction.java b/core/src/main/java/org/elasticsearch/action/search/TransportMultiSearchAction.java index b65cd4d55516a..9dec3be5c1b11 100644 --- a/core/src/main/java/org/elasticsearch/action/search/TransportMultiSearchAction.java +++ b/core/src/main/java/org/elasticsearch/action/search/TransportMultiSearchAction.java @@ -34,16 +34,18 @@ import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; -import java.util.List; import java.util.Queue; import java.util.concurrent.ConcurrentLinkedQueue; +import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; +import java.util.function.LongSupplier; public class TransportMultiSearchAction extends HandledTransportAction { private final int availableProcessors; private final ClusterService clusterService; private final TransportAction searchAction; + private final LongSupplier relativeTimeProvider; @Inject public TransportMultiSearchAction(Settings settings, ThreadPool threadPool, TransportService transportService, @@ -53,19 +55,23 @@ public TransportMultiSearchAction(Settings settings, ThreadPool threadPool, Tran this.clusterService = clusterService; this.searchAction = searchAction; this.availableProcessors = EsExecutors.numberOfProcessors(settings); + this.relativeTimeProvider = System::nanoTime; } TransportMultiSearchAction(ThreadPool threadPool, ActionFilters actionFilters, TransportService transportService, ClusterService clusterService, TransportAction searchAction, - IndexNameExpressionResolver resolver, int availableProcessors) { + IndexNameExpressionResolver resolver, int availableProcessors, LongSupplier relativeTimeProvider) { super(Settings.EMPTY, MultiSearchAction.NAME, threadPool, transportService, actionFilters, resolver, MultiSearchRequest::new); this.clusterService = clusterService; this.searchAction = searchAction; this.availableProcessors = availableProcessors; + this.relativeTimeProvider = relativeTimeProvider; } @Override protected void doExecute(MultiSearchRequest request, ActionListener listener) { + final long relativeStartTime = relativeTimeProvider.getAsLong(); + ClusterState clusterState = clusterService.state(); clusterState.blocks().globalBlockedRaiseException(ClusterBlockLevel.READ); @@ -85,7 +91,7 @@ protected void doExecute(MultiSearchRequest request, ActionListener requests, final AtomicArray responses, final AtomicInteger responseCounter, - final ActionListener listener) { + final ActionListener listener, + final long relativeStartTime) { SearchRequestSlot request = requests.poll(); if (request == null) { /* @@ -155,16 +162,25 @@ private void handleResponse(final int responseSlot, final MultiSearchResponse.It 
} else { if (thread == Thread.currentThread()) { // we are on the same thread, we need to fork to another thread to avoid recursive stack overflow on a single thread - threadPool.generic().execute(() -> executeSearch(requests, responses, responseCounter, listener)); + threadPool.generic() + .execute(() -> executeSearch(requests, responses, responseCounter, listener, relativeStartTime)); } else { // we are on a different thread (we went asynchronous), it's safe to recurse - executeSearch(requests, responses, responseCounter, listener); + executeSearch(requests, responses, responseCounter, listener, relativeStartTime); } } } private void finish() { - listener.onResponse(new MultiSearchResponse(responses.toArray(new MultiSearchResponse.Item[responses.length()]))); + listener.onResponse(new MultiSearchResponse(responses.toArray(new MultiSearchResponse.Item[responses.length()]), + buildTookInMillis())); + } + + /** + * Builds how long it took to execute the msearch. + */ + private long buildTookInMillis() { + return TimeUnit.NANOSECONDS.toMillis(relativeTimeProvider.getAsLong() - relativeStartTime); } }); } @@ -178,7 +194,5 @@ static final class SearchRequestSlot { this.request = request; this.responseSlot = responseSlot; } - } - } diff --git a/core/src/main/java/org/elasticsearch/action/search/TransportSearchAction.java b/core/src/main/java/org/elasticsearch/action/search/TransportSearchAction.java index e1ae2a5a66866..8400707e370d1 100644 --- a/core/src/main/java/org/elasticsearch/action/search/TransportSearchAction.java +++ b/core/src/main/java/org/elasticsearch/action/search/TransportSearchAction.java @@ -82,7 +82,7 @@ public TransportSearchAction(Settings settings, ThreadPool threadPool, Transport SearchTransportService searchTransportService, SearchPhaseController searchPhaseController, ClusterService clusterService, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) { - super(settings, SearchAction.NAME, threadPool, transportService, actionFilters, indexNameExpressionResolver, SearchRequest::new); + super(settings, SearchAction.NAME, threadPool, transportService, actionFilters, SearchRequest::new, indexNameExpressionResolver); this.searchPhaseController = searchPhaseController; this.searchTransportService = searchTransportService; this.remoteClusterService = searchTransportService.getRemoteClusterService(); @@ -284,8 +284,9 @@ private void executeSearch(SearchTask task, SearchTimeProvider timeProvider, Sea for (int i = 0; i < indices.length; i++) { concreteIndices[i] = indices[i].getName(); } + Map nodeSearchCounts = searchTransportService.getPendingSearchRequests(); GroupShardsIterator localShardsIterator = clusterService.operationRouting().searchShards(clusterState, - concreteIndices, routingMap, searchRequest.preference()); + concreteIndices, routingMap, searchRequest.preference(), searchService.getResponseCollectorService(), nodeSearchCounts); GroupShardsIterator shardIterators = mergeShardsIterators(localShardsIterator, localIndices, remoteShardIterators); diff --git a/core/src/main/java/org/elasticsearch/action/search/TransportSearchScrollAction.java b/core/src/main/java/org/elasticsearch/action/search/TransportSearchScrollAction.java index e334b95180122..6f230c9bd8b89 100644 --- a/core/src/main/java/org/elasticsearch/action/search/TransportSearchScrollAction.java +++ b/core/src/main/java/org/elasticsearch/action/search/TransportSearchScrollAction.java @@ -45,8 +45,8 @@ public TransportSearchScrollAction(Settings settings, ThreadPool threadPool, 
Tra ClusterService clusterService, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, SearchTransportService searchTransportService, SearchPhaseController searchPhaseController) { - super(settings, SearchScrollAction.NAME, threadPool, transportService, actionFilters, indexNameExpressionResolver, - SearchScrollRequest::new); + super(settings, SearchScrollAction.NAME, threadPool, transportService, actionFilters, SearchScrollRequest::new, + indexNameExpressionResolver); this.clusterService = clusterService; this.searchTransportService = searchTransportService; this.searchPhaseController = searchPhaseController; diff --git a/core/src/main/java/org/elasticsearch/action/support/ActionFilters.java b/core/src/main/java/org/elasticsearch/action/support/ActionFilters.java index 426129a66d6ab..c66bac31a3dee 100644 --- a/core/src/main/java/org/elasticsearch/action/support/ActionFilters.java +++ b/core/src/main/java/org/elasticsearch/action/support/ActionFilters.java @@ -19,8 +19,6 @@ package org.elasticsearch.action.support; -import org.elasticsearch.common.inject.Inject; - import java.util.Arrays; import java.util.Comparator; import java.util.Set; @@ -32,7 +30,6 @@ public class ActionFilters { private final ActionFilter[] filters; - @Inject public ActionFilters(Set actionFilters) { this.filters = actionFilters.toArray(new ActionFilter[actionFilters.size()]); Arrays.sort(filters, new Comparator() { diff --git a/core/src/main/java/org/elasticsearch/action/support/ActiveShardCount.java b/core/src/main/java/org/elasticsearch/action/support/ActiveShardCount.java index 4d15639dbed80..cdd895ff8cd2c 100644 --- a/core/src/main/java/org/elasticsearch/action/support/ActiveShardCount.java +++ b/core/src/main/java/org/elasticsearch/action/support/ActiveShardCount.java @@ -138,36 +138,40 @@ public boolean enoughShardsActive(final int activeShardCount) { /** * Returns true iff the given cluster state's routing table contains enough active - * shards for the given index to meet the required shard count represented by this instance. + * shards for the given indices to meet the required shard count represented by this instance. */ - public boolean enoughShardsActive(final ClusterState clusterState, final String indexName) { + public boolean enoughShardsActive(final ClusterState clusterState, final String... 
indices) { if (this == ActiveShardCount.NONE) { // not waiting for any active shards return true; } - final IndexMetaData indexMetaData = clusterState.metaData().index(indexName); - if (indexMetaData == null) { - // its possible the index was deleted while waiting for active shard copies, - // in this case, we'll just consider it that we have enough active shard copies - // and we can stop waiting - return true; - } - final IndexRoutingTable indexRoutingTable = clusterState.routingTable().index(indexName); - assert indexRoutingTable != null; - if (indexRoutingTable.allPrimaryShardsActive() == false) { - // all primary shards aren't active yet - return false; - } - ActiveShardCount waitForActiveShards = this; - if (waitForActiveShards == ActiveShardCount.DEFAULT) { - waitForActiveShards = SETTING_WAIT_FOR_ACTIVE_SHARDS.get(indexMetaData.getSettings()); - } - for (final IntObjectCursor shardRouting : indexRoutingTable.getShards()) { - if (waitForActiveShards.enoughShardsActive(shardRouting.value) == false) { - // not enough active shard copies yet + + for (final String indexName : indices) { + final IndexMetaData indexMetaData = clusterState.metaData().index(indexName); + if (indexMetaData == null) { + // its possible the index was deleted while waiting for active shard copies, + // in this case, we'll just consider it that we have enough active shard copies + // and we can stop waiting + continue; + } + final IndexRoutingTable indexRoutingTable = clusterState.routingTable().index(indexName); + assert indexRoutingTable != null; + if (indexRoutingTable.allPrimaryShardsActive() == false) { + // all primary shards aren't active yet return false; } + ActiveShardCount waitForActiveShards = this; + if (waitForActiveShards == ActiveShardCount.DEFAULT) { + waitForActiveShards = SETTING_WAIT_FOR_ACTIVE_SHARDS.get(indexMetaData.getSettings()); + } + for (final IntObjectCursor shardRouting : indexRoutingTable.getShards()) { + if (waitForActiveShards.enoughShardsActive(shardRouting.value) == false) { + // not enough active shard copies yet + return false; + } + } } + return true; } diff --git a/core/src/main/java/org/elasticsearch/action/support/ActiveShardsObserver.java b/core/src/main/java/org/elasticsearch/action/support/ActiveShardsObserver.java index 280ba6ac94dc6..30d6461ef614b 100644 --- a/core/src/main/java/org/elasticsearch/action/support/ActiveShardsObserver.java +++ b/core/src/main/java/org/elasticsearch/action/support/ActiveShardsObserver.java @@ -29,6 +29,7 @@ import org.elasticsearch.node.NodeClosedException; import org.elasticsearch.threadpool.ThreadPool; +import java.util.Arrays; import java.util.function.Consumer; import java.util.function.Predicate; @@ -50,13 +51,13 @@ public ActiveShardsObserver(final Settings settings, final ClusterService cluste /** * Waits on the specified number of active shards to be started before executing the * - * @param indexName the index to wait for active shards on + * @param indexNames the indices to wait for active shards on * @param activeShardCount the number of active shards to wait on before returning * @param timeout the timeout value * @param onResult a function that is executed in response to the requisite shards becoming active or a timeout (whichever comes first) * @param onFailure a function that is executed in response to an error occurring during waiting for the active shards */ - public void waitForActiveShards(final String indexName, + public void waitForActiveShards(final String[] indexNames, final ActiveShardCount activeShardCount, final 
TimeValue timeout, final Consumer onResult, @@ -71,10 +72,10 @@ public void waitForActiveShards(final String indexName, final ClusterState state = clusterService.state(); final ClusterStateObserver observer = new ClusterStateObserver(state, clusterService, null, logger, threadPool.getThreadContext()); - if (activeShardCount.enoughShardsActive(state, indexName)) { + if (activeShardCount.enoughShardsActive(state, indexNames)) { onResult.accept(true); } else { - final Predicate shardsAllocatedPredicate = newState -> activeShardCount.enoughShardsActive(newState, indexName); + final Predicate shardsAllocatedPredicate = newState -> activeShardCount.enoughShardsActive(newState, indexNames); final ClusterStateObserver.Listener observerListener = new ClusterStateObserver.Listener() { @Override @@ -84,7 +85,7 @@ public void onNewClusterState(ClusterState state) { @Override public void onClusterServiceClose() { - logger.debug("[{}] cluster service closed while waiting for enough shards to be started.", indexName); + logger.debug("[{}] cluster service closed while waiting for enough shards to be started.", Arrays.toString(indexNames)); onFailure.accept(new NodeClosedException(clusterService.localNode())); } diff --git a/core/src/main/java/org/elasticsearch/action/support/AutoCreateIndex.java b/core/src/main/java/org/elasticsearch/action/support/AutoCreateIndex.java index 2e442e2cc141c..d834d80338432 100644 --- a/core/src/main/java/org/elasticsearch/action/support/AutoCreateIndex.java +++ b/core/src/main/java/org/elasticsearch/action/support/AutoCreateIndex.java @@ -64,7 +64,7 @@ public boolean needToCheck() { /** * Should the index be auto created? - * @throws IndexNotFoundException if the the index doesn't exist and shouldn't be auto created + * @throws IndexNotFoundException if the index doesn't exist and shouldn't be auto created */ public boolean shouldAutoCreate(String index, ClusterState state) { if (resolver.hasIndexOrAlias(index, state)) { diff --git a/core/src/main/java/org/elasticsearch/action/support/HandledTransportAction.java b/core/src/main/java/org/elasticsearch/action/support/HandledTransportAction.java index 68b699cb110ba..10719fcb91c6a 100644 --- a/core/src/main/java/org/elasticsearch/action/support/HandledTransportAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/HandledTransportAction.java @@ -23,6 +23,7 @@ import org.elasticsearch.action.ActionRequest; import org.elasticsearch.action.ActionResponse; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; +import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.tasks.Task; import org.elasticsearch.threadpool.ThreadPool; @@ -43,6 +44,12 @@ protected HandledTransportAction(Settings settings, String actionName, ThreadPoo this(settings, actionName, true, threadPool, transportService, actionFilters, indexNameExpressionResolver, request); } + protected HandledTransportAction(Settings settings, String actionName, ThreadPool threadPool, TransportService transportService, + ActionFilters actionFilters, Writeable.Reader requestReader, + IndexNameExpressionResolver indexNameExpressionResolver) { + this(settings, actionName, true, threadPool, transportService, actionFilters, requestReader, indexNameExpressionResolver); + } + protected HandledTransportAction(Settings settings, String actionName, boolean canTripCircuitBreaker, ThreadPool threadPool, TransportService transportService, ActionFilters actionFilters, IndexNameExpressionResolver 
indexNameExpressionResolver, Supplier request) { @@ -51,6 +58,14 @@ protected HandledTransportAction(Settings settings, String actionName, boolean c new TransportHandler()); } + protected HandledTransportAction(Settings settings, String actionName, boolean canTripCircuitBreaker, ThreadPool threadPool, + TransportService transportService, ActionFilters actionFilters, + Writeable.Reader requestReader, IndexNameExpressionResolver indexNameExpressionResolver) { + super(settings, actionName, threadPool, actionFilters, indexNameExpressionResolver, transportService.getTaskManager()); + transportService.registerRequestHandler(actionName, ThreadPool.Names.SAME, false, canTripCircuitBreaker, requestReader, + new TransportHandler()); + } + class TransportHandler implements TransportRequestHandler { @Override diff --git a/core/src/main/java/org/elasticsearch/action/support/PlainListenableActionFuture.java b/core/src/main/java/org/elasticsearch/action/support/PlainListenableActionFuture.java index 749bf1fea019d..943c36797096c 100644 --- a/core/src/main/java/org/elasticsearch/action/support/PlainListenableActionFuture.java +++ b/core/src/main/java/org/elasticsearch/action/support/PlainListenableActionFuture.java @@ -33,7 +33,7 @@ public class PlainListenableActionFuture extends AdapterActionFuture im volatile Object listeners; boolean executedListeners = false; - private PlainListenableActionFuture() {} + protected PlainListenableActionFuture() {} /** * This method returns a listenable future. The listeners will be called on completion of the future. diff --git a/core/src/main/java/org/elasticsearch/action/support/ToXContentToBytes.java b/core/src/main/java/org/elasticsearch/action/support/ToXContentToBytes.java deleted file mode 100644 index 741b197dbc836..0000000000000 --- a/core/src/main/java/org/elasticsearch/action/support/ToXContentToBytes.java +++ /dev/null @@ -1,88 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.action.support; - -import org.elasticsearch.ElasticsearchException; -import org.elasticsearch.ExceptionsHelper; -import org.elasticsearch.client.Requests; -import org.elasticsearch.common.bytes.BytesReference; -import org.elasticsearch.common.xcontent.ToXContent; -import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentFactory; -import org.elasticsearch.common.xcontent.XContentType; - -/** - * Base class for {@link ToXContent} implementation that also support conversion to {@link BytesReference} for serialization purposes - */ -public abstract class ToXContentToBytes implements ToXContent { - - private final XContentType defaultType; - - protected ToXContentToBytes() { - this.defaultType = Requests.CONTENT_TYPE; - } - - protected ToXContentToBytes(XContentType defaultType) { - this.defaultType = defaultType; - } - - /** - * Returns a {@link org.elasticsearch.common.bytes.BytesReference} - * containing the {@link ToXContent} output in binary format. - * Builds the request based on the default {@link XContentType}, either {@link Requests#CONTENT_TYPE} or provided as a constructor argument - */ - public final BytesReference buildAsBytes() { - return buildAsBytes(defaultType); - } - - /** - * Returns a {@link org.elasticsearch.common.bytes.BytesReference} - * containing the {@link ToXContent} output in binary format. - * Builds the request as the provided contentType - */ - public final BytesReference buildAsBytes(XContentType contentType) { - try { - XContentBuilder builder = XContentFactory.contentBuilder(contentType); - toXContent(builder, ToXContent.EMPTY_PARAMS); - return builder.bytes(); - } catch (Exception e) { - throw new ElasticsearchException("Failed to build ToXContent", e); - } - } - - @Override - public final String toString() { - return toString(EMPTY_PARAMS); - } - - public final String toString(Params params) { - try { - XContentBuilder builder = XContentFactory.jsonBuilder(); - if (params.paramAsBoolean("pretty", true)) { - builder.prettyPrint(); - } - toXContent(builder, params); - return builder.string(); - } catch (Exception e) { - // So we have a stack trace logged somewhere - return "{ \"error\" : \"" + ExceptionsHelper.detailedMessage(e) + "\"}"; - } - } -} diff --git a/core/src/main/java/org/elasticsearch/action/support/master/AcknowledgedResponse.java b/core/src/main/java/org/elasticsearch/action/support/master/AcknowledgedResponse.java old mode 100644 new mode 100755 index cdac96a7a7975..e4467964722c6 --- a/core/src/main/java/org/elasticsearch/action/support/master/AcknowledgedResponse.java +++ b/core/src/main/java/org/elasticsearch/action/support/master/AcknowledgedResponse.java @@ -19,17 +19,32 @@ package org.elasticsearch.action.support.master; import org.elasticsearch.action.ActionResponse; +import org.elasticsearch.common.ParseField; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ConstructingObjectParser; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; +import static org.elasticsearch.common.xcontent.ConstructingObjectParser.constructorArg; + /** * Abstract class that allows to mark action responses that support acknowledgements. * Facilitates consistency across different api. 
*/ public abstract class AcknowledgedResponse extends ActionResponse { + private static final String ACKNOWLEDGED = "acknowledged"; + private static final ParseField ACKNOWLEDGED_PARSER = new ParseField(ACKNOWLEDGED); + + protected static void declareAcknowledgedField(ConstructingObjectParser PARSER) { + PARSER.declareField(constructorArg(), (parser, context) -> parser.booleanValue(), ACKNOWLEDGED_PARSER, + ObjectParser.ValueType.BOOLEAN); + } + private boolean acknowledged; protected AcknowledgedResponse() { @@ -61,4 +76,8 @@ protected void readAcknowledged(StreamInput in) throws IOException { protected void writeAcknowledged(StreamOutput out) throws IOException { out.writeBoolean(acknowledged); } + + protected void addAcknowledgedField(XContentBuilder builder) throws IOException { + builder.field(ACKNOWLEDGED, isAcknowledged()); + } } diff --git a/core/src/main/java/org/elasticsearch/action/support/master/MasterNodeReadRequest.java b/core/src/main/java/org/elasticsearch/action/support/master/MasterNodeReadRequest.java index 142f3a2fe40a7..92578d7f33fb5 100644 --- a/core/src/main/java/org/elasticsearch/action/support/master/MasterNodeReadRequest.java +++ b/core/src/main/java/org/elasticsearch/action/support/master/MasterNodeReadRequest.java @@ -31,14 +31,12 @@ public abstract class MasterNodeReadRequest request, IndexNameExpressionResolver indexNameExpressionResolver) { + this(settings, actionName, true, transportService, clusterService, threadPool, actionFilters, request, indexNameExpressionResolver); + } + protected TransportMasterNodeAction(Settings settings, String actionName, boolean canTripCircuitBreaker, TransportService transportService, ClusterService clusterService, ThreadPool threadPool, - ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, - Supplier request) { + ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, Supplier request) { super(settings, actionName, canTripCircuitBreaker, threadPool, transportService, actionFilters, indexNameExpressionResolver, request); this.transportService = transportService; @@ -74,6 +81,17 @@ protected TransportMasterNodeAction(Settings settings, String actionName, boolea this.executor = executor(); } + protected TransportMasterNodeAction(Settings settings, String actionName, boolean canTripCircuitBreaker, + TransportService transportService, ClusterService clusterService, ThreadPool threadPool, + ActionFilters actionFilters, Writeable.Reader request, + IndexNameExpressionResolver indexNameExpressionResolver) { + super(settings, actionName, canTripCircuitBreaker, threadPool, transportService, actionFilters, request, + indexNameExpressionResolver); + this.transportService = transportService; + this.clusterService = clusterService; + this.executor = executor(); + } + protected abstract String executor(); protected abstract Response newResponse(); @@ -172,7 +190,10 @@ protected void doRun() throws Exception { logger.debug("no known master node, scheduling a retry"); retry(null, masterChangePredicate); } else { - transportService.sendRequest(nodes.getMasterNode(), actionName, request, new ActionListenerResponseHandler(listener, TransportMasterNodeAction.this::newResponse) { + DiscoveryNode masterNode = nodes.getMasterNode(); + final String actionName = getMasterActionName(masterNode); + transportService.sendRequest(masterNode, actionName, request, new ActionListenerResponseHandler(listener, + TransportMasterNodeAction.this::newResponse) { @Override public void 
handleException(final TransportException exp) { Throwable cause = exp.unwrapCause(); @@ -212,4 +233,12 @@ public void onTimeout(TimeValue timeout) { ); } } + + /** + * Allows to conditionally return a different master node action name in the case an action gets renamed. + * This is mainly for backwards compatibility and should be used rarely + */ + protected String getMasterActionName(DiscoveryNode node) { + return actionName; + } } diff --git a/core/src/main/java/org/elasticsearch/action/support/master/TransportMasterNodeReadAction.java b/core/src/main/java/org/elasticsearch/action/support/master/TransportMasterNodeReadAction.java index ceb8e1a40ac88..4f36929df2755 100644 --- a/core/src/main/java/org/elasticsearch/action/support/master/TransportMasterNodeReadAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/master/TransportMasterNodeReadAction.java @@ -23,6 +23,7 @@ import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; @@ -49,6 +50,12 @@ protected TransportMasterNodeReadAction(Settings settings, String actionName, Tr this(settings, actionName, true, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver,request); } + protected TransportMasterNodeReadAction(Settings settings, String actionName, TransportService transportService, + ClusterService clusterService, ThreadPool threadPool, ActionFilters actionFilters, + Writeable.Reader request, IndexNameExpressionResolver indexNameExpressionResolver) { + this(settings, actionName, true, transportService, clusterService, threadPool, actionFilters, request, indexNameExpressionResolver); + } + protected TransportMasterNodeReadAction(Settings settings, String actionName, boolean checkSizeLimit, TransportService transportService, ClusterService clusterService, ThreadPool threadPool, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, Supplier request) { @@ -57,6 +64,14 @@ protected TransportMasterNodeReadAction(Settings settings, String actionName, bo this.forceLocal = FORCE_LOCAL_SETTING.get(settings); } + protected TransportMasterNodeReadAction(Settings settings, String actionName, boolean checkSizeLimit, TransportService transportService, + ClusterService clusterService, ThreadPool threadPool, ActionFilters actionFilters, + Writeable.Reader request, IndexNameExpressionResolver indexNameExpressionResolver) { + super(settings, actionName, checkSizeLimit, transportService, clusterService, threadPool, actionFilters, request, + indexNameExpressionResolver); + this.forceLocal = FORCE_LOCAL_SETTING.get(settings); + } + @Override protected final boolean localExecute(Request request) { return forceLocal || request.local(); diff --git a/core/src/main/java/org/elasticsearch/action/support/master/info/ClusterInfoRequest.java b/core/src/main/java/org/elasticsearch/action/support/master/info/ClusterInfoRequest.java index fc14cd38e5681..03291bc59b6f0 100644 --- a/core/src/main/java/org/elasticsearch/action/support/master/info/ClusterInfoRequest.java +++ b/core/src/main/java/org/elasticsearch/action/support/master/info/ClusterInfoRequest.java @@ -35,6 +35,24 @@ public abstract class ClusterInfoRequest request) { - super(settings, actionName, 
transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver, request); + Writeable.Reader request, IndexNameExpressionResolver indexNameExpressionResolver) { + super(settings, actionName, transportService, clusterService, threadPool, actionFilters, request, indexNameExpressionResolver); } @Override diff --git a/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationOperation.java b/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationOperation.java index f28beed1d7fac..6fa06c25457b0 100644 --- a/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationOperation.java +++ b/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationOperation.java @@ -32,6 +32,7 @@ import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.util.set.Sets; +import org.elasticsearch.index.seqno.SequenceNumbersService; import org.elasticsearch.index.shard.ReplicationGroup; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.rest.RestStatus; @@ -173,6 +174,7 @@ public void onResponse(ReplicaResponse response) { successfulShards.incrementAndGet(); try { primary.updateLocalCheckpointForShard(shard.allocationId().getId(), response.localCheckpoint()); + primary.updateGlobalCheckpointForShard(shard.allocationId().getId(), response.globalCheckpoint()); } catch (final AlreadyClosedException e) { // okay, the index was deleted or this shard was never activated after a relocation; fall through and finish normally } catch (final Exception e) { @@ -315,6 +317,14 @@ public interface Primary< */ void updateLocalCheckpointForShard(String allocationId, long checkpoint); + /** + * Update the local knowledge of the global checkpoint for the specified allocation ID. + * + * @param allocationId the allocation ID to update the global checkpoint for + * @param globalCheckpoint the global checkpoint + */ + void updateGlobalCheckpointForShard(String allocationId, long globalCheckpoint); + /** * Returns the local checkpoint on the primary shard. * @@ -343,7 +353,7 @@ public interface Primary< public interface Replicas> { /** - * Performs the the specified request on the specified replica. + * Performs the specified request on the specified replica. * * @param replica the shard this request should be executed on * @param replicaRequest the operation to perform @@ -385,12 +395,24 @@ void markShardCopyAsStaleIfNeeded(ShardId shardId, String allocationId, Runnable } /** - * An interface to encapsulate the metadata needed from replica shards when they respond to operations performed on them + * An interface to encapsulate the metadata needed from replica shards when they respond to operations performed on them. */ public interface ReplicaResponse { - /** the local check point for the shard. see {@link org.elasticsearch.index.seqno.SequenceNumbersService#getLocalCheckpoint()} */ + /** + * The local checkpoint for the shard. See {@link SequenceNumbersService#getLocalCheckpoint()}. + * + * @return the local checkpoint + **/ long localCheckpoint(); + + /** + * The global checkpoint for the shard. See {@link SequenceNumbersService#getGlobalCheckpoint()}. 
+ * + * @return the global checkpoint + **/ + long globalCheckpoint(); + } public static class RetryOnPrimaryException extends ElasticsearchException { diff --git a/core/src/main/java/org/elasticsearch/action/support/replication/TransportReplicationAction.java b/core/src/main/java/org/elasticsearch/action/support/replication/TransportReplicationAction.java index 2ec84e35d1792..a63d14d7f9d12 100644 --- a/core/src/main/java/org/elasticsearch/action/support/replication/TransportReplicationAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/replication/TransportReplicationAction.java @@ -20,7 +20,9 @@ package org.elasticsearch.action.support.replication; import org.apache.logging.log4j.message.ParameterizedMessage; +import org.apache.lucene.store.AlreadyClosedException; import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.Version; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.ActionListenerResponseHandler; @@ -53,8 +55,9 @@ import org.elasticsearch.common.util.concurrent.AbstractRunnable; import org.elasticsearch.index.IndexNotFoundException; import org.elasticsearch.index.IndexService; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.index.shard.IndexShard; +import org.elasticsearch.index.shard.IndexShardClosedException; import org.elasticsearch.index.shard.IndexShardState; import org.elasticsearch.index.shard.ReplicationGroup; import org.elasticsearch.index.shard.ShardId; @@ -108,12 +111,26 @@ public abstract class TransportReplicationAction< protected final String transportReplicaAction; protected final String transportPrimaryAction; + private final boolean syncGlobalCheckpointAfterOperation; + protected TransportReplicationAction(Settings settings, String actionName, TransportService transportService, ClusterService clusterService, IndicesService indicesService, ThreadPool threadPool, ShardStateAction shardStateAction, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, Supplier request, Supplier replicaRequest, String executor) { + this(settings, actionName, transportService, clusterService, indicesService, threadPool, shardStateAction, actionFilters, + indexNameExpressionResolver, request, replicaRequest, executor, false); + } + + + protected TransportReplicationAction(Settings settings, String actionName, TransportService transportService, + ClusterService clusterService, IndicesService indicesService, + ThreadPool threadPool, ShardStateAction shardStateAction, + ActionFilters actionFilters, + IndexNameExpressionResolver indexNameExpressionResolver, Supplier request, + Supplier replicaRequest, String executor, + boolean syncGlobalCheckpointAfterOperation) { super(settings, actionName, threadPool, actionFilters, indexNameExpressionResolver, transportService.getTaskManager()); this.transportService = transportService; this.clusterService = clusterService; @@ -126,6 +143,8 @@ protected TransportReplicationAction(Settings settings, String actionName, Trans registerRequestHandlers(actionName, transportService, request, replicaRequest, executor); this.transportOptions = transportOptions(); + + this.syncGlobalCheckpointAfterOperation = syncGlobalCheckpointAfterOperation; } protected void registerRequestHandlers(String actionName, TransportService transportService, Supplier request, @@ -150,7 +169,7 @@ protected void doExecute(Task task, Request request, 
ActionListener li new ReroutePhase((ReplicationTask) task, request, listener).run(); } - protected ReplicationOperation.Replicas newReplicasProxy(long primaryTerm) { + protected ReplicationOperation.Replicas newReplicasProxy(long primaryTerm) { return new ReplicasProxy(primaryTerm); } @@ -359,6 +378,22 @@ private ActionListener createResponseListener(final PrimaryShardRefere return new ActionListener() { @Override public void onResponse(Response response) { + if (syncGlobalCheckpointAfterOperation) { + final IndexShard shard = primaryShardReference.indexShard; + try { + shard.maybeSyncGlobalCheckpoint("post-operation"); + } catch (final Exception e) { + // only log non-closed exceptions + if (ExceptionsHelper.unwrap(e, AlreadyClosedException.class, IndexShardClosedException.class) == null) { + logger.info( + new ParameterizedMessage( + "{} failed to execute post-operation global checkpoint sync", + shard.shardId()), + e); + // intentionally swallow, a missed global checkpoint sync should not fail this operation + } + } + } primaryShardReference.close(); // release shard operation lock before responding to caller setPhase(replicationTask, "finished"); try { @@ -530,7 +565,8 @@ public void onResponse(Releasable releasable) { try { final ReplicaResult replicaResult = shardOperationOnReplica(request, replica); releasable.close(); // release shard operation lock before responding to caller - final TransportReplicationAction.ReplicaResponse response = new ReplicaResponse(replica.getLocalCheckpoint()); + final TransportReplicationAction.ReplicaResponse response = + new ReplicaResponse(replica.getLocalCheckpoint(), replica.getGlobalCheckpoint()); replicaResult.respond(new ResponseListener(response)); } catch (final Exception e) { Releasables.closeWhileHandlingException(releasable); // release shard operation lock before responding to caller @@ -1005,6 +1041,11 @@ public void updateLocalCheckpointForShard(String allocationId, long checkpoint) indexShard.updateLocalCheckpointForShard(allocationId, checkpoint); } + @Override + public void updateGlobalCheckpointForShard(final String allocationId, final long globalCheckpoint) { + indexShard.updateGlobalCheckpointForShard(allocationId, globalCheckpoint); + } + @Override public long localCheckpoint() { return indexShard.getLocalCheckpoint(); @@ -1024,40 +1065,47 @@ public ReplicationGroup getReplicationGroup() { public static class ReplicaResponse extends ActionResponse implements ReplicationOperation.ReplicaResponse { private long localCheckpoint; + private long globalCheckpoint; ReplicaResponse() { } - public ReplicaResponse(long localCheckpoint) { + public ReplicaResponse(long localCheckpoint, long globalCheckpoint) { /* - * A replica should always know its own local checkpoint so this should always be a valid sequence number or the pre-6.0 local + * A replica should always know its own local checkpoints so this should always be a valid sequence number or the pre-6.0 * checkpoint value when simulating responses to replication actions that pre-6.0 nodes are not aware of (e.g., the global * checkpoint background sync, and the primary/replica resync). 
*/ - assert localCheckpoint != SequenceNumbersService.UNASSIGNED_SEQ_NO; + assert localCheckpoint != SequenceNumbers.UNASSIGNED_SEQ_NO; this.localCheckpoint = localCheckpoint; + this.globalCheckpoint = globalCheckpoint; } @Override public void readFrom(StreamInput in) throws IOException { + super.readFrom(in); if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha1)) { - super.readFrom(in); localCheckpoint = in.readZLong(); } else { // 5.x used to read empty responses, which don't really read anything off the stream, so just do nothing. - localCheckpoint = SequenceNumbersService.PRE_60_NODE_LOCAL_CHECKPOINT; + localCheckpoint = SequenceNumbers.PRE_60_NODE_CHECKPOINT; + } + if (in.getVersion().onOrAfter(Version.V_6_0_0_rc1)) { + globalCheckpoint = in.readZLong(); + } else { + globalCheckpoint = SequenceNumbers.PRE_60_NODE_CHECKPOINT; } } @Override public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); if (out.getVersion().onOrAfter(Version.V_6_0_0_alpha1)) { - super.writeTo(out); out.writeZLong(localCheckpoint); - } else { - // we use to write empty responses - Empty.INSTANCE.writeTo(out); + } + if (out.getVersion().onOrAfter(Version.V_6_0_0_rc1)) { + out.writeZLong(globalCheckpoint); } } @@ -1065,6 +1113,12 @@ public void writeTo(StreamOutput out) throws IOException { public long localCheckpoint() { return localCheckpoint; } + + @Override + public long globalCheckpoint() { + return globalCheckpoint; + } + } /** @@ -1235,7 +1289,7 @@ public void readFrom(StreamInput in) throws IOException { if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha1)) { globalCheckpoint = in.readZLong(); } else { - globalCheckpoint = SequenceNumbersService.UNASSIGNED_SEQ_NO; + globalCheckpoint = SequenceNumbers.UNASSIGNED_SEQ_NO; } } diff --git a/core/src/main/java/org/elasticsearch/action/support/replication/TransportWriteAction.java b/core/src/main/java/org/elasticsearch/action/support/replication/TransportWriteAction.java index 31c72108ecf65..ec3dcd94d3084 100644 --- a/core/src/main/java/org/elasticsearch/action/support/replication/TransportWriteAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/replication/TransportWriteAction.java @@ -71,7 +71,7 @@ protected TransportWriteAction(Settings settings, String actionName, TransportSe ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, Supplier request, Supplier replicaRequest, String executor) { super(settings, actionName, transportService, clusterService, indicesService, threadPool, shardStateAction, actionFilters, - indexNameExpressionResolver, request, replicaRequest, executor); + indexNameExpressionResolver, request, replicaRequest, executor, true); } /** Syncs operation result to the translog or throws a shard not available failure */ diff --git a/core/src/main/java/org/elasticsearch/action/termvectors/MultiTermVectorsShardRequest.java b/core/src/main/java/org/elasticsearch/action/termvectors/MultiTermVectorsShardRequest.java index 6356c554991e6..8fdb6398ddccf 100644 --- a/core/src/main/java/org/elasticsearch/action/termvectors/MultiTermVectorsShardRequest.java +++ b/core/src/main/java/org/elasticsearch/action/termvectors/MultiTermVectorsShardRequest.java @@ -59,8 +59,8 @@ public int shardId() { /** * Sets the preference to execute the search. Defaults to randomize across shards. Can be set to - * _local to prefer local shards, _primary to execute only on primary shards, or - * a custom value, which guarantees that the same order will be used across different requests. 
+ * _local to prefer local shards or a custom value, which guarantees that the same order + * will be used across different requests. */ public MultiTermVectorsShardRequest preference(String preference) { this.preference = preference; diff --git a/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsRequest.java b/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsRequest.java index 0fe83e214463a..1886a8c2661ed 100644 --- a/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsRequest.java +++ b/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsRequest.java @@ -294,8 +294,7 @@ public String preference() { /** * Sets the preference to execute the search. Defaults to randomize across - * shards. Can be set to _local to prefer local shards, - * _primary to execute only on primary shards, or a custom value, + * shards. Can be set to _local to prefer local shards or a custom value, * which guarantees that the same order will be used across different * requests. */ diff --git a/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsRequestBuilder.java index 9aa3ebca759c3..47bd09b100857 100644 --- a/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsRequestBuilder.java @@ -99,8 +99,8 @@ public TermVectorsRequestBuilder setParent(String parent) { /** * Sets the preference to execute the search. Defaults to randomize across shards. Can be set to - * _local to prefer local shards, _primary to execute only on primary shards, or - * a custom value, which guarantees that the same order will be used across different requests. + * _local to prefer local shards or a custom value, which guarantees that the same order + * will be used across different requests. 
*/ public TermVectorsRequestBuilder setPreference(String preference) { request.preference(preference); diff --git a/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsResponse.java b/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsResponse.java index 7532ade3fa327..21a77c2e0f2b3 100644 --- a/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsResponse.java +++ b/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsResponse.java @@ -305,9 +305,9 @@ private void buildFieldStatistics(XContentBuilder builder, Terms curTerms) throw long sumDocFreq = curTerms.getSumDocFreq(); int docCount = curTerms.getDocCount(); long sumTotalTermFrequencies = curTerms.getSumTotalTermFreq(); - if (docCount > 0) { - assert ((sumDocFreq > 0)) : "docCount >= 0 but sumDocFreq ain't!"; - assert ((sumTotalTermFrequencies > 0)) : "docCount >= 0 but sumTotalTermFrequencies ain't!"; + if (docCount >= 0) { + assert ((sumDocFreq >= 0)) : "docCount >= 0 but sumDocFreq ain't!"; + assert ((sumTotalTermFrequencies >= 0)) : "docCount >= 0 but sumTotalTermFrequencies ain't!"; builder.startObject(FieldStrings.FIELD_STATISTICS); builder.field(FieldStrings.SUM_DOC_FREQ, sumDocFreq); builder.field(FieldStrings.DOC_COUNT, docCount); diff --git a/core/src/main/java/org/elasticsearch/action/update/UpdateResponse.java b/core/src/main/java/org/elasticsearch/action/update/UpdateResponse.java index 672b190d91130..9e33e62622a0e 100644 --- a/core/src/main/java/org/elasticsearch/action/update/UpdateResponse.java +++ b/core/src/main/java/org/elasticsearch/action/update/UpdateResponse.java @@ -25,7 +25,7 @@ import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.get.GetResult; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.rest.RestStatus; @@ -47,7 +47,7 @@ public UpdateResponse() { * For example: update script with operation set to none */ public UpdateResponse(ShardId shardId, String type, String id, long version, Result result) { - this(new ShardInfo(0, 0), shardId, type, id, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, version, result); + this(new ShardInfo(0, 0), shardId, type, id, SequenceNumbers.UNASSIGNED_SEQ_NO, 0, version, result); } public UpdateResponse( diff --git a/core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java b/core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java index 1b18e2a4160a0..30b9fb7e28dd0 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java @@ -35,7 +35,6 @@ import org.elasticsearch.common.PidFile; import org.elasticsearch.common.SuppressForbidden; import org.elasticsearch.common.inject.CreationException; -import org.elasticsearch.common.logging.DeprecationLogger; import org.elasticsearch.common.logging.ESLoggerFactory; import org.elasticsearch.common.logging.LogConfigurator; import org.elasticsearch.common.logging.Loggers; @@ -213,28 +212,27 @@ public void run() { node = new Node(environment) { @Override protected void validateNodeBeforeAcceptingRequests( - final Settings settings, + final BootstrapContext context, final BoundTransportAddress boundTransportAddress, List checks) throws NodeValidationException { - BootstrapChecks.check(settings, boundTransportAddress, checks); + BootstrapChecks.check(context, 
boundTransportAddress, checks); } }; } - private static SecureSettings loadSecureSettings(Environment initialEnv) throws BootstrapException { + static SecureSettings loadSecureSettings(Environment initialEnv) throws BootstrapException { final KeyStoreWrapper keystore; try { keystore = KeyStoreWrapper.load(initialEnv.configFile()); } catch (IOException e) { throw new BootstrapException(e); } + if (keystore == null) { + return null; // no keystore + } try { - if (keystore == null) { - // create it, we always want one! we use an empty passphrase, but a user can change this later if they want. - KeyStoreWrapper.create(new char[0]); - } else { - keystore.decrypt(new char[0] /* TODO: read password from stdin */); - } + keystore.decrypt(new char[0] /* TODO: read password from stdin */); + KeyStoreWrapper.upgrade(keystore, initialEnv.configFile()); } catch (Exception e) { throw new BootstrapException(e); } diff --git a/core/src/main/java/org/elasticsearch/bootstrap/BootstrapCheck.java b/core/src/main/java/org/elasticsearch/bootstrap/BootstrapCheck.java index ffe52dfe5b957..78c60d694b0bb 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/BootstrapCheck.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/BootstrapCheck.java @@ -19,24 +19,58 @@ package org.elasticsearch.bootstrap; +import java.util.Objects; + /** * Encapsulates a bootstrap check. */ public interface BootstrapCheck { /** - * Test if the node fails the check. - * - * @return {@code true} if the node failed the check + * Encapsulate the result of a bootstrap check. */ - boolean check(); + final class BootstrapCheckResult { + + private final String message; + + private static final BootstrapCheckResult SUCCESS = new BootstrapCheckResult(null); + + public static BootstrapCheckResult success() { + return SUCCESS; + } + + public static BootstrapCheckResult failure(final String message) { + Objects.requireNonNull(message); + return new BootstrapCheckResult(message); + } + + private BootstrapCheckResult(final String message) { + this.message = message; + } + + public boolean isSuccess() { + return this == SUCCESS; + } + + public boolean isFailure() { + return !isSuccess(); + } + + public String getMessage() { + assert isFailure(); + assert message != null; + return message; + } + + } /** - * The error message for a failed check. + * Test if the node fails the check. 
* - * @return the error message on check failure + * @param context the bootstrap context + * @return the result of the bootstrap check */ - String errorMessage(); + BootstrapCheckResult check(BootstrapContext context); default boolean alwaysEnforce() { return false; diff --git a/core/src/main/java/org/elasticsearch/bootstrap/BootstrapChecks.java b/core/src/main/java/org/elasticsearch/bootstrap/BootstrapChecks.java index 4adec75ae67a0..54f1528e4633b 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/BootstrapChecks.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/BootstrapChecks.java @@ -26,7 +26,6 @@ import org.elasticsearch.common.SuppressForbidden; import org.elasticsearch.common.io.PathUtils; import org.elasticsearch.common.logging.Loggers; -import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.transport.BoundTransportAddress; import org.elasticsearch.common.transport.TransportAddress; import org.elasticsearch.discovery.DiscoveryModule; @@ -65,18 +64,18 @@ private BootstrapChecks() { * {@code es.enforce.bootstrap.checks} is set to {@code true} then the bootstrap checks will be enforced regardless of whether or not * the transport protocol is bound to a non-loopback interface. * - * @param settings the current node settings + * @param context the current node bootstrap context * @param boundTransportAddress the node network bindings */ - static void check(final Settings settings, final BoundTransportAddress boundTransportAddress, List additionalChecks) - throws NodeValidationException { - final List builtInChecks = checks(settings); + static void check(final BootstrapContext context, final BoundTransportAddress boundTransportAddress, + List additionalChecks) throws NodeValidationException { + final List builtInChecks = checks(); final List combinedChecks = new ArrayList<>(builtInChecks); combinedChecks.addAll(additionalChecks); - check( - enforceLimits(boundTransportAddress, DiscoveryModule.DISCOVERY_TYPE_SETTING.get(settings)), + check( context, + enforceLimits(boundTransportAddress, DiscoveryModule.DISCOVERY_TYPE_SETTING.get(context.settings)), Collections.unmodifiableList(combinedChecks), - Node.NODE_NAME_SETTING.get(settings)); + Node.NODE_NAME_SETTING.get(context.settings)); } /** @@ -84,15 +83,17 @@ static void check(final Settings settings, final BoundTransportAddress boundTran * property {@code es.enforce.bootstrap.checks} is set to {@code true} then the bootstrap checks will be enforced regardless of whether * or not the transport protocol is bound to a non-loopback interface. * + * @param context the current node boostrap context * @param enforceLimits {@code true} if the checks should be enforced or otherwise warned * @param checks the checks to execute * @param nodeName the node name to be used as a logging prefix */ static void check( + final BootstrapContext context, final boolean enforceLimits, final List checks, final String nodeName) throws NodeValidationException { - check(enforceLimits, checks, Loggers.getLogger(BootstrapChecks.class, nodeName)); + check(context, enforceLimits, checks, Loggers.getLogger(BootstrapChecks.class, nodeName)); } /** @@ -100,11 +101,13 @@ static void check( * property {@code es.enforce.bootstrap.checks }is set to {@code true} then the bootstrap checks will be enforced regardless of whether * or not the transport protocol is bound to a non-loopback interface. 
* + * @param context the current node boostrap context * @param enforceLimits {@code true} if the checks should be enforced or otherwise warned * @param checks the checks to execute * @param logger the logger to */ static void check( + final BootstrapContext context, final boolean enforceLimits, final List checks, final Logger logger) throws NodeValidationException { @@ -134,11 +137,12 @@ static void check( } for (final BootstrapCheck check : checks) { - if (check.check()) { + final BootstrapCheck.BootstrapCheckResult result = check.check(context); + if (result.isFailure()) { if (!(enforceLimits || enforceBootstrapChecks) && !check.alwaysEnforce()) { - ignoredErrors.add(check.errorMessage()); + ignoredErrors.add(result.getMessage()); } else { - errors.add(check.errorMessage()); + errors.add(result.getMessage()); } } } @@ -180,13 +184,13 @@ static boolean enforceLimits(final BoundTransportAddress boundTransportAddress, } // the list of checks to execute - static List checks(final Settings settings) { + static List checks() { final List checks = new ArrayList<>(); checks.add(new HeapSizeCheck()); final FileDescriptorCheck fileDescriptorCheck = Constants.MAC_OS_X ? new OsXFileDescriptorCheck() : new FileDescriptorCheck(); checks.add(fileDescriptorCheck); - checks.add(new MlockallCheck(BootstrapSettings.MEMORY_LOCK_SETTING.get(settings))); + checks.add(new MlockallCheck()); if (Constants.LINUX) { checks.add(new MaxNumberOfThreadsCheck()); } @@ -201,7 +205,7 @@ static List checks(final Settings settings) { } checks.add(new ClientJvmCheck()); checks.add(new UseSerialGCCheck()); - checks.add(new SystemCallFilterCheck(BootstrapSettings.SYSTEM_CALL_FILTER_SETTING.get(settings))); + checks.add(new SystemCallFilterCheck()); checks.add(new OnErrorCheck()); checks.add(new OnOutOfMemoryErrorCheck()); checks.add(new EarlyAccessCheck()); @@ -212,21 +216,20 @@ static List checks(final Settings settings) { static class HeapSizeCheck implements BootstrapCheck { @Override - public boolean check() { + public BootstrapCheckResult check(BootstrapContext context) { final long initialHeapSize = getInitialHeapSize(); final long maxHeapSize = getMaxHeapSize(); - return initialHeapSize != 0 && maxHeapSize != 0 && initialHeapSize != maxHeapSize; - } - - @Override - public String errorMessage() { - return String.format( - Locale.ROOT, - "initial heap size [%d] not equal to maximum heap size [%d]; " + - "this can cause resize pauses and prevents mlockall from locking the entire heap", - getInitialHeapSize(), - getMaxHeapSize() - ); + if (initialHeapSize != 0 && maxHeapSize != 0 && initialHeapSize != maxHeapSize) { + final String message = String.format( + Locale.ROOT, + "initial heap size [%d] not equal to maximum heap size [%d]; " + + "this can cause resize pauses and prevents mlockall from locking the entire heap", + getInitialHeapSize(), + getMaxHeapSize()); + return BootstrapCheckResult.failure(message); + } else { + return BootstrapCheckResult.success(); + } } // visible for testing @@ -268,19 +271,18 @@ protected FileDescriptorCheck(final int limit) { this.limit = limit; } - public final boolean check() { + public final BootstrapCheckResult check(BootstrapContext context) { final long maxFileDescriptorCount = getMaxFileDescriptorCount(); - return maxFileDescriptorCount != -1 && maxFileDescriptorCount < limit; - } - - @Override - public final String errorMessage() { - return String.format( - Locale.ROOT, - "max file descriptors [%d] for elasticsearch process is too low, increase to at least [%d]", - 
getMaxFileDescriptorCount(), - limit - ); + if (maxFileDescriptorCount != -1 && maxFileDescriptorCount < limit) { + final String message = String.format( + Locale.ROOT, + "max file descriptors [%d] for elasticsearch process is too low, increase to at least [%d]", + getMaxFileDescriptorCount(), + limit); + return BootstrapCheckResult.failure(message); + } else { + return BootstrapCheckResult.success(); + } } // visible for testing @@ -292,20 +294,13 @@ long getMaxFileDescriptorCount() { static class MlockallCheck implements BootstrapCheck { - private final boolean mlockallSet; - - MlockallCheck(final boolean mlockAllSet) { - this.mlockallSet = mlockAllSet; - } - @Override - public boolean check() { - return mlockallSet && !isMemoryLocked(); - } - - @Override - public String errorMessage() { - return "memory locking requested for elasticsearch process but memory is not locked"; + public BootstrapCheckResult check(BootstrapContext context) { + if (BootstrapSettings.MEMORY_LOCK_SETTING.get(context.settings) && !isMemoryLocked()) { + return BootstrapCheckResult.failure("memory locking requested for elasticsearch process but memory is not locked"); + } else { + return BootstrapCheckResult.success(); + } } // visible for testing @@ -321,18 +316,18 @@ static class MaxNumberOfThreadsCheck implements BootstrapCheck { private static final long MAX_NUMBER_OF_THREADS_THRESHOLD = 1 << 12; @Override - public boolean check() { - return getMaxNumberOfThreads() != -1 && getMaxNumberOfThreads() < MAX_NUMBER_OF_THREADS_THRESHOLD; - } - - @Override - public String errorMessage() { - return String.format( - Locale.ROOT, - "max number of threads [%d] for user [%s] is too low, increase to at least [%d]", - getMaxNumberOfThreads(), - BootstrapInfo.getSystemProperties().get("user.name"), - MAX_NUMBER_OF_THREADS_THRESHOLD); + public BootstrapCheckResult check(BootstrapContext context) { + if (getMaxNumberOfThreads() != -1 && getMaxNumberOfThreads() < MAX_NUMBER_OF_THREADS_THRESHOLD) { + final String message = String.format( + Locale.ROOT, + "max number of threads [%d] for user [%s] is too low, increase to at least [%d]", + getMaxNumberOfThreads(), + BootstrapInfo.getSystemProperties().get("user.name"), + MAX_NUMBER_OF_THREADS_THRESHOLD); + return BootstrapCheckResult.failure(message); + } else { + return BootstrapCheckResult.success(); + } } // visible for testing @@ -345,17 +340,17 @@ long getMaxNumberOfThreads() { static class MaxSizeVirtualMemoryCheck implements BootstrapCheck { @Override - public boolean check() { - return getMaxSizeVirtualMemory() != Long.MIN_VALUE && getMaxSizeVirtualMemory() != getRlimInfinity(); - } - - @Override - public String errorMessage() { - return String.format( - Locale.ROOT, - "max size virtual memory [%d] for user [%s] is too low, increase to [unlimited]", - getMaxSizeVirtualMemory(), - BootstrapInfo.getSystemProperties().get("user.name")); + public BootstrapCheckResult check(BootstrapContext context) { + if (getMaxSizeVirtualMemory() != Long.MIN_VALUE && getMaxSizeVirtualMemory() != getRlimInfinity()) { + final String message = String.format( + Locale.ROOT, + "max size virtual memory [%d] for user [%s] is too low, increase to [unlimited]", + getMaxSizeVirtualMemory(), + BootstrapInfo.getSystemProperties().get("user.name")); + return BootstrapCheckResult.failure(message); + } else { + return BootstrapCheckResult.success(); + } } // visible for testing @@ -376,18 +371,18 @@ long getMaxSizeVirtualMemory() { static class MaxFileSizeCheck implements BootstrapCheck { @Override - public 
boolean check() { + public BootstrapCheckResult check(BootstrapContext context) { final long maxFileSize = getMaxFileSize(); - return maxFileSize != Long.MIN_VALUE && maxFileSize != getRlimInfinity(); - } - - @Override - public String errorMessage() { - return String.format( - Locale.ROOT, - "max file size [%d] for user [%s] is too low, increase to [unlimited]", - getMaxFileSize(), - BootstrapInfo.getSystemProperties().get("user.name")); + if (maxFileSize != Long.MIN_VALUE && maxFileSize != getRlimInfinity()) { + final String message = String.format( + Locale.ROOT, + "max file size [%d] for user [%s] is too low, increase to [unlimited]", + getMaxFileSize(), + BootstrapInfo.getSystemProperties().get("user.name")); + return BootstrapCheckResult.failure(message); + } else { + return BootstrapCheckResult.success(); + } } long getRlimInfinity() { @@ -405,17 +400,17 @@ static class MaxMapCountCheck implements BootstrapCheck { private static final long LIMIT = 1 << 18; @Override - public boolean check() { - return getMaxMapCount() != -1 && getMaxMapCount() < LIMIT; - } - - @Override - public String errorMessage() { - return String.format( - Locale.ROOT, - "max virtual memory areas vm.max_map_count [%d] is too low, increase to at least [%d]", - getMaxMapCount(), - LIMIT); + public BootstrapCheckResult check(BootstrapContext context) { + if (getMaxMapCount() != -1 && getMaxMapCount() < LIMIT) { + final String message = String.format( + Locale.ROOT, + "max virtual memory areas vm.max_map_count [%d] is too low, increase to at least [%d]", + getMaxMapCount(), + LIMIT); + return BootstrapCheckResult.failure(message); + } else { + return BootstrapCheckResult.success(); + } } // visible for testing @@ -470,8 +465,16 @@ long parseProcSysVmMaxMapCount(final String procSysVmMaxMapCount) throws NumberF static class ClientJvmCheck implements BootstrapCheck { @Override - public boolean check() { - return getVmName().toLowerCase(Locale.ROOT).contains("client"); + public BootstrapCheckResult check(BootstrapContext context) { + if (getVmName().toLowerCase(Locale.ROOT).contains("client")) { + final String message = String.format( + Locale.ROOT, + "JVM is using the client VM [%s] but should be using a server VM for the best performance", + getVmName()); + return BootstrapCheckResult.failure(message); + } else { + return BootstrapCheckResult.success(); + } } // visible for testing @@ -479,14 +482,6 @@ String getVmName() { return JvmInfo.jvmInfo().getVmName(); } - @Override - public String errorMessage() { - return String.format( - Locale.ROOT, - "JVM is using the client VM [%s] but should be using a server VM for the best performance", - getVmName()); - } - } /** @@ -496,8 +491,17 @@ public String errorMessage() { static class UseSerialGCCheck implements BootstrapCheck { @Override - public boolean check() { - return getUseSerialGC().equals("true"); + public BootstrapCheckResult check(BootstrapContext context) { + if (getUseSerialGC().equals("true")) { + final String message = String.format( + Locale.ROOT, + "JVM is using the serial collector but should not be for the best performance; " + + "either it's the default for the VM [%s] or -XX:+UseSerialGC was explicitly specified", + JvmInfo.jvmInfo().getVmName()); + return BootstrapCheckResult.failure(message); + } else { + return BootstrapCheckResult.success(); + } } // visible for testing @@ -505,15 +509,6 @@ String getUseSerialGC() { return JvmInfo.jvmInfo().useSerialGC(); } - @Override - public String errorMessage() { - return String.format( - Locale.ROOT, - "JVM is 
using the serial collector but should not be for the best performance; " + - "either it's the default for the VM [%s] or -XX:+UseSerialGC was explicitly specified", - JvmInfo.jvmInfo().getVmName()); - } - } /** @@ -521,15 +516,15 @@ public String errorMessage() { */ static class SystemCallFilterCheck implements BootstrapCheck { - private final boolean areSystemCallFiltersEnabled; - - SystemCallFilterCheck(final boolean areSystemCallFiltersEnabled) { - this.areSystemCallFiltersEnabled = areSystemCallFiltersEnabled; - } - @Override - public boolean check() { - return areSystemCallFiltersEnabled && !isSystemCallFilterInstalled(); + public BootstrapCheckResult check(BootstrapContext context) { + if (BootstrapSettings.SYSTEM_CALL_FILTER_SETTING.get(context.settings) && !isSystemCallFilterInstalled()) { + final String message = "system call filters failed to install; " + + "check the logs and fix your configuration or disable system call filters at your own risk"; + return BootstrapCheckResult.failure(message); + } else { + return BootstrapCheckResult.success(); + } } // visible for testing @@ -537,21 +532,21 @@ boolean isSystemCallFilterInstalled() { return Natives.isSystemCallFilterInstalled(); } - @Override - public String errorMessage() { - return "system call filters failed to install; " + - "check the logs and fix your configuration or disable system call filters at your own risk"; - } - } abstract static class MightForkCheck implements BootstrapCheck { @Override - public boolean check() { - return isSystemCallFilterInstalled() && mightFork(); + public BootstrapCheckResult check(BootstrapContext context) { + if (isSystemCallFilterInstalled() && mightFork()) { + return BootstrapCheckResult.failure(message(context)); + } else { + return BootstrapCheckResult.success(); + } } + abstract String message(BootstrapContext context); + // visible for testing boolean isSystemCallFilterInstalled() { return Natives.isSystemCallFilterInstalled(); @@ -581,7 +576,7 @@ String onError() { } @Override - public String errorMessage() { + String message(BootstrapContext context) { return String.format( Locale.ROOT, "OnError [%s] requires forking but is prevented by system call filters ([%s=true]);" + @@ -605,8 +600,7 @@ String onOutOfMemoryError() { return JvmInfo.jvmInfo().onOutOfMemoryError(); } - @Override - public String errorMessage() { + String message(BootstrapContext context) { return String.format( Locale.ROOT, "OnOutOfMemoryError [%s] requires forking but is prevented by system call filters ([%s=true]);" + @@ -623,8 +617,17 @@ public String errorMessage() { static class EarlyAccessCheck implements BootstrapCheck { @Override - public boolean check() { - return "Oracle Corporation".equals(jvmVendor()) && javaVersion().endsWith("-ea"); + public BootstrapCheckResult check(BootstrapContext context) { + final String javaVersion = javaVersion(); + if ("Oracle Corporation".equals(jvmVendor()) && javaVersion.endsWith("-ea")) { + final String message = String.format( + Locale.ROOT, + "Java version [%s] is an early-access build, only use release builds", + javaVersion); + return BootstrapCheckResult.failure(message); + } else { + return BootstrapCheckResult.success(); + } } String jvmVendor() { @@ -635,14 +638,6 @@ String javaVersion() { return Constants.JAVA_VERSION; } - @Override - public String errorMessage() { - return String.format( - Locale.ROOT, - "Java version [%s] is an early-access build, only use release builds", - javaVersion()); - } - } /** @@ -651,7 +646,7 @@ public String errorMessage() { static 
class G1GCCheck implements BootstrapCheck { @Override - public boolean check() { + public BootstrapCheckResult check(BootstrapContext context) { if ("Oracle Corporation".equals(jvmVendor()) && isJava8() && isG1GCEnabled()) { final String jvmVersion = jvmVersion(); // HotSpot versions on Java 8 match this regular expression; note that this changes with Java 9 after JEP-223 @@ -662,10 +657,14 @@ public boolean check() { final int major = Integer.parseInt(matcher.group(1)); final int update = Integer.parseInt(matcher.group(2)); // HotSpot versions for Java 8 have major version 25, the bad versions are all versions prior to update 40 - return major == 25 && update < 40; - } else { - return false; + if (major == 25 && update < 40) { + final String message = String.format( + Locale.ROOT, + "JVM version [%s] can cause data corruption when used with G1GC; upgrade to at least Java 8u40", jvmVersion); + return BootstrapCheckResult.failure(message); + } } + return BootstrapCheckResult.success(); } // visible for testing @@ -691,13 +690,6 @@ boolean isJava8() { return JavaVersion.current().equals(JavaVersion.parse("1.8")); } - @Override - public String errorMessage() { - return String.format( - Locale.ROOT, - "JVM version [%s] can cause data corruption when used with G1GC; upgrade to at least Java 8u40", jvmVersion()); - } - } } diff --git a/core/src/main/java/org/elasticsearch/bootstrap/BootstrapContext.java b/core/src/main/java/org/elasticsearch/bootstrap/BootstrapContext.java new file mode 100644 index 0000000000000..f23d0db6d80bf --- /dev/null +++ b/core/src/main/java/org/elasticsearch/bootstrap/BootstrapContext.java @@ -0,0 +1,41 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.bootstrap; + +import org.elasticsearch.cluster.metadata.MetaData; +import org.elasticsearch.common.settings.Settings; + +/** + * Context that is passed to every bootstrap check to make decisions on. 
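The rework above replaces the boolean check()/errorMessage() pair with a single check(BootstrapContext) method that returns a BootstrapCheckResult, so each check can consult the node settings and report its failure message in one place. A minimal sketch of a custom check written against that interface might look like the following; the class, setting name, and threshold are invented for illustration and are not part of this patch.

package org.elasticsearch.bootstrap;

import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Setting.Property;

// Hypothetical check, for illustration only: fails bootstrap when an invented setting
// asks for more processors than the JVM reports. It shows the new shape of the API:
// the check receives a BootstrapContext and returns a BootstrapCheckResult instead of
// a boolean plus a separate errorMessage().
class ExampleProcessorsCheck implements BootstrapCheck {

    // invented setting, not present in Elasticsearch
    static final Setting<Integer> EXAMPLE_PROCESSORS_SETTING =
            Setting.intSetting("example.processors", 1, Property.NodeScope);

    @Override
    public BootstrapCheckResult check(BootstrapContext context) {
        final int configured = EXAMPLE_PROCESSORS_SETTING.get(context.settings);
        final int available = Runtime.getRuntime().availableProcessors();
        if (configured > available) {
            return BootstrapCheckResult.failure(
                    "example.processors [" + configured + "] is greater than available processors [" + available + "]");
        } else {
            return BootstrapCheckResult.success();
        }
    }
}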
+ */ +public class BootstrapContext { + /** + * The node's settings + */ + public final Settings settings; + /** + * The node's local state metadata loaded on startup + */ + public final MetaData metaData; + + public BootstrapContext(Settings settings, MetaData metaData) { + this.settings = settings; + this.metaData = metaData; + } +} diff --git a/core/src/main/java/org/elasticsearch/bootstrap/ElasticsearchUncaughtExceptionHandler.java b/core/src/main/java/org/elasticsearch/bootstrap/ElasticsearchUncaughtExceptionHandler.java index b1df4f5ccc0ea..c6692cec08b7a 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/ElasticsearchUncaughtExceptionHandler.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/ElasticsearchUncaughtExceptionHandler.java @@ -21,7 +21,6 @@ import org.apache.logging.log4j.Logger; import org.apache.logging.log4j.message.ParameterizedMessage; -import org.apache.lucene.index.MergePolicy; import org.elasticsearch.common.SuppressForbidden; import org.elasticsearch.common.logging.Loggers; @@ -68,11 +67,7 @@ public void uncaughtException(Thread t, Throwable e) { // visible for testing static boolean isFatalUncaught(Throwable e) { - return isFatalCause(e) || (e instanceof MergePolicy.MergeException && isFatalCause(e.getCause())); - } - - private static boolean isFatalCause(Throwable cause) { - return cause instanceof Error; + return e instanceof Error; } // visible for testing diff --git a/core/src/main/java/org/elasticsearch/bootstrap/FilePermissionUtils.java b/core/src/main/java/org/elasticsearch/bootstrap/FilePermissionUtils.java new file mode 100644 index 0000000000000..5355ffb455e59 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/bootstrap/FilePermissionUtils.java @@ -0,0 +1,86 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License.
+ */ + +package org.elasticsearch.bootstrap; + +import org.elasticsearch.common.SuppressForbidden; + +import java.io.FilePermission; +import java.io.IOException; +import java.nio.file.Files; +import java.nio.file.Path; +import java.security.Permissions; + +public class FilePermissionUtils { + + /** no instantiation */ + private FilePermissionUtils() {} + + private static final boolean VERSION_IS_AT_LEAST_JAVA_9 = JavaVersion.current().compareTo(JavaVersion.parse("9")) >= 0; + + /** + * Add access to single file path + * @param policy current policy to add permissions to + * @param path the path itself + * @param permissions set of file permissions to grant to the path + */ + @SuppressForbidden(reason = "only place where creating Java-9 compatible FilePermission objects is possible") + public static void addSingleFilePath(Permissions policy, Path path, String permissions) throws IOException { + policy.add(new FilePermission(path.toString(), permissions)); + if (VERSION_IS_AT_LEAST_JAVA_9 && Files.exists(path)) { + // Java 9 FilePermission model requires this due to the removal of pathname canonicalization, + // see also https://github.com/elastic/elasticsearch/issues/21534 + Path realPath = path.toRealPath(); + if (path.toString().equals(realPath.toString()) == false) { + policy.add(new FilePermission(realPath.toString(), permissions)); + } + } + } + + /** + * Add access to path (and all files underneath it); this also creates the directory if it does not exist. + * + * @param policy current policy to add permissions to + * @param configurationName the configuration name associated with the path (for error messages only) + * @param path the path itself + * @param permissions set of file permissions to grant to the path + */ + @SuppressForbidden(reason = "only place where creating Java-9 compatible FilePermission objects is possible") + public static void addDirectoryPath(Permissions policy, String configurationName, Path path, String permissions) throws IOException { + // paths may not exist yet, this also checks accessibility + try { + Security.ensureDirectoryExists(path); + } catch (IOException e) { + throw new IllegalStateException("Unable to access '" + configurationName + "' (" + path + ")", e); + } + + // add each path twice: once for itself, again for files underneath it + policy.add(new FilePermission(path.toString(), permissions)); + policy.add(new FilePermission(path.toString() + path.getFileSystem().getSeparator() + "-", permissions)); + if (VERSION_IS_AT_LEAST_JAVA_9) { + // Java 9 FilePermission model requires this due to the removal of pathname canonicalization, + // see also https://github.com/elastic/elasticsearch/issues/21534 + Path realPath = path.toRealPath(); + if (path.toString().equals(realPath.toString()) == false) { + policy.add(new FilePermission(realPath.toString(), permissions)); + policy.add(new FilePermission(realPath.toString() + realPath.getFileSystem().getSeparator() + "-", permissions)); + } + } + } +} diff --git a/core/src/main/java/org/elasticsearch/bootstrap/Security.java b/core/src/main/java/org/elasticsearch/bootstrap/Security.java index 2b0812c557789..a1ce20a0e27c8 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/Security.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/Security.java @@ -19,18 +19,18 @@ package org.elasticsearch.bootstrap; +import org.elasticsearch.Build; import org.elasticsearch.SecureSM; +import org.elasticsearch.Version; import org.elasticsearch.common.SuppressForbidden; import 
org.elasticsearch.common.io.PathUtils; import org.elasticsearch.common.network.NetworkModule; -import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; import org.elasticsearch.http.HttpTransportSettings; import org.elasticsearch.plugins.PluginInfo; import org.elasticsearch.transport.TcpTransport; -import java.io.FilePermission; import java.io.IOException; import java.net.SocketPermission; import java.net.URISyntaxException; @@ -45,13 +45,18 @@ import java.security.Permissions; import java.security.Policy; import java.security.URIParameter; +import java.util.ArrayList; import java.util.Collections; import java.util.HashMap; import java.util.HashSet; import java.util.LinkedHashSet; +import java.util.List; import java.util.Map; import java.util.Set; +import static org.elasticsearch.bootstrap.FilePermissionUtils.addDirectoryPath; +import static org.elasticsearch.bootstrap.FilePermissionUtils.addSingleFilePath; + /** * Initializes SecurityManager with necessary permissions. *
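For context on the FilePermissionUtils class introduced above: Security.java now routes file permission grants through its two helpers so that the Java 9 real-path handling lives in one place. A hypothetical standalone caller might look like the sketch below; the paths and the "path.data" label are made up for illustration, and note that addDirectoryPath creates the directory if it does not yet exist.

package org.elasticsearch.bootstrap;

import java.io.IOException;
import java.nio.file.Paths;
import java.security.Permissions;

import static org.elasticsearch.bootstrap.FilePermissionUtils.addDirectoryPath;
import static org.elasticsearch.bootstrap.FilePermissionUtils.addSingleFilePath;

// Hypothetical caller, for illustration only (not part of this patch)
public class FilePermissionUtilsExample {
    public static void main(String[] args) throws IOException {
        final Permissions policy = new Permissions();
        // read-only grant on a single file (plus its symlink-resolved form on Java 9+)
        addSingleFilePath(policy, Paths.get("/etc/elasticsearch/jvm.options"), "read,readlink");
        // read-write grant on a directory and everything beneath it; "path.data" is only used in error messages
        addDirectoryPath(policy, "path.data", Paths.get("/var/lib/elasticsearch"), "read,readlink,write,delete");
    }
}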
@@ -190,6 +195,7 @@ static Map getPluginPermissions(Environment environment) throws I @SuppressForbidden(reason = "accesses fully qualified URLs to configure security") static Policy readPolicy(URL policyFile, Set codebases) { try { + List propertiesSet = new ArrayList<>(); try { // set codebase properties for (URL url : codebases) { @@ -197,7 +203,22 @@ static Policy readPolicy(URL policyFile, Set codebases) { if (shortName.endsWith(".jar") == false) { continue; // tests :( } - String previous = System.setProperty("codebase." + shortName, url.toString()); + String property = "codebase." + shortName; + if (shortName.startsWith("elasticsearch-rest-client")) { + // The rest client is currently the only example where we have an elasticsearch built artifact + // which needs special permissions in policy files when used. This temporary solution is to + // pass in an extra system property that omits the -version.jar suffix the other properties have. + // That allows the snapshots to reference snapshot builds of the client, and release builds to + // referenced release builds of the client, all with the same grant statements. + final String esVersion = Version.CURRENT + (Build.CURRENT.isSnapshot() ? "-SNAPSHOT" : ""); + final int index = property.indexOf("-" + esVersion + ".jar"); + assert index >= 0; + String restClientAlias = property.substring(0, index); + propertiesSet.add(restClientAlias); + System.setProperty(restClientAlias, url.toString()); + } + propertiesSet.add(property); + String previous = System.setProperty(property, url.toString()); if (previous != null) { throw new IllegalStateException("codebase property already set: " + shortName + "->" + previous); } @@ -205,12 +226,8 @@ static Policy readPolicy(URL policyFile, Set codebases) { return Policy.getInstance("JavaPolicy", new URIParameter(policyFile.toURI())); } finally { // clear codebase properties - for (URL url : codebases) { - String shortName = PathUtils.get(url.toURI()).getFileName().toString(); - if (shortName.endsWith(".jar") == false) { - continue; // tests :( - } - System.clearProperty("codebase." + shortName); + for (String property : propertiesSet) { + System.clearProperty(property); } } } catch (NoSuchAlgorithmException | URISyntaxException e) { @@ -240,10 +257,10 @@ static void addClasspathPermissions(Permissions policy) throws IOException { throw new RuntimeException(e); } // resource itself - policy.add(new FilePermission(path.toString(), "read,readlink")); - // classes underneath if (Files.isDirectory(path)) { - policy.add(new FilePermission(path.toString() + path.getFileSystem().getSeparator() + "-", "read,readlink")); + addDirectoryPath(policy, "class.path", path, "read,readlink"); + } else { + addSingleFilePath(policy, path, "read,readlink"); } } } @@ -251,22 +268,23 @@ static void addClasspathPermissions(Permissions policy) throws IOException { /** * Adds access to all configurable paths. 
*/ - static void addFilePermissions(Permissions policy, Environment environment) { + static void addFilePermissions(Permissions policy, Environment environment) throws IOException { // read-only dirs - addPath(policy, Environment.PATH_HOME_SETTING.getKey(), environment.binFile(), "read,readlink"); - addPath(policy, Environment.PATH_HOME_SETTING.getKey(), environment.libFile(), "read,readlink"); - addPath(policy, Environment.PATH_HOME_SETTING.getKey(), environment.modulesFile(), "read,readlink"); - addPath(policy, Environment.PATH_HOME_SETTING.getKey(), environment.pluginsFile(), "read,readlink"); - addPath(policy, "path.conf'", environment.configFile(), "read,readlink"); + addDirectoryPath(policy, Environment.PATH_HOME_SETTING.getKey(), environment.binFile(), "read,readlink"); + addDirectoryPath(policy, Environment.PATH_HOME_SETTING.getKey(), environment.libFile(), "read,readlink"); + addDirectoryPath(policy, Environment.PATH_HOME_SETTING.getKey(), environment.modulesFile(), "read,readlink"); + addDirectoryPath(policy, Environment.PATH_HOME_SETTING.getKey(), environment.pluginsFile(), "read,readlink"); + addDirectoryPath(policy, "path.conf'", environment.configFile(), "read,readlink"); // read-write dirs - addPath(policy, "java.io.tmpdir", environment.tmpFile(), "read,readlink,write,delete"); - addPath(policy, Environment.PATH_LOGS_SETTING.getKey(), environment.logsFile(), "read,readlink,write,delete"); + addDirectoryPath(policy, "java.io.tmpdir", environment.tmpFile(), "read,readlink,write,delete"); + addDirectoryPath(policy, Environment.PATH_LOGS_SETTING.getKey(), environment.logsFile(), "read,readlink,write,delete"); if (environment.sharedDataFile() != null) { - addPath(policy, Environment.PATH_SHARED_DATA_SETTING.getKey(), environment.sharedDataFile(), "read,readlink,write,delete"); + addDirectoryPath(policy, Environment.PATH_SHARED_DATA_SETTING.getKey(), environment.sharedDataFile(), + "read,readlink,write,delete"); } final Set dataFilesPaths = new HashSet<>(); for (Path path : environment.dataFiles()) { - addPath(policy, Environment.PATH_DATA_SETTING.getKey(), path, "read,readlink,write,delete"); + addDirectoryPath(policy, Environment.PATH_DATA_SETTING.getKey(), path, "read,readlink,write,delete"); /* * We have to do this after adding the path because a side effect of that is that the directory is created; the Path#toRealPath * invocation will fail if the directory does not already exist. We use Path#toRealPath to follow symlinks and handle issues @@ -282,11 +300,11 @@ static void addFilePermissions(Permissions policy, Environment environment) { } } for (Path path : environment.repoFiles()) { - addPath(policy, Environment.PATH_REPO_SETTING.getKey(), path, "read,readlink,write,delete"); + addDirectoryPath(policy, Environment.PATH_REPO_SETTING.getKey(), path, "read,readlink,write,delete"); } if (environment.pidFile() != null) { // we just need permission to remove the file if its elsewhere. - policy.add(new FilePermission(environment.pidFile().toString(), "delete")); + addSingleFilePath(policy, environment.pidFile(), "delete"); } } @@ -367,27 +385,6 @@ private static void addSocketPermissionForPortRange(final Permissions policy, fi policy.add(new SocketPermission("*:" + portRange, "listen,resolve")); } - /** - * Add access to path (and all files underneath it); this also creates the directory if it does not exist. 
- * - * @param policy current policy to add permissions to - * @param configurationName the configuration name associated with the path (for error messages only) - * @param path the path itself - * @param permissions set of file permissions to grant to the path - */ - static void addPath(Permissions policy, String configurationName, Path path, String permissions) { - // paths may not exist yet, this also checks accessibility - try { - ensureDirectoryExists(path); - } catch (IOException e) { - throw new IllegalStateException("Unable to access '" + configurationName + "' (" + path + ")", e); - } - - // add each path twice: once for itself, again for files underneath it - policy.add(new FilePermission(path.toString(), permissions)); - policy.add(new FilePermission(path.toString() + path.getFileSystem().getSeparator() + "-", permissions)); - } - /** * Ensures configured directory {@code path} exists. * @throws IOException if {@code path} exists, but is not a directory, not accessible, or broken symbolic link. diff --git a/core/src/main/java/org/elasticsearch/bootstrap/Spawner.java b/core/src/main/java/org/elasticsearch/bootstrap/Spawner.java index f1616ba0eea09..0b9913f7f06a4 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/Spawner.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/Spawner.java @@ -21,6 +21,7 @@ import org.apache.lucene.util.Constants; import org.apache.lucene.util.IOUtils; +import org.elasticsearch.common.io.FileSystemUtils; import org.elasticsearch.env.Environment; import org.elasticsearch.plugins.Platforms; import org.elasticsearch.plugins.PluginInfo; @@ -73,6 +74,9 @@ void spawnNativePluginControllers(final Environment environment) throws IOExcept */ try (DirectoryStream stream = Files.newDirectoryStream(pluginsFile)) { for (final Path plugin : stream) { + if (FileSystemUtils.isDesktopServicesStore(plugin)) { + continue; + } final PluginInfo info = PluginInfo.readFromProperties(plugin); final Path spawnPath = Platforms.nativeControllerPath(plugin); if (!Files.isRegularFile(spawnPath)) { diff --git a/core/src/main/java/org/elasticsearch/bootstrap/SystemCallFilter.java b/core/src/main/java/org/elasticsearch/bootstrap/SystemCallFilter.java index ecb313780f671..73814a4311af0 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/SystemCallFilter.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/SystemCallFilter.java @@ -199,7 +199,6 @@ static SockFilter BPF_JUMP(int code, int k, int jt, int jf) { static final int SECCOMP_RET_ALLOW = 0x7FFF0000; // some errno constants for error checking/handling - static final int EPERM = 0x01; static final int EACCES = 0x0D; static final int EFAULT = 0x0E; static final int EINVAL = 0x16; @@ -272,27 +271,6 @@ private static int linuxImpl() { "with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in"); } - // pure paranoia: - - // check that unimplemented syscalls actually return ENOSYS - // you never know (e.g. 
https://code.google.com/p/chromium/issues/detail?id=439795) - if (linux_syscall(999) >= 0) { - throw new UnsupportedOperationException("seccomp unavailable: your kernel is buggy and you should upgrade"); - } - - switch (Native.getLastError()) { - case ENOSYS: - break; // ok - case EPERM: - // NOT ok, but likely a docker container - if (logger.isDebugEnabled()) { - logger.debug("syscall(BOGUS) bogusly gets EPERM instead of ENOSYS"); - } - break; - default: - throw new UnsupportedOperationException("seccomp unavailable: your kernel is buggy and you should upgrade"); - } - // try to check system calls really are who they claim // you never know (e.g. https://chromium.googlesource.com/chromium/src.git/+/master/sandbox/linux/seccomp-bpf/sandbox_bpf.cc#57) final int bogusArg = 0xf7a46a5c; diff --git a/core/src/main/java/org/elasticsearch/cli/UserException.java b/core/src/main/java/org/elasticsearch/cli/UserException.java index a7f88ccab4af8..4749b1b87b7aa 100644 --- a/core/src/main/java/org/elasticsearch/cli/UserException.java +++ b/core/src/main/java/org/elasticsearch/cli/UserException.java @@ -32,4 +32,17 @@ public UserException(int exitCode, String msg) { super(msg); this.exitCode = exitCode; } + + /** + * Constructs a new user exception with specified exit status, message, and underlying cause. + * + * @param exitCode the exit code + * @param msg the message + * @param cause the underlying cause + */ + public UserException(final int exitCode, final String msg, final Throwable cause) { + super(msg, cause); + this.exitCode = exitCode; + } + } diff --git a/core/src/main/java/org/elasticsearch/client/IndicesAdminClient.java b/core/src/main/java/org/elasticsearch/client/IndicesAdminClient.java index b254039910c01..81de57f91afee 100644 --- a/core/src/main/java/org/elasticsearch/client/IndicesAdminClient.java +++ b/core/src/main/java/org/elasticsearch/client/IndicesAdminClient.java @@ -50,9 +50,6 @@ import org.elasticsearch.action.admin.indices.exists.types.TypesExistsRequest; import org.elasticsearch.action.admin.indices.exists.types.TypesExistsRequestBuilder; import org.elasticsearch.action.admin.indices.exists.types.TypesExistsResponse; -import org.elasticsearch.action.fieldcaps.FieldCapabilitiesRequest; -import org.elasticsearch.action.fieldcaps.FieldCapabilitiesRequestBuilder; -import org.elasticsearch.action.fieldcaps.FieldCapabilitiesResponse; import org.elasticsearch.action.admin.indices.flush.FlushRequest; import org.elasticsearch.action.admin.indices.flush.FlushRequestBuilder; import org.elasticsearch.action.admin.indices.flush.FlushResponse; @@ -98,9 +95,9 @@ import org.elasticsearch.action.admin.indices.shards.IndicesShardStoreRequestBuilder; import org.elasticsearch.action.admin.indices.shards.IndicesShardStoresRequest; import org.elasticsearch.action.admin.indices.shards.IndicesShardStoresResponse; -import org.elasticsearch.action.admin.indices.shrink.ShrinkRequest; -import org.elasticsearch.action.admin.indices.shrink.ShrinkRequestBuilder; -import org.elasticsearch.action.admin.indices.shrink.ShrinkResponse; +import org.elasticsearch.action.admin.indices.shrink.ResizeRequest; +import org.elasticsearch.action.admin.indices.shrink.ResizeRequestBuilder; +import org.elasticsearch.action.admin.indices.shrink.ResizeResponse; import org.elasticsearch.action.admin.indices.stats.IndicesStatsRequest; import org.elasticsearch.action.admin.indices.stats.IndicesStatsRequestBuilder; import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse; @@ -792,19 +789,19 @@ public interface 
IndicesAdminClient extends ElasticsearchClient { GetSettingsRequestBuilder prepareGetSettings(String... indices); /** - * Shrinks an index using an explicit request allowing to specify the settings, mappings and aliases of the target index of the index. + * Resize an index using an explicit request allowing to specify the settings, mappings and aliases of the target index of the index. */ - ShrinkRequestBuilder prepareShrinkIndex(String sourceIndex, String targetIndex); + ResizeRequestBuilder prepareResizeIndex(String sourceIndex, String targetIndex); /** - * Shrinks an index using an explicit request allowing to specify the settings, mappings and aliases of the target index of the index. + * Resize an index using an explicit request allowing to specify the settings, mappings and aliases of the target index of the index. */ - ActionFuture shrinkIndex(ShrinkRequest request); + ActionFuture resizeIndex(ResizeRequest request); /** * Shrinks an index using an explicit request allowing to specify the settings, mappings and aliases of the target index of the index. */ - void shrinkIndex(ShrinkRequest request, ActionListener listener); + void resizeIndex(ResizeRequest request, ActionListener listener); /** * Swaps the index pointed to by an alias given all provided conditions are satisfied diff --git a/core/src/main/java/org/elasticsearch/client/support/AbstractClient.java b/core/src/main/java/org/elasticsearch/client/support/AbstractClient.java index c2b813d3d659e..c0da35a307981 100644 --- a/core/src/main/java/org/elasticsearch/client/support/AbstractClient.java +++ b/core/src/main/java/org/elasticsearch/client/support/AbstractClient.java @@ -232,10 +232,10 @@ import org.elasticsearch.action.admin.indices.shards.IndicesShardStoresAction; import org.elasticsearch.action.admin.indices.shards.IndicesShardStoresRequest; import org.elasticsearch.action.admin.indices.shards.IndicesShardStoresResponse; -import org.elasticsearch.action.admin.indices.shrink.ShrinkAction; -import org.elasticsearch.action.admin.indices.shrink.ShrinkRequest; -import org.elasticsearch.action.admin.indices.shrink.ShrinkRequestBuilder; -import org.elasticsearch.action.admin.indices.shrink.ShrinkResponse; +import org.elasticsearch.action.admin.indices.shrink.ResizeAction; +import org.elasticsearch.action.admin.indices.shrink.ResizeRequest; +import org.elasticsearch.action.admin.indices.shrink.ResizeRequestBuilder; +import org.elasticsearch.action.admin.indices.shrink.ResizeResponse; import org.elasticsearch.action.admin.indices.stats.IndicesStatsAction; import org.elasticsearch.action.admin.indices.stats.IndicesStatsRequest; import org.elasticsearch.action.admin.indices.stats.IndicesStatsRequestBuilder; @@ -1730,19 +1730,19 @@ public GetSettingsRequestBuilder prepareGetSettings(String... 
indices) { } @Override - public ShrinkRequestBuilder prepareShrinkIndex(String sourceIndex, String targetIndex) { - return new ShrinkRequestBuilder(this, ShrinkAction.INSTANCE).setSourceIndex(sourceIndex) + public ResizeRequestBuilder prepareResizeIndex(String sourceIndex, String targetIndex) { + return new ResizeRequestBuilder(this, ResizeAction.INSTANCE).setSourceIndex(sourceIndex) .setTargetIndex(new CreateIndexRequest(targetIndex)); } @Override - public ActionFuture shrinkIndex(ShrinkRequest request) { - return execute(ShrinkAction.INSTANCE, request); + public ActionFuture resizeIndex(ResizeRequest request) { + return execute(ResizeAction.INSTANCE, request); } @Override - public void shrinkIndex(ShrinkRequest request, ActionListener listener) { - execute(ShrinkAction.INSTANCE, request, listener); + public void resizeIndex(ResizeRequest request, ActionListener listener) { + execute(ResizeAction.INSTANCE, request, listener); } @Override diff --git a/core/src/main/java/org/elasticsearch/client/transport/TransportProxyClient.java b/core/src/main/java/org/elasticsearch/client/transport/TransportProxyClient.java index 5436bef172a47..e07fab0092d0e 100644 --- a/core/src/main/java/org/elasticsearch/client/transport/TransportProxyClient.java +++ b/core/src/main/java/org/elasticsearch/client/transport/TransportProxyClient.java @@ -56,6 +56,7 @@ final class TransportProxyClient { ActionRequestBuilder> void execute(final Action action, final Request request, ActionListener listener) { final TransportActionNodeProxy proxy = proxies.get(action); + assert proxy != null : "no proxy found for action: " + action; nodesService.execute((n, l) -> proxy.execute(n, request, l), listener); } } diff --git a/core/src/main/java/org/elasticsearch/cluster/ClusterModule.java b/core/src/main/java/org/elasticsearch/cluster/ClusterModule.java index a5b7f422a9322..a4bb6a559254c 100644 --- a/core/src/main/java/org/elasticsearch/cluster/ClusterModule.java +++ b/core/src/main/java/org/elasticsearch/cluster/ClusterModule.java @@ -50,6 +50,7 @@ import org.elasticsearch.cluster.routing.allocation.decider.NodeVersionAllocationDecider; import org.elasticsearch.cluster.routing.allocation.decider.RebalanceOnlyWhenActiveAllocationDecider; import org.elasticsearch.cluster.routing.allocation.decider.ReplicaAfterPrimaryActiveAllocationDecider; +import org.elasticsearch.cluster.routing.allocation.decider.ResizeAllocationDecider; import org.elasticsearch.cluster.routing.allocation.decider.SameShardAllocationDecider; import org.elasticsearch.cluster.routing.allocation.decider.ShardsLimitAllocationDecider; import org.elasticsearch.cluster.routing.allocation.decider.SnapshotInProgressAllocationDecider; @@ -182,6 +183,7 @@ public static Collection createAllocationDeciders(Settings se // collect deciders by class so that we can detect duplicates Map deciders = new LinkedHashMap<>(); addAllocationDecider(deciders, new MaxRetryAllocationDecider(settings)); + addAllocationDecider(deciders, new ResizeAllocationDecider(settings)); addAllocationDecider(deciders, new ReplicaAfterPrimaryActiveAllocationDecider(settings)); addAllocationDecider(deciders, new RebalanceOnlyWhenActiveAllocationDecider(settings)); addAllocationDecider(deciders, new ClusterRebalanceAllocationDecider(settings, clusterSettings)); diff --git a/core/src/main/java/org/elasticsearch/cluster/ClusterName.java b/core/src/main/java/org/elasticsearch/cluster/ClusterName.java index 36676300954e7..ab2efc6061e4e 100644 --- a/core/src/main/java/org/elasticsearch/cluster/ClusterName.java 
+++ b/core/src/main/java/org/elasticsearch/cluster/ClusterName.java @@ -34,6 +34,9 @@ public class ClusterName implements Writeable { if (s.isEmpty()) { throw new IllegalArgumentException("[cluster.name] must not be empty"); } + if (s.contains(":")) { + throw new IllegalArgumentException("[cluster.name] must not contain ':'"); + } return new ClusterName(s); }, Setting.Property.NodeScope); diff --git a/core/src/main/java/org/elasticsearch/cluster/ClusterState.java b/core/src/main/java/org/elasticsearch/cluster/ClusterState.java index 3f6409cc5dfd0..30c8df07ec1a5 100644 --- a/core/src/main/java/org/elasticsearch/cluster/ClusterState.java +++ b/core/src/main/java/org/elasticsearch/cluster/ClusterState.java @@ -50,7 +50,6 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentHelper; @@ -91,7 +90,7 @@ public class ClusterState implements ToXContentFragment, Diffable public static final ClusterState EMPTY_STATE = builder(ClusterName.CLUSTER_NAME_SETTING.getDefault(Settings.EMPTY)).build(); - public interface Custom extends NamedDiffable, ToXContent { + public interface Custom extends NamedDiffable, ToXContentFragment { /** * Returns true iff this {@link Custom} is private to the cluster and should never be send to a client. diff --git a/core/src/main/java/org/elasticsearch/cluster/ack/OpenIndexClusterStateUpdateResponse.java b/core/src/main/java/org/elasticsearch/cluster/ack/OpenIndexClusterStateUpdateResponse.java new file mode 100644 index 0000000000000..33089fa009cdd --- /dev/null +++ b/core/src/main/java/org/elasticsearch/cluster/ack/OpenIndexClusterStateUpdateResponse.java @@ -0,0 +1,39 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.cluster.ack; + +/** + * A cluster state update response with specific fields for index opening. + */ +public class OpenIndexClusterStateUpdateResponse extends ClusterStateUpdateResponse { + + private final boolean shardsAcknowledged; + + public OpenIndexClusterStateUpdateResponse(boolean acknowledged, boolean shardsAcknowledged) { + super(acknowledged); + this.shardsAcknowledged = shardsAcknowledged; + } + + /** + * Returns whether the requisite number of shard copies started before the completion of the operation. 
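 * <p>
 * For example, an open-index call can come back with {@code acknowledged == true} but
 * {@code shardsAcknowledged == false} when the cluster state update was applied but the wait for
 * the requested number of active shard copies timed out before they all started.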
+ */ + public boolean isShardsAcknowledged() { + return shardsAcknowledged; + } +} diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/IndexGraveyard.java b/core/src/main/java/org/elasticsearch/cluster/metadata/IndexGraveyard.java index d60617ea6423a..aa273dd92197d 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/IndexGraveyard.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/IndexGraveyard.java @@ -30,7 +30,7 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.ContextParser; import org.elasticsearch.common.xcontent.ObjectParser; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.Index; @@ -347,7 +347,7 @@ public String getWriteableName() { /** * An individual tombstone entry for representing a deleted index. */ - public static final class Tombstone implements ToXContent, Writeable { + public static final class Tombstone implements ToXContentObject, Writeable { private static final String INDEX_KEY = "index"; private static final String DELETE_DATE_IN_MILLIS_KEY = "delete_date_in_millis"; diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java b/core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java index 56a8c1a4b8b1a..3d14670e52771 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java @@ -44,7 +44,6 @@ import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.settings.loader.SettingsLoader; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -66,6 +65,7 @@ import java.util.EnumSet; import java.util.HashMap; import java.util.HashSet; +import java.util.Iterator; import java.util.Locale; import java.util.Map; import java.util.Set; @@ -196,6 +196,24 @@ static Setting buildNumberOfShardsSetting() { public static final Setting INDEX_ROUTING_PARTITION_SIZE_SETTING = Setting.intSetting(SETTING_ROUTING_PARTITION_SIZE, 1, 1, Property.IndexScope); + public static final Setting INDEX_NUMBER_OF_ROUTING_SHARDS_SETTING = + Setting.intSetting("index.number_of_routing_shards", INDEX_NUMBER_OF_SHARDS_SETTING, 1, new Setting.Validator() { + @Override + public void validate(Integer numRoutingShards, Map, Integer> settings) { + Integer numShards = settings.get(INDEX_NUMBER_OF_SHARDS_SETTING); + if (numRoutingShards < numShards) { + throw new IllegalArgumentException("index.number_of_routing_shards [" + numRoutingShards + + "] must be >= index.number_of_shards [" + numShards + "]"); + } + getRoutingFactor(numShards, numRoutingShards); + } + + @Override + public Iterator> settings() { + return Collections.singleton(INDEX_NUMBER_OF_SHARDS_SETTING).iterator(); + } + }, Property.IndexScope); + public static final String SETTING_AUTO_EXPAND_REPLICAS = "index.auto_expand_replicas"; public static final Setting INDEX_AUTO_EXPAND_REPLICAS_SETTING = AutoExpandReplicas.SETTING; public static final String SETTING_READ_ONLY = "index.blocks.read_only"; @@ -241,14 +259,18 @@ static Setting buildNumberOfShardsSetting() { 
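The validator registered for index.number_of_routing_shards above enforces two things: the value must be at least index.number_of_shards, and the pair must yield a clean routing factor. A minimal standalone sketch of that rule follows; the class and method names are illustrative and not part of the change, and the real setting uses the Setting/Validator plumbing shown above rather than a bare static method.

// Sketch only: mirrors the constraint enforced by the INDEX_NUMBER_OF_ROUTING_SHARDS_SETTING validator.
final class RoutingShardsRuleSketch {
    static void check(int numShards, int numRoutingShards) {
        if (numRoutingShards < numShards) {
            throw new IllegalArgumentException("index.number_of_routing_shards [" + numRoutingShards
                    + "] must be >= index.number_of_shards [" + numShards + "]");
        }
        if (numRoutingShards % numShards != 0) {
            throw new IllegalArgumentException("index.number_of_routing_shards [" + numRoutingShards
                    + "] must be a multiple of index.number_of_shards [" + numShards + "]");
        }
    }

    public static void main(String[] args) {
        check(2, 8);  // accepted: routing factor 4
        check(5, 5);  // accepted: routing factor 1 (the setting defaults to index.number_of_shards)
        // check(4, 6) would be rejected: 6 is not a multiple of 4
    }
}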
public static final String INDEX_ROUTING_REQUIRE_GROUP_PREFIX = "index.routing.allocation.require"; public static final String INDEX_ROUTING_INCLUDE_GROUP_PREFIX = "index.routing.allocation.include"; public static final String INDEX_ROUTING_EXCLUDE_GROUP_PREFIX = "index.routing.allocation.exclude"; - public static final Setting INDEX_ROUTING_REQUIRE_GROUP_SETTING = - Setting.groupSetting(INDEX_ROUTING_REQUIRE_GROUP_PREFIX + ".", IP_VALIDATOR, Property.Dynamic, Property.IndexScope); - public static final Setting INDEX_ROUTING_INCLUDE_GROUP_SETTING = - Setting.groupSetting(INDEX_ROUTING_INCLUDE_GROUP_PREFIX + ".", IP_VALIDATOR, Property.Dynamic, Property.IndexScope); - public static final Setting INDEX_ROUTING_EXCLUDE_GROUP_SETTING = - Setting.groupSetting(INDEX_ROUTING_EXCLUDE_GROUP_PREFIX + ".", IP_VALIDATOR, Property.Dynamic, Property.IndexScope); - public static final Setting INDEX_ROUTING_INITIAL_RECOVERY_GROUP_SETTING = - Setting.groupSetting("index.routing.allocation.initial_recovery."); // this is only setable internally not a registered setting!! + public static final Setting.AffixSetting INDEX_ROUTING_REQUIRE_GROUP_SETTING = + Setting.prefixKeySetting(INDEX_ROUTING_REQUIRE_GROUP_PREFIX + ".", (key) -> + Setting.simpleString(key, (value, map) -> IP_VALIDATOR.accept(key, value), Property.Dynamic, Property.IndexScope)); + public static final Setting.AffixSetting INDEX_ROUTING_INCLUDE_GROUP_SETTING = + Setting.prefixKeySetting(INDEX_ROUTING_INCLUDE_GROUP_PREFIX + ".", (key) -> + Setting.simpleString(key, (value, map) -> IP_VALIDATOR.accept(key, value), Property.Dynamic, Property.IndexScope)); + public static final Setting.AffixSetting INDEX_ROUTING_EXCLUDE_GROUP_SETTING = + Setting.prefixKeySetting(INDEX_ROUTING_EXCLUDE_GROUP_PREFIX + ".", (key) -> + Setting.simpleString(key, (value, map) -> IP_VALIDATOR.accept(key, value), Property.Dynamic, Property.IndexScope)); + public static final Setting.AffixSetting INDEX_ROUTING_INITIAL_RECOVERY_GROUP_SETTING = + Setting.prefixKeySetting("index.routing.allocation.initial_recovery.", key -> Setting.simpleString(key)); + // this is only setable internally not a registered setting!! /** * The number of active shard copies to check for before proceeding with a write operation. @@ -450,14 +472,22 @@ public MappingMetaData mapping(String mappingType) { return mappings.get(mappingType); } + // we keep the shrink settings for BWC - this can be removed in 8.0 + // we can't remove in 7 since this setting might be baked into an index coming in via a full cluster restart from 6.0 public static final String INDEX_SHRINK_SOURCE_UUID_KEY = "index.shrink.source.uuid"; public static final String INDEX_SHRINK_SOURCE_NAME_KEY = "index.shrink.source.name"; + public static final String INDEX_RESIZE_SOURCE_UUID_KEY = "index.resize.source.uuid"; + public static final String INDEX_RESIZE_SOURCE_NAME_KEY = "index.resize.source.name"; public static final Setting INDEX_SHRINK_SOURCE_UUID = Setting.simpleString(INDEX_SHRINK_SOURCE_UUID_KEY); public static final Setting INDEX_SHRINK_SOURCE_NAME = Setting.simpleString(INDEX_SHRINK_SOURCE_NAME_KEY); - - - public Index getMergeSourceIndex() { - return INDEX_SHRINK_SOURCE_UUID.exists(settings) ? 
new Index(INDEX_SHRINK_SOURCE_NAME.get(settings), INDEX_SHRINK_SOURCE_UUID.get(settings)) : null; + public static final Setting INDEX_RESIZE_SOURCE_UUID = Setting.simpleString(INDEX_RESIZE_SOURCE_UUID_KEY, + INDEX_SHRINK_SOURCE_UUID); + public static final Setting INDEX_RESIZE_SOURCE_NAME = Setting.simpleString(INDEX_RESIZE_SOURCE_NAME_KEY, + INDEX_SHRINK_SOURCE_NAME); + + public Index getResizeSourceIndex() { + return INDEX_RESIZE_SOURCE_UUID.exists(settings) || INDEX_SHRINK_SOURCE_UUID.exists(settings) + ? new Index(INDEX_RESIZE_SOURCE_NAME.get(settings), INDEX_RESIZE_SOURCE_UUID.get(settings)) : null; } /** @@ -1003,7 +1033,6 @@ public IndexMetaData build() { throw new IllegalArgumentException("routing partition size [" + routingPartitionSize + "] should be a positive number" + " less than the number of shards [" + getRoutingNumShards() + "] for [" + index + "]"); } - // fill missing slots in inSyncAllocationIds with empty set if needed and make all entries immutable ImmutableOpenIntMap.Builder> filledInSyncAllocationIds = ImmutableOpenIntMap.builder(); for (int i = 0; i < numberOfShards; i++) { @@ -1013,28 +1042,28 @@ public IndexMetaData build() { filledInSyncAllocationIds.put(i, Collections.emptySet()); } } - final Map requireMap = INDEX_ROUTING_REQUIRE_GROUP_SETTING.get(settings).getAsMap(); + final Map requireMap = INDEX_ROUTING_REQUIRE_GROUP_SETTING.getAsMap(settings); final DiscoveryNodeFilters requireFilters; if (requireMap.isEmpty()) { requireFilters = null; } else { requireFilters = DiscoveryNodeFilters.buildFromKeyValue(AND, requireMap); } - Map includeMap = INDEX_ROUTING_INCLUDE_GROUP_SETTING.get(settings).getAsMap(); + Map includeMap = INDEX_ROUTING_INCLUDE_GROUP_SETTING.getAsMap(settings); final DiscoveryNodeFilters includeFilters; if (includeMap.isEmpty()) { includeFilters = null; } else { includeFilters = DiscoveryNodeFilters.buildFromKeyValue(OR, includeMap); } - Map excludeMap = INDEX_ROUTING_EXCLUDE_GROUP_SETTING.get(settings).getAsMap(); + Map excludeMap = INDEX_ROUTING_EXCLUDE_GROUP_SETTING.getAsMap(settings); final DiscoveryNodeFilters excludeFilters; if (excludeMap.isEmpty()) { excludeFilters = null; } else { excludeFilters = DiscoveryNodeFilters.buildFromKeyValue(OR, excludeMap); } - Map initialRecoveryMap = INDEX_ROUTING_INITIAL_RECOVERY_GROUP_SETTING.get(settings).getAsMap(); + Map initialRecoveryMap = INDEX_ROUTING_INITIAL_RECOVERY_GROUP_SETTING.getAsMap(settings); final DiscoveryNodeFilters initialRecoveryFilters; if (initialRecoveryMap.isEmpty()) { initialRecoveryFilters = null; @@ -1075,9 +1104,7 @@ public static void toXContent(IndexMetaData indexMetaData, XContentBuilder build boolean binary = params.paramAsBoolean("binary", false); builder.startObject(KEY_SETTINGS); - for (Map.Entry entry : indexMetaData.getSettings().getAsMap().entrySet()) { - builder.field(entry.getKey(), entry.getValue()); - } + indexMetaData.getSettings().toXContent(builder, new MapParams(Collections.singletonMap("flat_settings", "true"))); builder.endObject(); builder.startArray(KEY_MAPPINGS); @@ -1143,7 +1170,7 @@ public static IndexMetaData fromXContent(XContentParser parser) throws IOExcepti currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.START_OBJECT) { if (KEY_SETTINGS.equals(currentFieldName)) { - builder.settings(Settings.builder().put(SettingsLoader.Helper.loadNestedFromMap(parser.mapOrdered()))); + builder.settings(Settings.fromXContent(parser)); } else if (KEY_MAPPINGS.equals(currentFieldName)) { while ((token = parser.nextToken()) != 
XContentParser.Token.END_OBJECT) { if (token == XContentParser.Token.FIELD_NAME) { @@ -1292,12 +1319,50 @@ public int getRoutingNumShards() { /** * Returns the routing factor for this index. The default is 1. * - * @see #getRoutingFactor(IndexMetaData, int) for details + * @see #getRoutingFactor(int, int) for details */ public int getRoutingFactor() { return routingFactor; } + /** + * Returns the source shard ID to split the given target shard off + * @param shardId the id of the target shard to split into + * @param sourceIndexMetadata the source index metadata + * @param numTargetShards the total number of shards in the target index + * @return the source shard ID to split off from + */ + public static ShardId selectSplitShard(int shardId, IndexMetaData sourceIndexMetadata, int numTargetShards) { + if (shardId >= numTargetShards) { + throw new IllegalArgumentException("the number of target shards (" + numTargetShards + ") must be greater than the shard id: " + + shardId); + } + int numSourceShards = sourceIndexMetadata.getNumberOfShards(); + if (numSourceShards > numTargetShards) { + throw new IllegalArgumentException("the number of source shards [" + numSourceShards + + "] must be less than the number of target shards [" + numTargetShards + "]"); + } + int routingFactor = getRoutingFactor(numSourceShards, numTargetShards); + // this is just an additional assertion that ensures we are a factor of the routing num shards. + assert getRoutingFactor(numTargetShards, sourceIndexMetadata.getRoutingNumShards()) >= 0; + return new ShardId(sourceIndexMetadata.getIndex(), shardId/routingFactor); + } + + /** + * Selects the source shards for a local shard recovery. This might either be a split or a shrink operation. + * @param shardId the target shard ID to select the source shards for + * @param sourceIndexMetadata the source metadata + * @param numTargetShards the number of target shards + */ + public static Set selectRecoverFromShards(int shardId, IndexMetaData sourceIndexMetadata, int numTargetShards) { + if (sourceIndexMetadata.getNumberOfShards() > numTargetShards) { + return selectShrinkShards(shardId, sourceIndexMetadata, numTargetShards); + } else if (sourceIndexMetadata.getNumberOfShards() < numTargetShards) { + return Collections.singleton(selectSplitShard(shardId, sourceIndexMetadata, numTargetShards)); + } + throw new IllegalArgumentException("can't select recover from shards if both indices have the same number of shards"); + } + /** * Returns the source shard ids to shrink into the given shard id.
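 * <p>
 * A worked example of the arithmetic used by {@code selectSplitShard} and this method
 * (illustrative shard counts): shrinking an 8-shard index to 2 shards gives a routing factor of
 * 8 / 2 = 4, so target shard 1 recovers from source shards 4, 5, 6 and 7 (the range
 * [1 * 4, 1 * 4 + 4)); splitting a 2-shard index into 8 shards also gives a factor of 8 / 2 = 4,
 * and target shard 5 is split off source shard 5 / 4 = 1.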
* @param shardId the id of the target shard to shrink to @@ -1310,7 +1375,11 @@ public static Set selectShrinkShards(int shardId, IndexMetaData sourceI throw new IllegalArgumentException("the number of target shards (" + numTargetShards + ") must be greater than the shard id: " + shardId); } - int routingFactor = getRoutingFactor(sourceIndexMetadata, numTargetShards); + if (sourceIndexMetadata.getNumberOfShards() < numTargetShards) { + throw new IllegalArgumentException("the number of target shards [" + numTargetShards + +"] must be less than the number of source shards [" + sourceIndexMetadata.getNumberOfShards() + "]"); + } + int routingFactor = getRoutingFactor(sourceIndexMetadata.getNumberOfShards(), numTargetShards); Set shards = new HashSet<>(routingFactor); for (int i = shardId * routingFactor; i < routingFactor*shardId + routingFactor; i++) { shards.add(new ShardId(sourceIndexMetadata.getIndex(), i)); @@ -1324,21 +1393,30 @@ public static Set selectShrinkShards(int shardId, IndexMetaData sourceI * {@link org.elasticsearch.cluster.routing.OperationRouting#generateShardId(IndexMetaData, String, String)} to guarantee consistent * hashing / routing of documents even if the number of shards changed (ie. a shrunk index). * - * @param sourceIndexMetadata the metadata of the source index + * @param sourceNumberOfShards the total number of shards in the source index * @param targetNumberOfShards the total number of shards in the target index * @return the routing factor for a shrunk index with the given number of target shards. * @throws IllegalArgumentException if the number of source shards is less than the number of target shards or if the source shards * are not divisible by the number of target shards. */ - public static int getRoutingFactor(IndexMetaData sourceIndexMetadata, int targetNumberOfShards) { - int sourceNumberOfShards = sourceIndexMetadata.getNumberOfShards(); - if (sourceNumberOfShards < targetNumberOfShards) { - throw new IllegalArgumentException("the number of target shards must be less that the number of source shards"); - } - int factor = sourceNumberOfShards / targetNumberOfShards; - if (factor * targetNumberOfShards != sourceNumberOfShards || factor <= 1) { - throw new IllegalArgumentException("the number of source shards [" + sourceNumberOfShards + "] must be a must be a multiple of [" - + targetNumberOfShards + "]"); + public static int getRoutingFactor(int sourceNumberOfShards, int targetNumberOfShards) { + final int factor; + if (sourceNumberOfShards < targetNumberOfShards) { // split + factor = targetNumberOfShards / sourceNumberOfShards; + if (factor * sourceNumberOfShards != targetNumberOfShards || factor <= 1) { + throw new IllegalArgumentException("the number of source shards [" + sourceNumberOfShards + "] must be a " + + "factor of [" + + targetNumberOfShards + "]"); + } + } else if (sourceNumberOfShards > targetNumberOfShards) { // shrink + factor = sourceNumberOfShards / targetNumberOfShards; + if (factor * targetNumberOfShards != sourceNumberOfShards || factor <= 1) { + throw new IllegalArgumentException("the number of source shards [" + sourceNumberOfShards + "] must be a " + + "multiple of [" + + targetNumberOfShards + "]"); + } + } else { + factor = 1; } return factor; } diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolver.java b/core/src/main/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolver.java index 17b43efdf9d85..6dc92a44bb08b 100644 ---
a/core/src/main/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolver.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolver.java @@ -34,6 +34,7 @@ import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexNotFoundException; import org.elasticsearch.indices.IndexClosedException; +import org.elasticsearch.indices.InvalidIndexNameException; import org.joda.time.DateTimeZone; import org.joda.time.format.DateTimeFormat; import org.joda.time.format.DateTimeFormatter; @@ -601,6 +602,7 @@ private Set innerResolve(Context context, List expressions, Indi if (Strings.isEmpty(expression)) { throw indexNotFoundException(expression); } + validateAliasOrIndex(expression); if (aliasOrIndexExists(options, metaData, expression)) { if (result != null) { result.add(expression); @@ -654,6 +656,16 @@ private Set innerResolve(Context context, List expressions, Indi return result; } + private static void validateAliasOrIndex(String expression) { + // Expressions can not start with an underscore. This is reserved for APIs. If the check gets here, the API + // does not exist and the path is interpreted as an expression. If the expression begins with an underscore, + // throw a specific error that is different from the [[IndexNotFoundException]], which is typically thrown + // if the expression can't be found. + if (expression.charAt(0) == '_') { + throw new InvalidIndexNameException(expression, "must not start with '_'."); + } + } + private static boolean aliasOrIndexExists(IndicesOptions options, MetaData metaData, String expression) { AliasOrIndex aliasOrIndex = metaData.getAliasAndIndexLookup().get(expression); //treat aliases as unavailable indices when ignoreAliases is set to true (e.g. delete index and update aliases api) diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/IndexTemplateMetaData.java b/core/src/main/java/org/elasticsearch/cluster/metadata/IndexTemplateMetaData.java index cae2042f52f0f..66f5a49f6d68c 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/IndexTemplateMetaData.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/IndexTemplateMetaData.java @@ -26,7 +26,6 @@ import org.elasticsearch.cluster.Diff; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.bytes.BytesArray; -import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.common.collect.MapBuilder; import org.elasticsearch.common.compress.CompressedXContent; @@ -35,7 +34,6 @@ import org.elasticsearch.common.logging.DeprecationLogger; import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.settings.loader.SettingsLoader; import org.elasticsearch.common.util.set.Sets; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -231,9 +229,7 @@ public static IndexTemplateMetaData readFrom(StreamInput in) throws IOException IndexMetaData.Custom customIndexMetaData = IndexMetaData.lookupPrototypeSafe(type).readFrom(in); builder.putCustom(type, customIndexMetaData); } - if (in.getVersion().onOrAfter(Version.V_5_0_0_beta1)) { - builder.version(in.readOptionalVInt()); - } + builder.version(in.readOptionalVInt()); return builder.build(); } @@ -265,9 +261,7 @@ public void writeTo(StreamOutput out) throws IOException { out.writeString(cursor.key); cursor.value.writeTo(out); } - if 
(out.getVersion().onOrAfter(Version.V_5_0_0_beta1)) { - out.writeOptionalVInt(version); - } + out.writeOptionalVInt(version); } public static class Builder { @@ -409,7 +403,7 @@ public static void toInnerXContent(IndexTemplateMetaData indexTemplateMetaData, builder.startObject("mappings"); for (ObjectObjectCursor cursor : indexTemplateMetaData.mappings()) { byte[] mappingSource = cursor.value.uncompressed(); - Map mapping = XContentHelper.convertToMap(new BytesArray(mappingSource), false).v2(); + Map mapping = XContentHelper.convertToMap(new BytesArray(mappingSource), true).v2(); if (mapping.size() == 1 && mapping.containsKey(cursor.key)) { // the type name is the root value, reduce it mapping = (Map) mapping.get(cursor.key); @@ -451,9 +445,8 @@ public static IndexTemplateMetaData fromXContent(XContentParser parser, String t } else if (token == XContentParser.Token.START_OBJECT) { if ("settings".equals(currentFieldName)) { Settings.Builder templateSettingsBuilder = Settings.builder(); - templateSettingsBuilder.put( - SettingsLoader.Helper.loadNestedFromMap(parser.mapOrdered())) - .normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX); + templateSettingsBuilder.put(Settings.fromXContent(parser)); + templateSettingsBuilder.normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX); builder.settings(templateSettingsBuilder.build()); } else if ("mappings".equals(currentFieldName)) { while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaData.java b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaData.java index 870da642c4a1a..c582f372e517a 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaData.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaData.java @@ -22,7 +22,6 @@ import com.carrotsearch.hppc.ObjectHashSet; import com.carrotsearch.hppc.cursors.ObjectCursor; import com.carrotsearch.hppc.cursors.ObjectObjectCursor; - import org.apache.logging.log4j.Logger; import org.apache.lucene.util.CollectionUtil; import org.elasticsearch.cluster.Diff; @@ -33,6 +32,7 @@ import org.elasticsearch.cluster.block.ClusterBlock; import org.elasticsearch.cluster.block.ClusterBlockLevel; import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.UUIDs; import org.elasticsearch.common.collect.HppcMaps; import org.elasticsearch.common.collect.ImmutableOpenMap; @@ -43,7 +43,6 @@ import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.settings.loader.SettingsLoader; import org.elasticsearch.common.xcontent.NamedXContentRegistry.UnknownNamedObjectException; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.ToXContentFragment; @@ -63,9 +62,11 @@ import java.util.Comparator; import java.util.EnumSet; import java.util.HashMap; +import java.util.HashSet; import java.util.Iterator; import java.util.List; import java.util.Map; +import java.util.Set; import java.util.SortedMap; import java.util.TreeMap; @@ -112,7 +113,7 @@ public enum XContentContext { */ public static EnumSet ALL_CONTEXTS = EnumSet.allOf(XContentContext.class); - public interface Custom extends NamedDiffable, ToXContent { + public interface Custom extends NamedDiffable, ToXContentFragment { EnumSet context(); } @@ -915,55 +916,70 @@ public MetaData build() { // while these datastructures 
aren't even used. // 2) The aliasAndIndexLookup can be updated instead of rebuilding it all the time. - // build all concrete indices arrays: - // TODO: I think we can remove these arrays. it isn't worth the effort, for operations on all indices. - // When doing an operation across all indices, most of the time is spent on actually going to all shards and - // do the required operations, the bottleneck isn't resolving expressions into concrete indices. - List allIndicesLst = new ArrayList<>(); - for (ObjectCursor cursor : indices.values()) { - allIndicesLst.add(cursor.value.getIndex().getName()); - } - String[] allIndices = allIndicesLst.toArray(new String[allIndicesLst.size()]); - - List allOpenIndicesLst = new ArrayList<>(); - List allClosedIndicesLst = new ArrayList<>(); + final Set allIndices = new HashSet<>(indices.size()); + final List allOpenIndices = new ArrayList<>(); + final List allClosedIndices = new ArrayList<>(); + final Set duplicateAliasesIndices = new HashSet<>(); for (ObjectCursor cursor : indices.values()) { - IndexMetaData indexMetaData = cursor.value; + final IndexMetaData indexMetaData = cursor.value; + final String name = indexMetaData.getIndex().getName(); + boolean added = allIndices.add(name); + assert added : "double index named [" + name + "]"; if (indexMetaData.getState() == IndexMetaData.State.OPEN) { - allOpenIndicesLst.add(indexMetaData.getIndex().getName()); + allOpenIndices.add(indexMetaData.getIndex().getName()); } else if (indexMetaData.getState() == IndexMetaData.State.CLOSE) { - allClosedIndicesLst.add(indexMetaData.getIndex().getName()); + allClosedIndices.add(indexMetaData.getIndex().getName()); + } + indexMetaData.getAliases().keysIt().forEachRemaining(duplicateAliasesIndices::add); + } + duplicateAliasesIndices.retainAll(allIndices); + if (duplicateAliasesIndices.isEmpty() == false) { + // iterate again and constructs a helpful message + ArrayList duplicates = new ArrayList<>(); + for (ObjectCursor cursor : indices.values()) { + for (String alias: duplicateAliasesIndices) { + if (cursor.value.getAliases().containsKey(alias)) { + duplicates.add(alias + " (alias of " + cursor.value.getIndex() + ")"); + } + } } + assert duplicates.size() > 0; + throw new IllegalStateException("index and alias names need to be unique, but the following duplicates were found [" + + Strings.collectionToCommaDelimitedString(duplicates)+ "]"); + } - String[] allOpenIndices = allOpenIndicesLst.toArray(new String[allOpenIndicesLst.size()]); - String[] allClosedIndices = allClosedIndicesLst.toArray(new String[allClosedIndicesLst.size()]); // build all indices map SortedMap aliasAndIndexLookup = new TreeMap<>(); for (ObjectCursor cursor : indices.values()) { IndexMetaData indexMetaData = cursor.value; - aliasAndIndexLookup.put(indexMetaData.getIndex().getName(), new AliasOrIndex.Index(indexMetaData)); + AliasOrIndex existing = aliasAndIndexLookup.put(indexMetaData.getIndex().getName(), new AliasOrIndex.Index(indexMetaData)); + assert existing == null : "duplicate for " + indexMetaData.getIndex(); for (ObjectObjectCursor aliasCursor : indexMetaData.getAliases()) { AliasMetaData aliasMetaData = aliasCursor.value; - AliasOrIndex aliasOrIndex = aliasAndIndexLookup.get(aliasMetaData.getAlias()); - if (aliasOrIndex == null) { - aliasOrIndex = new AliasOrIndex.Alias(aliasMetaData, indexMetaData); - aliasAndIndexLookup.put(aliasMetaData.getAlias(), aliasOrIndex); - } else if (aliasOrIndex instanceof AliasOrIndex.Alias) { - AliasOrIndex.Alias alias = (AliasOrIndex.Alias) aliasOrIndex; 
- alias.addIndex(indexMetaData); - } else if (aliasOrIndex instanceof AliasOrIndex.Index) { - AliasOrIndex.Index index = (AliasOrIndex.Index) aliasOrIndex; - throw new IllegalStateException("index and alias names need to be unique, but alias [" + aliasMetaData.getAlias() + "] and index " + index.getIndex().getIndex() + " have the same name"); - } else { - throw new IllegalStateException("unexpected alias [" + aliasMetaData.getAlias() + "][" + aliasOrIndex + "]"); - } + aliasAndIndexLookup.compute(aliasMetaData.getAlias(), (aliasName, alias) -> { + if (alias == null) { + return new AliasOrIndex.Alias(aliasMetaData, indexMetaData); + } else { + assert alias instanceof AliasOrIndex.Alias : alias.getClass().getName(); + ((AliasOrIndex.Alias) alias).addIndex(indexMetaData); + return alias; + } + }); } } aliasAndIndexLookup = Collections.unmodifiableSortedMap(aliasAndIndexLookup); + // build all concrete indices arrays: + // TODO: I think we can remove these arrays. it isn't worth the effort, for operations on all indices. + // When doing an operation across all indices, most of the time is spent on actually going to all shards and + // do the required operations, the bottleneck isn't resolving expressions into concrete indices. + String[] allIndicesArray = allIndices.toArray(new String[allIndices.size()]); + String[] allOpenIndicesArray = allOpenIndices.toArray(new String[allOpenIndices.size()]); + String[] allClosedIndicesArray = allClosedIndices.toArray(new String[allClosedIndices.size()]); + return new MetaData(clusterUUID, version, transientSettings, persistentSettings, indices.build(), templates.build(), - customs.build(), allIndices, allOpenIndices, allClosedIndices, aliasAndIndexLookup); + customs.build(), allIndicesArray, allOpenIndicesArray, allClosedIndicesArray, aliasAndIndexLookup); } public static String toXContent(MetaData metaData) throws IOException { @@ -984,17 +1000,13 @@ public static void toXContent(MetaData metaData, XContentBuilder builder, ToXCon if (!metaData.persistentSettings().isEmpty()) { builder.startObject("settings"); - for (Map.Entry entry : metaData.persistentSettings().getAsMap().entrySet()) { - builder.field(entry.getKey(), entry.getValue()); - } + metaData.persistentSettings().toXContent(builder, new MapParams(Collections.singletonMap("flat_settings", "true"))); builder.endObject(); } if (context == XContentContext.API && !metaData.transientSettings().isEmpty()) { builder.startObject("transient_settings"); - for (Map.Entry entry : metaData.transientSettings().getAsMap().entrySet()) { - builder.field(entry.getKey(), entry.getValue()); - } + metaData.transientSettings().toXContent(builder, new MapParams(Collections.singletonMap("flat_settings", "true"))); builder.endObject(); } @@ -1054,7 +1066,7 @@ public static MetaData fromXContent(XContentParser parser) throws IOException { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.START_OBJECT) { if ("settings".equals(currentFieldName)) { - builder.persistentSettings(Settings.builder().put(SettingsLoader.Helper.loadNestedFromMap(parser.mapOrdered())).build()); + builder.persistentSettings(Settings.fromXContent(parser)); } else if ("indices".equals(currentFieldName)) { while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { builder.put(IndexMetaData.Builder.fromXContent(parser), false); diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java 
b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java index 07a97b4f320cb..49568ab300f03 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java @@ -21,6 +21,7 @@ import com.carrotsearch.hppc.cursors.ObjectCursor; import com.carrotsearch.hppc.cursors.ObjectObjectCursor; +import org.apache.logging.log4j.Logger; import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; import org.apache.lucene.util.CollectionUtil; @@ -30,6 +31,7 @@ import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.admin.indices.alias.Alias; import org.elasticsearch.action.admin.indices.create.CreateIndexClusterStateUpdateRequest; +import org.elasticsearch.action.admin.indices.shrink.ResizeType; import org.elasticsearch.action.support.ActiveShardCount; import org.elasticsearch.action.support.ActiveShardsObserver; import org.elasticsearch.cluster.AckedClusterStateUpdateTask; @@ -93,7 +95,6 @@ import java.util.function.Predicate; import java.util.stream.IntStream; -import static org.elasticsearch.action.support.ContextPreservingActionListener.wrapPreservingContext; import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS; import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_CREATION_DATE; import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_INDEX_UUID; @@ -116,7 +117,6 @@ public class MetaDataCreateIndexService extends AbstractComponent { private final IndexScopedSettings indexScopedSettings; private final ActiveShardsObserver activeShardsObserver; private final NamedXContentRegistry xContentRegistry; - private final ThreadPool threadPool; @Inject public MetaDataCreateIndexService(Settings settings, ClusterService clusterService, @@ -132,7 +132,6 @@ public MetaDataCreateIndexService(Settings settings, ClusterService clusterServi this.env = env; this.indexScopedSettings = indexScopedSettings; this.activeShardsObserver = new ActiveShardsObserver(settings, clusterService, threadPool); - this.threadPool = threadPool; this.xContentRegistry = xContentRegistry; } @@ -165,6 +164,9 @@ public static void validateIndexOrAliasName(String index, BiFunction listener) { onlyCreateIndex(request, ActionListener.wrap(response -> { if (response.isAcknowledged()) { - activeShardsObserver.waitForActiveShards(request.index(), request.waitForActiveShards(), request.ackTimeout(), + activeShardsObserver.waitForActiveShards(new String[]{request.index()}, request.waitForActiveShards(), request.ackTimeout(), shardsAcked -> { if (shardsAcked == false) { logger.debug("[{}] index created, but the operation timed out while waiting for " + @@ -223,319 +225,355 @@ private void onlyCreateIndex(final CreateIndexClusterStateUpdateRequest request, request.settings(updatedSettingsBuilder.build()); clusterService.submitStateUpdateTask("create-index [" + request.index() + "], cause [" + request.cause() + "]", - new AckedClusterStateUpdateTask(Priority.URGENT, request, - wrapPreservingContext(listener, threadPool.getThreadContext())) { + new IndexCreationTask(logger, allocationService, request, listener, indicesService, aliasValidator, xContentRegistry, settings, + this::validate)); + } - @Override - protected ClusterStateUpdateResponse newResponse(boolean acknowledged) { - return new ClusterStateUpdateResponse(acknowledged); - } + interface 
IndexValidator { + void validate(CreateIndexClusterStateUpdateRequest request, ClusterState state); + } - @Override - public ClusterState execute(ClusterState currentState) throws Exception { - Index createdIndex = null; - String removalExtraInfo = null; - IndexRemovalReason removalReason = IndexRemovalReason.FAILURE; - try { - validate(request, currentState); + static class IndexCreationTask extends AckedClusterStateUpdateTask { + + private final IndicesService indicesService; + private final AliasValidator aliasValidator; + private final NamedXContentRegistry xContentRegistry; + private final CreateIndexClusterStateUpdateRequest request; + private final Logger logger; + private final AllocationService allocationService; + private final Settings settings; + private final IndexValidator validator; + + IndexCreationTask(Logger logger, AllocationService allocationService, CreateIndexClusterStateUpdateRequest request, + ActionListener listener, IndicesService indicesService, + AliasValidator aliasValidator, NamedXContentRegistry xContentRegistry, + Settings settings, IndexValidator validator) { + super(Priority.URGENT, request, listener); + this.request = request; + this.logger = logger; + this.allocationService = allocationService; + this.indicesService = indicesService; + this.aliasValidator = aliasValidator; + this.xContentRegistry = xContentRegistry; + this.settings = settings; + this.validator = validator; + } - for (Alias alias : request.aliases()) { - aliasValidator.validateAlias(alias, request.index(), currentState.metaData()); - } + @Override + protected ClusterStateUpdateResponse newResponse(boolean acknowledged) { + return new ClusterStateUpdateResponse(acknowledged); + } - // we only find a template when its an API call (a new index) - // find templates, highest order are better matching - List templates = findTemplates(request, currentState); + @Override + public ClusterState execute(ClusterState currentState) throws Exception { + Index createdIndex = null; + String removalExtraInfo = null; + IndexRemovalReason removalReason = IndexRemovalReason.FAILURE; + try { + validator.validate(request, currentState); - Map customs = new HashMap<>(); + for (Alias alias : request.aliases()) { + aliasValidator.validateAlias(alias, request.index(), currentState.metaData()); + } - // add the request mapping - Map> mappings = new HashMap<>(); + // we only find a template when its an API call (a new index) + // find templates, highest order are better matching + List templates = findTemplates(request, currentState); - Map templatesAliases = new HashMap<>(); + Map customs = new HashMap<>(); - List templateNames = new ArrayList<>(); + // add the request mapping + Map> mappings = new HashMap<>(); - for (Map.Entry entry : request.mappings().entrySet()) { - mappings.put(entry.getKey(), MapperService.parseMapping(xContentRegistry, entry.getValue())); - } + Map templatesAliases = new HashMap<>(); - for (Map.Entry entry : request.customs().entrySet()) { - customs.put(entry.getKey(), entry.getValue()); - } + List templateNames = new ArrayList<>(); - final Index shrinkFromIndex = request.shrinkFrom(); - - if (shrinkFromIndex == null) { - // apply templates, merging the mappings into the request mapping if exists - for (IndexTemplateMetaData template : templates) { - templateNames.add(template.getName()); - for (ObjectObjectCursor cursor : template.mappings()) { - String mappingString = cursor.value.string(); - if (mappings.containsKey(cursor.key)) { - XContentHelper.mergeDefaults(mappings.get(cursor.key), - 
MapperService.parseMapping(xContentRegistry, mappingString)); - } else { - mappings.put(cursor.key, - MapperService.parseMapping(xContentRegistry, mappingString)); - } - } - // handle custom - for (ObjectObjectCursor cursor : template.customs()) { - String type = cursor.key; - IndexMetaData.Custom custom = cursor.value; - IndexMetaData.Custom existing = customs.get(type); - if (existing == null) { - customs.put(type, custom); - } else { - IndexMetaData.Custom merged = existing.mergeWith(custom); - customs.put(type, merged); - } - } - //handle aliases - for (ObjectObjectCursor cursor : template.aliases()) { - AliasMetaData aliasMetaData = cursor.value; - //if an alias with same name came with the create index request itself, - // ignore this one taken from the index template - if (request.aliases().contains(new Alias(aliasMetaData.alias()))) { - continue; - } - //if an alias with same name was already processed, ignore this one - if (templatesAliases.containsKey(cursor.key)) { - continue; - } - - //Allow templatesAliases to be templated by replacing a token with the name of the index that we are applying it to - if (aliasMetaData.alias().contains("{index}")) { - String templatedAlias = aliasMetaData.alias().replace("{index}", request.index()); - aliasMetaData = AliasMetaData.newAliasMetaData(aliasMetaData, templatedAlias); - } - - aliasValidator.validateAliasMetaData(aliasMetaData, request.index(), currentState.metaData()); - templatesAliases.put(aliasMetaData.alias(), aliasMetaData); - } - } - } - Settings.Builder indexSettingsBuilder = Settings.builder(); - if (shrinkFromIndex == null) { - // apply templates, here, in reverse order, since first ones are better matching - for (int i = templates.size() - 1; i >= 0; i--) { - indexSettingsBuilder.put(templates.get(i).settings()); - } - } - // now, put the request settings, so they override templates - indexSettingsBuilder.put(request.settings()); - if (indexSettingsBuilder.get(SETTING_NUMBER_OF_SHARDS) == null) { - indexSettingsBuilder.put(SETTING_NUMBER_OF_SHARDS, settings.getAsInt(SETTING_NUMBER_OF_SHARDS, 5)); - } - if (indexSettingsBuilder.get(SETTING_NUMBER_OF_REPLICAS) == null) { - indexSettingsBuilder.put(SETTING_NUMBER_OF_REPLICAS, settings.getAsInt(SETTING_NUMBER_OF_REPLICAS, 1)); - } - if (settings.get(SETTING_AUTO_EXPAND_REPLICAS) != null && indexSettingsBuilder.get(SETTING_AUTO_EXPAND_REPLICAS) == null) { - indexSettingsBuilder.put(SETTING_AUTO_EXPAND_REPLICAS, settings.get(SETTING_AUTO_EXPAND_REPLICAS)); - } + for (Map.Entry entry : request.mappings().entrySet()) { + mappings.put(entry.getKey(), MapperService.parseMapping(xContentRegistry, entry.getValue())); + } - if (indexSettingsBuilder.get(SETTING_VERSION_CREATED) == null) { - DiscoveryNodes nodes = currentState.nodes(); - final Version createdVersion = Version.min(Version.CURRENT, nodes.getSmallestNonClientNodeVersion()); - indexSettingsBuilder.put(SETTING_VERSION_CREATED, createdVersion); - } + for (Map.Entry entry : request.customs().entrySet()) { + customs.put(entry.getKey(), entry.getValue()); + } - if (indexSettingsBuilder.get(SETTING_CREATION_DATE) == null) { - indexSettingsBuilder.put(SETTING_CREATION_DATE, new DateTime(DateTimeZone.UTC).getMillis()); + final Index recoverFromIndex = request.recoverFrom(); + + if (recoverFromIndex == null) { + // apply templates, merging the mappings into the request mapping if exists + for (IndexTemplateMetaData template : templates) { + templateNames.add(template.getName()); + for (ObjectObjectCursor cursor : template.mappings()) { 
+ String mappingString = cursor.value.string(); + if (mappings.containsKey(cursor.key)) { + XContentHelper.mergeDefaults(mappings.get(cursor.key), + MapperService.parseMapping(xContentRegistry, mappingString)); + } else { + mappings.put(cursor.key, + MapperService.parseMapping(xContentRegistry, mappingString)); } - indexSettingsBuilder.put(IndexMetaData.SETTING_INDEX_PROVIDED_NAME, request.getProvidedName()); - indexSettingsBuilder.put(SETTING_INDEX_UUID, UUIDs.randomBase64UUID()); - final IndexMetaData.Builder tmpImdBuilder = IndexMetaData.builder(request.index()); - - final int routingNumShards; - if (shrinkFromIndex == null) { - routingNumShards = IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.get(indexSettingsBuilder.build()); + } + // handle custom + for (ObjectObjectCursor cursor : template.customs()) { + String type = cursor.key; + IndexMetaData.Custom custom = cursor.value; + IndexMetaData.Custom existing = customs.get(type); + if (existing == null) { + customs.put(type, custom); } else { - final IndexMetaData sourceMetaData = currentState.metaData().getIndexSafe(shrinkFromIndex); - routingNumShards = sourceMetaData.getRoutingNumShards(); + IndexMetaData.Custom merged = existing.mergeWith(custom); + customs.put(type, merged); } - tmpImdBuilder.setRoutingNumShards(routingNumShards); - - if (shrinkFromIndex != null) { - prepareShrinkIndexSettings( - currentState, mappings.keySet(), indexSettingsBuilder, shrinkFromIndex, request.index()); + } + //handle aliases + for (ObjectObjectCursor cursor : template.aliases()) { + AliasMetaData aliasMetaData = cursor.value; + //if an alias with same name came with the create index request itself, + // ignore this one taken from the index template + if (request.aliases().contains(new Alias(aliasMetaData.alias()))) { + continue; } - final Settings actualIndexSettings = indexSettingsBuilder.build(); - tmpImdBuilder.settings(actualIndexSettings); - - if (shrinkFromIndex != null) { - /* - * We need to arrange that the primary term on all the shards in the shrunken index is at least as large as - * the maximum primary term on all the shards in the source index. This ensures that we have correct - * document-level semantics regarding sequence numbers in the shrunken index. 
- */ - final IndexMetaData sourceMetaData = currentState.metaData().getIndexSafe(shrinkFromIndex); - final long primaryTerm = - IntStream - .range(0, sourceMetaData.getNumberOfShards()) - .mapToLong(sourceMetaData::primaryTerm) - .max() - .getAsLong(); - for (int shardId = 0; shardId < tmpImdBuilder.numberOfShards(); shardId++) { - tmpImdBuilder.primaryTerm(shardId, primaryTerm); - } + //if an alias with same name was already processed, ignore this one + if (templatesAliases.containsKey(cursor.key)) { + continue; } - // Set up everything, now locally create the index to see that things are ok, and apply - final IndexMetaData tmpImd = tmpImdBuilder.build(); - ActiveShardCount waitForActiveShards = request.waitForActiveShards(); - if (waitForActiveShards == ActiveShardCount.DEFAULT) { - waitForActiveShards = tmpImd.getWaitForActiveShards(); - } - if (waitForActiveShards.validate(tmpImd.getNumberOfReplicas()) == false) { - throw new IllegalArgumentException("invalid wait_for_active_shards[" + request.waitForActiveShards() + - "]: cannot be greater than number of shard copies [" + - (tmpImd.getNumberOfReplicas() + 1) + "]"); - } - // create the index here (on the master) to validate it can be created, as well as adding the mapping - final IndexService indexService = indicesService.createIndex(tmpImd, Collections.emptyList()); - createdIndex = indexService.index(); - // now add the mappings - MapperService mapperService = indexService.mapperService(); - try { - mapperService.merge(mappings, MergeReason.MAPPING_UPDATE, request.updateAllTypes()); - } catch (Exception e) { - removalExtraInfo = "failed on parsing default mapping/mappings on index creation"; - throw e; + //Allow templatesAliases to be templated by replacing a token with the name of the index that we are applying it to + if (aliasMetaData.alias().contains("{index}")) { + String templatedAlias = aliasMetaData.alias().replace("{index}", request.index()); + aliasMetaData = AliasMetaData.newAliasMetaData(aliasMetaData, templatedAlias); } - if (request.shrinkFrom() == null) { - // now that the mapping is merged we can validate the index sort. - // we cannot validate for index shrinking since the mapping is empty - // at this point. The validation will take place later in the process - // (when all shards are copied in a single place). 
- indexService.getIndexSortSupplier().get(); - } + aliasValidator.validateAliasMetaData(aliasMetaData, request.index(), currentState.metaData()); + templatesAliases.put(aliasMetaData.alias(), aliasMetaData); + } + } + } + Settings.Builder indexSettingsBuilder = Settings.builder(); + if (recoverFromIndex == null) { + // apply templates, here, in reverse order, since first ones are better matching + for (int i = templates.size() - 1; i >= 0; i--) { + indexSettingsBuilder.put(templates.get(i).settings()); + } + } + // now, put the request settings, so they override templates + indexSettingsBuilder.put(request.settings()); + if (indexSettingsBuilder.get(SETTING_NUMBER_OF_SHARDS) == null) { + indexSettingsBuilder.put(SETTING_NUMBER_OF_SHARDS, settings.getAsInt(SETTING_NUMBER_OF_SHARDS, 5)); + } + if (indexSettingsBuilder.get(SETTING_NUMBER_OF_REPLICAS) == null) { + indexSettingsBuilder.put(SETTING_NUMBER_OF_REPLICAS, settings.getAsInt(SETTING_NUMBER_OF_REPLICAS, 1)); + } + if (settings.get(SETTING_AUTO_EXPAND_REPLICAS) != null && indexSettingsBuilder.get(SETTING_AUTO_EXPAND_REPLICAS) == null) { + indexSettingsBuilder.put(SETTING_AUTO_EXPAND_REPLICAS, settings.get(SETTING_AUTO_EXPAND_REPLICAS)); + } - // the context is only used for validation so it's fine to pass fake values for the shard id and the current - // timestamp - final QueryShardContext queryShardContext = indexService.newQueryShardContext(0, null, () -> 0L, null); + if (indexSettingsBuilder.get(SETTING_VERSION_CREATED) == null) { + DiscoveryNodes nodes = currentState.nodes(); + final Version createdVersion = Version.min(Version.CURRENT, nodes.getSmallestNonClientNodeVersion()); + indexSettingsBuilder.put(SETTING_VERSION_CREATED, createdVersion); + } - for (Alias alias : request.aliases()) { - if (Strings.hasLength(alias.filter())) { - aliasValidator.validateAliasFilter(alias.name(), alias.filter(), queryShardContext, xContentRegistry); - } - } - for (AliasMetaData aliasMetaData : templatesAliases.values()) { - if (aliasMetaData.filter() != null) { - aliasValidator.validateAliasFilter(aliasMetaData.alias(), aliasMetaData.filter().uncompressed(), - queryShardContext, xContentRegistry); - } - } + if (indexSettingsBuilder.get(SETTING_CREATION_DATE) == null) { + indexSettingsBuilder.put(SETTING_CREATION_DATE, new DateTime(DateTimeZone.UTC).getMillis()); + } + indexSettingsBuilder.put(IndexMetaData.SETTING_INDEX_PROVIDED_NAME, request.getProvidedName()); + indexSettingsBuilder.put(SETTING_INDEX_UUID, UUIDs.randomBase64UUID()); + final IndexMetaData.Builder tmpImdBuilder = IndexMetaData.builder(request.index()); + + final int routingNumShards; + if (recoverFromIndex == null) { + Settings idxSettings = indexSettingsBuilder.build(); + routingNumShards = IndexMetaData.INDEX_NUMBER_OF_ROUTING_SHARDS_SETTING.get(idxSettings); + } else { + assert IndexMetaData.INDEX_NUMBER_OF_ROUTING_SHARDS_SETTING.exists(indexSettingsBuilder.build()) == false + : "index.number_of_routing_shards should be present on the target index on resize"; + final IndexMetaData sourceMetaData = currentState.metaData().getIndexSafe(recoverFromIndex); + routingNumShards = sourceMetaData.getRoutingNumShards(); + } + // remove the setting it's temporary and is only relevant once we create the index + indexSettingsBuilder.remove(IndexMetaData.INDEX_NUMBER_OF_ROUTING_SHARDS_SETTING.getKey()); + tmpImdBuilder.setRoutingNumShards(routingNumShards); + + if (recoverFromIndex != null) { + assert request.resizeType() != null; + prepareResizeIndexSettings( + currentState, 
mappings.keySet(), indexSettingsBuilder, recoverFromIndex, request.index(), request.resizeType()); + } + final Settings actualIndexSettings = indexSettingsBuilder.build(); + tmpImdBuilder.settings(actualIndexSettings); + + if (recoverFromIndex != null) { + /* + * We need to arrange that the primary term on all the shards in the shrunken index is at least as large as + * the maximum primary term on all the shards in the source index. This ensures that we have correct + * document-level semantics regarding sequence numbers in the shrunken index. + */ + final IndexMetaData sourceMetaData = currentState.metaData().getIndexSafe(recoverFromIndex); + final long primaryTerm = + IntStream + .range(0, sourceMetaData.getNumberOfShards()) + .mapToLong(sourceMetaData::primaryTerm) + .max() + .getAsLong(); + for (int shardId = 0; shardId < tmpImdBuilder.numberOfShards(); shardId++) { + tmpImdBuilder.primaryTerm(shardId, primaryTerm); + } + } - // now, update the mappings with the actual source - Map mappingsMetaData = new HashMap<>(); - for (DocumentMapper mapper : mapperService.docMappers(true)) { - MappingMetaData mappingMd = new MappingMetaData(mapper); - mappingsMetaData.put(mapper.type(), mappingMd); - } + // Set up everything, now locally create the index to see that things are ok, and apply + final IndexMetaData tmpImd = tmpImdBuilder.build(); + ActiveShardCount waitForActiveShards = request.waitForActiveShards(); + if (waitForActiveShards == ActiveShardCount.DEFAULT) { + waitForActiveShards = tmpImd.getWaitForActiveShards(); + } + if (waitForActiveShards.validate(tmpImd.getNumberOfReplicas()) == false) { + throw new IllegalArgumentException("invalid wait_for_active_shards[" + request.waitForActiveShards() + + "]: cannot be greater than number of shard copies [" + + (tmpImd.getNumberOfReplicas() + 1) + "]"); + } + // create the index here (on the master) to validate it can be created, as well as adding the mapping + final IndexService indexService = indicesService.createIndex(tmpImd, Collections.emptyList()); + createdIndex = indexService.index(); + // now add the mappings + MapperService mapperService = indexService.mapperService(); + try { + mapperService.merge(mappings, MergeReason.MAPPING_UPDATE, request.updateAllTypes()); + } catch (Exception e) { + removalExtraInfo = "failed on parsing default mapping/mappings on index creation"; + throw e; + } - final IndexMetaData.Builder indexMetaDataBuilder = IndexMetaData.builder(request.index()) - .settings(actualIndexSettings) - .setRoutingNumShards(routingNumShards); + if (request.recoverFrom() == null) { + // now that the mapping is merged we can validate the index sort. + // we cannot validate for index shrinking since the mapping is empty + // at this point. The validation will take place later in the process + // (when all shards are copied in a single place). 
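// Context for the wait_for_active_shards check above: an index with N replicas has N + 1 copies
// of each shard, so any wait_for_active_shards value greater than N + 1 could never be satisfied
// and is rejected up front. For example, with number_of_replicas = 1 the accepted values are
// 0, 1, 2, or "all".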
+ indexService.getIndexSortSupplier().get(); + } - for (int shardId = 0; shardId < tmpImd.getNumberOfShards(); shardId++) { - indexMetaDataBuilder.primaryTerm(shardId, tmpImd.primaryTerm(shardId)); - } + // the context is only used for validation so it's fine to pass fake values for the shard id and the current + // timestamp + final QueryShardContext queryShardContext = indexService.newQueryShardContext(0, null, () -> 0L, null); - for (MappingMetaData mappingMd : mappingsMetaData.values()) { - indexMetaDataBuilder.putMapping(mappingMd); - } + for (Alias alias : request.aliases()) { + if (Strings.hasLength(alias.filter())) { + aliasValidator.validateAliasFilter(alias.name(), alias.filter(), queryShardContext, xContentRegistry); + } + } + for (AliasMetaData aliasMetaData : templatesAliases.values()) { + if (aliasMetaData.filter() != null) { + aliasValidator.validateAliasFilter(aliasMetaData.alias(), aliasMetaData.filter().uncompressed(), + queryShardContext, xContentRegistry); + } + } - for (AliasMetaData aliasMetaData : templatesAliases.values()) { - indexMetaDataBuilder.putAlias(aliasMetaData); - } - for (Alias alias : request.aliases()) { - AliasMetaData aliasMetaData = AliasMetaData.builder(alias.name()).filter(alias.filter()) - .indexRouting(alias.indexRouting()).searchRouting(alias.searchRouting()).build(); - indexMetaDataBuilder.putAlias(aliasMetaData); - } + // now, update the mappings with the actual source + Map mappingsMetaData = new HashMap<>(); + for (DocumentMapper mapper : mapperService.docMappers(true)) { + MappingMetaData mappingMd = new MappingMetaData(mapper); + mappingsMetaData.put(mapper.type(), mappingMd); + } - for (Map.Entry customEntry : customs.entrySet()) { - indexMetaDataBuilder.putCustom(customEntry.getKey(), customEntry.getValue()); - } + final IndexMetaData.Builder indexMetaDataBuilder = IndexMetaData.builder(request.index()) + .settings(actualIndexSettings) + .setRoutingNumShards(routingNumShards); - indexMetaDataBuilder.state(request.state()); + for (int shardId = 0; shardId < tmpImd.getNumberOfShards(); shardId++) { + indexMetaDataBuilder.primaryTerm(shardId, tmpImd.primaryTerm(shardId)); + } - final IndexMetaData indexMetaData; - try { - indexMetaData = indexMetaDataBuilder.build(); - } catch (Exception e) { - removalExtraInfo = "failed to build index metadata"; - throw e; - } + for (MappingMetaData mappingMd : mappingsMetaData.values()) { + indexMetaDataBuilder.putMapping(mappingMd); + } - indexService.getIndexEventListener().beforeIndexAddedToCluster(indexMetaData.getIndex(), - indexMetaData.getSettings()); + for (AliasMetaData aliasMetaData : templatesAliases.values()) { + indexMetaDataBuilder.putAlias(aliasMetaData); + } + for (Alias alias : request.aliases()) { + AliasMetaData aliasMetaData = AliasMetaData.builder(alias.name()).filter(alias.filter()) + .indexRouting(alias.indexRouting()).searchRouting(alias.searchRouting()).build(); + indexMetaDataBuilder.putAlias(aliasMetaData); + } - MetaData newMetaData = MetaData.builder(currentState.metaData()) - .put(indexMetaData, false) - .build(); + for (Map.Entry customEntry : customs.entrySet()) { + indexMetaDataBuilder.putCustom(customEntry.getKey(), customEntry.getValue()); + } - logger.info("[{}] creating index, cause [{}], templates {}, shards [{}]/[{}], mappings {}", - request.index(), request.cause(), templateNames, indexMetaData.getNumberOfShards(), - indexMetaData.getNumberOfReplicas(), mappings.keySet()); + indexMetaDataBuilder.state(request.state()); - ClusterBlocks.Builder blocks = 
ClusterBlocks.builder().blocks(currentState.blocks()); - if (!request.blocks().isEmpty()) { - for (ClusterBlock block : request.blocks()) { - blocks.addIndexBlock(request.index(), block); - } - } - blocks.updateBlocks(indexMetaData); + final IndexMetaData indexMetaData; + try { + indexMetaData = indexMetaDataBuilder.build(); + } catch (Exception e) { + removalExtraInfo = "failed to build index metadata"; + throw e; + } - ClusterState updatedState = ClusterState.builder(currentState).blocks(blocks).metaData(newMetaData).build(); + indexService.getIndexEventListener().beforeIndexAddedToCluster(indexMetaData.getIndex(), + indexMetaData.getSettings()); - if (request.state() == State.OPEN) { - RoutingTable.Builder routingTableBuilder = RoutingTable.builder(updatedState.routingTable()) - .addAsNew(updatedState.metaData().index(request.index())); - updatedState = allocationService.reroute( - ClusterState.builder(updatedState).routingTable(routingTableBuilder.build()).build(), - "index [" + request.index() + "] created"); - } - removalExtraInfo = "cleaning up after validating index on master"; - removalReason = IndexRemovalReason.NO_LONGER_ASSIGNED; - return updatedState; - } finally { - if (createdIndex != null) { - // Index was already partially created - need to clean up - indicesService.removeIndex(createdIndex, removalReason, removalExtraInfo); - } - } - } + MetaData newMetaData = MetaData.builder(currentState.metaData()) + .put(indexMetaData, false) + .build(); - @Override - public void onFailure(String source, Exception e) { - if (e instanceof ResourceAlreadyExistsException) { - logger.trace((Supplier) () -> new ParameterizedMessage("[{}] failed to create", request.index()), e); - } else { - logger.debug((Supplier) () -> new ParameterizedMessage("[{}] failed to create", request.index()), e); - } - super.onFailure(source, e); + logger.info("[{}] creating index, cause [{}], templates {}, shards [{}]/[{}], mappings {}", + request.index(), request.cause(), templateNames, indexMetaData.getNumberOfShards(), + indexMetaData.getNumberOfReplicas(), mappings.keySet()); + + ClusterBlocks.Builder blocks = ClusterBlocks.builder().blocks(currentState.blocks()); + if (!request.blocks().isEmpty()) { + for (ClusterBlock block : request.blocks()) { + blocks.addIndexBlock(request.index(), block); } - }); - } + } + blocks.updateBlocks(indexMetaData); + + ClusterState updatedState = ClusterState.builder(currentState).blocks(blocks).metaData(newMetaData).build(); - private List findTemplates(CreateIndexClusterStateUpdateRequest request, ClusterState state) throws IOException { - List templateMetadata = new ArrayList<>(); - for (ObjectCursor cursor : state.metaData().templates().values()) { - IndexTemplateMetaData metadata = cursor.value; - for (String template: metadata.patterns()) { - if (Regex.simpleMatch(template, request.index())) { - templateMetadata.add(metadata); - break; + if (request.state() == State.OPEN) { + RoutingTable.Builder routingTableBuilder = RoutingTable.builder(updatedState.routingTable()) + .addAsNew(updatedState.metaData().index(request.index())); + updatedState = allocationService.reroute( + ClusterState.builder(updatedState).routingTable(routingTableBuilder.build()).build(), + "index [" + request.index() + "] created"); + } + removalExtraInfo = "cleaning up after validating index on master"; + removalReason = IndexRemovalReason.NO_LONGER_ASSIGNED; + return updatedState; + } finally { + if (createdIndex != null) { + // Index was already partially created - need to clean up + 
indicesService.removeIndex(createdIndex, removalReason, removalExtraInfo); } } } - CollectionUtil.timSort(templateMetadata, Comparator.comparingInt(IndexTemplateMetaData::order).reversed()); - return templateMetadata; + @Override + public void onFailure(String source, Exception e) { + if (e instanceof ResourceAlreadyExistsException) { + logger.trace((Supplier) () -> new ParameterizedMessage("[{}] failed to create", request.index()), e); + } else { + logger.debug((Supplier) () -> new ParameterizedMessage("[{}] failed to create", request.index()), e); + } + super.onFailure(source, e); + } + + private List findTemplates(CreateIndexClusterStateUpdateRequest request, ClusterState state) throws IOException { + List templateMetadata = new ArrayList<>(); + for (ObjectCursor cursor : state.metaData().templates().values()) { + IndexTemplateMetaData metadata = cursor.value; + for (String template: metadata.patterns()) { + if (Regex.simpleMatch(template, request.index())) { + templateMetadata.add(metadata); + break; + } + } + } + + CollectionUtil.timSort(templateMetadata, Comparator.comparingInt(IndexTemplateMetaData::order).reversed()); + return templateMetadata; + } } private void validate(CreateIndexClusterStateUpdateRequest request, ClusterState state) { @@ -573,35 +611,14 @@ List getIndexSettingsValidationErrors(Settings settings) { static List validateShrinkIndex(ClusterState state, String sourceIndex, Set targetIndexMappingsTypes, String targetIndexName, Settings targetIndexSettings) { - if (state.metaData().hasIndex(targetIndexName)) { - throw new ResourceAlreadyExistsException(state.metaData().index(targetIndexName).getIndex()); - } - final IndexMetaData sourceMetaData = state.metaData().index(sourceIndex); - if (sourceMetaData == null) { - throw new IndexNotFoundException(sourceIndex); - } - // ensure index is read-only - if (state.blocks().indexBlocked(ClusterBlockLevel.WRITE, sourceIndex) == false) { - throw new IllegalStateException("index " + sourceIndex + " must be read-only to shrink index. use \"index.blocks.write=true\""); - } + IndexMetaData sourceMetaData = validateResize(state, sourceIndex, targetIndexMappingsTypes, targetIndexName, targetIndexSettings); + assert IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.exists(targetIndexSettings); + IndexMetaData.selectShrinkShards(0, sourceMetaData, IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.get(targetIndexSettings)); if (sourceMetaData.getNumberOfShards() == 1) { throw new IllegalArgumentException("can't shrink an index with only one shard"); } - - if ((targetIndexMappingsTypes.size() > 1 || - (targetIndexMappingsTypes.isEmpty() || targetIndexMappingsTypes.contains(MapperService.DEFAULT_MAPPING)) == false)) { - throw new IllegalArgumentException("mappings are not allowed when shrinking indices" + - ", all mappings are copied from the source index"); - } - - if (IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.exists(targetIndexSettings)) { - // this method applies all necessary checks ie. 
if the target shards are less than the source shards - // of if the source shards are divisible by the number of target shards - IndexMetaData.getRoutingFactor(sourceMetaData, IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.get(targetIndexSettings)); - } - // now check that index is all on one node final IndexRoutingTable table = state.routingTable().index(sourceIndex); Map nodesToNumRouting = new HashMap<>(); @@ -624,28 +641,82 @@ static List validateShrinkIndex(ClusterState state, String sourceIndex, return nodesToAllocateOn; } - static void prepareShrinkIndexSettings(ClusterState currentState, Set mappingKeys, Settings.Builder indexSettingsBuilder, Index shrinkFromIndex, String shrinkIntoName) { - final IndexMetaData sourceMetaData = currentState.metaData().index(shrinkFromIndex.getName()); + static void validateSplitIndex(ClusterState state, String sourceIndex, + Set targetIndexMappingsTypes, String targetIndexName, + Settings targetIndexSettings) { + IndexMetaData sourceMetaData = validateResize(state, sourceIndex, targetIndexMappingsTypes, targetIndexName, targetIndexSettings); + IndexMetaData.selectSplitShard(0, sourceMetaData, IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.get(targetIndexSettings)); + if (sourceMetaData.getCreationVersion().before(Version.V_6_0_0_alpha1)) { + // ensure we have a single type since this would make the splitting code considerably more complex + // and a 5.x index would not be splittable unless it has been shrunk before so rather opt out of the complexity + // since in 5.x we don't have a setting to artificially set the number of routing shards + throw new IllegalStateException("source index created version is too old to apply a split operation"); + } + + } + + static IndexMetaData validateResize(ClusterState state, String sourceIndex, + Set targetIndexMappingsTypes, String targetIndexName, + Settings targetIndexSettings) { + if (state.metaData().hasIndex(targetIndexName)) { + throw new ResourceAlreadyExistsException(state.metaData().index(targetIndexName).getIndex()); + } + final IndexMetaData sourceMetaData = state.metaData().index(sourceIndex); + if (sourceMetaData == null) { + throw new IndexNotFoundException(sourceIndex); + } + // ensure index is read-only + if (state.blocks().indexBlocked(ClusterBlockLevel.WRITE, sourceIndex) == false) { + throw new IllegalStateException("index " + sourceIndex + " must be read-only to resize index. use \"index.blocks.write=true\""); + } + + if ((targetIndexMappingsTypes.size() > 1 || + (targetIndexMappingsTypes.isEmpty() || targetIndexMappingsTypes.contains(MapperService.DEFAULT_MAPPING)) == false)) { + throw new IllegalArgumentException("mappings are not allowed when resizing indices" + + ", all mappings are copied from the source index"); + } + + if (IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.exists(targetIndexSettings)) { + // this method applies all necessary checks ie. 
if the target shards are less than the source shards + // of if the source shards are divisible by the number of target shards + IndexMetaData.getRoutingFactor(sourceMetaData.getNumberOfShards(), + IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.get(targetIndexSettings)); + } + return sourceMetaData; + } + + static void prepareResizeIndexSettings(ClusterState currentState, Set mappingKeys, Settings.Builder indexSettingsBuilder, + Index resizeSourceIndex, String resizeIntoName, ResizeType type) { + final IndexMetaData sourceMetaData = currentState.metaData().index(resizeSourceIndex.getName()); + if (type == ResizeType.SHRINK) { + final List nodesToAllocateOn = validateShrinkIndex(currentState, resizeSourceIndex.getName(), + mappingKeys, resizeIntoName, indexSettingsBuilder.build()); + indexSettingsBuilder + // we use "i.r.a.initial_recovery" rather than "i.r.a.require|include" since we want the replica to allocate right away + // once we are allocated. + .put(IndexMetaData.INDEX_ROUTING_INITIAL_RECOVERY_GROUP_SETTING.getKey() + "_id", + Strings.arrayToCommaDelimitedString(nodesToAllocateOn.toArray())) + // we only try once and then give up with a shrink index + .put("index.allocation.max_retries", 1) + // we add the legacy way of specifying it here for BWC. We can remove this once it's backported to 6.x + .put(IndexMetaData.INDEX_SHRINK_SOURCE_NAME.getKey(), resizeSourceIndex.getName()) + .put(IndexMetaData.INDEX_SHRINK_SOURCE_UUID.getKey(), resizeSourceIndex.getUUID()); + } else if (type == ResizeType.SPLIT) { + validateSplitIndex(currentState, resizeSourceIndex.getName(), mappingKeys, resizeIntoName, indexSettingsBuilder.build()); + } else { + throw new IllegalStateException("unknown resize type is " + type); + } - final List nodesToAllocateOn = validateShrinkIndex(currentState, shrinkFromIndex.getName(), - mappingKeys, shrinkIntoName, indexSettingsBuilder.build()); final Predicate sourceSettingsPredicate = (s) -> s.startsWith("index.similarity.") || s.startsWith("index.analysis.") || s.startsWith("index.sort."); indexSettingsBuilder - // we use "i.r.a.initial_recovery" rather than "i.r.a.require|include" since we want the replica to allocate right away - // once we are allocated. 
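// A minimal sketch of the shard-count rule behind IndexMetaData.getRoutingFactor for the shrink case,
// with hypothetical counts; an illustration only, not part of the patch. The source shard count must be
// a multiple of the target shard count so that whole groups of source shards map onto each target shard.
int sourceShards = 8;   // hypothetical source index
int targetShards = 2;   // hypothetical shrink target
if (sourceShards % targetShards != 0) {
    throw new IllegalArgumentException("the number of source shards [" + sourceShards
        + "] must be a multiple of the target shards [" + targetShards + "]");
}
int routingFactor = sourceShards / targetShards; // 4: four source shards collapse into each target shard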
- .put(IndexMetaData.INDEX_ROUTING_INITIAL_RECOVERY_GROUP_SETTING.getKey() + "_id", - Strings.arrayToCommaDelimitedString(nodesToAllocateOn.toArray())) - // we only try once and then give up with a shrink index - .put("index.allocation.max_retries", 1) // now copy all similarity / analysis / sort settings - this overrides all settings from the user unless they // wanna add extra settings .put(IndexMetaData.SETTING_VERSION_CREATED, sourceMetaData.getCreationVersion()) .put(IndexMetaData.SETTING_VERSION_UPGRADED, sourceMetaData.getUpgradedVersion()) .put(sourceMetaData.getSettings().filter(sourceSettingsPredicate)) .put(IndexMetaData.SETTING_ROUTING_PARTITION_SIZE, sourceMetaData.getRoutingPartitionSize()) - .put(IndexMetaData.INDEX_SHRINK_SOURCE_NAME.getKey(), shrinkFromIndex.getName()) - .put(IndexMetaData.INDEX_SHRINK_SOURCE_UUID.getKey(), shrinkFromIndex.getUUID()); + .put(IndexMetaData.INDEX_RESIZE_SOURCE_NAME.getKey(), resizeSourceIndex.getName()) + .put(IndexMetaData.INDEX_RESIZE_SOURCE_UUID.getKey(), resizeSourceIndex.getUUID()); } - } diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexStateService.java b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexStateService.java index 7f8a176243aa6..038c03f342a34 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexStateService.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexStateService.java @@ -24,9 +24,11 @@ import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.admin.indices.close.CloseIndexClusterStateUpdateRequest; import org.elasticsearch.action.admin.indices.open.OpenIndexClusterStateUpdateRequest; +import org.elasticsearch.action.support.ActiveShardsObserver; import org.elasticsearch.cluster.AckedClusterStateUpdateTask; import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.ack.ClusterStateUpdateResponse; +import org.elasticsearch.cluster.ack.OpenIndexClusterStateUpdateResponse; import org.elasticsearch.cluster.block.ClusterBlock; import org.elasticsearch.cluster.block.ClusterBlockLevel; import org.elasticsearch.cluster.block.ClusterBlocks; @@ -42,6 +44,7 @@ import org.elasticsearch.rest.RestStatus; import org.elasticsearch.snapshots.RestoreService; import org.elasticsearch.snapshots.SnapshotsService; +import org.elasticsearch.threadpool.ThreadPool; import java.util.ArrayList; import java.util.Arrays; @@ -62,16 +65,18 @@ public class MetaDataIndexStateService extends AbstractComponent { private final MetaDataIndexUpgradeService metaDataIndexUpgradeService; private final IndicesService indicesService; + private final ActiveShardsObserver activeShardsObserver; @Inject public MetaDataIndexStateService(Settings settings, ClusterService clusterService, AllocationService allocationService, MetaDataIndexUpgradeService metaDataIndexUpgradeService, - IndicesService indicesService) { + IndicesService indicesService, ThreadPool threadPool) { super(settings); this.indicesService = indicesService; this.clusterService = clusterService; this.allocationService = allocationService; this.metaDataIndexUpgradeService = metaDataIndexUpgradeService; + this.activeShardsObserver = new ActiveShardsObserver(settings, clusterService, threadPool); } public void closeIndex(final CloseIndexClusterStateUpdateRequest request, final ActionListener listener) { @@ -130,7 +135,25 @@ public ClusterState execute(ClusterState currentState) { }); } - public void openIndex(final OpenIndexClusterStateUpdateRequest request, 
final ActionListener listener) { + public void openIndex(final OpenIndexClusterStateUpdateRequest request, final ActionListener listener) { + onlyOpenIndex(request, ActionListener.wrap(response -> { + if (response.isAcknowledged()) { + String[] indexNames = Arrays.stream(request.indices()).map(Index::getName).toArray(String[]::new); + activeShardsObserver.waitForActiveShards(indexNames, request.waitForActiveShards(), request.ackTimeout(), + shardsAcknowledged -> { + if (shardsAcknowledged == false) { + logger.debug("[{}] indices opened, but the operation timed out while waiting for " + + "enough shards to be started.", Arrays.toString(indexNames)); + } + listener.onResponse(new OpenIndexClusterStateUpdateResponse(response.isAcknowledged(), shardsAcknowledged)); + }, listener::onFailure); + } else { + listener.onResponse(new OpenIndexClusterStateUpdateResponse(false, false)); + } + }, listener::onFailure)); + } + + private void onlyOpenIndex(final OpenIndexClusterStateUpdateRequest request, final ActionListener listener) { if (request.indices() == null || request.indices().length == 0) { throw new IllegalArgumentException("Index name is required"); } diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeService.java b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeService.java index 269657367dcfa..2ff5fd5c2b217 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeService.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeService.java @@ -18,13 +18,11 @@ */ package org.elasticsearch.cluster.metadata; -import com.carrotsearch.hppc.cursors.ObjectCursor; import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; import org.apache.lucene.analysis.Analyzer; import org.elasticsearch.Version; import org.elasticsearch.common.component.AbstractComponent; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.IndexScopedSettings; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.NamedXContentRegistry; @@ -35,7 +33,6 @@ import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.similarity.SimilarityService; import org.elasticsearch.indices.mapper.MapperRegistry; -import org.elasticsearch.plugins.Plugin; import java.util.AbstractMap; import java.util.Collection; @@ -59,7 +56,6 @@ public class MetaDataIndexUpgradeService extends AbstractComponent { private final IndexScopedSettings indexScopedSettings; private final UnaryOperator upgraders; - @Inject public MetaDataIndexUpgradeService(Settings settings, NamedXContentRegistry xContentRegistry, MapperRegistry mapperRegistry, IndexScopedSettings indexScopedSettings, Collection> indexMetaDataUpgraders) { diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataUpdateSettingsService.java b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataUpdateSettingsService.java index 653edcb9e89e8..abc0a4e8ea2de 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataUpdateSettingsService.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataUpdateSettingsService.java @@ -161,21 +161,20 @@ public void updateSettings(final UpdateSettingsClusterStateUpdateRequest request final Settings normalizedSettings = Settings.builder().put(request.settings()).normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX).build(); Settings.Builder 
settingsForClosedIndices = Settings.builder(); Settings.Builder settingsForOpenIndices = Settings.builder(); - Settings.Builder skipppedSettings = Settings.builder(); + final Set skippedSettings = new HashSet<>(); indexScopedSettings.validate(normalizedSettings); // never allow to change the number of shards - for (Map.Entry entry : normalizedSettings.getAsMap().entrySet()) { - Setting setting = indexScopedSettings.get(entry.getKey()); + for (String key : normalizedSettings.keySet()) { + Setting setting = indexScopedSettings.get(key); assert setting != null; // we already validated the normalized settings - settingsForClosedIndices.put(entry.getKey(), entry.getValue()); + settingsForClosedIndices.copy(key, normalizedSettings); if (setting.isDynamic()) { - settingsForOpenIndices.put(entry.getKey(), entry.getValue()); + settingsForOpenIndices.copy(key, normalizedSettings); } else { - skipppedSettings.put(entry.getKey(), entry.getValue()); + skippedSettings.add(key); } } - final Settings skippedSettigns = skipppedSettings.build(); final Settings closedSettings = settingsForClosedIndices.build(); final Settings openSettings = settingsForOpenIndices.build(); final boolean preserveExisting = request.isPreserveExisting(); @@ -210,12 +209,9 @@ public ClusterState execute(ClusterState currentState) { } } - if (!skippedSettigns.isEmpty() && !openIndices.isEmpty()) { + if (!skippedSettings.isEmpty() && !openIndices.isEmpty()) { throw new IllegalArgumentException(String.format(Locale.ROOT, - "Can't update non dynamic settings [%s] for open indices %s", - skippedSettigns.getAsMap().keySet(), - openIndices - )); + "Can't update non dynamic settings [%s] for open indices %s", skippedSettings, openIndices)); } int updatedNumberOfReplicas = openSettings.getAsInt(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, -1); diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/RepositoriesMetaData.java b/core/src/main/java/org/elasticsearch/cluster/metadata/RepositoriesMetaData.java index 67909bff61449..6aa2d83fa8d47 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/RepositoriesMetaData.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/RepositoriesMetaData.java @@ -20,15 +20,12 @@ package org.elasticsearch.cluster.metadata; import org.elasticsearch.ElasticsearchParseException; -import org.elasticsearch.cluster.AbstractDiffable; import org.elasticsearch.cluster.AbstractNamedDiffable; -import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.NamedDiff; import org.elasticsearch.cluster.metadata.MetaData.Custom; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.settings.loader.SettingsLoader; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; @@ -151,7 +148,7 @@ public static RepositoriesMetaData fromXContent(XContentParser parser) throws IO if (parser.nextToken() != XContentParser.Token.START_OBJECT) { throw new ElasticsearchParseException("failed to parse repository [{}], incompatible params", name); } - settings = Settings.builder().put(SettingsLoader.Helper.loadNestedFromMap(parser.mapOrdered())).build(); + settings = Settings.fromXContent(parser); } else { throw new ElasticsearchParseException("failed to parse repository [{}], unknown field [{}]", name, currentFieldName); } diff --git 
a/core/src/main/java/org/elasticsearch/cluster/metadata/TemplateUpgradeService.java b/core/src/main/java/org/elasticsearch/cluster/metadata/TemplateUpgradeService.java index 5f3b9cdf2da3d..c0d8d1ceab6d5 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/TemplateUpgradeService.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/TemplateUpgradeService.java @@ -116,60 +116,23 @@ public void clusterChanged(ClusterChangedEvent event) { return; } - if (shouldLocalNodeUpdateTemplates(state.nodes()) == false) { + if (state.nodes().isLocalNodeElectedMaster() == false) { return; } lastTemplateMetaData = templates; Optional, Set>> changes = calculateTemplateChanges(templates); if (changes.isPresent()) { - logger.info("Starting template upgrade to version {}, {} templates will be updated and {} will be removed", - Version.CURRENT, - changes.get().v1().size(), - changes.get().v2().size()); if (updatesInProgress.compareAndSet(0, changes.get().v1().size() + changes.get().v2().size())) { + logger.info("Starting template upgrade to version {}, {} templates will be updated and {} will be removed", + Version.CURRENT, + changes.get().v1().size(), + changes.get().v2().size()); threadPool.generic().execute(() -> updateTemplates(changes.get().v1(), changes.get().v2())); } } } - /** - * Checks if the current node should update the templates - * - * If the master has the newest verison in the cluster - it will be dedicated template updater. - * Otherwise the node with the highest id among nodes with the highest version should update the templates - */ - boolean shouldLocalNodeUpdateTemplates(DiscoveryNodes nodes) { - DiscoveryNode localNode = nodes.getLocalNode(); - // Only data and master nodes should update the template - if (localNode.isDataNode() || localNode.isMasterNode()) { - DiscoveryNode masterNode = nodes.getMasterNode(); - if (masterNode == null) { - return false; - } - Version maxVersion = nodes.getLargestNonClientNodeVersion(); - if (maxVersion.equals(masterNode.getVersion())) { - // If the master has the latest version - we will allow it to handle the update - return nodes.isLocalNodeElectedMaster(); - } else { - if (maxVersion.equals(localNode.getVersion()) == false) { - // The localhost node doesn't have the latest version - not going to update - return false; - } - for (ObjectCursor node : nodes.getMasterAndDataNodes().values()) { - if (node.value.getVersion().equals(maxVersion) && node.value.getId().compareTo(localNode.getId()) > 0) { - // We have a node with higher id then mine - it should update - return false; - } - } - // We have the highest version and highest id - we should perform the update - return true; - } - } else { - return false; - } - } - void updateTemplates(Map changes, Set deletions) { for (Map.Entry change : changes.entrySet()) { PutIndexTemplateRequest request = diff --git a/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNode.java b/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNode.java index 283982446dd08..f6f18540825cc 100644 --- a/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNode.java +++ b/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNode.java @@ -84,7 +84,7 @@ public static boolean isIngestNode(Settings settings) { *
* Note: if the version of the node is unknown {@link Version#minimumCompatibilityVersion()} should be used for the current * version. it corresponds to the minimum version this elasticsearch version can communicate with. If a higher version is used - * the node might not be able to communicate with the remove node. After initial handshakes node versions will be discovered + * the node might not be able to communicate with the remote node. After initial handshakes node versions will be discovered * and updated. *
* @@ -101,7 +101,7 @@ public DiscoveryNode(final String id, TransportAddress address, Version version) *
* Note: if the version of the node is unknown {@link Version#minimumCompatibilityVersion()} should be used for the current * version. it corresponds to the minimum version this elasticsearch version can communicate with. If a higher version is used - * the node might not be able to communicate with the remove node. After initial handshakes node versions will be discovered + * the node might not be able to communicate with the remote node. After initial handshakes node versions will be discovered * and updated. *
* @@ -121,7 +121,7 @@ public DiscoveryNode(String id, TransportAddress address, Map at *
* Note: if the version of the node is unknown {@link Version#minimumCompatibilityVersion()} should be used for the current * version. it corresponds to the minimum version this elasticsearch version can communicate with. If a higher version is used - * the node might not be able to communicate with the remove node. After initial handshakes node versions will be discovered + * the node might not be able to communicate with the remote node. After initial handshakes node versions will be discovered * and updated. *
* @@ -143,7 +143,7 @@ public DiscoveryNode(String nodeName, String nodeId, TransportAddress address, *
* Note: if the version of the node is unknown {@link Version#minimumCompatibilityVersion()} should be used for the current * version. it corresponds to the minimum version this elasticsearch version can communicate with. If a higher version is used - * the node might not be able to communicate with the remove node. After initial handshakes node versions will be discovered + * the node might not be able to communicate with the remote node. After initial handshakes node versions will be discovered * and updated. *
* @@ -189,9 +189,8 @@ public DiscoveryNode(String nodeName, String nodeId, String ephemeralId, String /** Creates a DiscoveryNode representing the local node. */ public static DiscoveryNode createLocal(Settings settings, TransportAddress publishAddress, String nodeId) { - Map attributes = new HashMap<>(Node.NODE_ATTRIBUTES.get(settings).getAsMap()); + Map attributes = Node.NODE_ATTRIBUTES.getAsMap(settings); Set roles = getRolesFromSettings(settings); - return new DiscoveryNode(Node.NODE_NAME_SETTING.get(settings), nodeId, publishAddress, attributes, roles, Version.CURRENT); } @@ -270,7 +269,7 @@ public String getId() { } /** - * The unique ephemeral id of the node. Ephemeral ids are meant to be attached the the life span + * The unique ephemeral id of the node. Ephemeral ids are meant to be attached the life span * of a node process. When ever a node is restarted, it's ephemeral id is required to change (while it's {@link #getId()} * will be read from the data folder and will remain the same across restarts). Since all node attributes and addresses * are maintained during the life span of a node process, we can (and are) using the ephemeralId in diff --git a/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNodeFilters.java b/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNodeFilters.java index f7b12762bee01..6b15d1f24581d 100644 --- a/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNodeFilters.java +++ b/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNodeFilters.java @@ -29,7 +29,7 @@ import java.util.HashMap; import java.util.Map; -import java.util.function.Consumer; +import java.util.function.BiConsumer; public class DiscoveryNodeFilters { @@ -43,15 +43,10 @@ public enum OpType { * "_ip", "_host_ip", and "_publish_ip" and ensuring each of their comma separated values * that has no wildcards is a valid IP address. 
*/ - public static final Consumer IP_VALIDATOR = (settings) -> { - Map settingsMap = settings.getAsMap(); - for (Map.Entry entry : settingsMap.entrySet()) { - String propertyKey = entry.getKey(); - if (entry.getValue() == null) { - continue; // this setting gets reset - } - if ("_ip".equals(propertyKey) || "_host_ip".equals(propertyKey) || "_publish_ip".equals(propertyKey)) { - for (String value : Strings.tokenizeToStringArray(entry.getValue(), ",")) { + public static final BiConsumer IP_VALIDATOR = (propertyKey, rawValue) -> { + if (rawValue != null) { + if (propertyKey.endsWith("._ip") || propertyKey.endsWith("._host_ip") || propertyKey.endsWith("_publish_ip")) { + for (String value : Strings.tokenizeToStringArray(rawValue, ",")) { if (Regex.isSimpleMatchPattern(value) == false && InetAddresses.isInetAddress(value) == false) { throw new IllegalArgumentException("invalid IP address [" + value + "] for [" + propertyKey + "]"); } @@ -60,10 +55,6 @@ public enum OpType { } }; - public static DiscoveryNodeFilters buildFromSettings(OpType opType, String prefix, Settings settings) { - return buildFromKeyValue(opType, settings.getByPrefix(prefix).getAsMap()); - } - public static DiscoveryNodeFilters buildFromKeyValue(OpType opType, Map filters) { Map bFilters = new HashMap<>(); for (Map.Entry entry : filters.entrySet()) { diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/IndexRoutingTable.java b/core/src/main/java/org/elasticsearch/cluster/routing/IndexRoutingTable.java index 5a0bd0d426313..5a4e0c78414dd 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/IndexRoutingTable.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/IndexRoutingTable.java @@ -411,7 +411,7 @@ private Builder initializeEmpty(IndexMetaData indexMetaData, UnassignedInfo unas if (indexMetaData.inSyncAllocationIds(shardNumber).isEmpty() == false) { // we have previous valid copies for this shard. 
use them for recovery primaryRecoverySource = StoreRecoverySource.EXISTING_STORE_INSTANCE; - } else if (indexMetaData.getMergeSourceIndex() != null) { + } else if (indexMetaData.getResizeSourceIndex() != null) { // this is a new index but the initial shards should merged from another index primaryRecoverySource = LocalShardsRecoverySource.INSTANCE; } else { diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/IndexShardRoutingTable.java b/core/src/main/java/org/elasticsearch/cluster/routing/IndexShardRoutingTable.java index 8ed06736b6bb3..a2d015a0dd13f 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/IndexShardRoutingTable.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/IndexShardRoutingTable.java @@ -29,17 +29,21 @@ import org.elasticsearch.common.util.set.Sets; import org.elasticsearch.index.Index; import org.elasticsearch.index.shard.ShardId; +import org.elasticsearch.node.ResponseCollectorService; import java.io.IOException; import java.util.ArrayList; import java.util.Arrays; import java.util.Collections; +import java.util.Comparator; +import java.util.HashMap; import java.util.HashSet; import java.util.Iterator; import java.util.LinkedList; import java.util.List; import java.util.Locale; import java.util.Map; +import java.util.Optional; import java.util.Set; import static java.util.Collections.emptyMap; @@ -262,88 +266,179 @@ public ShardIterator activeInitializingShardsIt(int seed) { } /** - * Returns true if no primaries are active or initializing for this shard + * Returns an iterator over active and initializing shards, ordered by the adaptive replica + * selection forumla. Making sure though that its random within the active shards of the same + * (or missing) rank, and initializing shards are the last to iterate through. */ - private boolean noPrimariesActive() { - if (!primaryAsList.isEmpty() && !primaryAsList.get(0).active() && !primaryAsList.get(0).initializing()) { - return true; + public ShardIterator activeInitializingShardsRankedIt(@Nullable ResponseCollectorService collector, + @Nullable Map nodeSearchCounts) { + final int seed = shuffler.nextSeed(); + if (allInitializingShards.isEmpty()) { + return new PlainShardIterator(shardId, + rankShardsAndUpdateStats(shuffler.shuffle(activeShards, seed), collector, nodeSearchCounts)); } - return false; + + ArrayList ordered = new ArrayList<>(activeShards.size() + allInitializingShards.size()); + List rankedActiveShards = + rankShardsAndUpdateStats(shuffler.shuffle(activeShards, seed), collector, nodeSearchCounts); + ordered.addAll(rankedActiveShards); + List rankedInitializingShards = + rankShardsAndUpdateStats(allInitializingShards, collector, nodeSearchCounts); + ordered.addAll(rankedInitializingShards); + return new PlainShardIterator(shardId, ordered); } - /** - * Returns an iterator only on the primary shard. 
- */ - public ShardIterator primaryShardIt() { - return new PlainShardIterator(shardId, primaryAsList); + private static Set getAllNodeIds(final List shards) { + final Set nodeIds = new HashSet<>(); + for (ShardRouting shard : shards) { + nodeIds.add(shard.currentNodeId()); + } + return nodeIds; } - public ShardIterator primaryActiveInitializingShardIt() { - if (noPrimariesActive()) { - return new PlainShardIterator(shardId, NO_SHARDS); + private static Map> + getNodeStats(final Set nodeIds, final ResponseCollectorService collector) { + + final Map> nodeStats = new HashMap<>(nodeIds.size()); + for (String nodeId : nodeIds) { + nodeStats.put(nodeId, collector.getNodeStatistics(nodeId)); } - return primaryShardIt(); + return nodeStats; } - public ShardIterator primaryFirstActiveInitializingShardsIt() { - ArrayList ordered = new ArrayList<>(activeShards.size() + allInitializingShards.size()); - // fill it in a randomized fashion - for (ShardRouting shardRouting : shuffler.shuffle(activeShards)) { - ordered.add(shardRouting); - if (shardRouting.primary()) { - // switch, its the matching node id - ordered.set(ordered.size() - 1, ordered.get(0)); - ordered.set(0, shardRouting); - } + private static Map rankNodes(final Map> nodeStats, + final Map nodeSearchCounts) { + final Map nodeRanks = new HashMap<>(nodeStats.size()); + for (Map.Entry> entry : nodeStats.entrySet()) { + Optional maybeStats = entry.getValue(); + maybeStats.ifPresent(stats -> { + final String nodeId = entry.getKey(); + nodeRanks.put(nodeId, stats.rank(nodeSearchCounts.getOrDefault(nodeId, 1L))); + }); } - // no need to worry about primary first here..., its temporal - if (!allInitializingShards.isEmpty()) { - ordered.addAll(allInitializingShards); + return nodeRanks; + } + + /** + * Adjust the for all other nodes' collected stats. In the original ranking paper there is no need to adjust other nodes' stats because + * Cassandra sends occasional requests to all copies of the data, so their stats will be updated during that broadcast phase. In + * Elasticsearch, however, we do not have that sort of broadcast-to-all behavior. In order to prevent a node that gets a high score and + * then never gets any more requests, we must ensure it eventually returns to a more normal score and can be a candidate for serving + * requests. + * + * This adjustment takes the "winning" node's statistics and adds the average of those statistics with each non-winning node. Let's say + * the winning node had a queue size of 10 and a non-winning node had a queue of 18. The average queue size is (10 + 18) / 2 = 14 so the + * non-winning node will have statistics added for a queue size of 14. This is repeated for the response time and service times as well. 
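 *
 * A minimal worked version of that adjustment in code, using the hypothetical queue sizes above
 * (an illustration only, not part of the patch):
 *
 *     int winnerQueue = 10, otherQueue = 18;
 *     int adjustedOtherQueue = (winnerQueue + otherQueue) / 2; // 14
 *
 * so a node that lost this round drifts back toward the winner's statistics and remains a realistic
 * candidate for later requests.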
+ */ + private static void adjustStats(final ResponseCollectorService collector, + final Map> nodeStats, + final String minNodeId, + final ResponseCollectorService.ComputedNodeStats minStats) { + if (minNodeId != null) { + for (Map.Entry> entry : nodeStats.entrySet()) { + final String nodeId = entry.getKey(); + final Optional maybeStats = entry.getValue(); + if (nodeId.equals(minNodeId) == false && maybeStats.isPresent()) { + final ResponseCollectorService.ComputedNodeStats stats = maybeStats.get(); + final int updatedQueue = (minStats.queueSize + stats.queueSize) / 2; + final long updatedResponse = (long) (minStats.responseTime + stats.responseTime) / 2; + final long updatedService = (long) (minStats.serviceTime + stats.serviceTime) / 2; + collector.addNodeStatistics(nodeId, updatedQueue, updatedResponse, updatedService); + } + } } - return new PlainShardIterator(shardId, ordered); } - public ShardIterator replicaActiveInitializingShardIt() { - // If the primaries are unassigned, return an empty list (there aren't - // any replicas to query anyway) - if (noPrimariesActive()) { - return new PlainShardIterator(shardId, NO_SHARDS); + private static List rankShardsAndUpdateStats(List shards, final ResponseCollectorService collector, + final Map nodeSearchCounts) { + if (collector == null || nodeSearchCounts == null || shards.size() <= 1) { + return shards; } - LinkedList ordered = new LinkedList<>(); - for (ShardRouting replica : shuffler.shuffle(replicas)) { - if (replica.active()) { - ordered.addFirst(replica); - } else if (replica.initializing()) { - ordered.addLast(replica); + // Retrieve which nodes we can potentially send the query to + final Set nodeIds = getAllNodeIds(shards); + final int nodeCount = nodeIds.size(); + + final Map> nodeStats = getNodeStats(nodeIds, collector); + + // Retrieve all the nodes the shards exist on + final Map nodeRanks = rankNodes(nodeStats, nodeSearchCounts); + + // sort all shards based on the shard rank + ArrayList sortedShards = new ArrayList<>(shards); + Collections.sort(sortedShards, new NodeRankComparator(nodeRanks)); + + // adjust the non-winner nodes' stats so they will get a chance to receive queries + if (sortedShards.size() > 1) { + ShardRouting minShard = sortedShards.get(0); + // If the winning shard is not started we are ranking initializing + // shards, don't bother to do adjustments + if (minShard.started()) { + String minNodeId = minShard.currentNodeId(); + Optional maybeMinStats = nodeStats.get(minNodeId); + if (maybeMinStats.isPresent()) { + adjustStats(collector, nodeStats, minNodeId, maybeMinStats.get()); + // Increase the number of searches for the "winning" node by one. + // Note that this doesn't actually affect the "real" counts, instead + // it only affects the captured node search counts, which is + // captured once for each query in TransportSearchAction + nodeSearchCounts.compute(minNodeId, (id, conns) -> conns == null ? 
1 : conns + 1); + } } } - return new PlainShardIterator(shardId, ordered); + + return sortedShards; } - public ShardIterator replicaFirstActiveInitializingShardsIt() { - // If the primaries are unassigned, return an empty list (there aren't - // any replicas to query anyway) - if (noPrimariesActive()) { - return new PlainShardIterator(shardId, NO_SHARDS); + private static class NodeRankComparator implements Comparator { + private final Map nodeRanks; + + NodeRankComparator(Map nodeRanks) { + this.nodeRanks = nodeRanks; } - ArrayList ordered = new ArrayList<>(activeShards.size() + allInitializingShards.size()); - // fill it in a randomized fashion with the active replicas - for (ShardRouting replica : shuffler.shuffle(replicas)) { - if (replica.active()) { - ordered.add(replica); + @Override + public int compare(ShardRouting s1, ShardRouting s2) { + if (s1.currentNodeId().equals(s2.currentNodeId())) { + // these shards on the same node + return 0; + } + Double shard1rank = nodeRanks.get(s1.currentNodeId()); + Double shard2rank = nodeRanks.get(s2.currentNodeId()); + if (shard1rank != null) { + if (shard2rank != null) { + return shard1rank.compareTo(shard2rank); + } else { + // place non-nulls after null values + return 1; + } + } else { + if (shard2rank != null) { + // place nulls before non-null values + return -1; + } else { + // Both nodes do not have stats, they are equal + return 0; + } } } + } - // Add the primary shard - ordered.add(primary); - - // Add initializing shards last - if (!allInitializingShards.isEmpty()) { - ordered.addAll(allInitializingShards); + /** + * Returns true if no primaries are active or initializing for this shard + */ + private boolean noPrimariesActive() { + if (!primaryAsList.isEmpty() && !primaryAsList.get(0).active() && !primaryAsList.get(0).initializing()) { + return true; } - return new PlainShardIterator(shardId, ordered); + return false; + } + + /** + * Returns an iterator only on the primary shard. 
+ */ + public ShardIterator primaryShardIt() { + return new PlainShardIterator(shardId, primaryAsList); } public ShardIterator onlyNodeActiveInitializingShardsIt(String nodeId) { diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/OperationRouting.java b/core/src/main/java/org/elasticsearch/cluster/routing/OperationRouting.java index 8a88ee1751a14..005600ceb4431 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/OperationRouting.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/OperationRouting.java @@ -28,10 +28,12 @@ import org.elasticsearch.common.Strings; import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.settings.ClusterSettings; +import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.IndexNotFoundException; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.index.shard.ShardNotFoundException; +import org.elasticsearch.node.ResponseCollectorService; import java.util.ArrayList; import java.util.Arrays; @@ -43,13 +45,24 @@ public class OperationRouting extends AbstractComponent { + public static final Setting USE_ADAPTIVE_REPLICA_SELECTION_SETTING = + Setting.boolSetting("cluster.routing.use_adaptive_replica_selection", true, + Setting.Property.Dynamic, Setting.Property.NodeScope); + private String[] awarenessAttributes; + private boolean useAdaptiveReplicaSelection; public OperationRouting(Settings settings, ClusterSettings clusterSettings) { super(settings); this.awarenessAttributes = AwarenessAllocationDecider.CLUSTER_ROUTING_ALLOCATION_AWARENESS_ATTRIBUTE_SETTING.get(settings); + this.useAdaptiveReplicaSelection = USE_ADAPTIVE_REPLICA_SELECTION_SETTING.get(settings); clusterSettings.addSettingsUpdateConsumer(AwarenessAllocationDecider.CLUSTER_ROUTING_ALLOCATION_AWARENESS_ATTRIBUTE_SETTING, this::setAwarenessAttributes); + clusterSettings.addSettingsUpdateConsumer(USE_ADAPTIVE_REPLICA_SELECTION_SETTING, this::setUseAdaptiveReplicaSelection); + } + + void setUseAdaptiveReplicaSelection(boolean useAdaptiveReplicaSelection) { + this.useAdaptiveReplicaSelection = useAdaptiveReplicaSelection; } private void setAwarenessAttributes(String[] awarenessAttributes) { @@ -61,19 +74,33 @@ public ShardIterator indexShards(ClusterState clusterState, String index, String } public ShardIterator getShards(ClusterState clusterState, String index, String id, @Nullable String routing, @Nullable String preference) { - return preferenceActiveShardIterator(shards(clusterState, index, id, routing), clusterState.nodes().getLocalNodeId(), clusterState.nodes(), preference); + return preferenceActiveShardIterator(shards(clusterState, index, id, routing), clusterState.nodes().getLocalNodeId(), clusterState.nodes(), preference, null, null); } public ShardIterator getShards(ClusterState clusterState, String index, int shardId, @Nullable String preference) { final IndexShardRoutingTable indexShard = clusterState.getRoutingTable().shardRoutingTable(index, shardId); - return preferenceActiveShardIterator(indexShard, clusterState.nodes().getLocalNodeId(), clusterState.nodes(), preference); + return preferenceActiveShardIterator(indexShard, clusterState.nodes().getLocalNodeId(), clusterState.nodes(), preference, null, null); + } + + public GroupShardsIterator searchShards(ClusterState clusterState, + String[] concreteIndices, + @Nullable Map> routing, + @Nullable String preference) { + return searchShards(clusterState, concreteIndices, 
routing, preference, null, null); } - public GroupShardsIterator searchShards(ClusterState clusterState, String[] concreteIndices, @Nullable Map> routing, @Nullable String preference) { + + public GroupShardsIterator searchShards(ClusterState clusterState, + String[] concreteIndices, + @Nullable Map> routing, + @Nullable String preference, + @Nullable ResponseCollectorService collectorService, + @Nullable Map nodeCounts) { final Set shards = computeTargetedShards(clusterState, concreteIndices, routing); final Set set = new HashSet<>(shards.size()); for (IndexShardRoutingTable shard : shards) { - ShardIterator iterator = preferenceActiveShardIterator(shard, clusterState.nodes().getLocalNodeId(), clusterState.nodes(), preference); + ShardIterator iterator = preferenceActiveShardIterator(shard, + clusterState.nodes().getLocalNodeId(), clusterState.nodes(), preference, collectorService, nodeCounts); if (iterator != null) { set.add(iterator); } @@ -107,10 +134,17 @@ private Set computeTargetedShards(ClusterState clusterSt return set; } - private ShardIterator preferenceActiveShardIterator(IndexShardRoutingTable indexShard, String localNodeId, DiscoveryNodes nodes, @Nullable String preference) { + private ShardIterator preferenceActiveShardIterator(IndexShardRoutingTable indexShard, String localNodeId, + DiscoveryNodes nodes, @Nullable String preference, + @Nullable ResponseCollectorService collectorService, + @Nullable Map nodeCounts) { if (preference == null || preference.isEmpty()) { if (awarenessAttributes.length == 0) { - return indexShard.activeInitializingShardsRandomIt(); + if (useAdaptiveReplicaSelection) { + return indexShard.activeInitializingShardsRankedIt(collectorService, nodeCounts); + } else { + return indexShard.activeInitializingShardsRandomIt(); + } } else { return indexShard.preferAttributesActiveInitializingShardsIt(awarenessAttributes, nodes); } @@ -141,7 +175,11 @@ private ShardIterator preferenceActiveShardIterator(IndexShardRoutingTable index // no more preference if (index == -1 || index == preference.length() - 1) { if (awarenessAttributes.length == 0) { - return indexShard.activeInitializingShardsRandomIt(); + if (useAdaptiveReplicaSelection) { + return indexShard.activeInitializingShardsRankedIt(collectorService, nodeCounts); + } else { + return indexShard.activeInitializingShardsRandomIt(); + } } else { return indexShard.preferAttributesActiveInitializingShardsIt(awarenessAttributes, nodes); } @@ -160,14 +198,6 @@ private ShardIterator preferenceActiveShardIterator(IndexShardRoutingTable index return indexShard.preferNodeActiveInitializingShardsIt(nodesIds); case LOCAL: return indexShard.preferNodeActiveInitializingShardsIt(Collections.singleton(localNodeId)); - case PRIMARY: - return indexShard.primaryActiveInitializingShardIt(); - case REPLICA: - return indexShard.replicaActiveInitializingShardIt(); - case PRIMARY_FIRST: - return indexShard.primaryFirstActiveInitializingShardsIt(); - case REPLICA_FIRST: - return indexShard.replicaFirstActiveInitializingShardsIt(); case ONLY_LOCAL: return indexShard.onlyNodeActiveInitializingShardsIt(localNodeId); case ONLY_NODES: @@ -229,7 +259,7 @@ public ShardId shardId(ClusterState clusterState, String index, String id, @Null return new ShardId(indexMetaData.getIndex(), generateShardId(indexMetaData, id, routing)); } - static int generateShardId(IndexMetaData indexMetaData, @Nullable String id, @Nullable String routing) { + public static int generateShardId(IndexMetaData indexMetaData, @Nullable String id, @Nullable String 
routing) { final String effectiveRouting; final int partitionOffset; diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/Preference.java b/core/src/main/java/org/elasticsearch/cluster/routing/Preference.java index d4685d7aeadc1..9a55a99a51ca8 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/Preference.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/Preference.java @@ -39,26 +39,6 @@ public enum Preference { */ LOCAL("_local"), - /** - * Route to primary shards - */ - PRIMARY("_primary"), - - /** - * Route to replica shards - */ - REPLICA("_replica"), - - /** - * Route to primary shards first - */ - PRIMARY_FIRST("_primary_first"), - - /** - * Route to replica shards first - */ - REPLICA_FIRST("_replica_first"), - /** * Route to the local shard only */ @@ -97,16 +77,6 @@ public static Preference parse(String preference) { return PREFER_NODES; case "_local": return LOCAL; - case "_primary": - return PRIMARY; - case "_replica": - return REPLICA; - case "_primary_first": - case "_primaryFirst": - return PRIMARY_FIRST; - case "_replica_first": - case "_replicaFirst": - return REPLICA_FIRST; case "_only_local": case "_onlyLocal": return ONLY_LOCAL; diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/RecoverySource.java b/core/src/main/java/org/elasticsearch/cluster/routing/RecoverySource.java index 32afad99f2764..ff7aab4a25622 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/RecoverySource.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/RecoverySource.java @@ -24,6 +24,7 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.snapshots.Snapshot; @@ -38,7 +39,7 @@ * - {@link SnapshotRecoverySource} recovery from a snapshot * - {@link LocalShardsRecoverySource} recovery from other shards of another index on the same node */ -public abstract class RecoverySource implements Writeable, ToXContent { +public abstract class RecoverySource implements Writeable, ToXContentObject { @Override public final XContentBuilder toXContent(XContentBuilder builder, ToXContent.Params params) throws IOException { diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/RoutingNodes.java b/core/src/main/java/org/elasticsearch/cluster/routing/RoutingNodes.java index 8268b98f34dc2..c5f0cb82febba 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/RoutingNodes.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/RoutingNodes.java @@ -748,18 +748,6 @@ private void assignedShardsRemove(ShardRouting shard) { assert false : "No shard found to remove"; } - private ShardRouting reinitShadowPrimary(ShardRouting candidate) { - if (candidate.relocating()) { - cancelRelocation(candidate); - } - ShardRouting reinitializedShard = candidate.reinitializePrimaryShard(); - updateAssigned(candidate, reinitializedShard); - inactivePrimaryCount++; - inactiveShardCount++; - addRecovery(reinitializedShard); - return reinitializedShard; - } - private ShardRouting reinitReplica(ShardRouting shard) { assert shard.primary() == false : "shard must be a replica: " + shard; assert shard.initializing() : "can only reinitialize an initializing replica: " + shard; diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/ShardRouting.java 
b/core/src/main/java/org/elasticsearch/cluster/routing/ShardRouting.java index 8241521a16225..be1213ad134f1 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/ShardRouting.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/ShardRouting.java @@ -26,7 +26,8 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.index.Index; import org.elasticsearch.index.shard.ShardId; @@ -39,7 +40,7 @@ * {@link ShardRouting} immutably encapsulates information about shard * routings like id, state, version, etc. */ -public final class ShardRouting implements Writeable, ToXContent { +public final class ShardRouting implements Writeable, ToXContentObject { /** * Used if shard size is not available @@ -383,17 +384,6 @@ public ShardRouting removeRelocationSource() { AllocationId.finishRelocation(allocationId), expectedShardSize); } - /** - * Moves the primary shard from started to initializing - */ - public ShardRouting reinitializePrimaryShard() { - assert state == ShardRoutingState.STARTED : this; - assert primary : this; - return new ShardRouting(shardId, currentNodeId, null, primary, ShardRoutingState.INITIALIZING, - StoreRecoverySource.EXISTING_STORE_INSTANCE, new UnassignedInfo(UnassignedInfo.Reason.REINITIALIZED, null), - allocationId, UNAVAILABLE_EXPECTED_SHARD_SIZE); - } - /** * Reinitializes a replica shard, giving it a fresh allocation id */ diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/UnassignedInfo.java b/core/src/main/java/org/elasticsearch/cluster/routing/UnassignedInfo.java index 3726bac781e3c..a543f4c3d3b3e 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/UnassignedInfo.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/UnassignedInfo.java @@ -20,6 +20,7 @@ package org.elasticsearch.cluster.routing; import org.elasticsearch.ExceptionsHelper; +import org.elasticsearch.Version; import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.routing.allocation.decider.Decision; @@ -33,7 +34,8 @@ import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; @@ -43,12 +45,12 @@ /** * Holds additional information as to why the shard is in unassigned state. 
*/ -public final class UnassignedInfo implements ToXContent, Writeable { +public final class UnassignedInfo implements ToXContentFragment, Writeable { public static final FormatDateTimeFormatter DATE_TIME_FORMATTER = Joda.forPattern("dateOptionalTime"); public static final Setting INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING = - Setting.timeSetting("index.unassigned.node_left.delayed_timeout", TimeValue.timeValueMinutes(1), Property.Dynamic, + Setting.positiveTimeSetting("index.unassigned.node_left.delayed_timeout", TimeValue.timeValueMinutes(1), Property.Dynamic, Property.IndexScope); /** * Reason why the shard is in unassigned state. @@ -112,7 +114,11 @@ public enum Reason { /** * Unassigned after forcing an empty primary */ - FORCED_EMPTY_PRIMARY + FORCED_EMPTY_PRIMARY, + /** + * Forced manually to allocate + */ + MANUAL_ALLOCATION } /** @@ -261,7 +267,11 @@ public UnassignedInfo(StreamInput in) throws IOException { } public void writeTo(StreamOutput out) throws IOException { - out.writeByte((byte) reason.ordinal()); + if (out.getVersion().before(Version.V_6_0_0_beta2) && reason == Reason.MANUAL_ALLOCATION) { + out.writeByte((byte) Reason.ALLOCATION_FAILED.ordinal()); + } else { + out.writeByte((byte) reason.ordinal()); + } out.writeLong(unassignedTimeMillis); // Do not serialize unassignedTimeNanos as System.nanoTime() cannot be compared across different JVMs out.writeBoolean(delayed); diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocationService.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocationService.java index 6b0f8bfba2af4..774e4b9301ca4 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocationService.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocationService.java @@ -97,7 +97,7 @@ public ClusterState applyStartedShards(ClusterState clusterState, List(startedShards); Collections.sort(startedShards, Comparator.comparing(ShardRouting::primary)); @@ -164,7 +164,7 @@ public ClusterState applyFailedShards(final ClusterState clusterState, final Lis routingNodes.unassigned().shuffle(); long currentNanoTime = currentNanoTime(); RoutingAllocation allocation = new RoutingAllocation(allocationDeciders, routingNodes, tmpState, - clusterInfoService.getClusterInfo(), currentNanoTime, false); + clusterInfoService.getClusterInfo(), currentNanoTime); for (FailedShard failedShardEntry : failedShards) { ShardRouting shardToFail = failedShardEntry.getRoutingEntry(); @@ -202,7 +202,7 @@ public ClusterState deassociateDeadNodes(final ClusterState clusterState, boolea // shuffle the unassigned nodes, just so we won't have things like poison failed shards routingNodes.unassigned().shuffle(); RoutingAllocation allocation = new RoutingAllocation(allocationDeciders, routingNodes, clusterState, - clusterInfoService.getClusterInfo(), currentNanoTime(), false); + clusterInfoService.getClusterInfo(), currentNanoTime()); // first, clear from the shards any node id they used to belong to that is now dead deassociateDeadNodes(allocation); @@ -239,6 +239,22 @@ private void removeDelayMarkers(RoutingAllocation allocation) { } } + /** + * Reset failed allocation counter for unassigned shards + */ + private void resetFailedAllocationCounter(RoutingAllocation allocation) { + final RoutingNodes.UnassignedShards.UnassignedIterator unassignedIterator = allocation.routingNodes().unassigned().iterator(); + while (unassignedIterator.hasNext()) { + ShardRouting shardRouting = unassignedIterator.next(); + 
UnassignedInfo unassignedInfo = shardRouting.unassignedInfo(); + unassignedIterator.updateUnassigned(new UnassignedInfo(unassignedInfo.getNumFailedAllocations() > 0 ? + UnassignedInfo.Reason.MANUAL_ALLOCATION : unassignedInfo.getReason(), unassignedInfo.getMessage(), + unassignedInfo.getFailure(), 0, unassignedInfo.getUnassignedTimeInNanos(), + unassignedInfo.getUnassignedTimeInMillis(), unassignedInfo.isDelayed(), + unassignedInfo.getLastAllocationStatus()), shardRouting.recoverySource(), allocation.changes()); + } + } + /** * Internal helper to cap the number of elements in a potentially long list for logging. * @@ -262,7 +278,7 @@ public CommandsResult reroute(final ClusterState clusterState, AllocationCommand // a consistent result of the effect the commands have on the routing // this allows systems to dry run the commands, see the resulting cluster state, and act on it RoutingAllocation allocation = new RoutingAllocation(allocationDeciders, routingNodes, clusterState, - clusterInfoService.getClusterInfo(), currentNanoTime(), retryFailed); + clusterInfoService.getClusterInfo(), currentNanoTime()); // don't short circuit deciders, we want a full explanation allocation.debugDecision(true); // we ignore disable allocation, because commands are explicit @@ -272,6 +288,10 @@ public CommandsResult reroute(final ClusterState clusterState, AllocationCommand allocation.ignoreDisable(false); // the assumption is that commands will move / act on shards (or fail through exceptions) // so, there will always be shard "movements", so no need to check on reroute + + if (retryFailed) { + resetFailedAllocationCounter(allocation); + } reroute(allocation); return new CommandsResult(explanations, buildResultAndLogHealthChange(clusterState, allocation, "reroute commands")); } @@ -296,7 +316,7 @@ protected ClusterState reroute(final ClusterState clusterState, String reason, b // shuffle the unassigned nodes, just so we won't have things like poison failed shards routingNodes.unassigned().shuffle(); RoutingAllocation allocation = new RoutingAllocation(allocationDeciders, routingNodes, clusterState, - clusterInfoService.getClusterInfo(), currentNanoTime(), false); + clusterInfoService.getClusterInfo(), currentNanoTime()); allocation.debugDecision(debug); reroute(allocation); if (allocation.routingNodesChanged() == false) { diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/NodeAllocationResult.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/NodeAllocationResult.java index 0d3fe2df920f6..ffb9351f57637 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/NodeAllocationResult.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/NodeAllocationResult.java @@ -27,7 +27,8 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentFragment; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; @@ -38,7 +39,7 @@ /** * This class represents the shard allocation decision and its explanation for a single node. 
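The retry_failed handling moves out of the decider path (see the MaxRetryAllocationDecider hunk further down) and into AllocationService: a manual reroute with retry_failed=true now rewrites each unassigned shard's UnassignedInfo with the failure counter reset to zero and, where failures had occurred, the new MANUAL_ALLOCATION reason, which is in turn downgraded to ALLOCATION_FAILED on the wire for nodes older than 6.0.0-beta2. A small self-contained sketch of those two steps, using simplified stand-in types rather than the real UnassignedInfo:

class RetryFailedSketch {

    enum Reason { ALLOCATION_FAILED, MANUAL_ALLOCATION }

    static final class UnassignedInfoSketch {
        final Reason reason;
        final int numFailedAllocations;

        UnassignedInfoSketch(Reason reason, int numFailedAllocations) {
            this.reason = reason;
            this.numFailedAllocations = numFailedAllocations;
        }
    }

    /** What resetFailedAllocationCounter does per shard: zero the counter and, if it had failures, record MANUAL_ALLOCATION. */
    static UnassignedInfoSketch reset(UnassignedInfoSketch info) {
        Reason reason = info.numFailedAllocations > 0 ? Reason.MANUAL_ALLOCATION : info.reason;
        return new UnassignedInfoSketch(reason, 0);
    }

    /** What writeTo does: nodes before 6.0.0-beta2 do not know MANUAL_ALLOCATION, so send ALLOCATION_FAILED instead. */
    static byte reasonOnTheWire(Reason reason, boolean targetBeforeBeta2) {
        if (targetBeforeBeta2 && reason == Reason.MANUAL_ALLOCATION) {
            return (byte) Reason.ALLOCATION_FAILED.ordinal();
        }
        return (byte) reason.ordinal();
    }

    public static void main(String[] args) {
        UnassignedInfoSketch failed = new UnassignedInfoSketch(Reason.ALLOCATION_FAILED, 5);
        UnassignedInfoSketch afterRetry = reset(failed);
        System.out.println(afterRetry.reason + " / failures=" + afterRetry.numFailedAllocations); // MANUAL_ALLOCATION / failures=0
        System.out.println(reasonOnTheWire(afterRetry.reason, true));   // ordinal of ALLOCATION_FAILED
        System.out.println(reasonOnTheWire(afterRetry.reason, false));  // ordinal of MANUAL_ALLOCATION
    }
}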
*/ -public class NodeAllocationResult implements ToXContent, Writeable, Comparable { +public class NodeAllocationResult implements ToXContentObject, Writeable, Comparable { private static final Comparator nodeResultComparator = Comparator.comparing(NodeAllocationResult::getNodeDecision) @@ -186,7 +187,7 @@ public int compareTo(NodeAllocationResult other) { } /** A class that captures metadata about a shard store on a node. */ - public static final class ShardStoreInfo implements ToXContent, Writeable { + public static final class ShardStoreInfo implements ToXContentFragment, Writeable { private final boolean inSync; @Nullable private final String allocationId; diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/RerouteExplanation.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/RerouteExplanation.java index 36dbadec49e1b..761096907d783 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/RerouteExplanation.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/RerouteExplanation.java @@ -23,7 +23,8 @@ import org.elasticsearch.cluster.routing.allocation.decider.Decision; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; @@ -32,7 +33,7 @@ * Class encapsulating the explanation for a single {@link AllocationCommand} * taken from the Deciders */ -public class RerouteExplanation implements ToXContent { +public class RerouteExplanation implements ToXContentObject { private AllocationCommand command; private Decision decisions; diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/RoutingAllocation.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/RoutingAllocation.java index e1ae367bebf76..abc363931c1e4 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/RoutingAllocation.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/RoutingAllocation.java @@ -66,8 +66,6 @@ public class RoutingAllocation { private boolean ignoreDisable = false; - private final boolean retryFailed; - private DebugMode debugDecision = DebugMode.OFF; private boolean hasPendingAsyncFetch = false; @@ -90,7 +88,7 @@ public class RoutingAllocation { * @param currentNanoTime the nano time to use for all delay allocation calculation (typically {@link System#nanoTime()}) */ public RoutingAllocation(AllocationDeciders deciders, RoutingNodes routingNodes, ClusterState clusterState, ClusterInfo clusterInfo, - long currentNanoTime, boolean retryFailed) { + long currentNanoTime) { this.deciders = deciders; this.routingNodes = routingNodes; this.metaData = clusterState.metaData(); @@ -99,7 +97,6 @@ public RoutingAllocation(AllocationDeciders deciders, RoutingNodes routingNodes, this.customs = clusterState.customs(); this.clusterInfo = clusterInfo; this.currentNanoTime = currentNanoTime; - this.retryFailed = retryFailed; } /** returns the nano time captured at the beginning of the allocation. 
used to make sure all time based decisions are aligned */ @@ -285,10 +282,6 @@ public void setHasPendingAsyncFetch() { this.hasPendingAsyncFetch = true; } - public boolean isRetryFailed() { - return retryFailed; - } - public enum DebugMode { /** * debug mode is off diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/RoutingExplanations.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/RoutingExplanations.java index 95f1a29eed5e5..fe97b524298cb 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/RoutingExplanations.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/RoutingExplanations.java @@ -19,20 +19,24 @@ package org.elasticsearch.cluster.routing.allocation; +import org.elasticsearch.cluster.routing.allocation.decider.Decision; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; import java.util.ArrayList; import java.util.List; +import java.util.Optional; +import java.util.stream.Collectors; /** * Class used to encapsulate a number of {@link RerouteExplanation} * explanations. */ -public class RoutingExplanations implements ToXContent { +public class RoutingExplanations implements ToXContentFragment { private final List explanations; public RoutingExplanations() { @@ -48,6 +52,18 @@ public List explanations() { return this.explanations; } + /** + * Provides feedback from commands with a YES decision that should be displayed to the user after the command has been applied + */ + public List getYesDecisionMessages() { + return explanations().stream() + .filter(explanation -> explanation.decisions().type().equals(Decision.Type.YES)) + .map(explanation -> explanation.command().getMessage()) + .filter(Optional::isPresent) + .map(Optional::get) + .collect(Collectors.toList()); + } + /** * Read in a RoutingExplanations object */ diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/ShardAllocationDecision.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/ShardAllocationDecision.java index 557ce9300c610..390e4510f0f3d 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/ShardAllocationDecision.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/ShardAllocationDecision.java @@ -22,7 +22,8 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; @@ -40,7 +41,7 @@ * then both {@link #getAllocateDecision()} and {@link #getMoveDecision()} will return * objects whose {@code isDecisionTaken()} method returns {@code false}. 
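The getYesDecisionMessages() method above pairs with the new AllocationCommand#getMessage() default added later in this change: only commands that were actually applied (a YES decision) and that chose to return a message contribute user-facing feedback. A stand-alone sketch of that pipeline, with hypothetical Explanation/Type stand-ins for RerouteExplanation and Decision.Type:

import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;

class YesDecisionMessagesSketch {

    enum Type { YES, NO }

    static final class Explanation {
        final Type decision;
        final Optional<String> message;

        Explanation(Type decision, Optional<String> message) {
            this.decision = decision;
            this.message = message;
        }
    }

    /** Mirrors RoutingExplanations#getYesDecisionMessages: applied commands with a message, in order. */
    static List<String> yesDecisionMessages(List<Explanation> explanations) {
        return explanations.stream()
            .filter(e -> e.decision == Type.YES)   // only commands the allocation accepted
            .map(e -> e.message)                   // a command may or may not provide feedback
            .filter(Optional::isPresent)
            .map(Optional::get)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Explanation> explanations = Arrays.asList(
            new Explanation(Type.YES, Optional.of("allocated an empty primary for [idx][0] on node [n1] from user command")),
            new Explanation(Type.NO, Optional.of("rejected by a decider")),
            new Explanation(Type.YES, Optional.empty()));
        System.out.println(yesDecisionMessages(explanations)); // only the first message survives
    }
}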
*/ -public final class ShardAllocationDecision implements ToXContent, Writeable { +public final class ShardAllocationDecision implements ToXContentFragment, Writeable { public static final ShardAllocationDecision NOT_TAKEN = new ShardAllocationDecision(AllocateUnassignedDecision.NOT_TAKEN, MoveDecision.NOT_TAKEN); diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocateEmptyPrimaryAllocationCommand.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocateEmptyPrimaryAllocationCommand.java index 157acc0e537b6..66281b73458b3 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocateEmptyPrimaryAllocationCommand.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocateEmptyPrimaryAllocationCommand.java @@ -38,6 +38,7 @@ import org.elasticsearch.index.shard.ShardNotFoundException; import java.io.IOException; +import java.util.Optional; /** * Allocates an unassigned empty primary shard to a specific node. Use with extreme care as this will result in data loss. @@ -72,6 +73,11 @@ public String name() { return NAME; } + @Override + public Optional getMessage() { + return Optional.of("allocated an empty primary for [" + index + "][" + shardId + "] on node [" + node + "] from user command"); + } + public static AllocateEmptyPrimaryAllocationCommand fromXContent(XContentParser parser) throws IOException { return new Builder().parse(parser).build(); } @@ -115,19 +121,22 @@ public RerouteExplanation execute(RoutingAllocation allocation, boolean explain) } if (shardRouting.recoverySource().getType() != RecoverySource.Type.EMPTY_STORE && acceptDataLoss == false) { - return explainOrThrowRejectedCommand(explain, allocation, - "allocating an empty primary for [" + index + "][" + shardId + "] can result in data loss. Please confirm by setting the accept_data_loss parameter to true"); + String dataLossWarning = "allocating an empty primary for [" + index + "][" + shardId + "] can result in data loss. 
Please confirm " + + "by setting the accept_data_loss parameter to true"; + return explainOrThrowRejectedCommand(explain, allocation, dataLossWarning); } UnassignedInfo unassignedInfoToUpdate = null; if (shardRouting.unassignedInfo().getReason() != UnassignedInfo.Reason.FORCED_EMPTY_PRIMARY) { - unassignedInfoToUpdate = new UnassignedInfo(UnassignedInfo.Reason.FORCED_EMPTY_PRIMARY, - "force empty allocation from previous reason " + shardRouting.unassignedInfo().getReason() + ", " + shardRouting.unassignedInfo().getMessage(), + String unassignedInfoMessage = "force empty allocation from previous reason " + shardRouting.unassignedInfo().getReason() + + ", " + shardRouting.unassignedInfo().getMessage(); + unassignedInfoToUpdate = new UnassignedInfo(UnassignedInfo.Reason.FORCED_EMPTY_PRIMARY, unassignedInfoMessage, shardRouting.unassignedInfo().getFailure(), 0, System.nanoTime(), System.currentTimeMillis(), false, shardRouting.unassignedInfo().getLastAllocationStatus()); } - initializeUnassignedShard(allocation, routingNodes, routingNode, shardRouting, unassignedInfoToUpdate, StoreRecoverySource.EMPTY_STORE_INSTANCE); + initializeUnassignedShard(allocation, routingNodes, routingNode, shardRouting, unassignedInfoToUpdate, + StoreRecoverySource.EMPTY_STORE_INSTANCE); return new RerouteExplanation(this, allocation.decision(Decision.YES, name() + " (allocation command)", "ignore deciders")); } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocateStalePrimaryAllocationCommand.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocateStalePrimaryAllocationCommand.java index c643fb5c948ae..11c4420200e33 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocateStalePrimaryAllocationCommand.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocateStalePrimaryAllocationCommand.java @@ -35,6 +35,7 @@ import org.elasticsearch.index.shard.ShardNotFoundException; import java.io.IOException; +import java.util.Optional; /** * Allocates an unassigned stale primary shard to a specific node. Use with extreme care as this will result in data loss. @@ -70,6 +71,11 @@ public String name() { return NAME; } + @Override + public Optional getMessage() { + return Optional.of("allocated a stale primary for [" + index + "][" + shardId + "] on node [" + node + "] from user command"); + } + public static AllocateStalePrimaryAllocationCommand fromXContent(XContentParser parser) throws IOException { return new Builder().parse(parser).build(); } @@ -113,8 +119,9 @@ public RerouteExplanation execute(RoutingAllocation allocation, boolean explain) } if (acceptDataLoss == false) { - return explainOrThrowRejectedCommand(explain, allocation, - "allocating an empty primary for [" + index + "][" + shardId + "] can result in data loss. Please confirm by setting the accept_data_loss parameter to true"); + String dataLossWarning = "allocating an empty primary for [" + index + "][" + shardId + "] can result in data loss. 
Please " + + "confirm by setting the accept_data_loss parameter to true"; + return explainOrThrowRejectedCommand(explain, allocation, dataLossWarning); } if (shardRouting.recoverySource().getType() != RecoverySource.Type.EXISTING_STORE) { diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocationCommand.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocationCommand.java index 92c1ffa9921fc..ed5df30c54b7f 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocationCommand.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocationCommand.java @@ -23,17 +23,18 @@ import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; import org.elasticsearch.common.io.stream.NamedWriteable; import org.elasticsearch.common.network.NetworkModule; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentParser; import java.io.IOException; +import java.util.Optional; /** * A command to move shards in some way. * * Commands are registered in {@link NetworkModule}. */ -public interface AllocationCommand extends NamedWriteable, ToXContent { +public interface AllocationCommand extends NamedWriteable, ToXContentObject { interface Parser { /** * Reads an {@link AllocationCommand} of type T from a {@link XContentParser}. @@ -61,4 +62,12 @@ interface Parser { default String getWriteableName() { return name(); } + + /** + * Returns any feedback the command wants to provide for logging. This message should be appropriate to expose to the user after the + * command has been applied + */ + default Optional getMessage() { + return Optional.empty(); + } } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocationCommands.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocationCommands.java index 5098b027f611a..72290eb9ccf1a 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocationCommands.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/command/AllocationCommands.java @@ -20,11 +20,12 @@ package org.elasticsearch.cluster.routing.allocation.command; import org.elasticsearch.ElasticsearchParseException; -import org.elasticsearch.action.support.ToXContentToBytes; import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; import org.elasticsearch.cluster.routing.allocation.RoutingExplanations; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; @@ -38,7 +39,7 @@ * A simple {@link AllocationCommand} composite managing several * {@link AllocationCommand} implementations */ -public class AllocationCommands extends ToXContentToBytes { +public class AllocationCommands implements ToXContentFragment { private final List commands = new ArrayList<>(); /** @@ -196,4 +197,9 @@ public int hashCode() { // Override equals and hashCode for testing return Objects.hashCode(commands); } + + @Override + public String toString() { + return Strings.toString(this, true, true); + } } diff --git 
a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/AwarenessAllocationDecider.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/AwarenessAllocationDecider.java index 4160fd224aa14..f00e9cdc3ce8f 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/AwarenessAllocationDecider.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/AwarenessAllocationDecider.java @@ -20,6 +20,7 @@ package org.elasticsearch.cluster.routing.allocation.decider; import java.util.HashMap; +import java.util.List; import java.util.Map; import com.carrotsearch.hppc.ObjectIntHashMap; @@ -85,7 +86,7 @@ public class AwarenessAllocationDecider extends AllocationDecider { private volatile String[] awarenessAttributes; - private volatile Map forcedAwarenessAttributes; + private volatile Map> forcedAwarenessAttributes; public AwarenessAllocationDecider(Settings settings, ClusterSettings clusterSettings) { super(settings); @@ -97,11 +98,11 @@ public AwarenessAllocationDecider(Settings settings, ClusterSettings clusterSett } private void setForcedAwarenessAttributes(Settings forceSettings) { - Map forcedAwarenessAttributes = new HashMap<>(); + Map> forcedAwarenessAttributes = new HashMap<>(); Map forceGroups = forceSettings.getAsGroups(); for (Map.Entry entry : forceGroups.entrySet()) { - String[] aValues = entry.getValue().getAsArray("values"); - if (aValues.length > 0) { + List aValues = entry.getValue().getAsList("values"); + if (aValues.size() > 0) { forcedAwarenessAttributes.put(entry.getKey(), aValues); } } @@ -169,7 +170,7 @@ private Decision underCapacity(ShardRouting shardRouting, RoutingNode node, Rout } int numberOfAttributes = nodesPerAttribute.size(); - String[] fullValues = forcedAwarenessAttributes.get(awarenessAttribute); + List fullValues = forcedAwarenessAttributes.get(awarenessAttribute); if (fullValues != null) { for (String fullValue : fullValues) { if (!shardPerAttribute.containsKey(fullValue)) { diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/DiskThresholdDecider.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/DiskThresholdDecider.java index 56663be1ef427..2a323af5f8435 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/DiskThresholdDecider.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/DiskThresholdDecider.java @@ -403,14 +403,14 @@ private Decision earlyTerminate(RoutingAllocation allocation, ImmutableOpenMap shardIds = IndexMetaData.selectShrinkShards(shard.id(), sourceIndexMeta, metaData.getNumberOfShards()); + final Set shardIds = IndexMetaData.selectRecoverFromShards(shard.id(), sourceIndexMeta, metaData.getNumberOfShards()); for (IndexShardRoutingTable shardRoutingTable : allocation.routingTable().index(mergeSourceIndex.getName())) { if (shardIds.contains(shardRoutingTable.shardId())) { targetShardSize += info.getShardSize(shardRoutingTable.primaryShard(), 0); diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/FilterAllocationDecider.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/FilterAllocationDecider.java index 933b0a829d569..06a0859cee7c5 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/FilterAllocationDecider.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/FilterAllocationDecider.java @@ -31,6 +31,7 @@ import 
org.elasticsearch.common.settings.Settings; import java.util.EnumSet; +import java.util.Map; import static org.elasticsearch.cluster.node.DiscoveryNodeFilters.IP_VALIDATOR; import static org.elasticsearch.cluster.node.DiscoveryNodeFilters.OpType.AND; @@ -70,12 +71,15 @@ public class FilterAllocationDecider extends AllocationDecider { private static final String CLUSTER_ROUTING_REQUIRE_GROUP_PREFIX = "cluster.routing.allocation.require"; private static final String CLUSTER_ROUTING_INCLUDE_GROUP_PREFIX = "cluster.routing.allocation.include"; private static final String CLUSTER_ROUTING_EXCLUDE_GROUP_PREFIX = "cluster.routing.allocation.exclude"; - public static final Setting CLUSTER_ROUTING_REQUIRE_GROUP_SETTING = - Setting.groupSetting(CLUSTER_ROUTING_REQUIRE_GROUP_PREFIX + ".", IP_VALIDATOR, Property.Dynamic, Property.NodeScope); - public static final Setting CLUSTER_ROUTING_INCLUDE_GROUP_SETTING = - Setting.groupSetting(CLUSTER_ROUTING_INCLUDE_GROUP_PREFIX + ".", IP_VALIDATOR, Property.Dynamic, Property.NodeScope); - public static final Setting CLUSTER_ROUTING_EXCLUDE_GROUP_SETTING = - Setting.groupSetting(CLUSTER_ROUTING_EXCLUDE_GROUP_PREFIX + ".", IP_VALIDATOR, Property.Dynamic, Property.NodeScope); + public static final Setting.AffixSetting CLUSTER_ROUTING_REQUIRE_GROUP_SETTING = + Setting.prefixKeySetting(CLUSTER_ROUTING_REQUIRE_GROUP_PREFIX + ".", (key) -> + Setting.simpleString(key, (value, map) -> IP_VALIDATOR.accept(key, value), Property.Dynamic, Property.NodeScope)); + public static final Setting.AffixSetting CLUSTER_ROUTING_INCLUDE_GROUP_SETTING = + Setting.prefixKeySetting(CLUSTER_ROUTING_INCLUDE_GROUP_PREFIX + ".", (key) -> + Setting.simpleString(key, (value, map) -> IP_VALIDATOR.accept(key, value), Property.Dynamic, Property.NodeScope)); + public static final Setting.AffixSettingCLUSTER_ROUTING_EXCLUDE_GROUP_SETTING = + Setting.prefixKeySetting(CLUSTER_ROUTING_EXCLUDE_GROUP_PREFIX + ".", (key) -> + Setting.simpleString(key, (value, map) -> IP_VALIDATOR.accept(key, value), Property.Dynamic, Property.NodeScope)); /** * The set of {@link RecoverySource.Type} values for which the @@ -94,12 +98,12 @@ public class FilterAllocationDecider extends AllocationDecider { public FilterAllocationDecider(Settings settings, ClusterSettings clusterSettings) { super(settings); - setClusterRequireFilters(CLUSTER_ROUTING_REQUIRE_GROUP_SETTING.get(settings)); - setClusterExcludeFilters(CLUSTER_ROUTING_EXCLUDE_GROUP_SETTING.get(settings)); - setClusterIncludeFilters(CLUSTER_ROUTING_INCLUDE_GROUP_SETTING.get(settings)); - clusterSettings.addSettingsUpdateConsumer(CLUSTER_ROUTING_REQUIRE_GROUP_SETTING, this::setClusterRequireFilters); - clusterSettings.addSettingsUpdateConsumer(CLUSTER_ROUTING_EXCLUDE_GROUP_SETTING, this::setClusterExcludeFilters); - clusterSettings.addSettingsUpdateConsumer(CLUSTER_ROUTING_INCLUDE_GROUP_SETTING, this::setClusterIncludeFilters); + setClusterRequireFilters(CLUSTER_ROUTING_REQUIRE_GROUP_SETTING.getAsMap(settings)); + setClusterExcludeFilters(CLUSTER_ROUTING_EXCLUDE_GROUP_SETTING.getAsMap(settings)); + setClusterIncludeFilters(CLUSTER_ROUTING_INCLUDE_GROUP_SETTING.getAsMap(settings)); + clusterSettings.addAffixMapUpdateConsumer(CLUSTER_ROUTING_REQUIRE_GROUP_SETTING, this::setClusterRequireFilters, (a,b)-> {}, true); + clusterSettings.addAffixMapUpdateConsumer(CLUSTER_ROUTING_EXCLUDE_GROUP_SETTING, this::setClusterExcludeFilters, (a,b)-> {}, true); + clusterSettings.addAffixMapUpdateConsumer(CLUSTER_ROUTING_INCLUDE_GROUP_SETTING, this::setClusterIncludeFilters, (a,b)-> 
{}, true); } @Override @@ -196,13 +200,13 @@ private Decision shouldClusterFilter(RoutingNode node, RoutingAllocation allocat return null; } - private void setClusterRequireFilters(Settings settings) { - clusterRequireFilters = DiscoveryNodeFilters.buildFromKeyValue(AND, settings.getAsMap()); + private void setClusterRequireFilters(Map filters) { + clusterRequireFilters = DiscoveryNodeFilters.buildFromKeyValue(AND, filters); } - private void setClusterIncludeFilters(Settings settings) { - clusterIncludeFilters = DiscoveryNodeFilters.buildFromKeyValue(OR, settings.getAsMap()); + private void setClusterIncludeFilters(Map filters) { + clusterIncludeFilters = DiscoveryNodeFilters.buildFromKeyValue(OR, filters); } - private void setClusterExcludeFilters(Settings settings) { - clusterExcludeFilters = DiscoveryNodeFilters.buildFromKeyValue(OR, settings.getAsMap()); + private void setClusterExcludeFilters(Map filters) { + clusterExcludeFilters = DiscoveryNodeFilters.buildFromKeyValue(OR, filters); } } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/MaxRetryAllocationDecider.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/MaxRetryAllocationDecider.java index 59a836abece37..c3817b429bbf3 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/MaxRetryAllocationDecider.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/MaxRetryAllocationDecider.java @@ -34,7 +34,6 @@ * Note: This allocation decider also allows allocation of repeatedly failing shards when the /_cluster/reroute?retry_failed=true * API is manually invoked. This allows single retries without raising the limits. * - * @see RoutingAllocation#isRetryFailed() */ public class MaxRetryAllocationDecider extends AllocationDecider { @@ -59,14 +58,7 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingAllocation allocat if (unassignedInfo != null && unassignedInfo.getNumFailedAllocations() > 0) { final IndexMetaData indexMetaData = allocation.metaData().getIndexSafe(shardRouting.index()); final int maxRetry = SETTING_ALLOCATION_MAX_RETRY.get(indexMetaData.getSettings()); - if (allocation.isRetryFailed()) { // manual allocation - retry - // if we are called via the _reroute API we ignore the failure counter and try to allocate - // this improves the usability since people don't need to raise the limits to issue retries since a simple _reroute call is - // enough to manually retry. 
- decision = allocation.decision(Decision.YES, NAME, "shard has exceeded the maximum number of retries [%d] on " + - "failed allocation attempts - retrying once due to a manual reroute command, [%s]", - maxRetry, unassignedInfo.toString()); - } else if (unassignedInfo.getNumFailedAllocations() >= maxRetry) { + if (unassignedInfo.getNumFailedAllocations() >= maxRetry) { decision = allocation.decision(Decision.NO, NAME, "shard has exceeded the maximum number of retries [%d] on " + "failed allocation attempts - manually call [/_cluster/reroute?retry_failed=true] to retry, [%s]", maxRetry, unassignedInfo.toString()); diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/NodeVersionAllocationDecider.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/NodeVersionAllocationDecider.java index 52a5184032484..898294264274e 100644 --- a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/NodeVersionAllocationDecider.java +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/NodeVersionAllocationDecider.java @@ -79,7 +79,8 @@ private Decision isVersionCompatible(final RoutingNodes routingNodes, final Stri return allocation.decision(Decision.YES, NAME, "target node version [%s] is the same or newer than source node version [%s]", target.node().getVersion(), source.node().getVersion()); } else { - return allocation.decision(Decision.NO, NAME, "target node version [%s] is older than the source node version [%s]", + return allocation.decision(Decision.NO, NAME, "target node version [%s] is older than the source node version [%s] and may " + + "not support codecs or postings formats for a newer Lucene version", target.node().getVersion(), source.node().getVersion()); } } @@ -90,7 +91,8 @@ private Decision isVersionCompatible(SnapshotRecoverySource recoverySource, fina return allocation.decision(Decision.YES, NAME, "target node version [%s] is the same or newer than snapshot version [%s]", target.node().getVersion(), recoverySource.version()); } else { - return allocation.decision(Decision.NO, NAME, "target node version [%s] is older than the snapshot version [%s]", + return allocation.decision(Decision.NO, NAME, "target node version [%s] is older than the snapshot version [%s] and may " + + "not support codecs or postings formats for a newer Lucene version", target.node().getVersion(), recoverySource.version()); } } diff --git a/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ResizeAllocationDecider.java b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ResizeAllocationDecider.java new file mode 100644 index 0000000000000..a0ebf7ddba923 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ResizeAllocationDecider.java @@ -0,0 +1,102 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. 
See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.cluster.routing.allocation.decider; + +import org.elasticsearch.Version; +import org.elasticsearch.action.admin.indices.shrink.ResizeAction; +import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.cluster.routing.RecoverySource; +import org.elasticsearch.cluster.routing.RoutingNode; +import org.elasticsearch.cluster.routing.ShardRouting; +import org.elasticsearch.cluster.routing.UnassignedInfo; +import org.elasticsearch.cluster.routing.allocation.RoutingAllocation; +import org.elasticsearch.common.settings.Setting; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.Index; +import org.elasticsearch.index.IndexNotFoundException; +import org.elasticsearch.index.shard.ShardId; + +import java.util.Set; + +/** + * An allocation decider that ensures we allocate the shards of a target index for resize operations next to the source primaries + */ +public class ResizeAllocationDecider extends AllocationDecider { + + public static final String NAME = "resize"; + + /** + * Initializes a new {@link ResizeAllocationDecider} + * + * @param settings {@link Settings} used by this {@link AllocationDecider} + */ + public ResizeAllocationDecider(Settings settings) { + super(settings); + } + + @Override + public Decision canAllocate(ShardRouting shardRouting, RoutingAllocation allocation) { + return canAllocate(shardRouting, null, allocation); + } + + @Override + public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, RoutingAllocation allocation) { + final UnassignedInfo unassignedInfo = shardRouting.unassignedInfo(); + if (unassignedInfo != null && shardRouting.recoverySource().getType() == RecoverySource.Type.LOCAL_SHARDS) { + // we only make decisions here if we have an unassigned info and we have to recover from another index ie. split / shrink + final IndexMetaData indexMetaData = allocation.metaData().getIndexSafe(shardRouting.index()); + Index resizeSourceIndex = indexMetaData.getResizeSourceIndex(); + assert resizeSourceIndex != null; + if (allocation.metaData().index(resizeSourceIndex) == null) { + return allocation.decision(Decision.NO, NAME, "resize source index [%s] doesn't exists", resizeSourceIndex.toString()); + } + IndexMetaData sourceIndexMetaData = allocation.metaData().getIndexSafe(resizeSourceIndex); + if (indexMetaData.getNumberOfShards() < sourceIndexMetaData.getNumberOfShards()) { + // this only handles splits so far. + return Decision.ALWAYS; + } + + ShardId shardId = IndexMetaData.selectSplitShard(shardRouting.id(), sourceIndexMetaData, indexMetaData.getNumberOfShards()); + ShardRouting sourceShardRouting = allocation.routingNodes().activePrimary(shardId); + if (sourceShardRouting == null) { + return allocation.decision(Decision.NO, NAME, "source primary shard [%s] is not active", shardId); + } + if (node != null) { // we might get called from the 2 param canAllocate method.. 
+ if (node.node().getVersion().before(ResizeAction.COMPATIBILITY_VERSION)) { + return allocation.decision(Decision.NO, NAME, "node [%s] is too old to split a shard", node.nodeId()); + } + if (sourceShardRouting.currentNodeId().equals(node.nodeId())) { + return allocation.decision(Decision.YES, NAME, "source primary is allocated on this node"); + } else { + return allocation.decision(Decision.NO, NAME, "source primary is allocated on another node"); + } + } else { + return allocation.decision(Decision.YES, NAME, "source primary is active"); + } + } + return super.canAllocate(shardRouting, node, allocation); + } + + @Override + public Decision canForceAllocatePrimary(ShardRouting shardRouting, RoutingNode node, RoutingAllocation allocation) { + assert shardRouting.primary() : "must not call canForceAllocatePrimary on a non-primary shard " + shardRouting; + return canAllocate(shardRouting, node, allocation); + } +} diff --git a/core/src/main/java/org/elasticsearch/common/LegacyTimeBasedUUIDGenerator.java b/core/src/main/java/org/elasticsearch/common/LegacyTimeBasedUUIDGenerator.java index 2bf19f1dcbb97..74a08711042f7 100644 --- a/core/src/main/java/org/elasticsearch/common/LegacyTimeBasedUUIDGenerator.java +++ b/core/src/main/java/org/elasticsearch/common/LegacyTimeBasedUUIDGenerator.java @@ -23,8 +23,9 @@ import java.util.concurrent.atomic.AtomicInteger; /** - * These are essentially flake ids (http://boundary.com/blog/2012/01/12/flake-a-decentralized-k-ordered-unique-id-generator-in-erlang) but - * we use 6 (not 8) bytes for timestamp, and use 3 (not 2) bytes for sequence number. + * These are essentially flake ids, but we use 6 (not 8) bytes for timestamp, and use 3 (not 2) bytes for sequence number. + * For more information about flake ids, check out + * https://archive.fo/2015.07.08-082503/http://www.boundary.com/blog/2012/01/flake-a-decentralized-k-ordered-unique-id-generator-in-erlang/ */ class LegacyTimeBasedUUIDGenerator implements UUIDGenerator { diff --git a/core/src/main/java/org/elasticsearch/common/Numbers.java b/core/src/main/java/org/elasticsearch/common/Numbers.java index 1735a0dfa6570..2c4d700c92ce3 100644 --- a/core/src/main/java/org/elasticsearch/common/Numbers.java +++ b/core/src/main/java/org/elasticsearch/common/Numbers.java @@ -29,6 +29,9 @@ */ public final class Numbers { + private static final BigInteger MAX_LONG_VALUE = BigInteger.valueOf(Long.MAX_VALUE); + private static final BigInteger MIN_LONG_VALUE = BigInteger.valueOf(Long.MIN_VALUE); + private Numbers() { } @@ -205,6 +208,33 @@ public static long toLongExact(Number n) { } } + /** Return the long that {@code stringValue} stores or throws an exception if the + * stored value cannot be converted to a long that stores the exact same + * value and {@code coerce} is false. */ + public static long toLong(String stringValue, boolean coerce) { + try { + return Long.parseLong(stringValue); + } catch (NumberFormatException e) { + // we will try again with BigDecimal + } + + final BigInteger bigIntegerValue; + try { + BigDecimal bigDecimalValue = new BigDecimal(stringValue); + bigIntegerValue = coerce ? 
bigDecimalValue.toBigInteger() : bigDecimalValue.toBigIntegerExact(); + } catch (ArithmeticException e) { + throw new IllegalArgumentException("Value [" + stringValue + "] has a decimal part"); + } catch (NumberFormatException e) { + throw new IllegalArgumentException("For input string: \"" + stringValue + "\""); + } + + if (bigIntegerValue.compareTo(MAX_LONG_VALUE) > 0 || bigIntegerValue.compareTo(MIN_LONG_VALUE) < 0) { + throw new IllegalArgumentException("Value [" + stringValue + "] is out of range for a long"); + } + + return bigIntegerValue.longValue(); + } + /** Return the int that {@code n} stores, or throws an exception if the * stored value cannot be converted to an int that stores the exact same * value. */ diff --git a/core/src/main/java/org/elasticsearch/common/TimeBasedUUIDGenerator.java b/core/src/main/java/org/elasticsearch/common/TimeBasedUUIDGenerator.java index 550559eac9f6b..c30a8d0aaa222 100644 --- a/core/src/main/java/org/elasticsearch/common/TimeBasedUUIDGenerator.java +++ b/core/src/main/java/org/elasticsearch/common/TimeBasedUUIDGenerator.java @@ -22,9 +22,13 @@ import java.util.Base64; import java.util.concurrent.atomic.AtomicInteger; -/** These are essentially flake ids (http://boundary.com/blog/2012/01/12/flake-a-decentralized-k-ordered-unique-id-generator-in-erlang) but - * we use 6 (not 8) bytes for timestamp, and use 3 (not 2) bytes for sequence number. We also reorder bytes in a way that does not make ids - * sort in order anymore, but is more friendly to the way that the Lucene terms dictionary is structured. */ +/** + * These are essentially flake ids but we use 6 (not 8) bytes for timestamp, and use 3 (not 2) bytes for sequence number. We also reorder + * bytes in a way that does not make ids sort in order anymore, but is more friendly to the way that the Lucene terms dictionary is + * structured. + * For more information about flake ids, check out + * https://archive.fo/2015.07.08-082503/http://www.boundary.com/blog/2012/01/flake-a-decentralized-k-ordered-unique-id-generator-in-erlang/ + */ class TimeBasedUUIDGenerator implements UUIDGenerator { diff --git a/core/src/main/java/org/elasticsearch/common/blobstore/fs/FsBlobContainer.java b/core/src/main/java/org/elasticsearch/common/blobstore/fs/FsBlobContainer.java index 757cce7d8379a..1e384109aebce 100644 --- a/core/src/main/java/org/elasticsearch/common/blobstore/fs/FsBlobContainer.java +++ b/core/src/main/java/org/elasticsearch/common/blobstore/fs/FsBlobContainer.java @@ -140,7 +140,9 @@ public void move(String source, String target) throws IOException { Path targetPath = path.resolve(target); // If the target file exists then Files.move() behaviour is implementation specific // the existing file might be replaced or this method fails by throwing an IOException. 
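The new Numbers.toLong(String, boolean) above parses plain longs directly and otherwise routes the value through BigDecimal, truncating a fractional part when coerce is true and rejecting it when coerce is false; values outside the long range are always rejected. A stand-alone restatement of that logic (class and method names here are placeholders) so the coerce behaviour can be exercised:

import java.math.BigDecimal;
import java.math.BigInteger;

class ToLongSketch {

    private static final BigInteger MAX = BigInteger.valueOf(Long.MAX_VALUE);
    private static final BigInteger MIN = BigInteger.valueOf(Long.MIN_VALUE);

    static long toLong(String stringValue, boolean coerce) {
        try {
            return Long.parseLong(stringValue);
        } catch (NumberFormatException e) {
            // fall through and retry via BigDecimal, e.g. for "5.0" or "9223372036854775808"
        }
        final BigInteger bigIntegerValue;
        try {
            BigDecimal bigDecimalValue = new BigDecimal(stringValue);
            bigIntegerValue = coerce ? bigDecimalValue.toBigInteger() : bigDecimalValue.toBigIntegerExact();
        } catch (ArithmeticException e) {
            throw new IllegalArgumentException("Value [" + stringValue + "] has a decimal part");
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("For input string: \"" + stringValue + "\"");
        }
        if (bigIntegerValue.compareTo(MAX) > 0 || bigIntegerValue.compareTo(MIN) < 0) {
            throw new IllegalArgumentException("Value [" + stringValue + "] is out of range for a long");
        }
        return bigIntegerValue.longValue();
    }

    public static void main(String[] args) {
        System.out.println(toLong("42", false));    // 42
        System.out.println(toLong("42.9", true));   // 42, fractional part dropped when coercing
        try {
            toLong("42.9", false);                  // rejected without coercion
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}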
- assert !Files.exists(targetPath); + if (Files.exists(targetPath)) { + throw new FileAlreadyExistsException("blob [" + targetPath + "] already exists, cannot overwrite"); + } Files.move(sourcePath, targetPath, StandardCopyOption.ATOMIC_MOVE); IOUtils.fsync(path, true); } diff --git a/core/src/main/java/org/elasticsearch/common/blobstore/fs/FsBlobStore.java b/core/src/main/java/org/elasticsearch/common/blobstore/fs/FsBlobStore.java index ce696678896f8..60055130fbe1d 100644 --- a/core/src/main/java/org/elasticsearch/common/blobstore/fs/FsBlobStore.java +++ b/core/src/main/java/org/elasticsearch/common/blobstore/fs/FsBlobStore.java @@ -39,10 +39,15 @@ public class FsBlobStore extends AbstractComponent implements BlobStore { private final int bufferSizeInBytes; + private final boolean readOnly; + public FsBlobStore(Settings settings, Path path) throws IOException { super(settings); this.path = path; - Files.createDirectories(path); + this.readOnly = settings.getAsBoolean("readonly", false); + if (!this.readOnly) { + Files.createDirectories(path); + } this.bufferSizeInBytes = (int) settings.getAsBytesSize("repositories.fs.buffer_size", new ByteSizeValue(100, ByteSizeUnit.KB)).getBytes(); } @@ -80,7 +85,9 @@ public void close() { private synchronized Path buildAndCreate(BlobPath path) throws IOException { Path f = buildPath(path); - Files.createDirectories(f); + if (!readOnly) { + Files.createDirectories(f); + } return f; } diff --git a/core/src/main/java/org/elasticsearch/common/cache/Cache.java b/core/src/main/java/org/elasticsearch/common/cache/Cache.java index df30123c35b42..91d011ba03cad 100644 --- a/core/src/main/java/org/elasticsearch/common/cache/Cache.java +++ b/core/src/main/java/org/elasticsearch/common/cache/Cache.java @@ -34,6 +34,7 @@ import java.util.concurrent.locks.ReentrantLock; import java.util.concurrent.locks.ReentrantReadWriteLock; import java.util.function.BiFunction; +import java.util.function.Consumer; import java.util.function.Predicate; import java.util.function.ToLongBiFunction; @@ -195,14 +196,15 @@ private static class CacheSegment { /** * get an entry from the segment; expired entries will be returned as null but not removed from the cache until the LRU list is - * pruned or a manual {@link Cache#refresh()} is performed + * pruned or a manual {@link Cache#refresh()} is performed however a caller can take action using the provided callback * * @param key the key of the entry to get from the cache * @param now the access time of this entry * @param isExpired test if the entry is expired + * @param onExpiration a callback if the entry associated to the key is expired * @return the entry if there was one, otherwise null */ - Entry get(K key, long now, Predicate> isExpired) { + Entry get(K key, long now, Predicate> isExpired, Consumer> onExpiration) { CompletableFuture> future; Entry entry = null; try (ReleasableLock ignored = readLock.acquire()) { @@ -217,6 +219,10 @@ Entry get(K key, long now, Predicate> isExpired) { return ok; } else { segmentStats.miss(); + if (ok != null) { + assert isExpired.test(ok); + onExpiration.accept(ok); + } return null; } }).get(); @@ -330,12 +336,12 @@ void eviction() { * @return the value to which the specified key is mapped, or null if this map contains no mapping for the key */ public V get(K key) { - return get(key, now()); + return get(key, now(), e -> {}); } - private V get(K key, long now) { + private V get(K key, long now, Consumer> onExpiration) { CacheSegment segment = getCacheSegment(key); - Entry entry = segment.get(key, now, 
e -> isExpired(e, now)); + Entry entry = segment.get(key, now, e -> isExpired(e, now), onExpiration); if (entry == null) { return null; } else { @@ -360,7 +366,12 @@ private V get(K key, long now) { */ public V computeIfAbsent(K key, CacheLoader loader) throws ExecutionException { long now = now(); - V value = get(key, now); + // we have to eagerly evict expired entries or our putIfAbsent call below will fail + V value = get(key, now, e -> { + try (ReleasableLock ignored = lruLock.acquire()) { + evictEntry(e); + } + }); if (value == null) { // we need to synchronize loading of a value for a given key; however, holding the segment lock while // invoking load can lead to deadlock against another thread due to dependent key loading; therefore, we @@ -691,13 +702,18 @@ private void evict(long now) { assert lruLock.isHeldByCurrentThread(); while (tail != null && shouldPrune(tail, now)) { - CacheSegment segment = getCacheSegment(tail.key); - Entry entry = tail; - if (segment != null) { - segment.remove(tail.key); - } - delete(entry, RemovalNotification.RemovalReason.EVICTED); + evictEntry(tail); + } + } + + private void evictEntry(Entry entry) { + assert lruLock.isHeldByCurrentThread(); + + CacheSegment segment = getCacheSegment(entry.key); + if (segment != null) { + segment.remove(entry.key); } + delete(entry, RemovalNotification.RemovalReason.EVICTED); } private void delete(Entry entry, RemovalNotification.RemovalReason removalReason) { diff --git a/core/src/main/java/org/elasticsearch/common/geo/builders/ShapeBuilder.java b/core/src/main/java/org/elasticsearch/common/geo/builders/ShapeBuilder.java index dbac02ecb4ccc..ea3c001949a83 100644 --- a/core/src/main/java/org/elasticsearch/common/geo/builders/ShapeBuilder.java +++ b/core/src/main/java/org/elasticsearch/common/geo/builders/ShapeBuilder.java @@ -26,7 +26,7 @@ import org.apache.logging.log4j.Logger; import org.elasticsearch.Assertions; import org.elasticsearch.ElasticsearchParseException; -import org.elasticsearch.action.support.ToXContentToBytes; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.NamedWriteable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -52,7 +52,7 @@ /** * Basic class for building GeoJSON shapes like Polygons, Linestrings, etc */ -public abstract class ShapeBuilder extends ToXContentToBytes implements NamedWriteable { +public abstract class ShapeBuilder implements NamedWriteable, ToXContentObject { protected static final Logger LOGGER = ESLoggerFactory.getLogger(ShapeBuilder.class.getName()); @@ -708,4 +708,9 @@ protected static GeometryCollectionBuilder parseGeometries(XContentParser parser public String getWriteableName() { return type().shapeName(); } + + @Override + public String toString() { + return Strings.toString(this, true, true); + } } diff --git a/core/src/main/java/org/elasticsearch/common/io/FastStringReader.java b/core/src/main/java/org/elasticsearch/common/io/FastStringReader.java index 17398b7139b67..2ac7e9022e687 100644 --- a/core/src/main/java/org/elasticsearch/common/io/FastStringReader.java +++ b/core/src/main/java/org/elasticsearch/common/io/FastStringReader.java @@ -34,6 +34,7 @@ public class FastStringReader extends Reader implements CharSequence { private int length; private int next = 0; private int mark = 0; + private boolean closed = false; /** * Creates a new string reader. 
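The Cache changes above thread an onExpiration callback through CacheSegment#get so that computeIfAbsent can evict an expired entry eagerly; without that eviction the subsequent put of the freshly loaded value would find the stale entry still present and keep it. A deliberately tiny sketch of that interaction, using a plain HashMap rather than the segmented LRU cache:

import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

class EagerEvictionSketch {

    static final class Entry {
        final String value;
        final long expiresAtMillis;

        Entry(String value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<String, Entry> map = new HashMap<>();

    /** Mirrors CacheSegment#get: an expired entry counts as a miss, but the caller is told about it. */
    String get(String key, long now, Consumer<Entry> onExpiration) {
        Entry entry = map.get(key);
        if (entry == null) {
            return null;
        }
        if (entry.expiresAtMillis <= now) {
            onExpiration.accept(entry);    // hand the stale entry back so the caller can evict it
            return null;
        }
        return entry.value;
    }

    String computeIfAbsent(String key, long now, String loaded, long ttlMillis) {
        // eagerly drop an expired entry; otherwise putIfAbsent below would keep the stale value
        String value = get(key, now, expired -> map.remove(key, expired));
        if (value == null) {
            map.putIfAbsent(key, new Entry(loaded, now + ttlMillis));
            value = map.get(key).value;
        }
        return value;
    }

    public static void main(String[] args) {
        EagerEvictionSketch cache = new EagerEvictionSketch();
        System.out.println(cache.computeIfAbsent("k", 0, "v1", 10));    // v1 loaded
        System.out.println(cache.computeIfAbsent("k", 100, "v2", 10));  // v1 expired, v2 replaces it
    }
}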
@@ -49,8 +50,9 @@ public FastStringReader(String s) { * Check to make sure that the stream has not been closed */ private void ensureOpen() throws IOException { - if (length == -1) + if (closed) { throw new IOException("Stream closed"); + } } @Override @@ -196,7 +198,7 @@ public void reset() throws IOException { */ @Override public void close() { - length = -1; + closed = true; } @Override diff --git a/core/src/main/java/org/elasticsearch/common/io/FileSystemUtils.java b/core/src/main/java/org/elasticsearch/common/io/FileSystemUtils.java index b2c6340ebe40f..a976fe779db70 100644 --- a/core/src/main/java/org/elasticsearch/common/io/FileSystemUtils.java +++ b/core/src/main/java/org/elasticsearch/common/io/FileSystemUtils.java @@ -20,6 +20,7 @@ package org.elasticsearch.common.io; import org.apache.logging.log4j.Logger; +import org.apache.lucene.util.Constants; import org.apache.lucene.util.IOUtils; import org.elasticsearch.common.Strings; import org.elasticsearch.common.SuppressForbidden; @@ -65,6 +66,16 @@ public static boolean isHidden(Path path) { return fileName.toString().startsWith("."); } + /** + * Check whether the file denoted by the given path is a desktop services store created by Finder on macOS. + * + * @param path the path + * @return true if the current system is macOS and the specified file appears to be a desktop services store file + */ + public static boolean isDesktopServicesStore(final Path path) { + return Constants.MAC_OS_X && Files.isRegularFile(path) && ".DS_Store".equals(path.getFileName().toString()); + } + /** * Appends the path to the given base and strips N elements off the path if strip is > 0. */ diff --git a/core/src/main/java/org/elasticsearch/common/io/stream/InputStreamStreamInput.java b/core/src/main/java/org/elasticsearch/common/io/stream/InputStreamStreamInput.java index 6d952b01a21e3..5999427e1a206 100644 --- a/core/src/main/java/org/elasticsearch/common/io/stream/InputStreamStreamInput.java +++ b/core/src/main/java/org/elasticsearch/common/io/stream/InputStreamStreamInput.java @@ -28,9 +28,28 @@ public class InputStreamStreamInput extends StreamInput { private final InputStream is; + private final long sizeLimit; + /** + * Creates a new InputStreamStreamInput with unlimited size + * @param is the input stream to wrap + */ public InputStreamStreamInput(InputStream is) { + this(is, Long.MAX_VALUE); + } + + /** + * Creates a new InputStreamStreamInput with a size limit + * @param is the input stream to wrap + * @param sizeLimit a hard limit of the number of bytes in the given input stream. This is used for internal input validation + */ + public InputStreamStreamInput(InputStream is, long sizeLimit) { this.is = is; + if (sizeLimit < 0) { + throw new IllegalArgumentException("size limit must be positive"); + } + this.sizeLimit = sizeLimit; + } @Override @@ -98,6 +117,8 @@ public long skip(long n) throws IOException { @Override protected void ensureCanReadBytes(int length) throws EOFException { - // TODO what can we do here? 
+ if (length > sizeLimit) { + throw new EOFException("tried to read: " + length + " bytes but this stream is limited to: " + sizeLimit); + } } } diff --git a/core/src/main/java/org/elasticsearch/common/io/stream/StreamInput.java b/core/src/main/java/org/elasticsearch/common/io/stream/StreamInput.java index ac627cfd95d7f..31f53874f1949 100644 --- a/core/src/main/java/org/elasticsearch/common/io/stream/StreamInput.java +++ b/core/src/main/java/org/elasticsearch/common/io/stream/StreamInput.java @@ -928,7 +928,7 @@ public static StreamInput wrap(byte[] bytes) { } public static StreamInput wrap(byte[] bytes, int offset, int length) { - return new InputStreamStreamInput(new ByteArrayInputStream(bytes, offset, length)); + return new InputStreamStreamInput(new ByteArrayInputStream(bytes, offset, length), length); } /** diff --git a/core/src/main/java/org/elasticsearch/common/io/stream/Streamable.java b/core/src/main/java/org/elasticsearch/common/io/stream/Streamable.java index 99c054c4c7810..86a4d3ed95c2f 100644 --- a/core/src/main/java/org/elasticsearch/common/io/stream/Streamable.java +++ b/core/src/main/java/org/elasticsearch/common/io/stream/Streamable.java @@ -20,6 +20,7 @@ package org.elasticsearch.common.io.stream; import java.io.IOException; +import java.util.function.Supplier; /** * Implementers can be written to a {@linkplain StreamOutput} and read from a {@linkplain StreamInput}. This allows them to be "thrown @@ -43,4 +44,12 @@ public interface Streamable { * Write this object's fields to a {@linkplain StreamOutput}. */ void writeTo(StreamOutput out) throws IOException; + + static Writeable.Reader newWriteableReader(Supplier supplier) { + return (StreamInput in) -> { + T request = supplier.get(); + request.readFrom(in); + return request; + }; + } } diff --git a/core/src/main/java/org/elasticsearch/common/joda/Joda.java b/core/src/main/java/org/elasticsearch/common/joda/Joda.java index 832043af63e63..35ae6e2341f8d 100644 --- a/core/src/main/java/org/elasticsearch/common/joda/Joda.java +++ b/core/src/main/java/org/elasticsearch/common/joda/Joda.java @@ -79,7 +79,7 @@ public static FormatDateTimeFormatter forPattern(String input, Locale locale) { formatter = ISODateTimeFormat.basicTime(); } else if ("basicTimeNoMillis".equals(input) || "basic_time_no_millis".equals(input)) { formatter = ISODateTimeFormat.basicTimeNoMillis(); - } else if ("basicTTime".equals(input) || "basic_t_Time".equals(input)) { + } else if ("basicTTime".equals(input) || "basic_t_time".equals(input)) { formatter = ISODateTimeFormat.basicTTime(); } else if ("basicTTimeNoMillis".equals(input) || "basic_t_time_no_millis".equals(input)) { formatter = ISODateTimeFormat.basicTTimeNoMillis(); diff --git a/core/src/main/java/org/elasticsearch/common/logging/DeprecationLogger.java b/core/src/main/java/org/elasticsearch/common/logging/DeprecationLogger.java index 3ed1d9d30ac1a..1c559cf64fbb7 100644 --- a/core/src/main/java/org/elasticsearch/common/logging/DeprecationLogger.java +++ b/core/src/main/java/org/elasticsearch/common/logging/DeprecationLogger.java @@ -26,11 +26,14 @@ import org.elasticsearch.common.SuppressLoggerChecks; import org.elasticsearch.common.util.concurrent.ThreadContext; +import java.io.CharArrayWriter; +import java.nio.charset.Charset; import java.time.ZoneId; import java.time.ZonedDateTime; import java.time.format.DateTimeFormatter; import java.time.format.DateTimeFormatterBuilder; import java.time.format.SignStyle; +import java.util.BitSet; import java.util.Collections; import java.util.HashMap; import 
java.util.Iterator; @@ -228,7 +231,7 @@ public void deprecatedAndMaybeLog(final String key, final String msg, final Obje public static Pattern WARNING_HEADER_PATTERN = Pattern.compile( "299 " + // warn code "Elasticsearch-\\d+\\.\\d+\\.\\d+(?:-(?:alpha|beta|rc)\\d+)?(?:-SNAPSHOT)?-(?:[a-f0-9]{7}|Unknown) " + // warn agent - "\"((?:\t| |!|[\\x23-\\x5b]|[\\x5d-\\x7e]|[\\x80-\\xff]|\\\\|\\\\\")*)\" " + // quoted warning value, captured + "\"((?:\t| |!|[\\x23-\\x5B]|[\\x5D-\\x7E]|[\\x80-\\xFF]|\\\\|\\\\\")*)\" " + // quoted warning value, captured // quoted RFC 1123 date format "\"" + // opening quote "(?:Mon|Tue|Wed|Thu|Fri|Sat|Sun), " + // weekday @@ -304,7 +307,7 @@ void deprecated(final Set threadContexts, final String message, f final String formattedMessage = LoggerMessageFormat.format(message, params); final String warningHeaderValue = formatWarning(formattedMessage); assert WARNING_HEADER_PATTERN.matcher(warningHeaderValue).matches(); - assert extractWarningValueFromWarningHeader(warningHeaderValue).equals(escape(formattedMessage)); + assert extractWarningValueFromWarningHeader(warningHeaderValue).equals(escapeAndEncode(formattedMessage)); while (iterator.hasNext()) { try { final ThreadContext next = iterator.next(); @@ -328,7 +331,17 @@ void deprecated(final Set threadContexts, final String message, f * @return a warning value formatted according to RFC 7234 */ public static String formatWarning(final String s) { - return String.format(Locale.ROOT, WARNING_FORMAT, escape(s), RFC_7231_DATE_TIME.format(ZonedDateTime.now(GMT))); + return String.format(Locale.ROOT, WARNING_FORMAT, escapeAndEncode(s), RFC_7231_DATE_TIME.format(ZonedDateTime.now(GMT))); + } + + /** + * Escape and encode a string as a valid RFC 7230 quoted-string. + * + * @param s the string to escape and encode + * @return the escaped and encoded string + */ + public static String escapeAndEncode(final String s) { + return encode(escapeBackslashesAndQuotes(s)); } /** @@ -337,8 +350,81 @@ public static String formatWarning(final String s) { * @param s the string to escape * @return the escaped string */ - public static String escape(String s) { + static String escapeBackslashesAndQuotes(final String s) { return s.replaceAll("([\"\\\\])", "\\\\$1"); } + private static BitSet doesNotNeedEncoding; + + static { + doesNotNeedEncoding = new BitSet(1 + 0xFF); + doesNotNeedEncoding.set('\t'); + doesNotNeedEncoding.set(' '); + doesNotNeedEncoding.set('!'); + doesNotNeedEncoding.set('\\'); + doesNotNeedEncoding.set('"'); + // we have to skip '%' which is 0x25 so that it is percent-encoded too + for (int i = 0x23; i <= 0x24; i++) { + doesNotNeedEncoding.set(i); + } + for (int i = 0x26; i <= 0x5B; i++) { + doesNotNeedEncoding.set(i); + } + for (int i = 0x5D; i <= 0x7E; i++) { + doesNotNeedEncoding.set(i); + } + for (int i = 0x80; i <= 0xFF; i++) { + doesNotNeedEncoding.set(i); + } + assert !doesNotNeedEncoding.get('%'); + } + + private static final Charset UTF_8 = Charset.forName("UTF-8"); + + /** + * Encode a string containing characters outside of the legal characters for an RFC 7230 quoted-string. 
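// Illustrative sketch only, not part of this patch: rough expectations for the
// warning-header helpers added to DeprecationLogger above, assuming the behavior
// described in this change (backslash/quote escaping followed by percent-encoding of
// characters outside the RFC 7230 quoted-string range):
//
//   escapeAndEncode("foo \"bar\"")   -> foo \"bar\"      (quotes gain a backslash, no percent-encoding)
//   escapeAndEncode("100%")          -> 100%25           ('%' itself is percent-encoded)
//   escapeAndEncode("snow \u2603")   -> snow %E2%98%83   (non-ASCII is UTF-8 percent-encoded)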
+ * + * @param s the string to encode + * @return the encoded string + */ + static String encode(final String s) { + final StringBuilder sb = new StringBuilder(s.length()); + boolean encodingNeeded = false; + for (int i = 0; i < s.length();) { + int current = (int) s.charAt(i); + /* + * Either the character does not need encoding or it does; when the character does not need encoding we append the character to + * a buffer and move to the next character and when the character does need encoding, we peel off as many characters as possible + * which we encode using UTF-8 until we encounter another character that does not need encoding. + */ + if (doesNotNeedEncoding.get(current)) { + // append directly and move to the next character + sb.append((char) current); + i++; + } else { + int startIndex = i; + do { + i++; + } while (i < s.length() && !doesNotNeedEncoding.get(s.charAt(i))); + + final byte[] bytes = s.substring(startIndex, i).getBytes(UTF_8); + // noinspection ForLoopReplaceableByForEach + for (int j = 0; j < bytes.length; j++) { + sb.append('%').append(hex(bytes[j] >> 4)).append(hex(bytes[j])); + } + encodingNeeded = true; + } + } + return encodingNeeded ? sb.toString() : s; + } + + private static char hex(int b) { + final char ch = Character.forDigit(b & 0xF, 16); + if (Character.isLetter(ch)) { + return Character.toUpperCase(ch); + } else { + return ch; + } + } + } diff --git a/core/src/main/java/org/elasticsearch/common/logging/ESLoggerFactory.java b/core/src/main/java/org/elasticsearch/common/logging/ESLoggerFactory.java index 16f47f78ddbdb..d8f2ebe9be843 100644 --- a/core/src/main/java/org/elasticsearch/common/logging/ESLoggerFactory.java +++ b/core/src/main/java/org/elasticsearch/common/logging/ESLoggerFactory.java @@ -37,7 +37,7 @@ private ESLoggerFactory() { public static final Setting LOG_DEFAULT_LEVEL_SETTING = new Setting<>("logger.level", Level.INFO.name(), Level::valueOf, Property.NodeScope); - public static final Setting LOG_LEVEL_SETTING = + public static final Setting.AffixSetting LOG_LEVEL_SETTING = Setting.prefixKeySetting("logger.", (key) -> new Setting<>(key, Level.INFO.name(), Level::valueOf, Property.Dynamic, Property.NodeScope)); @@ -46,7 +46,12 @@ public static Logger getLogger(String prefix, String name) { } public static Logger getLogger(String prefix, Class clazz) { - return getLogger(prefix, LogManager.getLogger(clazz)); + /* + * Do not use LogManager#getLogger(Class) as this now uses Class#getCanonicalName under the hood; as this returns null for local and + * anonymous classes, any place we create, for example, an abstract component defined as an anonymous class (e.g., in tests) will + * result in a logger with a null name which will blow up in a lookup inside of Log4j. 
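// Illustrative sketch only, not part of this patch: it demonstrates the
// null-canonical-name problem described in the comment above (names are hypothetical).
Runnable anon = new Runnable() {
    @Override
    public void run() {
    }
};
assert anon.getClass().getCanonicalName() == null; // anonymous classes have no canonical name
String loggerName = anon.getClass().getName();     // e.g. "org.example.Foo$1", never null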
+ */ + return getLogger(prefix, LogManager.getLogger(clazz.getName())); } public static Logger getLogger(String prefix, Logger logger) { diff --git a/core/src/main/java/org/elasticsearch/common/logging/LogConfigurator.java b/core/src/main/java/org/elasticsearch/common/logging/LogConfigurator.java index 0024d83b80506..b97fc13e73038 100644 --- a/core/src/main/java/org/elasticsearch/common/logging/LogConfigurator.java +++ b/core/src/main/java/org/elasticsearch/common/logging/LogConfigurator.java @@ -52,7 +52,6 @@ import java.util.ArrayList; import java.util.EnumSet; import java.util.List; -import java.util.Map; import java.util.Objects; import java.util.Set; import java.util.concurrent.atomic.AtomicBoolean; @@ -182,15 +181,12 @@ private static void configureLoggerLevels(final Settings settings) { final Level level = ESLoggerFactory.LOG_DEFAULT_LEVEL_SETTING.get(settings); Loggers.setLevel(ESLoggerFactory.getRootLogger(), level); } - - final Map levels = settings.filter(ESLoggerFactory.LOG_LEVEL_SETTING::match).getAsMap(); - for (final String key : levels.keySet()) { + ESLoggerFactory.LOG_LEVEL_SETTING.getAllConcreteSettings(settings) // do not set a log level for a logger named level (from the default log setting) - if (!key.equals(ESLoggerFactory.LOG_DEFAULT_LEVEL_SETTING.getKey())) { - final Level level = ESLoggerFactory.LOG_LEVEL_SETTING.getConcreteSetting(key).get(settings); - Loggers.setLevel(ESLoggerFactory.getLogger(key.substring("logger.".length())), level); - } - } + .filter(s -> s.getKey().equals(ESLoggerFactory.LOG_DEFAULT_LEVEL_SETTING.getKey()) == false).forEach(s -> { + final Level level = s.get(settings); + Loggers.setLevel(ESLoggerFactory.getLogger(s.getKey().substring("logger.".length())), level); + }); } /** diff --git a/core/src/main/java/org/elasticsearch/common/lucene/Lucene.java b/core/src/main/java/org/elasticsearch/common/lucene/Lucene.java index b156c9bb2961e..597fa970a57ae 100644 --- a/core/src/main/java/org/elasticsearch/common/lucene/Lucene.java +++ b/core/src/main/java/org/elasticsearch/common/lucene/Lucene.java @@ -40,21 +40,17 @@ import org.apache.lucene.index.NoMergePolicy; import org.apache.lucene.index.SegmentCommitInfo; import org.apache.lucene.index.SegmentInfos; -import org.apache.lucene.search.Collector; import org.apache.lucene.search.DocIdSetIterator; import org.apache.lucene.search.Explanation; import org.apache.lucene.search.FieldDoc; import org.apache.lucene.search.IndexSearcher; -import org.apache.lucene.search.LeafCollector; import org.apache.lucene.search.Query; import org.apache.lucene.search.ScoreDoc; import org.apache.lucene.search.Scorer; import org.apache.lucene.search.ScorerSupplier; -import org.apache.lucene.search.SimpleCollector; import org.apache.lucene.search.SortField; import org.apache.lucene.search.SortedNumericSortField; import org.apache.lucene.search.SortedSetSortField; -import org.apache.lucene.search.TimeLimitingCollector; import org.apache.lucene.search.TopDocs; import org.apache.lucene.search.TopFieldDocs; import org.apache.lucene.search.TwoPhaseIterator; @@ -66,9 +62,7 @@ import org.apache.lucene.store.Lock; import org.apache.lucene.util.Bits; import org.apache.lucene.util.BytesRef; -import org.apache.lucene.util.Counter; import org.apache.lucene.util.Version; -import org.elasticsearch.ElasticsearchException; import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.Strings; @@ -87,7 +81,6 @@ import java.util.Collections; import java.util.List; import java.util.Map; 
-import java.util.Objects; public class Lucene { public static final String LATEST_DOC_VALUES_FORMAT = "Lucene70"; @@ -769,7 +762,7 @@ public static Bits asSequentialAccessBits(final int maxDoc, @Nullable ScorerSupp return new Bits.MatchNoBits(maxDoc); } // Since we want bits, we need random-access - final Scorer scorer = scorerSupplier.get(true); // this never returns null + final Scorer scorer = scorerSupplier.get(Long.MAX_VALUE); // this never returns null final TwoPhaseIterator twoPhase = scorer.twoPhaseIterator(); final DocIdSetIterator iterator; if (twoPhase == null) { diff --git a/core/src/main/java/org/elasticsearch/common/lucene/all/AllEntries.java b/core/src/main/java/org/elasticsearch/common/lucene/all/AllEntries.java deleted file mode 100644 index ffd85213e8a04..0000000000000 --- a/core/src/main/java/org/elasticsearch/common/lucene/all/AllEntries.java +++ /dev/null @@ -1,64 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.common.lucene.all; - -import java.util.ArrayList; -import java.util.List; - -public class AllEntries { - public static class Entry { - private final String name; - private final String value; - private final float boost; - - public Entry(String name, String value, float boost) { - this.name = name; - this.value = value; - this.boost = boost; - } - - public String name() { - return this.name; - } - - public float boost() { - return this.boost; - } - - public String value() { - return this.value; - } - } - - private final List entries = new ArrayList<>(); - - public void addText(String name, String text, float boost) { - Entry entry = new Entry(name, text, boost); - entries.add(entry); - } - - public void clear() { - this.entries.clear(); - } - - public List entries() { - return this.entries; - } -} diff --git a/core/src/main/java/org/elasticsearch/common/lucene/all/AllField.java b/core/src/main/java/org/elasticsearch/common/lucene/all/AllField.java deleted file mode 100644 index f52683a0009f5..0000000000000 --- a/core/src/main/java/org/elasticsearch/common/lucene/all/AllField.java +++ /dev/null @@ -1,49 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.common.lucene.all; - -import org.apache.lucene.analysis.Analyzer; -import org.apache.lucene.analysis.TokenStream; -import org.apache.lucene.document.Field; -import org.apache.lucene.document.FieldType; -import org.apache.lucene.index.IndexOptions; - -public class AllField extends Field { - private final float boost; - - public AllField(String name, String value, float boost, FieldType fieldType) { - super(name, value, fieldType); - this.boost = boost; - } - - @Override - public TokenStream tokenStream(Analyzer analyzer, TokenStream previous) { - TokenStream ts = analyzer.tokenStream(name(), stringValue()); - if (boost != 1.0f && fieldType().indexOptions().compareTo(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS) >= 0) { - // TODO: we should be able to reuse "previous" if its instanceof AllTokenStream? - // but we need to be careful this optimization is safe (and tested)... - - // AllTokenStream maps boost to 4-byte payloads, so we only need to use it any field had non-default (!= 1.0f) boost and if - // positions are indexed: - return new AllTokenStream(ts, boost); - } - return ts; - } -} diff --git a/core/src/main/java/org/elasticsearch/common/lucene/all/AllTermQuery.java b/core/src/main/java/org/elasticsearch/common/lucene/all/AllTermQuery.java deleted file mode 100644 index 7df146a11c231..0000000000000 --- a/core/src/main/java/org/elasticsearch/common/lucene/all/AllTermQuery.java +++ /dev/null @@ -1,237 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.common.lucene.all; - -import org.apache.lucene.analysis.payloads.PayloadHelper; -import org.apache.lucene.index.IndexReader; -import org.apache.lucene.index.LeafReaderContext; -import org.apache.lucene.index.PostingsEnum; -import org.apache.lucene.index.Term; -import org.apache.lucene.index.TermContext; -import org.apache.lucene.index.TermState; -import org.apache.lucene.index.Terms; -import org.apache.lucene.index.TermsEnum; -import org.apache.lucene.search.CollectionStatistics; -import org.apache.lucene.search.DocIdSetIterator; -import org.apache.lucene.search.Explanation; -import org.apache.lucene.search.IndexSearcher; -import org.apache.lucene.search.Query; -import org.apache.lucene.search.Scorer; -import org.apache.lucene.search.TermQuery; -import org.apache.lucene.search.TermStatistics; -import org.apache.lucene.search.Weight; -import org.apache.lucene.search.similarities.Similarity; -import org.apache.lucene.search.similarities.Similarity.SimScorer; -import org.apache.lucene.search.similarities.Similarity.SimWeight; -import org.apache.lucene.util.BytesRef; -import org.apache.lucene.util.SmallFloat; - -import java.io.IOException; -import java.util.Objects; -import java.util.Set; - -/** - * A term query that takes all payload boost values into account. - *
- * It is like PayloadTermQuery with AveragePayloadFunction, except - * unlike PayloadTermQuery, it doesn't plug into the similarity to - * determine how the payload should be factored in, it just parses - * the float and multiplies the average with the regular score. - */ -public final class AllTermQuery extends Query { - - private final Term term; - - public AllTermQuery(Term term) { - this.term = term; - } - - public Term getTerm() { - return term; - } - - @Override - public boolean equals(Object obj) { - if (sameClassAs(obj) == false) { - return false; - } - return Objects.equals(term, ((AllTermQuery) obj).term); - } - - @Override - public int hashCode() { - return 31 * classHash() + term.hashCode(); - } - - @Override - public Query rewrite(IndexReader reader) throws IOException { - Query rewritten = super.rewrite(reader); - if (rewritten != this) { - return rewritten; - } - boolean hasPayloads = false; - for (LeafReaderContext context : reader.leaves()) { - final Terms terms = context.reader().terms(term.field()); - if (terms != null) { - if (terms.hasPayloads()) { - hasPayloads = true; - break; - } - } - } - // if the terms does not exist we could return a MatchNoDocsQuery but this would break the unified highlighter - // which rewrites query with an empty reader. - if (hasPayloads == false) { - return new TermQuery(term); - } - return this; - } - - @Override - public Weight createWeight(IndexSearcher searcher, boolean needsScores, float boost) throws IOException { - if (needsScores == false) { - return new TermQuery(term).createWeight(searcher, needsScores, boost); - } - final TermContext termStates = TermContext.build(searcher.getTopReaderContext(), term); - final CollectionStatistics collectionStats = searcher.collectionStatistics(term.field()); - final TermStatistics termStats = searcher.termStatistics(term, termStates); - final Similarity similarity = searcher.getSimilarity(needsScores); - final SimWeight stats = similarity.computeWeight(boost, collectionStats, termStats); - return new Weight(this) { - - @Override - public void extractTerms(Set terms) { - terms.add(term); - } - - @Override - public Explanation explain(LeafReaderContext context, int doc) throws IOException { - AllTermScorer scorer = scorer(context); - if (scorer != null) { - int newDoc = scorer.iterator().advance(doc); - if (newDoc == doc) { - float score = scorer.score(); - float freq = scorer.freq(); - SimScorer docScorer = similarity.simScorer(stats, context); - Explanation freqExplanation = Explanation.match(freq, "termFreq=" + freq); - Explanation termScoreExplanation = docScorer.explain(doc, freqExplanation); - Explanation payloadBoostExplanation = - Explanation.match(scorer.payloadBoost(), "payloadBoost=" + scorer.payloadBoost()); - return Explanation.match( - score, - "weight(" + getQuery() + " in " + doc + ") [" - + similarity.getClass().getSimpleName() + "], product of:", - termScoreExplanation, payloadBoostExplanation); - } - } - return Explanation.noMatch("no matching term"); - } - - @Override - public AllTermScorer scorer(LeafReaderContext context) throws IOException { - final Terms terms = context.reader().terms(term.field()); - if (terms == null) { - return null; - } - final TermsEnum termsEnum = terms.iterator(); - if (termsEnum == null) { - return null; - } - final TermState state = termStates.get(context.ord); - if (state == null) { - // Term does not exist in this segment - return null; - } - termsEnum.seekExact(term.bytes(), state); - PostingsEnum docs = termsEnum.postings(null, 
PostingsEnum.PAYLOADS); - assert docs != null; - return new AllTermScorer(this, docs, similarity.simScorer(stats, context)); - } - - }; - } - - private static class AllTermScorer extends Scorer { - - final PostingsEnum postings; - final Similarity.SimScorer docScorer; - int doc = -1; - float payloadBoost; - - AllTermScorer(Weight weight, PostingsEnum postings, Similarity.SimScorer docScorer) { - super(weight); - this.postings = postings; - this.docScorer = docScorer; - } - - float payloadBoost() throws IOException { - if (doc != docID()) { - final int freq = postings.freq(); - payloadBoost = 0; - for (int i = 0; i < freq; ++i) { - postings.nextPosition(); - final BytesRef payload = postings.getPayload(); - float boost; - if (payload == null) { - boost = 1; - } else if (payload.length == 1) { - boost = SmallFloat.byte315ToFloat(payload.bytes[payload.offset]); - } else if (payload.length == 4) { - // TODO: for bw compat only, remove this in 6.0 - boost = PayloadHelper.decodeFloat(payload.bytes, payload.offset); - } else { - throw new IllegalStateException("Payloads are expected to have a length of 1 or 4 but got: " - + payload); - } - payloadBoost += boost; - } - payloadBoost /= freq; - doc = docID(); - } - return payloadBoost; - } - - @Override - public float score() throws IOException { - return payloadBoost() * docScorer.score(postings.docID(), postings.freq()); - } - - @Override - public int freq() throws IOException { - return postings.freq(); - } - - @Override - public int docID() { - return postings.docID(); - } - - @Override - public DocIdSetIterator iterator() { - return postings; - } - } - - @Override - public String toString(String field) { - return new TermQuery(term).toString(field); - } - -} diff --git a/core/src/main/java/org/elasticsearch/common/lucene/all/AllTokenStream.java b/core/src/main/java/org/elasticsearch/common/lucene/all/AllTokenStream.java deleted file mode 100644 index 0d29a65d5d521..0000000000000 --- a/core/src/main/java/org/elasticsearch/common/lucene/all/AllTokenStream.java +++ /dev/null @@ -1,53 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.common.lucene.all; - -import org.apache.lucene.analysis.Analyzer; -import org.apache.lucene.analysis.TokenFilter; -import org.apache.lucene.analysis.TokenStream; -import org.apache.lucene.analysis.tokenattributes.PayloadAttribute; -import org.apache.lucene.util.BytesRef; -import org.apache.lucene.util.SmallFloat; - -import java.io.IOException; - -public final class AllTokenStream extends TokenFilter { - public static TokenStream allTokenStream(String allFieldName, String value, float boost, Analyzer analyzer) throws IOException { - return new AllTokenStream(analyzer.tokenStream(allFieldName, value), boost); - } - - private final BytesRef payloadSpare = new BytesRef(new byte[1]); - private final PayloadAttribute payloadAttribute; - - AllTokenStream(TokenStream input, float boost) { - super(input); - payloadAttribute = addAttribute(PayloadAttribute.class); - payloadSpare.bytes[0] = SmallFloat.floatToByte315(boost); - } - - @Override - public boolean incrementToken() throws IOException { - if (!input.incrementToken()) { - return false; - } - payloadAttribute.setPayload(payloadSpare); - return true; - } -} diff --git a/core/src/main/java/org/elasticsearch/common/lucene/search/Queries.java b/core/src/main/java/org/elasticsearch/common/lucene/search/Queries.java index 36b94718776dd..5129cd5485e35 100644 --- a/core/src/main/java/org/elasticsearch/common/lucene/search/Queries.java +++ b/core/src/main/java/org/elasticsearch/common/lucene/search/Queries.java @@ -116,15 +116,6 @@ public static Query fixNegativeQueryIfNeeded(Query q) { return q; } - public static boolean isConstantMatchAllQuery(Query query) { - if (query instanceof ConstantScoreQuery) { - return isConstantMatchAllQuery(((ConstantScoreQuery) query).getQuery()); - } else if (query instanceof MatchAllDocsQuery) { - return true; - } - return false; - } - public static Query applyMinimumShouldMatch(BooleanQuery query, @Nullable String minimumShouldMatch) { if (minimumShouldMatch == null) { return query; diff --git a/core/src/main/java/org/elasticsearch/common/lucene/uid/PerThreadIDVersionAndSeqNoLookup.java b/core/src/main/java/org/elasticsearch/common/lucene/uid/PerThreadIDVersionAndSeqNoLookup.java index 2b37c338c9a40..f8ccd827019a4 100644 --- a/core/src/main/java/org/elasticsearch/common/lucene/uid/PerThreadIDVersionAndSeqNoLookup.java +++ b/core/src/main/java/org/elasticsearch/common/lucene/uid/PerThreadIDVersionAndSeqNoLookup.java @@ -32,7 +32,7 @@ import org.elasticsearch.common.lucene.uid.VersionsAndSeqNoResolver.DocIdAndVersion; import org.elasticsearch.index.mapper.SeqNoFieldMapper; import org.elasticsearch.index.mapper.VersionFieldMapper; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.seqno.SequenceNumbers; import java.io.IOException; @@ -138,7 +138,7 @@ DocIdAndSeqNo lookupSeqNo(BytesRef id, LeafReaderContext context) throws IOExcep if (seqNos != null && seqNos.advanceExact(docID)) { seqNo = seqNos.longValue(); } else { - seqNo = SequenceNumbersService.UNASSIGNED_SEQ_NO; + seqNo = SequenceNumbers.UNASSIGNED_SEQ_NO; } return new DocIdAndSeqNo(docID, seqNo, context); } else { diff --git a/core/src/main/java/org/elasticsearch/common/network/InetAddresses.java b/core/src/main/java/org/elasticsearch/common/network/InetAddresses.java index 4d3d140ae636a..2e68d8358f0b2 100644 --- a/core/src/main/java/org/elasticsearch/common/network/InetAddresses.java +++ b/core/src/main/java/org/elasticsearch/common/network/InetAddresses.java @@ -16,6 +16,8 @@ package 
org.elasticsearch.common.network; +import org.elasticsearch.common.collect.Tuple; + import java.net.Inet4Address; import java.net.Inet6Address; import java.net.InetAddress; @@ -354,4 +356,32 @@ private static InetAddress bytesToInetAddress(byte[] addr) { throw new AssertionError(e); } } + + /** + * Parse an IP address and its prefix length using the CIDR notation. + * @throws IllegalArgumentException if the string is not formatted as {@code ip_address/prefix_length} + * @throws IllegalArgumentException if the IP address is an IPv6-mapped ipv4 address + * @throws IllegalArgumentException if the prefix length is not in 0-32 for IPv4 addresses and 0-128 for IPv6 addresses + * @throws NumberFormatException if the prefix length is not an integer + */ + public static Tuple parseCidr(String maskedAddress) { + String[] fields = maskedAddress.split("/"); + if (fields.length == 2) { + final String addressString = fields[0]; + final InetAddress address = forString(addressString); + if (addressString.contains(":") && address.getAddress().length == 4) { + throw new IllegalArgumentException("CIDR notation is not allowed with IPv6-mapped IPv4 address [" + addressString + + " as it introduces ambiguity as to whether the prefix length should be interpreted as a v4 prefix length or a" + + " v6 prefix length"); + } + final int prefixLength = Integer.parseInt(fields[1]); + if (prefixLength < 0 || prefixLength > 8 * address.getAddress().length) { + throw new IllegalArgumentException("Illegal prefix length [" + prefixLength + "] in [" + maskedAddress + + "]. Must be 0-32 for IPv4 ranges, 0-128 for IPv6 ranges"); + } + return new Tuple<>(address, prefixLength); + } else { + throw new IllegalArgumentException("Expected [ip/prefix] but was [" + maskedAddress + "]"); + } + } } diff --git a/core/src/main/java/org/elasticsearch/common/network/NetworkModule.java b/core/src/main/java/org/elasticsearch/common/network/NetworkModule.java index a8356bfe10f82..8cb13647fb6af 100644 --- a/core/src/main/java/org/elasticsearch/common/network/NetworkModule.java +++ b/core/src/main/java/org/elasticsearch/common/network/NetworkModule.java @@ -63,7 +63,6 @@ public final class NetworkModule { public static final String TRANSPORT_TYPE_KEY = "transport.type"; public static final String HTTP_TYPE_KEY = "http.type"; - public static final String LOCAL_TRANSPORT = "local"; public static final String HTTP_TYPE_DEFAULT_KEY = "http.type.default"; public static final String TRANSPORT_TYPE_DEFAULT_KEY = "transport.type.default"; diff --git a/core/src/main/java/org/elasticsearch/common/settings/AbstractScopedSettings.java b/core/src/main/java/org/elasticsearch/common/settings/AbstractScopedSettings.java index 05b7d96c8f6db..61f32c67c20cb 100644 --- a/core/src/main/java/org/elasticsearch/common/settings/AbstractScopedSettings.java +++ b/core/src/main/java/org/elasticsearch/common/settings/AbstractScopedSettings.java @@ -86,7 +86,7 @@ protected AbstractScopedSettings(Settings settings, Set> settingsSet, protected void validateSettingKey(Setting setting) { if (isValidKey(setting.getKey()) == false && (setting.isGroupSetting() && isValidGroupKey(setting.getKey()) - || isValidAffixKey(setting.getKey())) == false) { + || isValidAffixKey(setting.getKey())) == false || setting.getKey().endsWith(".0")) { throw new IllegalArgumentException("illegal settings key: [" + setting.getKey() + "]"); } } @@ -207,26 +207,50 @@ public synchronized void addAffixUpdateConsumer(Setting.AffixSetting sett addSettingsUpdater(setting.newAffixUpdater(consumer, logger, 
validator)); } + /** + * Adds a settings consumer for affix settings. Affix settings have a namespace associated to it that needs to be available to the + * consumer in order to be processed correctly. This consumer will get a namespace to value map instead of each individual namespace + * and value as in {@link #addAffixUpdateConsumer(Setting.AffixSetting, BiConsumer, BiConsumer)} + */ + public synchronized void addAffixMapUpdateConsumer(Setting.AffixSetting setting, Consumer> consumer, + BiConsumer validator, boolean omitDefaults) { + final Setting registeredSetting = this.complexMatchers.get(setting.getKey()); + if (setting != registeredSetting) { + throw new IllegalArgumentException("Setting is not registered for key [" + setting.getKey() + "]"); + } + addSettingsUpdater(setting.newAffixMapUpdater(consumer, logger, validator, omitDefaults)); + } + synchronized void addSettingsUpdater(SettingUpdater updater) { this.settingUpdaters.add(updater); } /** - * Adds a settings consumer that accepts the values for two settings. The consumer if only notified if one or both settings change. + * Adds a settings consumer that accepts the values for two settings. + * See {@link #addSettingsUpdateConsumer(Setting, Setting, BiConsumer, BiConsumer)} for details. + */ + public synchronized void addSettingsUpdateConsumer(Setting a, Setting b, BiConsumer consumer) { + addSettingsUpdateConsumer(a, b, consumer, (i, j) -> {} ); + } + + /** + * Adds a settings consumer that accepts the values for two settings. The consumer is only notified if one or both settings change + * and if the provided validator succeeded. *
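// Illustrative sketch only, not part of this patch: a possible use of the new
// four-argument addSettingsUpdateConsumer overload added here. The setting keys and
// the `clusterSettings` instance are hypothetical.
Setting<Integer> LOW = Setting.intSetting("queue.low_watermark", 10,
        Setting.Property.Dynamic, Setting.Property.NodeScope);
Setting<Integer> HIGH = Setting.intSetting("queue.high_watermark", 100,
        Setting.Property.Dynamic, Setting.Property.NodeScope);
clusterSettings.addSettingsUpdateConsumer(LOW, HIGH,
        (low, high) -> { /* apply both values together */ },
        (low, high) -> {
            if (low > high) {
                throw new IllegalArgumentException("low watermark must not exceed high watermark");
            }
        });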
* Note: Only settings registered in {@link SettingsModule} can be changed dynamically. *
- * This method registers a compound updater that is useful if two settings are depending on each other. The consumer is always provided - * with both values even if only one of the two changes. + * This method registers a compound updater that is useful if two settings are depending on each other. + * The consumer is always provided with both values even if only one of the two changes. */ - public synchronized void addSettingsUpdateConsumer(Setting a, Setting b, BiConsumer consumer) { + public synchronized void addSettingsUpdateConsumer(Setting a, Setting b, + BiConsumer consumer, BiConsumer validator) { if (a != get(a.getKey())) { throw new IllegalArgumentException("Setting is not registered for key [" + a.getKey() + "]"); } if (b != get(b.getKey())) { throw new IllegalArgumentException("Setting is not registered for key [" + b.getKey() + "]"); } - addSettingsUpdater(Setting.compoundUpdater(consumer, a, b, logger)); + addSettingsUpdater(Setting.compoundUpdater(consumer, validator, a, b, logger)); } /** @@ -479,22 +503,25 @@ private boolean updateSettings(Settings toApply, Settings.Builder target, Settin (onlyDynamic && isDynamicSetting(key) // it's a dynamicSetting and we only do dynamic settings || get(key) == null && key.startsWith(ARCHIVED_SETTINGS_PREFIX) // the setting is not registered AND it's been archived || (onlyDynamic == false && get(key) != null))); // if it's not dynamic AND we have a key - for (Map.Entry entry : toApply.getAsMap().entrySet()) { - if (entry.getValue() == null && (canRemove.test(entry.getKey()) || entry.getKey().endsWith("*"))) { + for (String key : toApply.keySet()) { + boolean isNull = toApply.get(key) == null; + if (isNull && (canRemove.test(key) || key.endsWith("*"))) { // this either accepts null values that suffice the canUpdate test OR wildcard expressions (key ends with *) // we don't validate if there is any dynamic setting with that prefix yet we could do in the future - toRemove.add(entry.getKey()); + toRemove.add(key); // we don't set changed here it's set after we apply deletes below if something actually changed - } else if (entry.getValue() != null && canUpdate.test(entry.getKey())) { - validate(entry.getKey(), toApply); - settingsBuilder.put(entry.getKey(), entry.getValue()); - updates.put(entry.getKey(), entry.getValue()); + } else if (get(key) == null) { + throw new IllegalArgumentException(type + " setting [" + key + "], not recognized"); + } else if (isNull == false && canUpdate.test(key)) { + validate(key, toApply); + settingsBuilder.copy(key, toApply); + updates.copy(key, toApply); changed = true; } else { - if (isFinalSetting(entry.getKey())) { - throw new IllegalArgumentException("final " + type + " setting [" + entry.getKey() + "], not updateable"); + if (isFinalSetting(key)) { + throw new IllegalArgumentException("final " + type + " setting [" + key + "], not updateable"); } else { - throw new IllegalArgumentException(type + " setting [" + entry.getKey() + "], not dynamically updateable"); + throw new IllegalArgumentException(type + " setting [" + key + "], not dynamically updateable"); } } } @@ -507,7 +534,7 @@ private static boolean applyDeletes(Set deletes, Settings.Builder builde boolean changed = false; for (String entry : deletes) { Set keysToRemove = new HashSet<>(); - Set keySet = builder.internalMap().keySet(); + Set keySet = builder.keys(); for (String key : keySet) { if (Regex.simpleMatch(entry, key) && canRemove.test(key)) { // we have to re-check with canRemove here since we might have a wildcard expression foo.* that 
matches @@ -558,35 +585,35 @@ public Settings archiveUnknownOrInvalidSettings( final BiConsumer, IllegalArgumentException> invalidConsumer) { Settings.Builder builder = Settings.builder(); boolean changed = false; - for (Map.Entry entry : settings.getAsMap().entrySet()) { + for (String key : settings.keySet()) { try { - Setting setting = get(entry.getKey()); + Setting setting = get(key); if (setting != null) { setting.get(settings); - builder.put(entry.getKey(), entry.getValue()); + builder.copy(key, settings); } else { - if (entry.getKey().startsWith(ARCHIVED_SETTINGS_PREFIX) || isPrivateSetting(entry.getKey())) { - builder.put(entry.getKey(), entry.getValue()); + if (key.startsWith(ARCHIVED_SETTINGS_PREFIX) || isPrivateSetting(key)) { + builder.copy(key, settings); } else { changed = true; - unknownConsumer.accept(entry); + unknownConsumer.accept(new Entry(key, settings)); /* * We put them back in here such that tools can check from the outside if there are any indices with invalid * settings. The setting can remain there but we want users to be aware that some of their setting are invalid and * they can research why and what they need to do to replace them. */ - builder.put(ARCHIVED_SETTINGS_PREFIX + entry.getKey(), entry.getValue()); + builder.copy(ARCHIVED_SETTINGS_PREFIX + key, key, settings); } } } catch (IllegalArgumentException ex) { changed = true; - invalidConsumer.accept(entry, ex); + invalidConsumer.accept(new Entry(key, settings), ex); /* * We put them back in here such that tools can check from the outside if there are any indices with invalid settings. The * setting can remain there but we want users to be aware that some of their setting are invalid and they can research why * and what they need to do to replace them. */ - builder.put(ARCHIVED_SETTINGS_PREFIX + entry.getKey(), entry.getValue()); + builder.copy(ARCHIVED_SETTINGS_PREFIX + key, key, settings); } } if (changed) { @@ -596,6 +623,32 @@ public Settings archiveUnknownOrInvalidSettings( } } + private static final class Entry implements Map.Entry { + + private final String key; + private final Settings settings; + + private Entry(String key, Settings settings) { + this.key = key; + this.settings = settings; + } + + @Override + public String getKey() { + return key; + } + + @Override + public String getValue() { + return settings.get(key); + } + + @Override + public String setValue(String value) { + throw new UnsupportedOperationException(); + } + } + /** * Returns true iff the setting is a private setting ie. it should be treated as valid even though it has no internal * representation. 
Otherwise false diff --git a/core/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java b/core/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java index d8cb231523dca..1ade10e4c7dd1 100644 --- a/core/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java +++ b/core/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java @@ -33,6 +33,7 @@ import org.elasticsearch.cluster.action.index.MappingUpdatedAction; import org.elasticsearch.cluster.metadata.IndexGraveyard; import org.elasticsearch.cluster.metadata.MetaData; +import org.elasticsearch.cluster.routing.OperationRouting; import org.elasticsearch.cluster.routing.allocation.DiskThresholdSettings; import org.elasticsearch.cluster.routing.allocation.allocator.BalancedShardsAllocator; import org.elasticsearch.cluster.routing.allocation.decider.AwarenessAllocationDecider; @@ -118,19 +119,19 @@ private static final class LoggingSettingUpdater implements SettingUpdater { Map groups = s.getAsGroups(); @@ -196,6 +201,8 @@ protected boolean isPrivateSetting(String key) { case MergePolicyConfig.INDEX_MERGE_ENABLED: case IndexMetaData.INDEX_SHRINK_SOURCE_UUID_KEY: case IndexMetaData.INDEX_SHRINK_SOURCE_NAME_KEY: + case IndexMetaData.INDEX_RESIZE_SOURCE_UUID_KEY: + case IndexMetaData.INDEX_RESIZE_SOURCE_NAME_KEY: case IndexSettings.INDEX_MAPPING_SINGLE_TYPE_SETTING_KEY: // this was settable in 5.x but not anymore in 6.x so we have to preserve the value ie. make it read-only // this can be removed in later versions diff --git a/core/src/main/java/org/elasticsearch/common/settings/KeyStoreWrapper.java b/core/src/main/java/org/elasticsearch/common/settings/KeyStoreWrapper.java index f36fb8ec9bd81..441bb131f039c 100644 --- a/core/src/main/java/org/elasticsearch/common/settings/KeyStoreWrapper.java +++ b/core/src/main/java/org/elasticsearch/common/settings/KeyStoreWrapper.java @@ -30,6 +30,7 @@ import java.nio.CharBuffer; import java.nio.charset.CharsetEncoder; import java.nio.charset.StandardCharsets; +import java.nio.file.AccessDeniedException; import java.nio.file.Files; import java.nio.file.Path; import java.nio.file.StandardCopyOption; @@ -59,6 +60,8 @@ import org.apache.lucene.store.SimpleFSDirectory; import org.apache.lucene.util.SetOnce; import org.elasticsearch.bootstrap.BootstrapSettings; +import org.elasticsearch.cli.ExitCodes; +import org.elasticsearch.cli.UserException; import org.elasticsearch.common.Randomness; /** @@ -151,7 +154,7 @@ private KeyStoreWrapper(int formatVersion, boolean hasPassword, String type, } /** Returns a path representing the ES keystore in the given config dir. 
*/ - static Path keystorePath(Path configDir) { + public static Path keystorePath(Path configDir) { return configDir.resolve(KEYSTORE_FILENAME); } @@ -168,7 +171,8 @@ public static KeyStoreWrapper create(char[] password) throws Exception { } /** Add the bootstrap seed setting, which may be used as a unique, secure, random value by the node */ - private static void addBootstrapSeed(KeyStoreWrapper wrapper) throws GeneralSecurityException { + public static void addBootstrapSeed(KeyStoreWrapper wrapper) throws GeneralSecurityException { + assert wrapper.getSettingNames().contains(SEED_SETTING.getKey()) == false; SecureRandom random = Randomness.createSecure(); int passwordLength = 20; // Generate 20 character passwords char[] characters = new char[passwordLength]; @@ -224,6 +228,16 @@ public static KeyStoreWrapper load(Path configDir) throws IOException { } } + /** Upgrades the format of the keystore, if necessary. */ + public static void upgrade(KeyStoreWrapper wrapper, Path configDir) throws Exception { + // ensure keystore.seed exists + if (wrapper.getSettingNames().contains(SEED_SETTING.getKey())) { + return; + } + addBootstrapSeed(wrapper); + wrapper.save(configDir); + } + @Override public boolean isLoaded() { return keystore.get() != null; @@ -249,11 +263,9 @@ public void decrypt(char[] password) throws GeneralSecurityException, IOExceptio } finally { Arrays.fill(keystoreBytes, (byte)0); } - keystorePassword.set(new KeyStore.PasswordProtection(password)); Arrays.fill(password, '\0'); - Enumeration aliases = keystore.get().aliases(); if (formatVersion == 1) { while (aliases.hasMoreElements()) { @@ -276,6 +288,7 @@ public void decrypt(char[] password) throws GeneralSecurityException, IOExceptio /** Write the keystore to the given config directory. 
*/ public void save(Path configDir) throws Exception { + assert isLoaded(); char[] password = this.keystorePassword.get().getPassword(); SimpleFSDirectory directory = new SimpleFSDirectory(configDir); @@ -304,6 +317,12 @@ public void save(Path configDir) throws Exception { output.writeInt(keystoreBytes.length); output.writeBytes(keystoreBytes, keystoreBytes.length); CodecUtil.writeFooter(output); + } catch (final AccessDeniedException e) { + final String message = String.format( + Locale.ROOT, + "unable to create temporary keystore at [%s], please check filesystem permissions", + configDir.resolve(tmpFile)); + throw new UserException(ExitCodes.CONFIG, message, e); } Path keystoreFile = keystorePath(configDir); @@ -311,7 +330,7 @@ public void save(Path configDir) throws Exception { PosixFileAttributeView attrs = Files.getFileAttributeView(keystoreFile, PosixFileAttributeView.class); if (attrs != null) { // don't rely on umask: ensure the keystore has minimal permissions - attrs.setPermissions(PosixFilePermissions.fromString("rw-------")); + attrs.setPermissions(PosixFilePermissions.fromString("rw-rw----")); } } @@ -323,6 +342,7 @@ public Set getSettingNames() { // TODO: make settings accessible only to code that registered the setting @Override public SecureString getString(String setting) throws GeneralSecurityException { + assert isLoaded(); KeyStore.Entry entry = keystore.get().getEntry(setting, keystorePassword.get()); if (settingTypes.get(setting) != KeyType.STRING || entry instanceof KeyStore.SecretKeyEntry == false) { @@ -338,6 +358,7 @@ public SecureString getString(String setting) throws GeneralSecurityException { @Override public InputStream getFile(String setting) throws GeneralSecurityException { + assert isLoaded(); KeyStore.Entry entry = keystore.get().getEntry(setting, keystorePassword.get()); if (settingTypes.get(setting) != KeyType.FILE || entry instanceof KeyStore.SecretKeyEntry == false) { @@ -368,6 +389,7 @@ public void close() throws IOException { * @throws IllegalArgumentException if the value is not ASCII */ void setString(String setting, char[] value) throws GeneralSecurityException { + assert isLoaded(); if (ASCII_ENCODER.canEncode(CharBuffer.wrap(value)) == false) { throw new IllegalArgumentException("Value must be ascii"); } @@ -378,6 +400,7 @@ void setString(String setting, char[] value) throws GeneralSecurityException { /** Set a file setting. */ void setFile(String setting, byte[] bytes) throws GeneralSecurityException { + assert isLoaded(); bytes = Base64.getEncoder().encode(bytes); char[] chars = new char[bytes.length]; for (int i = 0; i < chars.length; ++i) { @@ -390,6 +413,7 @@ void setFile(String setting, byte[] bytes) throws GeneralSecurityException { /** Remove the given setting from the keystore. */ void remove(String setting) throws KeyStoreException { + assert isLoaded(); keystore.get().deleteEntry(setting); settingTypes.remove(setting); } diff --git a/core/src/main/java/org/elasticsearch/common/settings/SecureSetting.java b/core/src/main/java/org/elasticsearch/common/settings/SecureSetting.java index 7c55eaaeff23b..4a1e598bba8ad 100644 --- a/core/src/main/java/org/elasticsearch/common/settings/SecureSetting.java +++ b/core/src/main/java/org/elasticsearch/common/settings/SecureSetting.java @@ -118,19 +118,7 @@ public void diff(Settings.Builder builder, Settings source, Settings defaultSett */ public static Setting secureString(String name, Setting fallback, Property... 
properties) { - return new SecureSetting(name, properties) { - @Override - protected SecureString getSecret(SecureSettings secureSettings) throws GeneralSecurityException { - return secureSettings.getString(getKey()); - } - @Override - SecureString getFallback(Settings settings) { - if (fallback != null) { - return fallback.get(settings); - } - return new SecureString(new char[0]); // this means "setting does not exist" - } - }; + return new SecureStringSetting(name, fallback, properties); } /** @@ -138,16 +126,7 @@ SecureString getFallback(Settings settings) { * @see #secureString(String, Setting, Property...) */ public static Setting insecureString(String name) { - return new Setting(name, "", SecureString::new, Property.Deprecated, Property.Filtered, Property.NodeScope) { - @Override - public SecureString get(Settings settings) { - if (ALLOW_INSECURE_SETTINGS == false && exists(settings)) { - throw new IllegalArgumentException("Setting [" + name + "] is insecure, " + - "but property [allow_insecure_settings] is not set"); - } - return super.get(settings); - } - }; + return new InsecureStringSetting(name); } /** @@ -157,19 +136,68 @@ public SecureString get(Settings settings) { */ public static Setting secureFile(String name, Setting fallback, Property... properties) { - return new SecureSetting(name, properties) { - @Override - protected InputStream getSecret(SecureSettings secureSettings) throws GeneralSecurityException { - return secureSettings.getFile(getKey()); + return new SecureFileSetting(name, fallback, properties); + } + + private static class SecureStringSetting extends SecureSetting { + private final Setting fallback; + + private SecureStringSetting(String name, Setting fallback, Property... properties) { + super(name, properties); + this.fallback = fallback; + } + + @Override + protected SecureString getSecret(SecureSettings secureSettings) throws GeneralSecurityException { + return secureSettings.getString(getKey()); + } + + @Override + SecureString getFallback(Settings settings) { + if (fallback != null) { + return fallback.get(settings); } - @Override - InputStream getFallback(Settings settings) { - if (fallback != null) { - return fallback.get(settings); - } - return null; + return new SecureString(new char[0]); // this means "setting does not exist" + } + } + + private static class InsecureStringSetting extends Setting { + private final String name; + + private InsecureStringSetting(String name) { + super(name, "", SecureString::new, Property.Deprecated, Property.Filtered, Property.NodeScope); + this.name = name; + } + + @Override + public SecureString get(Settings settings) { + if (ALLOW_INSECURE_SETTINGS == false && exists(settings)) { + throw new IllegalArgumentException("Setting [" + name + "] is insecure, " + + "but property [allow_insecure_settings] is not set"); } - }; + return super.get(settings); + } } + private static class SecureFileSetting extends SecureSetting { + private final Setting fallback; + + private SecureFileSetting(String name, Setting fallback, Property... 
properties) { + super(name, properties); + this.fallback = fallback; + } + + @Override + protected InputStream getSecret(SecureSettings secureSettings) throws GeneralSecurityException { + return secureSettings.getFile(getKey()); + } + + @Override + InputStream getFallback(Settings settings) { + if (fallback != null) { + return fallback.get(settings); + } + return null; + } + } } diff --git a/core/src/main/java/org/elasticsearch/common/settings/Setting.java b/core/src/main/java/org/elasticsearch/common/settings/Setting.java index 75adf3d61339b..9b99e67c8c4da 100644 --- a/core/src/main/java/org/elasticsearch/common/settings/Setting.java +++ b/core/src/main/java/org/elasticsearch/common/settings/Setting.java @@ -21,7 +21,6 @@ import org.apache.logging.log4j.Logger; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.ElasticsearchParseException; -import org.elasticsearch.action.support.ToXContentToBytes; import org.elasticsearch.common.Booleans; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.Strings; @@ -31,6 +30,7 @@ import org.elasticsearch.common.unit.MemorySizeValue; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.xcontent.NamedXContentRegistry; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentParser; @@ -47,9 +47,11 @@ import java.util.List; import java.util.Map; import java.util.Objects; +import java.util.Set; import java.util.function.BiConsumer; import java.util.function.Consumer; import java.util.function.Function; +import java.util.function.IntConsumer; import java.util.regex.Matcher; import java.util.regex.Pattern; import java.util.stream.Collectors; @@ -78,7 +80,7 @@ * } * */ -public class Setting extends ToXContentToBytes { +public class Setting implements ToXContentObject { public enum Property { /** @@ -328,7 +330,7 @@ public T getDefault(Settings settings) { * Returns true iff this setting is present in the given settings object. Otherwise false */ public boolean exists(Settings settings) { - return settings.getAsMap().containsKey(getKey()); + return settings.keySet().contains(getKey()); } /** @@ -425,6 +427,11 @@ public final XContentBuilder toXContent(XContentBuilder builder, Params params) return builder; } + @Override + public String toString() { + return Strings.toString(this, true, true); + } + /** * Returns the value for this setting but falls back to the second provided settings object */ @@ -474,7 +481,7 @@ AbstractScopedSettings.SettingUpdater newUpdater(Consumer consumer, Logger * See {@link AbstractScopedSettings#addSettingsUpdateConsumer(Setting, Setting, BiConsumer)} and its usage for details. 
*/ static AbstractScopedSettings.SettingUpdater> compoundUpdater(final BiConsumer consumer, - final Setting aSetting, final Setting bSetting, Logger logger) { + final BiConsumer validator, final Setting aSetting, final Setting bSetting, Logger logger) { final AbstractScopedSettings.SettingUpdater aSettingUpdater = aSetting.newUpdater(null, logger); final AbstractScopedSettings.SettingUpdater bSettingUpdater = bSetting.newUpdater(null, logger); return new AbstractScopedSettings.SettingUpdater>() { @@ -485,7 +492,10 @@ public boolean hasChanged(Settings current, Settings previous) { @Override public Tuple getValue(Settings current, Settings previous) { - return new Tuple<>(aSettingUpdater.getValue(current, previous), bSettingUpdater.getValue(current, previous)); + A valueA = aSettingUpdater.getValue(current, previous); + B valueB = bSettingUpdater.getValue(current, previous); + validator.accept(valueA, valueB); + return new Tuple<>(valueA, valueB); } @Override @@ -521,7 +531,7 @@ boolean isGroupSetting() { } private Stream matchStream(Settings settings) { - return settings.getAsMap().keySet().stream().filter((key) -> match(key)).map(settingKey -> key.getConcreteString(settingKey)); + return settings.keySet().stream().filter((key) -> match(key)).map(settingKey -> key.getConcreteString(settingKey)); } AbstractScopedSettings.SettingUpdater, T>> newAffixUpdater( @@ -539,8 +549,9 @@ public Map, T> getValue(Settings curren final Map, T> result = new IdentityHashMap<>(); Stream.concat(matchStream(current), matchStream(previous)).distinct().forEach(aKey -> { String namespace = key.getNamespace(aKey); + Setting concreteSetting = getConcreteSetting(aKey); AbstractScopedSettings.SettingUpdater updater = - getConcreteSetting(aKey).newUpdater((v) -> consumer.accept(namespace, v), logger, + concreteSetting.newUpdater((v) -> consumer.accept(namespace, v), logger, (v) -> validator.accept(namespace, v)); if (updater.hasChanged(current, previous)) { // only the ones that have changed otherwise we might get too many updates @@ -561,6 +572,43 @@ public void apply(Map, T> value, Settin }; } + AbstractScopedSettings.SettingUpdater> newAffixMapUpdater(Consumer> consumer, Logger logger, + BiConsumer validator, boolean omitDefaults) { + return new AbstractScopedSettings.SettingUpdater>() { + + @Override + public boolean hasChanged(Settings current, Settings previous) { + return Stream.concat(matchStream(current), matchStream(previous)).findAny().isPresent(); + } + + @Override + public Map getValue(Settings current, Settings previous) { + // we collect all concrete keys and then delegate to the actual setting for validation and settings extraction + final Map result = new IdentityHashMap<>(); + Stream.concat(matchStream(current), matchStream(previous)).distinct().forEach(aKey -> { + String namespace = key.getNamespace(aKey); + Setting concreteSetting = getConcreteSetting(aKey); + AbstractScopedSettings.SettingUpdater updater = + concreteSetting.newUpdater((v) -> {}, logger, (v) -> validator.accept(namespace, v)); + if (updater.hasChanged(current, previous)) { + // only the ones that have changed otherwise we might get too many updates + // the hasChanged above checks only if there are any changes + T value = updater.getValue(current, previous); + if ((omitDefaults && value.equals(concreteSetting.getDefault(current))) == false) { + result.put(namespace, value); + } + } + }); + return result; + } + + @Override + public void apply(Map value, Settings current, Settings previous) { + consumer.accept(value); + } + }; + } + 
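// Illustrative sketch only, not part of this patch: a possible use of the affix-map
// updater plumbing added above, via ClusterSettings#addAffixMapUpdateConsumer. The
// setting prefix and the `clusterSettings` instance are hypothetical.
Setting.AffixSetting<Integer> REMOTE_RETRIES = Setting.affixKeySetting(
        "remote.", "retries",
        key -> Setting.intSetting(key, 3, Setting.Property.Dynamic, Setting.Property.NodeScope));
clusterSettings.addAffixMapUpdateConsumer(
        REMOTE_RETRIES,
        byNamespace -> byNamespace.forEach((namespace, retries) -> { /* apply per namespace */ }),
        (namespace, retries) -> { /* validate per namespace */ },
        true); // omitDefaults: drop namespaces whose value equals the concrete setting's default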
@Override public T get(Settings settings) { throw new UnsupportedOperationException("affix settings can't return values" + @@ -610,6 +658,18 @@ public String getNamespace(Setting concreteSetting) { public Stream> getAllConcreteSettings(Settings settings) { return matchStream(settings).distinct().map(this::getConcreteSetting); } + + /** + * Returns a map of all namespaces to it's values give the provided settings + */ + public Map getAsMap(Settings settings) { + Map map = new HashMap<>(); + matchStream(settings).distinct().forEach(key -> { + Setting concreteSetting = getConcreteSetting(key); + map.put(getNamespace(concreteSetting), concreteSetting.get(settings)); + }); + return Collections.unmodifiableMap(map); + } } /** @@ -640,6 +700,140 @@ default Iterator> settings() { } + private static class GroupSetting extends Setting { + private final String key; + private final Consumer validator; + + private GroupSetting(String key, Consumer validator, Property... properties) { + super(new GroupKey(key), (s) -> "", (s) -> null, properties); + this.key = key; + this.validator = validator; + } + + @Override + public boolean isGroupSetting() { + return true; + } + + @Override + public String getRaw(Settings settings) { + Settings subSettings = get(settings); + try { + XContentBuilder builder = XContentFactory.jsonBuilder(); + builder.startObject(); + subSettings.toXContent(builder, EMPTY_PARAMS); + builder.endObject(); + return builder.string(); + } catch (IOException e) { + throw new RuntimeException(e); + } + } + + @Override + public Settings get(Settings settings) { + Settings byPrefix = settings.getByPrefix(getKey()); + validator.accept(byPrefix); + return byPrefix; + } + + @Override + public boolean exists(Settings settings) { + for (String settingsKey : settings.keySet()) { + if (settingsKey.startsWith(key)) { + return true; + } + } + return false; + } + + @Override + public void diff(Settings.Builder builder, Settings source, Settings defaultSettings) { + Set leftGroup = get(source).keySet(); + Settings defaultGroup = get(defaultSettings); + + builder.put(Settings.builder().put(defaultGroup.filter(k -> leftGroup.contains(k) == false), false) + .normalizePrefix(getKey()).build(), false); + } + + @Override + public AbstractScopedSettings.SettingUpdater newUpdater(Consumer consumer, Logger logger, + Consumer validator) { + if (isDynamic() == false) { + throw new IllegalStateException("setting [" + getKey() + "] is not dynamic"); + } + final Setting setting = this; + return new AbstractScopedSettings.SettingUpdater() { + + @Override + public boolean hasChanged(Settings current, Settings previous) { + Settings currentSettings = get(current); + Settings previousSettings = get(previous); + return currentSettings.equals(previousSettings) == false; + } + + @Override + public Settings getValue(Settings current, Settings previous) { + Settings currentSettings = get(current); + Settings previousSettings = get(previous); + try { + validator.accept(currentSettings); + } catch (Exception | AssertionError e) { + throw new IllegalArgumentException("illegal value can't update [" + key + "] from [" + + previousSettings + "] to [" + currentSettings+ "]", e); + } + return currentSettings; + } + + @Override + public void apply(Settings value, Settings current, Settings previous) { + if (logger.isInfoEnabled()) { // getRaw can create quite some objects + logger.info("updating [{}] from [{}] to [{}]", key, getRaw(previous), getRaw(current)); + } + consumer.accept(value); + } + + @Override + public String toString() 
{ + return "Updater for: " + setting.toString(); + } + }; + } + } + + private static class ListSetting extends Setting> { + private final Function> defaultStringValue; + + private ListSetting(String key, Function> defaultStringValue, Function> parser, + Property... properties) { + super(new ListKey(key), (s) -> Setting.arrayToParsableString(defaultStringValue.apply(s)), parser, + properties); + this.defaultStringValue = defaultStringValue; + } + + @Override + public String getRaw(Settings settings) { + List array = settings.getAsList(getKey(), null); + return array == null ? defaultValue.apply(settings) : arrayToParsableString(array); + } + + @Override + boolean hasComplexMatcher() { + return true; + } + + @Override + public void diff(Settings.Builder builder, Settings source, Settings defaultSettings) { + if (exists(source) == false) { + List asList = defaultSettings.getAsList(getKey(), null); + if (asList == null) { + builder.putList(getKey(), defaultStringValue.apply(defaultSettings)); + } else { + builder.putList(getKey(), asList); + } + } + } + } + private final class Updater implements AbstractScopedSettings.SettingUpdater { private final Consumer consumer; private final Logger logger; @@ -714,6 +908,12 @@ public static Setting intSetting(String key, Setting fallbackS return new Setting<>(key, fallbackSetting, (s) -> parseInt(s, minValue, key), properties); } + public static Setting intSetting(String key, Setting fallbackSetting, int minValue, Validator validator, + Property... properties) { + return new Setting<>(new SimpleKey(key), fallbackSetting, fallbackSetting::getRaw, (s) -> parseInt(s, minValue, key),validator, + properties); + } + public static Setting longSetting(String key, long defaultValue, long minValue, Property... properties) { return new Setting<>(key, (s) -> Long.toString(defaultValue), (s) -> parseLong(s, minValue, key), properties); } @@ -722,6 +922,14 @@ public static Setting simpleString(String key, Property... properties) { return new Setting<>(key, s -> "", Function.identity(), properties); } + public static Setting simpleString(String key, Setting fallback, Property... properties) { + return new Setting<>(key, fallback, Function.identity(), properties); + } + + public static Setting simpleString(String key, Validator validator, Property... properties) { + return new Setting<>(new SimpleKey(key), null, s -> "", Function.identity(), validator, properties); + } + public static int parseInt(String s, int minValue, String key) { return parseInt(s, minValue, Integer.MAX_VALUE, key); } @@ -867,37 +1075,7 @@ public static Setting> listSetting(String key, Function> parser = (s) -> parseableStringToList(s).stream().map(singleValueParser).collect(Collectors.toList()); - return new Setting>(new ListKey(key), - (s) -> arrayToParsableString(defaultStringValue.apply(s).toArray(Strings.EMPTY_ARRAY)), parser, properties) { - @Override - public String getRaw(Settings settings) { - String[] array = settings.getAsArray(getKey(), null); - return array == null ? 
defaultValue.apply(settings) : arrayToParsableString(array); - } - - @Override - boolean hasComplexMatcher() { - return true; - } - - @Override - public boolean exists(Settings settings) { - boolean exists = super.exists(settings); - return exists || settings.get(getKey() + ".0") != null; - } - - @Override - public void diff(Settings.Builder builder, Settings source, Settings defaultSettings) { - if (exists(source) == false) { - String[] asArray = defaultSettings.getAsArray(getKey(), null); - if (asArray == null) { - builder.putArray(getKey(), defaultStringValue.apply(defaultSettings)); - } else { - builder.putArray(getKey(), asArray); - } - } - } - }; + return new ListSetting<>(key, defaultStringValue, parser, properties); } private static List parseableStringToList(String parsableString) { @@ -920,7 +1098,7 @@ private static List parseableStringToList(String parsableString) { } } - private static String arrayToParsableString(String[] array) { + private static String arrayToParsableString(List array) { try { XContentBuilder builder = XContentBuilder.builder(XContentType.JSON.xContent()); builder.startArray(); @@ -939,98 +1117,7 @@ public static Setting groupSetting(String key, Property... properties) } public static Setting groupSetting(String key, Consumer validator, Property... properties) { - return new Setting(new GroupKey(key), (s) -> "", (s) -> null, properties) { - @Override - public boolean isGroupSetting() { - return true; - } - - @Override - public String getRaw(Settings settings) { - Settings subSettings = get(settings); - try { - XContentBuilder builder = XContentFactory.jsonBuilder(); - builder.startObject(); - subSettings.toXContent(builder, EMPTY_PARAMS); - builder.endObject(); - return builder.string(); - } catch (IOException e) { - throw new RuntimeException(e); - } - } - - @Override - public Settings get(Settings settings) { - Settings byPrefix = settings.getByPrefix(getKey()); - validator.accept(byPrefix); - return byPrefix; - } - - @Override - public boolean exists(Settings settings) { - for (Map.Entry entry : settings.getAsMap().entrySet()) { - if (entry.getKey().startsWith(key)) { - return true; - } - } - return false; - } - - @Override - public void diff(Settings.Builder builder, Settings source, Settings defaultSettings) { - Map leftGroup = get(source).getAsMap(); - Settings defaultGroup = get(defaultSettings); - for (Map.Entry entry : defaultGroup.getAsMap().entrySet()) { - if (leftGroup.containsKey(entry.getKey()) == false) { - builder.put(getKey() + entry.getKey(), entry.getValue()); - } - } - } - - @Override - public AbstractScopedSettings.SettingUpdater newUpdater(Consumer consumer, Logger logger, - Consumer validator) { - if (isDynamic() == false) { - throw new IllegalStateException("setting [" + getKey() + "] is not dynamic"); - } - final Setting setting = this; - return new AbstractScopedSettings.SettingUpdater() { - - @Override - public boolean hasChanged(Settings current, Settings previous) { - Settings currentSettings = get(current); - Settings previousSettings = get(previous); - return currentSettings.equals(previousSettings) == false; - } - - @Override - public Settings getValue(Settings current, Settings previous) { - Settings currentSettings = get(current); - Settings previousSettings = get(previous); - try { - validator.accept(currentSettings); - } catch (Exception | AssertionError e) { - throw new IllegalArgumentException("illegal value can't update [" + key + "] from [" - + previousSettings.getAsMap() + "] to [" + currentSettings.getAsMap() + "]", 
e); - } - return currentSettings; - } - - @Override - public void apply(Settings value, Settings current, Settings previous) { - if (logger.isInfoEnabled()) { // getRaw can create quite some objects - logger.info("updating [{}] from [{}] to [{}]", key, getRaw(previous), getRaw(current)); - } - consumer.accept(value); - } - - @Override - public String toString() { - return "Updater for: " + setting.toString(); - } - }; - } - }; + return new GroupSetting(key, validator, properties); } public static Setting timeSetting(String key, Function defaultValue, TimeValue minValue, diff --git a/core/src/main/java/org/elasticsearch/common/settings/Settings.java b/core/src/main/java/org/elasticsearch/common/settings/Settings.java index b5b94f8794adb..41acefdd8e879 100644 --- a/core/src/main/java/org/elasticsearch/common/settings/Settings.java +++ b/core/src/main/java/org/elasticsearch/common/settings/Settings.java @@ -19,32 +19,35 @@ package org.elasticsearch.common.settings; +import org.apache.logging.log4j.Level; +import org.apache.lucene.util.IOUtils; import org.apache.lucene.util.SetOnce; +import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.Version; import org.elasticsearch.common.Booleans; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.io.Streams; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.logging.DeprecationLogger; import org.elasticsearch.common.logging.LogConfigurator; import org.elasticsearch.common.logging.Loggers; -import org.elasticsearch.common.settings.loader.SettingsLoader; -import org.elasticsearch.common.settings.loader.SettingsLoaderFactory; import org.elasticsearch.common.unit.ByteSizeUnit; import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.unit.MemorySizeValue; import org.elasticsearch.common.unit.RatioValue; import org.elasticsearch.common.unit.SizeValue; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentParserUtils; import org.elasticsearch.common.xcontent.XContentType; import java.io.IOException; import java.io.InputStream; -import java.io.InputStreamReader; -import java.nio.charset.StandardCharsets; +import java.io.UncheckedIOException; import java.nio.file.Files; import java.nio.file.Path; import java.security.GeneralSecurityException; @@ -53,23 +56,18 @@ import java.util.ArrayList; import java.util.Arrays; import java.util.Collections; -import java.util.Dictionary; import java.util.HashMap; import java.util.HashSet; import java.util.Iterator; import java.util.List; -import java.util.Locale; import java.util.Map; import java.util.NoSuchElementException; -import java.util.Objects; import java.util.Set; import java.util.TreeMap; import java.util.concurrent.TimeUnit; import java.util.function.Function; import java.util.function.Predicate; import java.util.function.UnaryOperator; -import java.util.regex.Matcher; -import java.util.regex.Pattern; import java.util.stream.Collectors; import java.util.stream.Stream; @@ -80,13 +78,12 @@ /** * An immutable settings implementation. 
*/ -public final class Settings implements ToXContent { +public final class Settings implements ToXContentFragment { public static final Settings EMPTY = new Builder().build(); - private static final Pattern ARRAY_PATTERN = Pattern.compile("(.*)\\.\\d+$"); /** The raw settings from the full key to raw string value. */ - private final Map settings; + private final Map settings; /** The secure settings storage associated with these settings. */ private final SecureSettings secureSettings; @@ -100,7 +97,7 @@ public final class Settings implements ToXContent { */ private final SetOnce> keys = new SetOnce<>(); - Settings(Map settings, SecureSettings secureSettings) { + Settings(Map settings, SecureSettings secureSettings) { // we use a sorted map for consistent serialization when using getAsMap() this.settings = Collections.unmodifiableSortedMap(new TreeMap<>(settings)); this.secureSettings = secureSettings; @@ -114,21 +111,9 @@ SecureSettings getSecureSettings() { return secureSettings; } - /** - * The settings as a flat {@link java.util.Map}. - * @return an unmodifiable map of settings - */ - public Map getAsMap() { - // settings is always unmodifiable - return this.settings; - } - - /** - * The settings as a structured {@link java.util.Map}. - */ - public Map getAsStructuredMap() { + private Map getAsStructuredMap() { Map map = new HashMap<>(2); - for (Map.Entry entry : settings.entrySet()) { + for (Map.Entry entry : settings.entrySet()) { processSetting(map, "", entry.getKey(), entry.getValue()); } for (Map.Entry entry : map.entrySet()) { @@ -141,7 +126,7 @@ public Map getAsStructuredMap() { return map; } - private void processSetting(Map map, String prefix, String setting, String value) { + private void processSetting(Map map, String prefix, String setting, Object value) { int prefixLength = setting.indexOf('.'); if (prefixLength == -1) { @SuppressWarnings("unchecked") Map innerMap = (Map) map.get(prefix + setting); @@ -245,7 +230,7 @@ public Settings getAsSettings(String setting) { * @return The setting value, null if it does not exists. */ public String get(String setting) { - return settings.get(setting); + return toString(settings.get(setting)); } /** @@ -338,36 +323,6 @@ public Boolean getAsBoolean(String setting, Boolean defaultValue) { return Booleans.parseBoolean(get(setting), defaultValue); } - // TODO #22298: Delete this method and update call sites to #getAsBoolean(String, Boolean). - /** - * Returns the setting value (as boolean) associated with the setting key. If it does not exist, returns the default value provided. - * If the index was created on Elasticsearch below 6.0, booleans will be parsed leniently otherwise they are parsed strictly. - * - * See {@link Booleans#isBooleanLenient(char[], int, int)} for the definition of a "lenient boolean" - * and {@link Booleans#isBoolean(char[], int, int)} for the definition of a "strict boolean". - * - * @deprecated Only used to provide automatic upgrades for pre 6.0 indices. 
- */ - @Deprecated - public Boolean getAsBooleanLenientForPreEs6Indices( - final Version indexVersion, - final String setting, - final Boolean defaultValue, - final DeprecationLogger deprecationLogger) { - if (indexVersion.before(Version.V_6_0_0_alpha1)) { - //Only emit a warning if the setting's value is not a proper boolean - final String value = get(setting, "false"); - if (Booleans.isBoolean(value) == false) { - @SuppressWarnings("deprecation") - boolean convertedValue = Booleans.parseBooleanLenient(get(setting), defaultValue); - deprecationLogger.deprecated("The value [{}] of setting [{}] is not coerced into boolean anymore. Please change " + - "this value to [{}].", value, setting, String.valueOf(convertedValue)); - return convertedValue; - } - } - return getAsBoolean(setting, defaultValue); - } - /** * Returns the setting value (as time) associated with the setting key. If it does not exists, * returns the default value provided. @@ -411,87 +366,64 @@ public SizeValue getAsSize(String setting, SizeValue defaultValue) throws Settin } /** - * The values associated with a setting prefix as an array. The settings array is in the format of: - * settingPrefix.[index]. + * The values associated with a setting key as an immutable list. *
* It will also automatically load a comma separated list under the settingPrefix and merge with * the numbered format. * - * @param settingPrefix The setting prefix to load the array by - * @return The setting array values + * @param key The setting key to load the list by + * @return The setting list values */ - public String[] getAsArray(String settingPrefix) throws SettingsException { - return getAsArray(settingPrefix, Strings.EMPTY_ARRAY, true); + public List getAsList(String key) throws SettingsException { + return getAsList(key, Collections.emptyList()); } /** - * The values associated with a setting prefix as an array. The settings array is in the format of: - * settingPrefix.[index]. + * The values associated with a setting key as an immutable list. *
* If commaDelimited is true, it will automatically load a comma separated list under the settingPrefix and merge with * the numbered format. * - * @param settingPrefix The setting prefix to load the array by - * @return The setting array values + * @param key The setting key to load the list by + * @return The setting list values */ - public String[] getAsArray(String settingPrefix, String[] defaultArray) throws SettingsException { - return getAsArray(settingPrefix, defaultArray, true); + public List getAsList(String key, List defaultValue) throws SettingsException { + return getAsList(key, defaultValue, true); } /** - * The values associated with a setting prefix as an array. The settings array is in the format of: - * settingPrefix.[index]. + * The values associated with a setting key as an immutable list. *
* It will also automatically load a comma separated list under the settingPrefix and merge with * the numbered format. * - * @param settingPrefix The setting prefix to load the array by - * @param defaultArray The default array to use if no value is specified + * @param key The setting key to load the list by + * @param defaultValue The default value to use if no value is specified * @param commaDelimited Whether to try to parse a string as a comma-delimited value - * @return The setting array values + * @return The setting list values */ - public String[] getAsArray(String settingPrefix, String[] defaultArray, Boolean commaDelimited) throws SettingsException { + public List getAsList(String key, List defaultValue, Boolean commaDelimited) throws SettingsException { List result = new ArrayList<>(); - - final String valueFromPrefix = get(settingPrefix); - final String valueFromPreifx0 = get(settingPrefix + ".0"); - - if (valueFromPrefix != null && valueFromPreifx0 != null) { - final String message = String.format( - Locale.ROOT, - "settings object contains values for [%s=%s] and [%s=%s]", - settingPrefix, - valueFromPrefix, - settingPrefix + ".0", - valueFromPreifx0); - throw new IllegalStateException(message); - } - - if (get(settingPrefix) != null) { - if (commaDelimited) { - String[] strings = Strings.splitStringByCommaToArray(get(settingPrefix)); + final Object valueFromPrefix = settings.get(key); + if (valueFromPrefix != null) { + if (valueFromPrefix instanceof List) { + return ((List) valueFromPrefix); // it's already unmodifiable since the builder puts it as a such + } else if (commaDelimited) { + String[] strings = Strings.splitStringByCommaToArray(get(key)); if (strings.length > 0) { for (String string : strings) { result.add(string.trim()); } } } else { - result.add(get(settingPrefix).trim()); + result.add(get(key).trim()); } } - int counter = 0; - while (true) { - String value = get(settingPrefix + '.' 
+ (counter++)); - if (value == null) { - break; - } - result.add(value.trim()); - } if (result.isEmpty()) { - return defaultArray; + return defaultValue; } - return result.toArray(new String[result.size()]); + return Collections.unmodifiableList(result); } @@ -588,7 +520,7 @@ public Set names() { */ public String toDelimitedString(char delimiter) { StringBuilder sb = new StringBuilder(); - for (Map.Entry entry : settings.entrySet()) { + for (Map.Entry entry : settings.entrySet()) { sb.append(entry.getKey()).append("=").append(entry.getValue()).append(delimiter); } return sb.toString(); } @@ -613,19 +545,52 @@ public int hashCode() { public static Settings readSettingsFromStream(StreamInput in) throws IOException { Builder builder = new Builder(); int numberOfSettings = in.readVInt(); - for (int i = 0; i < numberOfSettings; i++) { - builder.put(in.readString(), in.readOptionalString()); + if (in.getVersion().onOrAfter(Version.V_6_1_0)) { + for (int i = 0; i < numberOfSettings; i++) { + String key = in.readString(); + Object value = in.readGenericValue(); + if (value == null) { + builder.putNull(key); + } else if (value instanceof List) { + builder.putList(key, (List) value); + } else { + builder.put(key, value.toString()); + } + } + } else { + for (int i = 0; i < numberOfSettings; i++) { + String key = in.readString(); + String value = in.readOptionalString(); + builder.put(key, value); + } } return builder.build(); } public static void writeSettingsToStream(Settings settings, StreamOutput out) throws IOException { - // pull getAsMap() to exclude secure settings in size() - Set> entries = settings.getAsMap().entrySet(); - out.writeVInt(entries.size()); - for (Map.Entry entry : entries) { - out.writeString(entry.getKey()); - out.writeOptionalString(entry.getValue()); + // pull settings to exclude secure settings in size() + Set> entries = settings.settings.entrySet(); + if (out.getVersion().onOrAfter(Version.V_6_1_0)) { + out.writeVInt(entries.size()); + for (Map.Entry entry : entries) { + out.writeString(entry.getKey()); + out.writeGenericValue(entry.getValue()); + } + } else { + int size = entries.stream().mapToInt(e -> e.getValue() instanceof List ? ((List)e.getValue()).size() : 1).sum(); + out.writeVInt(size); + for (Map.Entry entry : entries) { + if (entry.getValue() instanceof List) { + int idx = 0; + for (String value : (List)entry.getValue()) { + out.writeString(entry.getKey() + "." + idx++); + out.writeOptionalString(value); + } + } else { + out.writeString(entry.getKey()); + out.writeOptionalString(toString(entry.getValue())); + } + } } } @@ -644,13 +609,122 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.field(entry.getKey(), entry.getValue()); } } else { - for (Map.Entry entry : settings.getAsMap().entrySet()) { + for (Map.Entry entry : settings.settings.entrySet()) { builder.field(entry.getKey(), entry.getValue()); } } return builder; } + /** + * Parses the generated xcontent from {@link Settings#toXContent(XContentBuilder, Params)} into a new Settings object. + * Note this method requires the parser to either be positioned on a null token or on + * {@link org.elasticsearch.common.xcontent.XContentParser.Token#START_OBJECT}.
+ */ + public static Settings fromXContent(XContentParser parser) throws IOException { + return fromXContent(parser, true, false); + } + + private static Settings fromXContent(XContentParser parser, boolean allowNullValues, boolean validateEndOfStream) throws IOException { + if (parser.currentToken() == null) { + parser.nextToken(); + } + XContentParserUtils.ensureExpectedToken(XContentParser.Token.START_OBJECT, parser.currentToken(), parser::getTokenLocation); + Builder innerBuilder = Settings.builder(); + StringBuilder currentKeyBuilder = new StringBuilder(); + fromXContent(parser, currentKeyBuilder, innerBuilder, allowNullValues); + if (validateEndOfStream) { + // ensure we reached the end of the stream + XContentParser.Token lastToken = null; + try { + while (!parser.isClosed() && (lastToken = parser.nextToken()) == null) ; + } catch (Exception e) { + throw new ElasticsearchParseException( + "malformed, expected end of settings but encountered additional content starting at line number: [{}], " + + "column number: [{}]", + e, parser.getTokenLocation().lineNumber, parser.getTokenLocation().columnNumber); + } + if (lastToken != null) { + throw new ElasticsearchParseException( + "malformed, expected end of settings but encountered additional content starting at line number: [{}], " + + "column number: [{}]", + parser.getTokenLocation().lineNumber, parser.getTokenLocation().columnNumber); + } + } + return innerBuilder.build(); + } + + private static void fromXContent(XContentParser parser, StringBuilder keyBuilder, Settings.Builder builder, + boolean allowNullValues) throws IOException { + final int length = keyBuilder.length(); + while (parser.nextToken() != XContentParser.Token.END_OBJECT) { + if (parser.currentToken() == XContentParser.Token.FIELD_NAME) { + keyBuilder.setLength(length); + keyBuilder.append(parser.currentName()); + } else if (parser.currentToken() == XContentParser.Token.START_OBJECT) { + keyBuilder.append('.'); + fromXContent(parser, keyBuilder, builder, allowNullValues); + } else if (parser.currentToken() == XContentParser.Token.START_ARRAY) { + List list = new ArrayList<>(); + while (parser.nextToken() != XContentParser.Token.END_ARRAY) { + if (parser.currentToken() == XContentParser.Token.VALUE_STRING) { + list.add(parser.text()); + } else if (parser.currentToken() == XContentParser.Token.VALUE_NUMBER) { + list.add(parser.text()); // just use the string representation here + } else if (parser.currentToken() == XContentParser.Token.VALUE_BOOLEAN) { + list.add(String.valueOf(parser.text())); + } else { + throw new IllegalStateException("only value lists are allowed in serialized settings"); + } + } + String key = keyBuilder.toString(); + validateValue(key, list, builder, parser, allowNullValues); + builder.putList(key, list); + } else if (parser.currentToken() == XContentParser.Token.VALUE_NULL) { + String key = keyBuilder.toString(); + validateValue(key, null, builder, parser, allowNullValues); + builder.putNull(key); + } else if (parser.currentToken() == XContentParser.Token.VALUE_STRING + || parser.currentToken() == XContentParser.Token.VALUE_NUMBER) { + String key = keyBuilder.toString(); + String value = parser.text(); + validateValue(key, value, builder, parser, allowNullValues); + builder.put(key, value); + } else if (parser.currentToken() == XContentParser.Token.VALUE_BOOLEAN) { + String key = keyBuilder.toString(); + validateValue(key, parser.text(), builder, parser, allowNullValues); + builder.put(key, parser.booleanValue()); + } else { + 
XContentParserUtils.throwUnknownToken(parser.currentToken(), parser.getTokenLocation()); + } + } + } + + private static void validateValue(String key, Object currentValue, Settings.Builder builder, XContentParser parser, + boolean allowNullValues) { + if (builder.map.containsKey(key)) { + throw new ElasticsearchParseException( + "duplicate settings key [{}] found at line number [{}], column number [{}], previous value [{}], current value [{}]", + key, + parser.getTokenLocation().lineNumber, + parser.getTokenLocation().columnNumber, + builder.map.get(key), + currentValue + ); + } + + if (currentValue == null && allowNullValues == false) { + throw new ElasticsearchParseException( + "null-valued setting found for key [{}] found at line number [{}], column number [{}]", + key, + parser.getTokenLocation().lineNumber, + parser.getTokenLocation().columnNumber + ); + } + } + + + public static final Set FORMAT_PARAMS = Collections.unmodifiableSet(new HashSet<>(Arrays.asList("settings_filter", "flat_settings"))); @@ -693,7 +767,7 @@ public static class Builder { public static final Settings EMPTY_SETTINGS = new Builder().build(); // we use a sorted map for consistent serialization when using getAsMap() - private final Map map = new TreeMap<>(); + private final Map map = new TreeMap<>(); private SetOnce secureSettings = new SetOnce<>(); @@ -701,22 +775,22 @@ private Builder() { } - public Map internalMap() { - return this.map; + public Set keys() { + return this.map.keySet(); } /** * Removes the provided setting from the internal map holding the current list of settings. */ public String remove(String key) { - return map.remove(key); + return Settings.toString(map.remove(key)); } /** * Returns a setting value based on the setting key. */ public String get(String key) { - return map.get(key); + return Settings.toString(map.get(key)); } /** Return the current secure settings, or {@code null} if none have been set. */ @@ -737,27 +811,69 @@ public Builder setSecureSettings(SecureSettings secureSettings) { } /** - * Puts tuples of key value pairs of settings. Simplified version instead of repeating calling - * put for each one. + * Sets a path setting with the provided setting key and path. + * + * @param key The setting key + * @param path The setting path + * @return The builder */ - public Builder put(Object... settings) { - if (settings.length == 1) { - // support cases where the actual type gets lost down the road... - if (settings[0] instanceof Map) { - //noinspection unchecked - return put((Map) settings[0]); - } else if (settings[0] instanceof Settings) { - return put((Settings) settings[0]); - } - } - if ((settings.length % 2) != 0) { - throw new IllegalArgumentException( - "array settings of key + value order doesn't hold correct number of arguments (" + settings.length + ")"); - } - for (int i = 0; i < settings.length; i++) { - put(settings[i++].toString(), settings[i].toString()); - } - return this; + public Builder put(String key, Path path) { + return put(key, path.toString()); + } + + /** + * Sets a time value setting with the provided setting key and value. + * + * @param key The setting key + * @param timeValue The setting timeValue + * @return The builder + */ + public Builder put(String key, TimeValue timeValue) { + return put(key, timeValue.toString()); + } + + /** + * Sets a byteSizeValue setting with the provided setting key and byteSizeValue. 
+ * + * @param key The setting key + * @param byteSizeValue The setting value + * @return The builder + */ + public Builder put(String key, ByteSizeValue byteSizeValue) { + return put(key, byteSizeValue.toString()); + } + + /** + * Sets an enum setting with the provided setting key and enum instance. + * + * @param key The setting key + * @param enumValue The setting value + * @return The builder + */ + public Builder put(String key, Enum enumValue) { + return put(key, enumValue.toString()); + } + + /** + * Sets an level setting with the provided setting key and level instance. + * + * @param key The setting key + * @param level The setting value + * @return The builder + */ + public Builder put(String key, Level level) { + return put(key, level.toString()); + } + + /** + * Sets an lucene version setting with the provided setting key and lucene version instance. + * + * @param key The setting key + * @param luceneVersion The setting value + * @return The builder + */ + public Builder put(String key, org.apache.lucene.util.Version luceneVersion) { + return put(key, luceneVersion.toString()); } /** @@ -772,6 +888,27 @@ public Builder put(String key, String value) { return this; } + public Builder copy(String key, Settings source) { + return copy(key, key, source); + } + + public Builder copy(String key, String sourceKey, Settings source) { + if (source.settings.containsKey(sourceKey) == false) { + throw new IllegalArgumentException("source key not found in the source settings"); + } + final Object value = source.settings.get(sourceKey); + if (value instanceof List) { + return putList(key, (List)value); + } else if (value == null) { + return putNull(key); + } else { + return put(key, Settings.toString(value)); + } + } + + /** + * Sets a null value for the given setting key + */ public Builder putNull(String key) { return put(key, (String) null); } @@ -877,13 +1014,6 @@ public Builder put(String setting, long value, ByteSizeUnit sizeUnit) { return this; } - /** - * Sets the setting with the provided setting key and an array of values. - * - * @param setting The setting key - * @param values The values - * @return The builder - */ /** * Sets the setting with the provided setting key and an array of values. @@ -892,8 +1022,8 @@ public Builder put(String setting, long value, ByteSizeUnit sizeUnit) { * @param values The values * @return The builder */ - public Builder putArray(String setting, String... values) { - return putArray(setting, Arrays.asList(values)); + public Builder putList(String setting, String... values) { + return putList(setting, Arrays.asList(values)); } /** @@ -903,38 +1033,9 @@ public Builder putArray(String setting, String... values) { * @param values The values * @return The builder */ - public Builder putArray(String setting, List values) { + public Builder putList(String setting, List values) { remove(setting); - int counter = 0; - while (true) { - String value = map.remove(setting + '.' + (counter++)); - if (value == null) { - break; - } - } - for (int i = 0; i < values.size(); i++) { - put(setting + "." + i, values.get(i)); - } - return this; - } - - /** - * Sets the setting as an array of values, but keeps existing elements for the key. - */ - public Builder extendArray(String setting, String... values) { - // check for a singular (non array) value - String oldSingle = remove(setting); - // find the highest array index - int counter = 0; - while (map.containsKey(setting + '.' + counter)) { - ++counter; - } - if (oldSingle != null) { - put(setting + '.' 
+ counter++, oldSingle); - } - for (String value : values) { - put(setting + '.' + counter++, value); - } + map.put(setting, Collections.unmodifiableList(new ArrayList<>(values))); return this; } @@ -955,93 +1056,59 @@ public Builder put(String settingPrefix, String groupName, String[] settings, St } /** - * Sets all the provided settings. + * Sets all the provided settings including secure settings */ public Builder put(Settings settings) { - removeNonArraysFieldsIfNewSettingsContainsFieldAsArray(settings.getAsMap()); - map.putAll(settings.getAsMap()); - if (settings.getSecureSettings() != null) { - setSecureSettings(settings.getSecureSettings()); - } - return this; + return put(settings, true); } /** * Sets all the provided settings. + * @param settings the settings to set + * @param copySecureSettings if true all settings including secure settings are copied. */ - public Builder put(Map settings) { - removeNonArraysFieldsIfNewSettingsContainsFieldAsArray(settings); - map.putAll(settings); + public Builder put(Settings settings, boolean copySecureSettings) { + Map settingsMap = new HashMap<>(settings.settings); + processLegacyLists(settingsMap); + map.putAll(settingsMap); + if (copySecureSettings && settings.getSecureSettings() != null) { + setSecureSettings(settings.getSecureSettings()); + } return this; } - /** - * Removes non array values from the existing map, if settings contains an array value instead - * - * Example: - * Existing map contains: {key:value} - * New map contains: {key:[value1,value2]} (which has been flattened to {}key.0:value1,key.1:value2}) - * - * This ensure that that the 'key' field gets removed from the map in order to override all the - * data instead of merging - */ - private void removeNonArraysFieldsIfNewSettingsContainsFieldAsArray(Map settings) { - List prefixesToRemove = new ArrayList<>(); - for (final Map.Entry entry : settings.entrySet()) { - final Matcher matcher = ARRAY_PATTERN.matcher(entry.getKey()); - if (matcher.matches()) { - prefixesToRemove.add(matcher.group(1)); - } else if (map.keySet().stream().anyMatch(key -> key.startsWith(entry.getKey() + "."))) { - prefixesToRemove.add(entry.getKey()); - } - } - for (String prefix : prefixesToRemove) { - Iterator> iterator = map.entrySet().iterator(); - while (iterator.hasNext()) { - Map.Entry entry = iterator.next(); - if (entry.getKey().startsWith(prefix + ".") || entry.getKey().equals(prefix)) { - iterator.remove(); + private void processLegacyLists(Map map) { + String[] array = map.keySet().toArray(new String[map.size()]); + for (String key : array) { + if (key.endsWith(".0")) { // let's only look at the head of the list and convert in order starting there. + int counter = 0; + String prefix = key.substring(0, key.lastIndexOf('.')); + if (map.containsKey(prefix)) { + throw new IllegalStateException("settings builder can't contain values for [" + prefix + "=" + map.get(prefix) + + "] and [" + key + "=" + map.get(key) + "]"); + } + List values = new ArrayList<>(); + while (true) { + String listKey = prefix + '.' + (counter++); + String value = get(listKey); + if (value == null) { + map.put(prefix, values); + break; + } else { + values.add(value); + map.remove(listKey); + } } } } } /** - * Sets all the provided settings. 
- */ - public Builder put(Dictionary properties) { - for (Object key : Collections.list(properties.keys())) { - map.put(Objects.toString(key), Objects.toString(properties.get(key))); - } - return this; - } - - /** - * Loads settings from the actual string content that represents them using the - * {@link SettingsLoaderFactory#loaderFromSource(String)}. - * @deprecated use {@link #loadFromSource(String, XContentType)} to avoid content type detection - */ - @Deprecated - public Builder loadFromSource(String source) { - SettingsLoader settingsLoader = SettingsLoaderFactory.loaderFromSource(source); - try { - Map loadedSettings = settingsLoader.load(source); - put(loadedSettings); - } catch (Exception e) { - throw new SettingsException("Failed to load settings from [" + source + "]", e); - } - return this; - } - - /** - * Loads settings from the actual string content that represents them using the - * {@link SettingsLoaderFactory#loaderFromXContentType(XContentType)} method to obtain a loader + * Loads settings from the actual string content that represents them using {@link #fromXContent(XContentParser)} */ public Builder loadFromSource(String source, XContentType xContentType) { - SettingsLoader settingsLoader = SettingsLoaderFactory.loaderFromXContentType(xContentType); - try { - Map loadedSettings = settingsLoader.load(source); - put(loadedSettings); + try (XContentParser parser = XContentFactory.xContent(xContentType).createParser(NamedXContentRegistry.EMPTY, source)) { + this.put(fromXContent(parser, true, true)); } catch (Exception e) { throw new SettingsException("Failed to load settings from [" + source + "]", e); } @@ -1049,31 +1116,47 @@ public Builder loadFromSource(String source, XContentType xContentType) { } /** - * Loads settings from a url that represents them using the - * {@link SettingsLoaderFactory#loaderFromResource(String)}. + * Loads settings from a url that represents them using {@link #fromXContent(XContentParser)} + * Note: Loading from a path doesn't allow null values in the incoming xcontent */ public Builder loadFromPath(Path path) throws IOException { // NOTE: loadFromStream will close the input stream - return loadFromStream(path.getFileName().toString(), Files.newInputStream(path)); + return loadFromStream(path.getFileName().toString(), Files.newInputStream(path), false); } /** - * Loads settings from a stream that represents them using the - * {@link SettingsLoaderFactory#loaderFromResource(String)}. 
+ * Loads settings from a stream that represents them using {@link #fromXContent(XContentParser)} */ - public Builder loadFromStream(String resourceName, InputStream is) throws IOException { - SettingsLoader settingsLoader = SettingsLoaderFactory.loaderFromResource(resourceName); - // NOTE: copyToString will close the input stream - Map loadedSettings = - settingsLoader.load(Streams.copyToString(new InputStreamReader(is, StandardCharsets.UTF_8))); - put(loadedSettings); + public Builder loadFromStream(String resourceName, InputStream is, boolean acceptNullValues) throws IOException { + final XContentType xContentType; + if (resourceName.endsWith(".json")) { + xContentType = XContentType.JSON; + } else if (resourceName.endsWith(".yml") || resourceName.endsWith(".yaml")) { + xContentType = XContentType.YAML; + } else { + throw new IllegalArgumentException("unable to detect content type from resource name [" + resourceName + "]"); + } + try (XContentParser parser = XContentFactory.xContent(xContentType).createParser(NamedXContentRegistry.EMPTY, is)) { + if (parser.currentToken() == null) { + if (parser.nextToken() == null) { + return this; // empty file + } + } + put(fromXContent(parser, acceptNullValues, true)); + } catch (ElasticsearchParseException e) { + throw e; + } catch (Exception e) { + throw new SettingsException("Failed to load settings from [" + resourceName + "]", e); + } finally { + IOUtils.close(is); + } return this; } public Builder putProperties(final Map esSettings, final Function keyFunction) { for (final Map.Entry esSetting : esSettings.entrySet()) { final String key = esSetting.getKey(); - map.put(keyFunction.apply(key), esSetting.getValue()); + put(keyFunction.apply(key), esSetting.getValue()); } return this; } @@ -1097,7 +1180,7 @@ public String resolvePlaceholder(String placeholderName) { if (value != null) { return value; } - return map.get(placeholderName); + return Settings.toString(map.get(placeholderName)); } @Override @@ -1117,14 +1200,14 @@ public boolean shouldRemoveMissingPlaceholder(String placeholderName) { } }; - Iterator> entryItr = map.entrySet().iterator(); + Iterator> entryItr = map.entrySet().iterator(); while (entryItr.hasNext()) { - Map.Entry entry = entryItr.next(); - if (entry.getValue() == null) { + Map.Entry entry = entryItr.next(); + if (entry.getValue() == null || entry.getValue() instanceof List) { // a null value obviously can't be replaced continue; } - String value = propertyPlaceholder.replacePlaceholders(entry.getValue(), placeholderResolver); + String value = propertyPlaceholder.replacePlaceholders(Settings.toString(entry.getValue()), placeholderResolver); // if the values exists and has length, we should maintain it in the map // otherwise, the replace process resolved into removing it if (Strings.hasLength(value)) { @@ -1142,10 +1225,10 @@ public boolean shouldRemoveMissingPlaceholder(String placeholderName) { * If a setting doesn't start with the prefix, the builder appends the prefix to such setting. 
*/ public Builder normalizePrefix(String prefix) { - Map replacements = new HashMap<>(); - Iterator> iterator = map.entrySet().iterator(); + Map replacements = new HashMap<>(); + Iterator> iterator = map.entrySet().iterator(); while(iterator.hasNext()) { - Map.Entry entry = iterator.next(); + Map.Entry entry = iterator.next(); if (entry.getKey().startsWith(prefix) == false) { replacements.put(prefix + entry.getKey(), entry.getValue()); iterator.remove(); @@ -1160,30 +1243,31 @@ public Builder normalizePrefix(String prefix) { * set on this builder. */ public Settings build() { + processLegacyLists(map); return new Settings(map, secureSettings.get()); } } // TODO We could use an FST internally to make things even faster and more compact - private static final class FilteredMap extends AbstractMap { - private final Map delegate; + private static final class FilteredMap extends AbstractMap { + private final Map delegate; private final Predicate filter; private final String prefix; // we cache that size since we have to iterate the entire set // this is safe to do since this map is only used with unmodifiable maps private int size = -1; @Override - public Set> entrySet() { - Set> delegateSet = delegate.entrySet(); - AbstractSet> filterSet = new AbstractSet>() { + public Set> entrySet() { + Set> delegateSet = delegate.entrySet(); + AbstractSet> filterSet = new AbstractSet>() { @Override - public Iterator> iterator() { - Iterator> iter = delegateSet.iterator(); + public Iterator> iterator() { + Iterator> iter = delegateSet.iterator(); - return new Iterator>() { + return new Iterator>() { private int numIterated; - private Entry currentElement; + private Entry currentElement; @Override public boolean hasNext() { if (currentElement != null) { @@ -1206,29 +1290,29 @@ public boolean hasNext() { } @Override - public Entry next() { + public Entry next() { if (currentElement == null && hasNext() == false) { // protect against no #hasNext call or not respecting it throw new NoSuchElementException("make sure to call hasNext first"); } - final Entry current = this.currentElement; + final Entry current = this.currentElement; this.currentElement = null; if (prefix == null) { return current; } - return new Entry() { + return new Entry() { @Override public String getKey() { return current.getKey().substring(prefix.length()); } @Override - public String getValue() { + public Object getValue() { return current.getValue(); } @Override - public String setValue(String value) { + public Object setValue(Object value) { throw new UnsupportedOperationException(); } }; @@ -1244,14 +1328,14 @@ public int size() { return filterSet; } - private FilteredMap(Map delegate, Predicate filter, String prefix) { + private FilteredMap(Map delegate, Predicate filter, String prefix) { this.delegate = delegate; this.filter = filter; this.prefix = prefix; } @Override - public String get(Object key) { + public Object get(Object key) { if (key instanceof String) { final String theKey = prefix == null ? 
(String)key : prefix + key; if (filter.test(theKey)) { @@ -1327,4 +1411,21 @@ public void close() throws IOException { delegate.close(); } } + + @Override + public String toString() { + try (XContentBuilder builder = XContentBuilder.builder(XContentType.JSON.xContent())) { + builder.startObject(); + toXContent(builder, new MapParams(Collections.singletonMap("flat_settings", "true"))); + builder.endObject(); + return builder.string(); + } catch (IOException e) { + throw new UncheckedIOException(e); + } + } + + private static String toString(Object o) { + return o == null ? null : o.toString(); + } + } diff --git a/core/src/main/java/org/elasticsearch/common/settings/SettingsFilter.java b/core/src/main/java/org/elasticsearch/common/settings/SettingsFilter.java index 32c5e7a0da318..1c67318e28286 100644 --- a/core/src/main/java/org/elasticsearch/common/settings/SettingsFilter.java +++ b/core/src/main/java/org/elasticsearch/common/settings/SettingsFilter.java @@ -30,8 +30,6 @@ import java.util.HashSet; import java.util.Iterator; import java.util.List; -import java.util.Map; -import java.util.Map.Entry; import java.util.Set; /** @@ -107,10 +105,10 @@ private static Settings filterSettings(Iterable patterns, Settings setti } if (!simpleMatchPatternList.isEmpty()) { String[] simpleMatchPatterns = simpleMatchPatternList.toArray(new String[simpleMatchPatternList.size()]); - Iterator> iterator = builder.internalMap().entrySet().iterator(); + Iterator iterator = builder.keys().iterator(); while (iterator.hasNext()) { - Map.Entry current = iterator.next(); - if (Regex.simpleMatch(simpleMatchPatterns, current.getKey())) { + String key = iterator.next(); + if (Regex.simpleMatch(simpleMatchPatterns, key)) { iterator.remove(); } } diff --git a/core/src/main/java/org/elasticsearch/common/settings/loader/SettingsLoader.java b/core/src/main/java/org/elasticsearch/common/settings/loader/SettingsLoader.java deleted file mode 100644 index 080824befb3e5..0000000000000 --- a/core/src/main/java/org/elasticsearch/common/settings/loader/SettingsLoader.java +++ /dev/null @@ -1,106 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.common.settings.loader; - -import org.elasticsearch.common.Nullable; - -import java.io.IOException; -import java.util.ArrayList; -import java.util.HashMap; -import java.util.List; -import java.util.Map; - -/** - * Provides the ability to load settings (in the form of a simple Map) from - * the actual source content that represents them. 
- */ -public interface SettingsLoader { - - class Helper { - - public static Map loadNestedFromMap(@Nullable Map map) { - Map settings = new HashMap<>(); - if (map == null) { - return settings; - } - StringBuilder sb = new StringBuilder(); - List path = new ArrayList<>(); - serializeMap(settings, sb, path, map); - return settings; - } - - private static void serializeMap(Map settings, StringBuilder sb, List path, Map map) { - for (Map.Entry entry : map.entrySet()) { - if (entry.getValue() instanceof Map) { - path.add((String) entry.getKey()); - serializeMap(settings, sb, path, (Map) entry.getValue()); - path.remove(path.size() - 1); - } else if (entry.getValue() instanceof List) { - path.add((String) entry.getKey()); - serializeList(settings, sb, path, (List) entry.getValue()); - path.remove(path.size() - 1); - } else { - serializeValue(settings, sb, path, (String) entry.getKey(), entry.getValue()); - } - } - } - - private static void serializeList(Map settings, StringBuilder sb, List path, List list) { - int counter = 0; - for (Object listEle : list) { - if (listEle instanceof Map) { - path.add(Integer.toString(counter)); - serializeMap(settings, sb, path, (Map) listEle); - path.remove(path.size() - 1); - } else if (listEle instanceof List) { - path.add(Integer.toString(counter)); - serializeList(settings, sb, path, (List) listEle); - path.remove(path.size() - 1); - } else { - serializeValue(settings, sb, path, Integer.toString(counter), listEle); - } - counter++; - } - } - - private static void serializeValue(Map settings, StringBuilder sb, List path, String name, Object value) { - if (value == null) { - return; - } - sb.setLength(0); - for (String pathEle : path) { - sb.append(pathEle).append('.'); - } - sb.append(name); - settings.put(sb.toString(), value.toString()); - } - } - - - /** - * Loads (parses) the settings from a source string. - */ - Map load(String source) throws IOException; - - /** - * Loads (parses) the settings from a source bytes. - */ - Map load(byte[] source) throws IOException; -} diff --git a/core/src/main/java/org/elasticsearch/common/settings/loader/SettingsLoaderFactory.java b/core/src/main/java/org/elasticsearch/common/settings/loader/SettingsLoaderFactory.java deleted file mode 100644 index 5d8cb4918b2ea..0000000000000 --- a/core/src/main/java/org/elasticsearch/common/settings/loader/SettingsLoaderFactory.java +++ /dev/null @@ -1,97 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.common.settings.loader; - -import org.elasticsearch.common.xcontent.XContentType; - -/** - * A class holding factory methods for settings loaders that attempts - * to infer the type of the underlying settings content. 
- */ -public final class SettingsLoaderFactory { - - private SettingsLoaderFactory() { - } - - /** - * Returns a {@link SettingsLoader} based on the source resource - * name. This factory method assumes that if the resource name ends - * with ".json" then the content should be parsed as JSON, else if - * the resource name ends with ".yml" or ".yaml" then the content - * should be parsed as YAML, otherwise throws an exception. Note that the - * parsers returned by this method will not accept null-valued - * keys. - * - * @param resourceName The resource name containing the settings - * content. - * @return A settings loader. - */ - public static SettingsLoader loaderFromResource(String resourceName) { - if (resourceName.endsWith(".json")) { - return new JsonSettingsLoader(false); - } else if (resourceName.endsWith(".yml") || resourceName.endsWith(".yaml")) { - return new YamlSettingsLoader(false); - } else { - throw new IllegalArgumentException("unable to detect content type from resource name [" + resourceName + "]"); - } - } - - /** - * Returns a {@link SettingsLoader} based on the source content. - * This factory method assumes that if the underlying content - * contains an opening and closing brace ('{' and '}') then the - * content should be parsed as JSON, else if the underlying content - * fails this condition but contains a ':' then the content should - * be parsed as YAML, and otherwise throws an exception. - * Note that the JSON and YAML parsers returned by this method will - * accept null-valued keys. - * - * @param source The underlying settings content. - * @return A settings loader. - * @deprecated use {@link #loaderFromXContentType(XContentType)} instead - */ - @Deprecated - public static SettingsLoader loaderFromSource(String source) { - if (source.indexOf('{') != -1 && source.indexOf('}') != -1) { - return new JsonSettingsLoader(true); - } else if (source.indexOf(':') != -1) { - return new YamlSettingsLoader(true); - } else { - throw new IllegalArgumentException("unable to detect content type from source [" + source + "]"); - } - } - - /** - * Returns a {@link SettingsLoader} based on the {@link XContentType}. Note only {@link XContentType#JSON} and - * {@link XContentType#YAML} are supported - * - * @param xContentType The content type - * @return A settings loader. - */ - public static SettingsLoader loaderFromXContentType(XContentType xContentType) { - if (xContentType == XContentType.JSON) { - return new JsonSettingsLoader(true); - } else if (xContentType == XContentType.YAML) { - return new YamlSettingsLoader(true); - } else { - throw new IllegalArgumentException("unsupported content type [" + xContentType + "]"); - } - } -} diff --git a/core/src/main/java/org/elasticsearch/common/settings/loader/XContentSettingsLoader.java b/core/src/main/java/org/elasticsearch/common/settings/loader/XContentSettingsLoader.java deleted file mode 100644 index d7eaa627a2868..0000000000000 --- a/core/src/main/java/org/elasticsearch/common/settings/loader/XContentSettingsLoader.java +++ /dev/null @@ -1,177 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.common.settings.loader; - -import org.elasticsearch.ElasticsearchParseException; -import org.elasticsearch.common.xcontent.NamedXContentRegistry; -import org.elasticsearch.common.xcontent.XContentFactory; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.common.xcontent.XContentType; - -import java.io.IOException; -import java.util.ArrayList; -import java.util.HashMap; -import java.util.List; -import java.util.Map; - -/** - * Settings loader that loads (parses) the settings in a xcontent format by flattening them - * into a map. - */ -public abstract class XContentSettingsLoader implements SettingsLoader { - - public abstract XContentType contentType(); - - private final boolean allowNullValues; - - XContentSettingsLoader(boolean allowNullValues) { - this.allowNullValues = allowNullValues; - } - - @Override - public Map load(String source) throws IOException { - // It is safe to use EMPTY here because this never uses namedObject - try (XContentParser parser = XContentFactory.xContent(contentType()).createParser(NamedXContentRegistry.EMPTY, source)) { - return load(parser); - } - } - - @Override - public Map load(byte[] source) throws IOException { - // It is safe to use EMPTY here because this never uses namedObject - try (XContentParser parser = XContentFactory.xContent(contentType()).createParser(NamedXContentRegistry.EMPTY, source)) { - return load(parser); - } - } - - public Map load(XContentParser jp) throws IOException { - StringBuilder sb = new StringBuilder(); - Map settings = new HashMap<>(); - List path = new ArrayList<>(); - XContentParser.Token token = jp.nextToken(); - if (token == null) { - return settings; - } - if (token != XContentParser.Token.START_OBJECT) { - throw new ElasticsearchParseException("malformed, expected settings to start with 'object', instead was [{}]", token); - } - serializeObject(settings, sb, path, jp, null); - - // ensure we reached the end of the stream - XContentParser.Token lastToken = null; - try { - while (!jp.isClosed() && (lastToken = jp.nextToken()) == null); - } catch (Exception e) { - throw new ElasticsearchParseException( - "malformed, expected end of settings but encountered additional content starting at line number: [{}], " - + "column number: [{}]", - e, jp.getTokenLocation().lineNumber, jp.getTokenLocation().columnNumber); - } - if (lastToken != null) { - throw new ElasticsearchParseException( - "malformed, expected end of settings but encountered additional content starting at line number: [{}], " - + "column number: [{}]", - jp.getTokenLocation().lineNumber, jp.getTokenLocation().columnNumber); - } - - return settings; - } - - private void serializeObject(Map settings, StringBuilder sb, List path, XContentParser parser, - String objFieldName) throws IOException { - if (objFieldName != null) { - path.add(objFieldName); - } - - String currentFieldName = null; - XContentParser.Token token; - while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { - if (token == XContentParser.Token.START_OBJECT) { - serializeObject(settings, sb, 
path, parser, currentFieldName); - } else if (token == XContentParser.Token.START_ARRAY) { - serializeArray(settings, sb, path, parser, currentFieldName); - } else if (token == XContentParser.Token.FIELD_NAME) { - currentFieldName = parser.currentName(); - } else if (token == XContentParser.Token.VALUE_NULL) { - serializeValue(settings, sb, path, parser, currentFieldName, true); - } else { - serializeValue(settings, sb, path, parser, currentFieldName, false); - - } - } - - if (objFieldName != null) { - path.remove(path.size() - 1); - } - } - - private void serializeArray(Map settings, StringBuilder sb, List path, XContentParser parser, String fieldName) - throws IOException { - XContentParser.Token token; - int counter = 0; - while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { - if (token == XContentParser.Token.START_OBJECT) { - serializeObject(settings, sb, path, parser, fieldName + '.' + (counter++)); - } else if (token == XContentParser.Token.START_ARRAY) { - serializeArray(settings, sb, path, parser, fieldName + '.' + (counter++)); - } else if (token == XContentParser.Token.FIELD_NAME) { - fieldName = parser.currentName(); - } else if (token == XContentParser.Token.VALUE_NULL) { - serializeValue(settings, sb, path, parser, fieldName + '.' + (counter++), true); - // ignore - } else { - serializeValue(settings, sb, path, parser, fieldName + '.' + (counter++), false); - } - } - } - - private void serializeValue(Map settings, StringBuilder sb, List path, XContentParser parser, String fieldName, - boolean isNull) throws IOException { - sb.setLength(0); - for (String pathEle : path) { - sb.append(pathEle).append('.'); - } - sb.append(fieldName); - String key = sb.toString(); - String currentValue = isNull ? null : parser.text(); - - if (settings.containsKey(key)) { - throw new ElasticsearchParseException( - "duplicate settings key [{}] found at line number [{}], column number [{}], previous value [{}], current value [{}]", - key, - parser.getTokenLocation().lineNumber, - parser.getTokenLocation().columnNumber, - settings.get(key), - currentValue - ); - } - - if (currentValue == null && !allowNullValues) { - throw new ElasticsearchParseException( - "null-valued setting found for key [{}] found at line number [{}], column number [{}]", - key, - parser.getTokenLocation().lineNumber, - parser.getTokenLocation().columnNumber - ); - } - - settings.put(key, currentValue); - } -} diff --git a/core/src/main/java/org/elasticsearch/common/settings/loader/YamlSettingsLoader.java b/core/src/main/java/org/elasticsearch/common/settings/loader/YamlSettingsLoader.java deleted file mode 100644 index 12cde97669104..0000000000000 --- a/core/src/main/java/org/elasticsearch/common/settings/loader/YamlSettingsLoader.java +++ /dev/null @@ -1,52 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. 
See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.common.settings.loader; - -import org.elasticsearch.common.xcontent.XContentType; - -import java.io.IOException; -import java.util.Map; - -/** - * Settings loader that loads (parses) the settings in a yaml format by flattening them - * into a map. - */ -public class YamlSettingsLoader extends XContentSettingsLoader { - - public YamlSettingsLoader(boolean allowNullValues) { - super(allowNullValues); - } - - @Override - public XContentType contentType() { - return XContentType.YAML; - } - - @Override - public Map load(String source) throws IOException { - /* - * #8259: Better handling of tabs vs spaces in elasticsearch.yml - */ - if (source.indexOf('\t') > -1) { - throw new IOException("Tabs are illegal in YAML. Did you mean to use whitespace character instead?"); - } - return super.load(source); - } -} diff --git a/core/src/main/java/org/elasticsearch/common/unit/Fuzziness.java b/core/src/main/java/org/elasticsearch/common/unit/Fuzziness.java index 100ddefd99dfe..179870f865349 100644 --- a/core/src/main/java/org/elasticsearch/common/unit/Fuzziness.java +++ b/core/src/main/java/org/elasticsearch/common/unit/Fuzziness.java @@ -18,6 +18,8 @@ */ package org.elasticsearch.common.unit; +import org.elasticsearch.ElasticsearchParseException; +import org.elasticsearch.Version; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -43,8 +45,12 @@ public final class Fuzziness implements ToXContentFragment, Writeable { public static final Fuzziness TWO = new Fuzziness(2); public static final Fuzziness AUTO = new Fuzziness("AUTO"); public static final ParseField FIELD = new ParseField(X_FIELD_NAME); + private static final int DEFAULT_LOW_DISTANCE = 3; + private static final int DEFAULT_HIGH_DISTANCE = 6; private final String fuzziness; + private int lowDistance = DEFAULT_LOW_DISTANCE; + private int highDistance = DEFAULT_HIGH_DISTANCE; private Fuzziness(int fuzziness) { if (fuzziness != 0 && fuzziness != 1 && fuzziness != 2) { @@ -54,22 +60,48 @@ private Fuzziness(int fuzziness) { } private Fuzziness(String fuzziness) { - if (fuzziness == null) { + if (fuzziness == null || fuzziness.isEmpty()) { throw new IllegalArgumentException("fuzziness can't be null!"); } this.fuzziness = fuzziness.toUpperCase(Locale.ROOT); } + private Fuzziness(String fuzziness, int lowDistance, int highDistance) { + this(fuzziness); + if (lowDistance < 0 || highDistance < 0 || lowDistance > highDistance) { + throw new IllegalArgumentException("fuzziness wrongly configured, must be: lowDistance > 0, highDistance" + + " > 0 and lowDistance <= highDistance "); + } + this.lowDistance = lowDistance; + this.highDistance = highDistance; + } + /** * Read from a stream. */ public Fuzziness(StreamInput in) throws IOException { fuzziness = in.readString(); + if (in.getVersion().onOrAfter(Version.V_6_1_0) && in.readBoolean()) { + lowDistance = in.readVInt(); + highDistance = in.readVInt(); + } } @Override public void writeTo(StreamOutput out) throws IOException { out.writeString(fuzziness); + if (out.getVersion().onOrAfter(Version.V_6_1_0)) { + // we cannot serialize the low/high bounds since the other node does not know about them. + // This is a best-effort to not fail queries in case the cluster is being upgraded and users + // start using features that are not available on all nodes. 
+ if (isAutoWithCustomValues()) { + out.writeBoolean(true); + out.writeVInt(lowDistance); + out.writeVInt(highDistance); + } else { + out.writeBoolean(false); + } + } } /** @@ -88,10 +120,29 @@ public static Fuzziness build(Object fuzziness) { String string = fuzziness.toString(); if (AUTO.asString().equalsIgnoreCase(string)) { return AUTO; + } else if (string.toUpperCase(Locale.ROOT).startsWith(AUTO.asString() + ":")) { + return parseCustomAuto(string); } return new Fuzziness(string); } + private static Fuzziness parseCustomAuto( final String string) { + assert string.toUpperCase(Locale.ROOT).startsWith(AUTO.asString() + ":"); + String[] fuzzinessLimit = string.substring(AUTO.asString().length() + 1).split(","); + if (fuzzinessLimit.length == 2) { + try { + int lowerLimit = Integer.parseInt(fuzzinessLimit[0]); + int highLimit = Integer.parseInt(fuzzinessLimit[1]); + return new Fuzziness("AUTO", lowerLimit, highLimit); + } catch (NumberFormatException e) { + throw new ElasticsearchParseException("failed to parse [{}] as a \"auto:int,int\"", e, + string); + } + } else { + throw new ElasticsearchParseException("failed to find low and high distance values"); + } + } + public static Fuzziness parse(XContentParser parser) throws IOException { XContentParser.Token token = parser.currentToken(); switch (token) { @@ -100,6 +151,8 @@ public static Fuzziness parse(XContentParser parser) throws IOException { final String fuzziness = parser.text(); if (AUTO.asString().equalsIgnoreCase(fuzziness)) { return AUTO; + } else if (fuzziness.toUpperCase(Locale.ROOT).startsWith(AUTO.asString() + ":")) { + return parseCustomAuto(fuzziness); } try { final int minimumSimilarity = Integer.parseInt(fuzziness); @@ -135,19 +188,19 @@ public int asDistance() { public int asDistance(String text) { if (this.equals(AUTO)) { //AUTO final int len = termLen(text); - if (len <= 2) { + if (len < lowDistance) { return 0; - } else if (len > 5) { - return 2; - } else { + } else if (len < highDistance) { return 1; + } else { + return 2; } } return Math.min(2, (int) asFloat()); } public float asFloat() { - if (this.equals(AUTO)) { + if (this.equals(AUTO) || isAutoWithCustomValues()) { return 1f; } return Float.parseFloat(fuzziness.toString()); @@ -158,9 +211,17 @@ private int termLen(String text) { } public String asString() { + if (isAutoWithCustomValues()) { + return fuzziness.toString() + ":" + lowDistance + "," + highDistance; + } return fuzziness.toString(); } + private boolean isAutoWithCustomValues() { + return fuzziness.startsWith("AUTO") && (lowDistance != DEFAULT_LOW_DISTANCE || + highDistance != DEFAULT_HIGH_DISTANCE); + } + @Override public boolean equals(Object obj) { if (this == obj) { diff --git a/core/src/main/java/org/elasticsearch/common/util/LocaleUtils.java b/core/src/main/java/org/elasticsearch/common/util/LocaleUtils.java index 2e6c01a1ca726..d447bc0567d5e 100644 --- a/core/src/main/java/org/elasticsearch/common/util/LocaleUtils.java +++ b/core/src/main/java/org/elasticsearch/common/util/LocaleUtils.java @@ -20,7 +20,9 @@ package org.elasticsearch.common.util; +import java.util.Arrays; import java.util.Locale; +import java.util.MissingResourceException; /** * Utilities for for dealing with {@link Locale} objects @@ -28,33 +30,77 @@ public class LocaleUtils { /** - * Parse the string describing a locale into a {@link Locale} object + * Parse the given locale as {@code language}, {@code language-country} or + * {@code language-country-variant}. 
+ * Either underscores or hyphens may be used as separators, but consistently, i.e. + * you may not use a hyphen to separate the language from the country and an + * underscore to separate the country from the variant. + * @throws IllegalArgumentException if there are too many parts in the locale string + * @throws IllegalArgumentException if the language or country is not recognized */ public static Locale parse(String localeStr) { - final String[] parts = localeStr.split("_", -1); - switch (parts.length) { - case 3: - // lang_country_variant - return new Locale(parts[0], parts[1], parts[2]); - case 2: - // lang_country - return new Locale(parts[0], parts[1]); - case 1: - if ("ROOT".equalsIgnoreCase(parts[0])) { - return Locale.ROOT; - } - // lang - return new Locale(parts[0]); - default: - throw new IllegalArgumentException("Can't parse locale: [" + localeStr + "]"); + boolean useUnderscoreAsSeparator = false; + for (int i = 0; i < localeStr.length(); ++i) { + final char c = localeStr.charAt(i); + if (c == '-') { + // the locale uses - as a separator, as expected + break; + } else if (c == '_') { + useUnderscoreAsSeparator = true; + break; + } + } + + final String[] parts; + if (useUnderscoreAsSeparator) { + parts = localeStr.split("_", -1); + } else { + parts = localeStr.split("-", -1); } + + final Locale locale = parseParts(parts); + + try { + locale.getISO3Language(); + } catch (MissingResourceException e) { + throw new IllegalArgumentException("Unknown language: " + parts[0], e); + } + + try { + locale.getISO3Country(); + } catch (MissingResourceException e) { + throw new IllegalArgumentException("Unknown country: " + parts[1], e); + } + + return locale; } /** - * Return a string for a {@link Locale} object + * Parse the string describing a locale into a {@link Locale} object + * for 5.x indices.
*/ - public static String toString(Locale locale) { - // JAVA7 - use .toLanguageTag instead of .toString() - return locale.toString(); + @Deprecated + public static Locale parse5x(String localeStr) { + final String[] parts = localeStr.split("_", -1); + return parseParts(parts); + } + + private static Locale parseParts(String[] parts) { + switch (parts.length) { + case 3: + // lang, country, variant + return new Locale(parts[0], parts[1], parts[2]); + case 2: + // lang, country + return new Locale(parts[0], parts[1]); + case 1: + if ("ROOT".equalsIgnoreCase(parts[0])) { + return Locale.ROOT; + } + // lang + return new Locale(parts[0]); + default: + throw new IllegalArgumentException("Locales can have at most 3 parts but got " + parts.length + ": " + Arrays.asList(parts)); + } } } diff --git a/core/src/main/java/org/elasticsearch/common/util/concurrent/EsExecutors.java b/core/src/main/java/org/elasticsearch/common/util/concurrent/EsExecutors.java index b37a6e14f02b3..45d9a208284f5 100644 --- a/core/src/main/java/org/elasticsearch/common/util/concurrent/EsExecutors.java +++ b/core/src/main/java/org/elasticsearch/common/util/concurrent/EsExecutors.java @@ -92,10 +92,6 @@ public static EsThreadPoolExecutor newFixed(String name, int size, int queueCapa public static EsThreadPoolExecutor newAutoQueueFixed(String name, int size, int initialQueueCapacity, int minQueueSize, int maxQueueSize, int frameSize, TimeValue targetedResponseTime, ThreadFactory threadFactory, ThreadContext contextHolder) { - if (initialQueueCapacity == minQueueSize && initialQueueCapacity == maxQueueSize) { - return newFixed(name, size, initialQueueCapacity, threadFactory, contextHolder); - } - if (initialQueueCapacity <= 0) { throw new IllegalArgumentException("initial queue capacity for [" + name + "] executor must be positive, got: " + initialQueueCapacity); diff --git a/core/src/main/java/org/elasticsearch/common/util/concurrent/QueueResizingEsThreadPoolExecutor.java b/core/src/main/java/org/elasticsearch/common/util/concurrent/QueueResizingEsThreadPoolExecutor.java index 1f694d73fa709..8062d5510c7bf 100644 --- a/core/src/main/java/org/elasticsearch/common/util/concurrent/QueueResizingEsThreadPoolExecutor.java +++ b/core/src/main/java/org/elasticsearch/common/util/concurrent/QueueResizingEsThreadPoolExecutor.java @@ -79,9 +79,7 @@ public final class QueueResizingEsThreadPoolExecutor extends EsThreadPoolExecuto this.minQueueSize = minQueueSize; this.maxQueueSize = maxQueueSize; this.targetedResponseTimeNanos = targetedResponseTime.getNanos(); - // We choose to start the EWMA with the targeted response time, reasoning that it is a - // better start point for a realistic task execution time than starting at 0 - this.executionEWMA = new ExponentiallyWeightedMovingAverage(EWMA_ALPHA, targetedResponseTimeNanos); + this.executionEWMA = new ExponentiallyWeightedMovingAverage(EWMA_ALPHA, 0); logger.debug("thread pool [{}] will adjust queue by [{}] when determining automatic queue size", name, QUEUE_ADJUSTMENT_AMOUNT); } @@ -227,7 +225,7 @@ protected void afterExecute(Runnable r, Throwable t) { // - Since taskCount will now be incremented forever, it will never be 10 again, // so there will be no further adjustments logger.debug("[{}]: too many incoming tasks while queue size adjustment occurs, resetting measurements to 0", name); - totalTaskNanos.getAndSet(0); + totalTaskNanos.getAndSet(1); taskCount.getAndSet(0); startNs = System.nanoTime(); } else { diff --git 
a/core/src/main/java/org/elasticsearch/common/util/concurrent/ThreadContext.java b/core/src/main/java/org/elasticsearch/common/util/concurrent/ThreadContext.java index 1ce119636f734..95c08e8889857 100644 --- a/core/src/main/java/org/elasticsearch/common/util/concurrent/ThreadContext.java +++ b/core/src/main/java/org/elasticsearch/common/util/concurrent/ThreadContext.java @@ -407,11 +407,10 @@ private ThreadContextStruct putHeaders(Map headers) { if (headers.isEmpty()) { return this; } else { - final Map newHeaders = new HashMap<>(); + final Map newHeaders = new HashMap<>(this.requestHeaders); for (Map.Entry entry : headers.entrySet()) { putSingleHeader(entry.getKey(), entry.getValue(), newHeaders); } - newHeaders.putAll(this.requestHeaders); return new ThreadContextStruct(newHeaders, responseHeaders, transientHeaders, isSystemContext); } } diff --git a/core/src/main/java/org/elasticsearch/common/util/concurrent/TimedRunnable.java b/core/src/main/java/org/elasticsearch/common/util/concurrent/TimedRunnable.java index 2ee80badb74ba..2d8934ba3b30e 100644 --- a/core/src/main/java/org/elasticsearch/common/util/concurrent/TimedRunnable.java +++ b/core/src/main/java/org/elasticsearch/common/util/concurrent/TimedRunnable.java @@ -23,19 +23,19 @@ * A class used to wrap a {@code Runnable} that allows capturing the time of the task since creation * through execution as well as only execution time. */ -class TimedRunnable implements Runnable { +class TimedRunnable extends AbstractRunnable { private final Runnable original; private final long creationTimeNanos; private long startTimeNanos; private long finishTimeNanos = -1; - TimedRunnable(Runnable original) { + TimedRunnable(final Runnable original) { this.original = original; this.creationTimeNanos = System.nanoTime(); } @Override - public void run() { + public void doRun() { try { startTimeNanos = System.nanoTime(); original.run(); @@ -44,6 +44,32 @@ public void run() { } } + @Override + public void onRejection(final Exception e) { + if (original instanceof AbstractRunnable) { + ((AbstractRunnable) original).onRejection(e); + } + } + + @Override + public void onAfter() { + if (original instanceof AbstractRunnable) { + ((AbstractRunnable) original).onAfter(); + } + } + + @Override + public void onFailure(final Exception e) { + if (original instanceof AbstractRunnable) { + ((AbstractRunnable) original).onFailure(e); + } + } + + @Override + public boolean isForceExecution() { + return original instanceof AbstractRunnable && ((AbstractRunnable) original).isForceExecution(); + } + /** * Return the time since this task was created until it finished running. * If the task is still running or has not yet been run, returns -1. @@ -67,4 +93,5 @@ long getTotalExecutionNanos() { } return finishTimeNanos - startTimeNanos; } + } diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/ConstructingObjectParser.java b/core/src/main/java/org/elasticsearch/common/xcontent/ConstructingObjectParser.java index 53b809ba6ed70..03f6b14f525ec 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/ConstructingObjectParser.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/ConstructingObjectParser.java @@ -108,7 +108,7 @@ public final class ConstructingObjectParser extends AbstractObje * ObjectParser. * @param builder A function that builds the object from an array of Objects. Declare this inline with the parser, casting the elements * of the array to the arguments so they work with your favorite constructor. 
The objects in the array will be in the same order - * that you declared the {{@link #constructorArg()}s and none will be null. If any of the constructor arguments aren't defined in + * that you declared the {@link #constructorArg()}s and none will be null. If any of the constructor arguments aren't defined in * the XContent then parsing will throw an error. We use an array here rather than a {@code Map} to save on * allocations. */ @@ -125,7 +125,7 @@ public ConstructingObjectParser(String name, Function builder) * from external systems, never when parsing requests from users. * @param builder A function that builds the object from an array of Objects. Declare this inline with the parser, casting the elements * of the array to the arguments so they work with your favorite constructor. The objects in the array will be in the same order - * that you declared the {{@link #constructorArg()}s and none will be null. If any of the constructor arguments aren't defined in + * that you declared the {@link #constructorArg()}s and none will be null. If any of the constructor arguments aren't defined in * the XContent then parsing will throw an error. We use an array here rather than a {@code Map} to save on * allocations. */ @@ -142,7 +142,7 @@ public ConstructingObjectParser(String name, boolean ignoreUnknownFields, Functi * from external systems, never when parsing requests from users. * @param builder A binary function that builds the object from an array of Objects and the parser context. Declare this inline with * the parser, casting the elements of the array to the arguments so they work with your favorite constructor. The objects in - * the array will be in the same order that you declared the {{@link #constructorArg()}s and none will be null. The second + * the array will be in the same order that you declared the {@link #constructorArg()}s and none will be null. The second * argument is the value of the context provided to the {@link #parse(XContentParser, Object) parse function}. If any of the * constructor arguments aren't defined in the XContent then parsing will throw an error. We use an array here rather than a * {@code Map} to save on allocations. @@ -453,7 +453,7 @@ private Value finish() { * use of ConstructingObjectParser. You should be using ObjectParser instead. Since this is more of a programmer error and the * parser ought to still work we just assert this. */ - assert false == constructorArgInfos.isEmpty() : "[" + objectParser.getName() + "] must configure at least on constructor " + assert false == constructorArgInfos.isEmpty() : "[" + objectParser.getName() + "] must configure at least one constructor " + "argument. If it doesn't have any it should use ObjectParser instead of ConstructingObjectParser. This is a bug " + "in the parser declaration."; // All missing constructor arguments were optional. Just build the target and return it. 
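The ConstructingObjectParser javadoc above describes a builder that receives the declared constructor arguments as an `Object[]`, in declaration order and never null. A minimal sketch of that declaration style, assuming a hypothetical `Thing` class with `name` and `count` fields (illustrative only, not part of this change), could look like this:

-------------------------------------------------
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.xcontent.ConstructingObjectParser;
import org.elasticsearch.common.xcontent.XContentParser;

import static org.elasticsearch.common.xcontent.ConstructingObjectParser.constructorArg;

// Hypothetical value class, used only to illustrate the builder contract described above.
public class Thing {
    private final String name;
    private final int count;

    Thing(String name, int count) {
        this.name = name;
        this.count = count;
    }

    // The builder casts the args[] elements to the constructor parameters; args[0] maps to
    // the first constructorArg() declared below, args[1] to the second, and neither is null.
    private static final ConstructingObjectParser<Thing, Void> PARSER =
            new ConstructingObjectParser<>("thing", args -> new Thing((String) args[0], (int) args[1]));

    static {
        PARSER.declareString(constructorArg(), new ParseField("name"));
        PARSER.declareInt(constructorArg(), new ParseField("count"));
    }

    public static Thing fromXContent(XContentParser parser) {
        return PARSER.apply(parser, null);
    }
}
-------------------------------------------------

If either declared constructor argument is missing from the parsed XContent, parsing fails with an error, matching the behaviour described in the javadoc.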
diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/ObjectParser.java b/core/src/main/java/org/elasticsearch/common/xcontent/ObjectParser.java index ed1d85b5a7644..8ba30178dc945 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/ObjectParser.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/ObjectParser.java @@ -147,7 +147,7 @@ public Value parse(XContentParser parser, Value value, Context context) throws I } else { token = parser.nextToken(); if (token != XContentParser.Token.START_OBJECT) { - throw new IllegalStateException("[" + name + "] Expected START_OBJECT but was: " + token); + throw new ParsingException(parser.getTokenLocation(), "[" + name + "] Expected START_OBJECT but was: " + token); } } @@ -159,13 +159,13 @@ public Value parse(XContentParser parser, Value value, Context context) throws I fieldParser = getParser(currentFieldName); } else { if (currentFieldName == null) { - throw new IllegalStateException("[" + name + "] no field found"); + throw new ParsingException(parser.getTokenLocation(), "[" + name + "] no field found"); } if (fieldParser == null) { assert ignoreUnknownFields : "this should only be possible if configured to ignore known fields"; parser.skipChildren(); // noop if parser points to a value, skips children if parser is start object or start array } else { - fieldParser.assertSupports(name, token, currentFieldName); + fieldParser.assertSupports(name, token, currentFieldName, parser.getTokenLocation()); parseSub(parser, fieldParser, currentFieldName, value, context); } fieldParser = null; @@ -330,7 +330,7 @@ private void parseSub(XContentParser parser, FieldParser fieldParser, String cur case END_OBJECT: case END_ARRAY: case FIELD_NAME: - throw new IllegalStateException("[" + name + "]" + token + " is unexpected"); + throw new ParsingException(parser.getTokenLocation(), "[" + name + "]" + token + " is unexpected"); case VALUE_STRING: case VALUE_NUMBER: case VALUE_BOOLEAN: @@ -361,12 +361,12 @@ private class FieldParser { this.type = type; } - void assertSupports(String parserName, XContentParser.Token token, String currentFieldName) { + void assertSupports(String parserName, XContentParser.Token token, String currentFieldName, XContentLocation location) { if (parseField.match(currentFieldName) == false) { - throw new IllegalStateException("[" + parserName + "] parsefield doesn't accept: " + currentFieldName); + throw new ParsingException(location, "[" + parserName + "] parsefield doesn't accept: " + currentFieldName); } if (supportedTokens.contains(token) == false) { - throw new IllegalArgumentException( + throw new ParsingException(location, "[" + parserName + "] " + currentFieldName + " doesn't support values of type: " + token); } } diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/XContentParserUtils.java b/core/src/main/java/org/elasticsearch/common/xcontent/XContentParserUtils.java index e28b44b42c5a2..77d62f8d3095a 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/XContentParserUtils.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/XContentParserUtils.java @@ -39,7 +39,7 @@ private XContentParserUtils() { } /** - * Makes sure that current token is of type {@link XContentParser.Token#FIELD_NAME} and the the field name is equal to the provided one + * Makes sure that current token is of type {@link XContentParser.Token#FIELD_NAME} and the field name is equal to the provided one * @throws ParsingException if the token is not of type {@link 
XContentParser.Token#FIELD_NAME} or is not equal to the given field name */ public static void ensureFieldName(XContentParser parser, Token token, String fieldName) throws IOException { diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/support/AbstractXContentParser.java b/core/src/main/java/org/elasticsearch/common/xcontent/support/AbstractXContentParser.java index be128b2f21264..9aae1ca396c12 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/support/AbstractXContentParser.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/support/AbstractXContentParser.java @@ -22,6 +22,7 @@ import org.apache.lucene.util.BytesRef; import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.Booleans; +import org.elasticsearch.common.Numbers; import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentParser; @@ -133,7 +134,14 @@ public short shortValue(boolean coerce) throws IOException { Token token = currentToken(); if (token == Token.VALUE_STRING) { checkCoerceString(coerce, Short.class); - return (short) Double.parseDouble(text()); + + double doubleValue = Double.parseDouble(text()); + + if (doubleValue < Short.MIN_VALUE || doubleValue > Short.MAX_VALUE) { + throw new IllegalArgumentException("Value [" + text() + "] is out of range for a short"); + } + + return (short) doubleValue; } short result = doShortValue(); ensureNumberConversion(coerce, result, Short.class); @@ -152,7 +160,13 @@ public int intValue(boolean coerce) throws IOException { Token token = currentToken(); if (token == Token.VALUE_STRING) { checkCoerceString(coerce, Integer.class); - return (int) Double.parseDouble(text()); + double doubleValue = Double.parseDouble(text()); + + if (doubleValue < Integer.MIN_VALUE || doubleValue > Integer.MAX_VALUE) { + throw new IllegalArgumentException("Value [" + text() + "] is out of range for an integer"); + } + + return (int) doubleValue; } int result = doIntValue(); ensureNumberConversion(coerce, result, Integer.class); @@ -171,13 +185,7 @@ public long longValue(boolean coerce) throws IOException { Token token = currentToken(); if (token == Token.VALUE_STRING) { checkCoerceString(coerce, Long.class); - // longs need special handling so we don't lose precision while parsing - String stringValue = text(); - try { - return Long.parseLong(stringValue); - } catch (NumberFormatException e) { - return (long) Double.parseDouble(stringValue); - } + return Numbers.toLong(text(), coerce); } long result = doLongValue(); ensureNumberConversion(coerce, result, Long.class); diff --git a/core/src/main/java/org/elasticsearch/discovery/DiscoveryModule.java b/core/src/main/java/org/elasticsearch/discovery/DiscoveryModule.java index 0ecf40e65a1ba..179692cd516c8 100644 --- a/core/src/main/java/org/elasticsearch/discovery/DiscoveryModule.java +++ b/core/src/main/java/org/elasticsearch/discovery/DiscoveryModule.java @@ -19,6 +19,8 @@ package org.elasticsearch.discovery; +import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.routing.allocation.AllocationService; import org.elasticsearch.cluster.service.ClusterApplier; import org.elasticsearch.cluster.service.MasterService; @@ -36,12 +38,15 @@ import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; +import java.util.ArrayList; +import java.util.Collection; import java.util.Collections; import java.util.HashMap; import 
java.util.List; import java.util.Map; import java.util.Objects; import java.util.Optional; +import java.util.function.BiConsumer; import java.util.function.Function; import java.util.function.Supplier; @@ -62,7 +67,7 @@ public DiscoveryModule(Settings settings, ThreadPool threadPool, TransportServic ClusterApplier clusterApplier, ClusterSettings clusterSettings, List plugins, AllocationService allocationService) { final UnicastHostsProvider hostsProvider; - + final Collection> joinValidators = new ArrayList<>(); Map> hostProviders = new HashMap<>(); for (DiscoveryPlugin plugin : plugins) { plugin.getZenHostsProviders(transportService, networkService).entrySet().forEach(entry -> { @@ -70,6 +75,10 @@ public DiscoveryModule(Settings settings, ThreadPool threadPool, TransportServic throw new IllegalArgumentException("Cannot register zen hosts provider [" + entry.getKey() + "] twice"); } }); + BiConsumer joinValidator = plugin.getJoinValidator(); + if (joinValidator != null) { + joinValidators.add(joinValidator); + } } Optional hostsProviderName = DISCOVERY_HOSTS_PROVIDER_SETTING.get(settings); if (hostsProviderName.isPresent()) { @@ -85,7 +94,7 @@ public DiscoveryModule(Settings settings, ThreadPool threadPool, TransportServic Map> discoveryTypes = new HashMap<>(); discoveryTypes.put("zen", () -> new ZenDiscovery(settings, threadPool, transportService, namedWriteableRegistry, masterService, clusterApplier, - clusterSettings, hostsProvider, allocationService)); + clusterSettings, hostsProvider, allocationService, Collections.unmodifiableCollection(joinValidators))); discoveryTypes.put("single-node", () -> new SingleNodeDiscovery(settings, transportService, masterService, clusterApplier)); for (DiscoveryPlugin plugin : plugins) { plugin.getDiscoveryTypes(threadPool, transportService, namedWriteableRegistry, diff --git a/core/src/main/java/org/elasticsearch/discovery/DiscoveryStats.java b/core/src/main/java/org/elasticsearch/discovery/DiscoveryStats.java index 37e916d65fbd9..a4c788d5c22ad 100644 --- a/core/src/main/java/org/elasticsearch/discovery/DiscoveryStats.java +++ b/core/src/main/java/org/elasticsearch/discovery/DiscoveryStats.java @@ -19,6 +19,7 @@ package org.elasticsearch.discovery; +import org.elasticsearch.Version; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -26,33 +27,48 @@ import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.discovery.zen.PendingClusterStateStats; +import org.elasticsearch.discovery.zen.PublishClusterStateStats; import java.io.IOException; public class DiscoveryStats implements Writeable, ToXContentFragment { - @Nullable private final PendingClusterStateStats queueStats; + private final PublishClusterStateStats publishStats; - public DiscoveryStats(PendingClusterStateStats queueStats) { + public DiscoveryStats(PendingClusterStateStats queueStats, PublishClusterStateStats publishStats) { this.queueStats = queueStats; + this.publishStats = publishStats; } public DiscoveryStats(StreamInput in) throws IOException { queueStats = in.readOptionalWriteable(PendingClusterStateStats::new); + + if (in.getVersion().onOrAfter(Version.V_6_1_0)) { + publishStats = in.readOptionalWriteable(PublishClusterStateStats::new); + } else { + publishStats = null; + } } @Override public void writeTo(StreamOutput out) throws IOException { out.writeOptionalWriteable(queueStats); + + if 
(out.getVersion().onOrAfter(Version.V_6_1_0)) { + out.writeOptionalWriteable(publishStats); + } } @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(Fields.DISCOVERY); - if (queueStats != null ){ + if (queueStats != null) { queueStats.toXContent(builder, params); } + if (publishStats != null) { + publishStats.toXContent(builder, params); + } builder.endObject(); return builder; } @@ -64,4 +80,8 @@ static final class Fields { public PendingClusterStateStats getQueueStats() { return queueStats; } + + public PublishClusterStateStats getPublishStats() { + return publishStats; + } } diff --git a/core/src/main/java/org/elasticsearch/discovery/single/SingleNodeDiscovery.java b/core/src/main/java/org/elasticsearch/discovery/single/SingleNodeDiscovery.java index 2f3124010cca0..2a32caabc77a4 100644 --- a/core/src/main/java/org/elasticsearch/discovery/single/SingleNodeDiscovery.java +++ b/core/src/main/java/org/elasticsearch/discovery/single/SingleNodeDiscovery.java @@ -21,7 +21,6 @@ import org.apache.logging.log4j.message.ParameterizedMessage; import org.elasticsearch.cluster.ClusterChangedEvent; -import org.elasticsearch.cluster.ClusterName; import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.ClusterStateTaskListener; import org.elasticsearch.cluster.block.ClusterBlocks; @@ -34,6 +33,7 @@ import org.elasticsearch.discovery.Discovery; import org.elasticsearch.discovery.DiscoveryStats; import org.elasticsearch.discovery.zen.PendingClusterStateStats; +import org.elasticsearch.discovery.zen.PublishClusterStateStats; import org.elasticsearch.transport.TransportService; import java.io.IOException; @@ -94,7 +94,7 @@ public void onFailure(String source, Exception e) { @Override public DiscoveryStats stats() { - return new DiscoveryStats((PendingClusterStateStats) null); + return new DiscoveryStats(null, null); } @Override diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/MembershipAction.java b/core/src/main/java/org/elasticsearch/discovery/zen/MembershipAction.java index 18cac5818049f..fdfcd8ac29079 100644 --- a/core/src/main/java/org/elasticsearch/discovery/zen/MembershipAction.java +++ b/core/src/main/java/org/elasticsearch/discovery/zen/MembershipAction.java @@ -39,7 +39,10 @@ import org.elasticsearch.transport.TransportService; import java.io.IOException; +import java.util.Collection; import java.util.concurrent.TimeUnit; +import java.util.function.BiConsumer; +import java.util.function.Supplier; public class MembershipAction extends AbstractComponent { @@ -63,7 +66,8 @@ public interface MembershipListener { private final MembershipListener listener; - public MembershipAction(Settings settings, TransportService transportService, MembershipListener listener) { + public MembershipAction(Settings settings, TransportService transportService, MembershipListener listener, + Collection> joinValidators) { super(settings); this.transportService = transportService; this.listener = listener; @@ -73,7 +77,7 @@ public MembershipAction(Settings settings, TransportService transportService, Me ThreadPool.Names.GENERIC, new JoinRequestRequestHandler()); transportService.registerRequestHandler(DISCOVERY_JOIN_VALIDATE_ACTION_NAME, () -> new ValidateJoinRequest(), ThreadPool.Names.GENERIC, - new ValidateJoinRequestRequestHandler()); + new ValidateJoinRequestRequestHandler(transportService::getLocalNode, joinValidators)); transportService.registerRequestHandler(DISCOVERY_LEAVE_ACTION_NAME, LeaveRequest::new, 
ThreadPool.Names.GENERIC, new LeaveRequestRequestHandler()); } @@ -176,12 +180,20 @@ public void writeTo(StreamOutput out) throws IOException { } static class ValidateJoinRequestRequestHandler implements TransportRequestHandler { + private final Supplier localNodeSupplier; + private final Collection> joinValidators; + + ValidateJoinRequestRequestHandler(Supplier localNodeSupplier, + Collection> joinValidators) { + this.localNodeSupplier = localNodeSupplier; + this.joinValidators = joinValidators; + } @Override public void messageReceived(ValidateJoinRequest request, TransportChannel channel) throws Exception { - ensureNodesCompatibility(Version.CURRENT, request.state.getNodes()); - ensureIndexCompatibility(Version.CURRENT, request.state.getMetaData()); - // for now, the mere fact that we can serialize the cluster state acts as validation.... + DiscoveryNode node = localNodeSupplier.get(); + assert node != null : "local node is null"; + joinValidators.stream().forEach(action -> action.accept(node, request.state)); channel.sendResponse(TransportResponse.Empty.INSTANCE); } } diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/PendingClusterStateStats.java b/core/src/main/java/org/elasticsearch/discovery/zen/PendingClusterStateStats.java index 8facf2f282cde..a10d56c606de7 100644 --- a/core/src/main/java/org/elasticsearch/discovery/zen/PendingClusterStateStats.java +++ b/core/src/main/java/org/elasticsearch/discovery/zen/PendingClusterStateStats.java @@ -22,7 +22,8 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; @@ -30,7 +31,7 @@ /** * Class encapsulating stats about the PendingClusterStatsQueue */ -public class PendingClusterStateStats implements Writeable, ToXContent { +public class PendingClusterStateStats implements Writeable, ToXContentFragment { private final int total; private final int pending; diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/PublishClusterStateAction.java b/core/src/main/java/org/elasticsearch/discovery/zen/PublishClusterStateAction.java index ae469d162aead..95de654928ea0 100644 --- a/core/src/main/java/org/elasticsearch/discovery/zen/PublishClusterStateAction.java +++ b/core/src/main/java/org/elasticsearch/discovery/zen/PublishClusterStateAction.java @@ -65,6 +65,7 @@ import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicLong; public class PublishClusterStateAction extends AbstractComponent { @@ -90,6 +91,10 @@ public interface IncomingClusterStateListener { private final IncomingClusterStateListener incomingClusterStateListener; private final DiscoverySettings discoverySettings; + private final AtomicLong fullClusterStateReceivedCount = new AtomicLong(); + private final AtomicLong incompatibleClusterStateDiffReceivedCount = new AtomicLong(); + private final AtomicLong compatibleClusterStateDiffReceivedCount = new AtomicLong(); + public PublishClusterStateAction( Settings settings, TransportService transportService, @@ -380,11 +385,13 @@ protected void handleIncomingClusterStateRequest(BytesTransportRequest request, // If true we received full 
cluster state - otherwise diffs if (in.readBoolean()) { incomingState = ClusterState.readFrom(in, transportService.getLocalNode()); + fullClusterStateReceivedCount.incrementAndGet(); logger.debug("received full cluster state version [{}] with size [{}]", incomingState.version(), request.bytes().length()); } else if (lastSeenClusterState != null) { Diff diff = ClusterState.readDiffFrom(in, lastSeenClusterState.nodes().getLocalNode()); incomingState = diff.apply(lastSeenClusterState); + compatibleClusterStateDiffReceivedCount.incrementAndGet(); logger.debug("received diff cluster state version [{}] with uuid [{}], diff size [{}]", incomingState.version(), incomingState.stateUUID(), request.bytes().length()); } else { @@ -394,6 +401,9 @@ protected void handleIncomingClusterStateRequest(BytesTransportRequest request, incomingClusterStateListener.onIncomingClusterState(incomingState); lastSeenClusterState = incomingState; } + } catch (IncompatibleClusterStateVersionException e) { + incompatibleClusterStateDiffReceivedCount.incrementAndGet(); + throw e; } finally { IOUtils.close(in); } @@ -636,4 +646,11 @@ public void setPublishingTimedOut(boolean isTimedOut) { publishingTimedOut.set(isTimedOut); } } + + public PublishClusterStateStats stats() { + return new PublishClusterStateStats( + fullClusterStateReceivedCount.get(), + incompatibleClusterStateDiffReceivedCount.get(), + compatibleClusterStateDiffReceivedCount.get()); + } } diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/PublishClusterStateStats.java b/core/src/main/java/org/elasticsearch/discovery/zen/PublishClusterStateStats.java new file mode 100644 index 0000000000000..8a84819875995 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/discovery/zen/PublishClusterStateStats.java @@ -0,0 +1,90 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.discovery.zen; + +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.xcontent.ToXContentObject; +import org.elasticsearch.common.xcontent.XContentBuilder; + +import java.io.IOException; + +/** + * Class encapsulating stats about the PublishClusterStateAction + */ +public class PublishClusterStateStats implements Writeable, ToXContentObject { + + private final long fullClusterStateReceivedCount; + private final long incompatibleClusterStateDiffReceivedCount; + private final long compatibleClusterStateDiffReceivedCount; + + /** + * @param fullClusterStateReceivedCount the number of times this node has received a full copy of the cluster state from the master. 
+ * @param incompatibleClusterStateDiffReceivedCount the number of times this node has received a cluster-state diff from the master that it could not apply to its last-seen cluster state. + * @param compatibleClusterStateDiffReceivedCount the number of times that received cluster-state diffs were compatible with the last-seen cluster state and were applied successfully. + */ + public PublishClusterStateStats(long fullClusterStateReceivedCount, + long incompatibleClusterStateDiffReceivedCount, + long compatibleClusterStateDiffReceivedCount) { + this.fullClusterStateReceivedCount = fullClusterStateReceivedCount; + this.incompatibleClusterStateDiffReceivedCount = incompatibleClusterStateDiffReceivedCount; + this.compatibleClusterStateDiffReceivedCount = compatibleClusterStateDiffReceivedCount; + } + + public PublishClusterStateStats(StreamInput in) throws IOException { + fullClusterStateReceivedCount = in.readVLong(); + incompatibleClusterStateDiffReceivedCount = in.readVLong(); + compatibleClusterStateDiffReceivedCount = in.readVLong(); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeVLong(fullClusterStateReceivedCount); + out.writeVLong(incompatibleClusterStateDiffReceivedCount); + out.writeVLong(compatibleClusterStateDiffReceivedCount); + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject("published_cluster_states"); + { + builder.field("full_states", fullClusterStateReceivedCount); + builder.field("incompatible_diffs", incompatibleClusterStateDiffReceivedCount); + builder.field("compatible_diffs", compatibleClusterStateDiffReceivedCount); + } + builder.endObject(); + return builder; + } + + long getFullClusterStateReceivedCount() { return fullClusterStateReceivedCount; } + + long getIncompatibleClusterStateDiffReceivedCount() { return incompatibleClusterStateDiffReceivedCount; } + + long getCompatibleClusterStateDiffReceivedCount() { return compatibleClusterStateDiffReceivedCount; } + + @Override + public String toString() { + return "PublishClusterStateStats(full=" + fullClusterStateReceivedCount + + ", incompatible=" + incompatibleClusterStateDiffReceivedCount + + ", compatible=" + compatibleClusterStateDiffReceivedCount + + ")"; + } +} diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/ZenDiscovery.java b/core/src/main/java/org/elasticsearch/discovery/zen/ZenDiscovery.java index a4817fada36d2..d688a5d5cdbae 100644 --- a/core/src/main/java/org/elasticsearch/discovery/zen/ZenDiscovery.java +++ b/core/src/main/java/org/elasticsearch/discovery/zen/ZenDiscovery.java @@ -69,6 +69,8 @@ import java.io.IOException; import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; import java.util.List; import java.util.Locale; import java.util.Set; @@ -78,6 +80,7 @@ import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicReference; +import java.util.function.BiConsumer; import java.util.function.Consumer; import java.util.stream.Collectors; @@ -124,9 +127,9 @@ public class ZenDiscovery extends AbstractLifecycleComponent implements Discover private final TimeValue pingTimeout; private final TimeValue joinTimeout; - /** how many retry attempts to perform if join request failed with an retriable error */ + /** how many retry attempts to perform if join request failed with an retryable error */ private final int joinRetryAttempts; - /** how long to
wait before performing another join attempt after a join request failed with an retryable error */ private final TimeValue joinRetryDelay; /** how many pings from *another* master to tolerate before forcing a rejoin on other or local master */ @@ -146,15 +149,17 @@ public class ZenDiscovery extends AbstractLifecycleComponent implements Discover private final NodeJoinController nodeJoinController; private final NodeRemovalClusterStateTaskExecutor nodeRemovalExecutor; - private final ClusterApplier clusterApplier; private final AtomicReference committedState; // last committed cluster state private final Object stateMutex = new Object(); + private final Collection> onJoinValidators; public ZenDiscovery(Settings settings, ThreadPool threadPool, TransportService transportService, NamedWriteableRegistry namedWriteableRegistry, MasterService masterService, ClusterApplier clusterApplier, - ClusterSettings clusterSettings, UnicastHostsProvider hostsProvider, AllocationService allocationService) { + ClusterSettings clusterSettings, UnicastHostsProvider hostsProvider, AllocationService allocationService, + Collection> onJoinValidators) { super(settings); + this.onJoinValidators = addBuiltInJoinValidators(onJoinValidators); this.masterService = masterService; this.clusterApplier = clusterApplier; this.transportService = transportService; @@ -211,7 +216,7 @@ public ZenDiscovery(Settings settings, ThreadPool threadPool, TransportService t namedWriteableRegistry, this, discoverySettings); - this.membership = new MembershipAction(settings, transportService, new MembershipListener()); + this.membership = new MembershipAction(settings, transportService, new MembershipListener(), onJoinValidators); this.joinThreadControl = new JoinThreadControl(); this.nodeJoinController = new NodeJoinController(masterService, allocationService, electMaster, settings); @@ -223,6 +228,17 @@ public ZenDiscovery(Settings settings, ThreadPool threadPool, TransportService t DISCOVERY_REJOIN_ACTION_NAME, RejoinClusterRequest::new, ThreadPool.Names.SAME, new RejoinClusterRequestHandler()); } + static Collection> addBuiltInJoinValidators( + Collection> onJoinValidators) { + Collection> validators = new ArrayList<>(); + validators.add((node, state) -> { + MembershipAction.ensureNodesCompatibility(node.getVersion(), state.getNodes()); + MembershipAction.ensureIndexCompatibility(node.getVersion(), state.getMetaData()); + }); + validators.addAll(onJoinValidators); + return Collections.unmodifiableCollection(validators); + } + // protected to allow overriding in tests protected ZenPing newZenPing(Settings settings, ThreadPool threadPool, TransportService transportService, UnicastHostsProvider hostsProvider) { @@ -396,8 +412,7 @@ Set getFaultDetectionNodes() { @Override public DiscoveryStats stats() { - PendingClusterStateStats queueStats = pendingStatesQueue.stats(); - return new DiscoveryStats(queueStats); + return new DiscoveryStats(pendingStatesQueue.stats(), publishClusterState.stats()); } public DiscoverySettings getDiscoverySettings() { @@ -885,8 +900,7 @@ void handleJoinRequest(final DiscoveryNode node, final ClusterState state, final } else { // we do this in a couple of places including the cluster update thread. This one here is really just best effort // to ensure we fail as fast as possible. 
- MembershipAction.ensureNodesCompatibility(node.getVersion(), state.getNodes()); - MembershipAction.ensureIndexCompatibility(node.getVersion(), state.getMetaData()); + onJoinValidators.stream().forEach(a -> a.accept(node, state)); if (state.getBlocks().hasGlobalBlock(STATE_NOT_RECOVERED_BLOCK) == false) { MembershipAction.ensureMajorVersionBarrier(node.getVersion(), state.getNodes().getMinNodeVersion()); } @@ -898,7 +912,8 @@ void handleJoinRequest(final DiscoveryNode node, final ClusterState state, final try { membership.sendValidateJoinRequestBlocking(node, state, joinTimeout); } catch (Exception e) { - logger.warn((Supplier) () -> new ParameterizedMessage("failed to validate incoming join request from node [{}]", node), e); + logger.warn((Supplier) () -> new ParameterizedMessage("failed to validate incoming join request from node [{}]", node), + e); callback.onFailure(new IllegalStateException("failure when sending a validation request to node", e)); return; } @@ -1313,4 +1328,9 @@ public void start() { } } + + public final Collection> getOnJoinValidators() { + return onJoinValidators; + } + } diff --git a/core/src/main/java/org/elasticsearch/env/Environment.java b/core/src/main/java/org/elasticsearch/env/Environment.java index 8f386f79dcf27..721cdcf9ba6db 100644 --- a/core/src/main/java/org/elasticsearch/env/Environment.java +++ b/core/src/main/java/org/elasticsearch/env/Environment.java @@ -85,10 +85,6 @@ public class Environment { /** Path to the temporary file directory used by the JDK */ private final Path tmpFile = PathUtils.get(System.getProperty("java.io.tmpdir")); - public Environment(Settings settings) { - this(settings, null); - } - public Environment(final Settings settings, final Path configPath) { final Path homeFile; if (PATH_HOME_SETTING.exists(settings)) { @@ -153,9 +149,9 @@ public Environment(final Settings settings, final Path configPath) { Settings.Builder finalSettings = Settings.builder().put(settings); finalSettings.put(PATH_HOME_SETTING.getKey(), homeFile); if (PATH_DATA_SETTING.exists(settings)) { - finalSettings.putArray(PATH_DATA_SETTING.getKey(), dataPaths); + finalSettings.putList(PATH_DATA_SETTING.getKey(), dataPaths); } - finalSettings.put(PATH_LOGS_SETTING.getKey(), logsFile); + finalSettings.put(PATH_LOGS_SETTING.getKey(), logsFile.toString()); this.settings = finalSettings.build(); } diff --git a/core/src/main/java/org/elasticsearch/env/NodeEnvironment.java b/core/src/main/java/org/elasticsearch/env/NodeEnvironment.java index ecebe411534cc..9c58335117407 100644 --- a/core/src/main/java/org/elasticsearch/env/NodeEnvironment.java +++ b/core/src/main/java/org/elasticsearch/env/NodeEnvironment.java @@ -845,6 +845,29 @@ public Set findAllShardIds(final Index index) throws IOException { return shardIds; } + /** + * Find all the shards for this index, returning a map of the {@code NodePath} to the number of shards on that path + * @param index the index by which to filter shards + * @return a map of NodePath to count of the shards for the index on that path + * @throws IOException if an IOException occurs + */ + public Map shardCountPerPath(final Index index) throws IOException { + assert index != null; + if (nodePaths == null || locks == null) { + throw new IllegalStateException("node is not configured to store local location"); + } + assertEnvIsLocked(); + final Map shardCountPerPath = new HashMap<>(); + final String indexUniquePathId = index.getUUID(); + for (final NodePath nodePath : nodePaths) { + Path indexLocation = 
nodePath.indicesPath.resolve(indexUniquePathId); + if (Files.isDirectory(indexLocation)) { + shardCountPerPath.put(nodePath, (long) findAllShardsForIndex(indexLocation, index).size()); + } + } + return shardCountPerPath; + } + private static Set findAllShardsForIndex(Path indexPath, Index index) throws IOException { assert indexPath.getFileName().toString().equals(index.getUUID()); Set shardIds = new HashSet<>(); diff --git a/core/src/main/java/org/elasticsearch/gateway/Gateway.java b/core/src/main/java/org/elasticsearch/gateway/Gateway.java index 2e258ca54de69..f4d191ac28a8a 100644 --- a/core/src/main/java/org/elasticsearch/gateway/Gateway.java +++ b/core/src/main/java/org/elasticsearch/gateway/Gateway.java @@ -23,9 +23,7 @@ import com.carrotsearch.hppc.cursors.ObjectCursor; import org.apache.logging.log4j.message.ParameterizedMessage; import org.elasticsearch.action.FailedNodeException; -import org.elasticsearch.cluster.ClusterChangedEvent; import org.elasticsearch.cluster.ClusterState; -import org.elasticsearch.cluster.ClusterStateApplier; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.service.ClusterService; @@ -39,27 +37,23 @@ import java.util.Arrays; import java.util.Map; -public class Gateway extends AbstractComponent implements ClusterStateApplier { +public class Gateway extends AbstractComponent { private final ClusterService clusterService; - private final GatewayMetaState metaState; - private final TransportNodesListGatewayMetaState listGatewayMetaState; private final int minimumMasterNodes; private final IndicesService indicesService; - public Gateway(Settings settings, ClusterService clusterService, GatewayMetaState metaState, + public Gateway(Settings settings, ClusterService clusterService, TransportNodesListGatewayMetaState listGatewayMetaState, IndicesService indicesService) { super(settings); this.indicesService = indicesService; this.clusterService = clusterService; - this.metaState = metaState; this.listGatewayMetaState = listGatewayMetaState; this.minimumMasterNodes = ElectMasterService.DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING.get(settings); - clusterService.addLowPriorityApplier(this); } public void performStateRecovery(final GatewayStateRecoveredListener listener) throws GatewayException { @@ -174,13 +168,6 @@ private void logInvalidSetting(String settingType, Map.Entry e, ex); } - @Override - public void applyClusterState(final ClusterChangedEvent event) { - // order is important, first metaState, and then shardsState - // so dangling indices will be recorded - metaState.applyClusterState(event); - } - public interface GatewayStateRecoveredListener { void onSuccess(ClusterState build); diff --git a/core/src/main/java/org/elasticsearch/gateway/GatewayMetaState.java b/core/src/main/java/org/elasticsearch/gateway/GatewayMetaState.java index 99a51adf96183..719626b7e1870 100644 --- a/core/src/main/java/org/elasticsearch/gateway/GatewayMetaState.java +++ b/core/src/main/java/org/elasticsearch/gateway/GatewayMetaState.java @@ -33,7 +33,6 @@ import org.elasticsearch.common.Nullable; import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.common.component.AbstractComponent; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.util.IndexFolderUpgrader; @@ -41,6 +40,7 @@ import org.elasticsearch.index.Index; import 
org.elasticsearch.plugins.MetaDataUpgrader; +import java.io.IOException; import java.nio.file.DirectoryStream; import java.nio.file.Files; import java.nio.file.Path; @@ -68,15 +68,11 @@ public class GatewayMetaState extends AbstractComponent implements ClusterStateA private volatile Set previouslyWrittenIndices = emptySet(); - @Inject public GatewayMetaState(Settings settings, NodeEnvironment nodeEnv, MetaStateService metaStateService, - TransportNodesListGatewayMetaState nodesListGatewayMetaState, - MetaDataIndexUpgradeService metaDataIndexUpgradeService, MetaDataUpgrader metaDataUpgrader) - throws Exception { + MetaDataIndexUpgradeService metaDataIndexUpgradeService, MetaDataUpgrader metaDataUpgrader) throws IOException { super(settings); this.nodeEnv = nodeEnv; this.metaStateService = metaStateService; - nodesListGatewayMetaState.init(this); if (DiscoveryNode.isDataNode(settings)) { ensureNoPre019ShardState(nodeEnv); @@ -114,7 +110,7 @@ public GatewayMetaState(Settings settings, NodeEnvironment nodeEnv, MetaStateSer } } - public MetaData loadMetaState() throws Exception { + public MetaData loadMetaState() throws IOException { return metaStateService.loadFullState(); } @@ -209,7 +205,7 @@ protected static boolean isDataOnlyNode(ClusterState state) { /** * Throws an IAE if a pre 0.19 state is detected */ - private void ensureNoPre019State() throws Exception { + private void ensureNoPre019State() throws IOException { for (Path dataLocation : nodeEnv.nodeDataPaths()) { final Path stateLocation = dataLocation.resolve(MetaDataStateFormat.STATE_DIR_NAME); if (!Files.exists(stateLocation)) { @@ -241,7 +237,7 @@ private void ensureNoPre019State() throws Exception { */ static MetaData upgradeMetaData(MetaData metaData, MetaDataIndexUpgradeService metaDataIndexUpgradeService, - MetaDataUpgrader metaDataUpgrader) throws Exception { + MetaDataUpgrader metaDataUpgrader) throws IOException { // upgrade index meta data boolean changed = false; final MetaData.Builder upgradedMetaData = MetaData.builder(metaData); @@ -287,7 +283,7 @@ private static boolean applyPluginUpgraders(ImmutableOpenMap list(String[] nodesIds, @Nullable TimeValue timeout) { diff --git a/core/src/main/java/org/elasticsearch/http/HttpTransportSettings.java b/core/src/main/java/org/elasticsearch/http/HttpTransportSettings.java index 9bf8be2da45dd..54be8b4ecd78f 100644 --- a/core/src/main/java/org/elasticsearch/http/HttpTransportSettings.java +++ b/core/src/main/java/org/elasticsearch/http/HttpTransportSettings.java @@ -20,6 +20,7 @@ package org.elasticsearch.http; import org.elasticsearch.common.Booleans; +import org.elasticsearch.common.network.NetworkService; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.transport.PortsRange; @@ -30,6 +31,7 @@ import java.util.function.Function; import static java.util.Collections.emptyList; +import static org.elasticsearch.common.settings.Setting.boolSetting; import static org.elasticsearch.common.settings.Setting.listSetting; public final class HttpTransportSettings { @@ -91,6 +93,17 @@ public final class HttpTransportSettings { public static final Setting SETTING_HTTP_RESET_COOKIES = Setting.boolSetting("http.reset_cookies", false, Property.NodeScope); + public static final Setting SETTING_HTTP_TCP_NO_DELAY = + boolSetting("http.tcp_no_delay", NetworkService.TCP_NO_DELAY, Setting.Property.NodeScope); + public static final Setting SETTING_HTTP_TCP_KEEP_ALIVE = + boolSetting("http.tcp.keep_alive", 
NetworkService.TCP_KEEP_ALIVE, Setting.Property.NodeScope); + public static final Setting SETTING_HTTP_TCP_REUSE_ADDRESS = + boolSetting("http.tcp.reuse_address", NetworkService.TCP_REUSE_ADDRESS, Setting.Property.NodeScope); + public static final Setting SETTING_HTTP_TCP_SEND_BUFFER_SIZE = + Setting.byteSizeSetting("http.tcp.send_buffer_size", NetworkService.TCP_SEND_BUFFER_SIZE, Setting.Property.NodeScope); + public static final Setting SETTING_HTTP_TCP_RECEIVE_BUFFER_SIZE = + Setting.byteSizeSetting("http.tcp.receive_buffer_size", NetworkService.TCP_RECEIVE_BUFFER_SIZE, Setting.Property.NodeScope); + private HttpTransportSettings() { } } diff --git a/core/src/main/java/org/elasticsearch/index/IndexModule.java b/core/src/main/java/org/elasticsearch/index/IndexModule.java index 630fe11e0a811..869f8c9ca72db 100644 --- a/core/src/main/java/org/elasticsearch/index/IndexModule.java +++ b/core/src/main/java/org/elasticsearch/index/IndexModule.java @@ -22,7 +22,6 @@ import org.apache.lucene.util.SetOnce; import org.elasticsearch.client.Client; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.TriFunction; import org.elasticsearch.common.io.stream.NamedWriteableRegistry; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; @@ -40,6 +39,7 @@ import org.elasticsearch.index.shard.IndexSearcherWrapper; import org.elasticsearch.index.shard.IndexingOperationListener; import org.elasticsearch.index.shard.SearchOperationListener; +import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.index.similarity.BM25SimilarityProvider; import org.elasticsearch.index.similarity.SimilarityProvider; import org.elasticsearch.index.similarity.SimilarityService; @@ -101,11 +101,6 @@ public final class IndexModule { public static final Setting INDEX_QUERY_CACHE_EVERYTHING_SETTING = Setting.boolSetting("index.queries.cache.everything", false, Property.IndexScope); - // This setting is an escape hatch in case not caching term queries would slow some users down - // Do not document. 
- public static final Setting INDEX_QUERY_CACHE_TERM_QUERIES_SETTING = - Setting.boolSetting("index.queries.cache.term_queries", false, Property.IndexScope); - private final IndexSettings indexSettings; private final AnalysisRegistry analysisRegistry; // pkg private so tests can mock diff --git a/core/src/main/java/org/elasticsearch/index/IndexService.java b/core/src/main/java/org/elasticsearch/index/IndexService.java index a4d03929cbb57..d192e8781d6da 100644 --- a/core/src/main/java/org/elasticsearch/index/IndexService.java +++ b/core/src/main/java/org/elasticsearch/index/IndexService.java @@ -25,11 +25,15 @@ import org.apache.lucene.store.AlreadyClosedException; import org.apache.lucene.util.Accountable; import org.apache.lucene.util.IOUtils; +import org.elasticsearch.action.ActionListener; import org.elasticsearch.client.Client; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.NamedWriteableRegistry; +import org.elasticsearch.common.lease.Releasable; +import org.elasticsearch.common.settings.Setting; +import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.util.BigArrays; @@ -82,6 +86,7 @@ import java.util.concurrent.ScheduledFuture; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; +import java.util.function.Consumer; import java.util.function.LongSupplier; import java.util.function.Supplier; @@ -109,10 +114,11 @@ public class IndexService extends AbstractIndexComponent implements IndicesClust private final AtomicBoolean closed = new AtomicBoolean(false); private final AtomicBoolean deleted = new AtomicBoolean(false); private final IndexSettings indexSettings; - private final List indexingOperationListeners; private final List searchOperationListeners; + private final List indexingOperationListeners; private volatile AsyncRefreshTask refreshTask; private volatile AsyncTranslogFSync fsyncTask; + private volatile AsyncGlobalCheckpointTask globalCheckpointTask; // don't convert to Setting<> and register... 
we only set this in tests and register via a plugin private final String INDEX_TRANSLOG_RETENTION_CHECK_INTERVAL_SETTING = "index.translog.retention.check_interval"; @@ -182,11 +188,12 @@ public IndexService( this.engineFactory = engineFactory; // initialize this last -- otherwise if the wrapper requires any other member to be non-null we fail with an NPE this.searcherWrapper = wrapperFactory.newWrapper(this); - this.indexingOperationListeners = Collections.unmodifiableList(indexingOperationListeners); this.searchOperationListeners = Collections.unmodifiableList(searchOperationListeners); + this.indexingOperationListeners = Collections.unmodifiableList(indexingOperationListeners); // kick off async ops for the first shard in this index this.refreshTask = new AsyncRefreshTask(this); this.trimTranslogTask = new AsyncTrimTranslogTask(this); + this.globalCheckpointTask = new AsyncGlobalCheckpointTask(this); rescheduleFsyncTask(indexSettings.getTranslogDurability()); } @@ -268,7 +275,15 @@ public synchronized void close(final String reason, boolean delete) throws IOExc } } } finally { - IOUtils.close(bitsetFilterCache, indexCache, indexFieldData, mapperService, refreshTask, fsyncTask, trimTranslogTask); + IOUtils.close( + bitsetFilterCache, + indexCache, + indexFieldData, + mapperService, + refreshTask, + fsyncTask, + trimTranslogTask, + globalCheckpointTask); } } } @@ -293,8 +308,7 @@ private long getAvgShardSizeInBytes() throws IOException { } } - public synchronized IndexShard createShard(ShardRouting routing) throws IOException { - final boolean primary = routing.primary(); + public synchronized IndexShard createShard(ShardRouting routing, Consumer globalCheckpointSyncer) throws IOException { /* * TODO: we execute this in parallel but it's a synced method. Yet, we might * be able to serialize the execution via the cluster state in the future. 
for now we just @@ -365,7 +379,7 @@ public synchronized IndexShard createShard(ShardRouting routing) throws IOExcept indexShard = new IndexShard(routing, this.indexSettings, path, store, indexSortSupplier, indexCache, mapperService, similarityService, engineFactory, eventListener, searcherWrapper, threadPool, bigArrays, engineWarmer, - searchOperationListeners, indexingOperationListeners); + searchOperationListeners, indexingOperationListeners, () -> globalCheckpointSyncer.accept(shardId)); eventListener.indexShardStateChanged(indexShard, null, indexShard.state(), "shard created"); eventListener.afterIndexShardCreated(indexShard); shards = newMapBuilder(shards).put(shardId.id(), indexShard).immutableMap(); @@ -710,6 +724,48 @@ private void maybeTrimTranslog() { } } + private void maybeSyncGlobalCheckpoints() { + for (final IndexShard shard : this.shards.values()) { + if (shard.routingEntry().active() && shard.routingEntry().primary()) { + switch (shard.state()) { + case CLOSED: + case CREATED: + case RECOVERING: + case RELOCATED: + continue; + case POST_RECOVERY: + assert false : "shard " + shard.shardId() + " is in post-recovery but marked as active"; + continue; + case STARTED: + try { + shard.acquirePrimaryOperationPermit( + ActionListener.wrap( + releasable -> { + try (Releasable ignored = releasable) { + shard.maybeSyncGlobalCheckpoint("background"); + } + }, + e -> { + if (!(e instanceof AlreadyClosedException || e instanceof IndexShardClosedException)) { + logger.info( + new ParameterizedMessage( + "{} failed to execute background global checkpoint sync", + shard.shardId()), + e); + } + }), + ThreadPool.Names.SAME); + } catch (final AlreadyClosedException | IndexShardClosedException e) { + // the shard was closed concurrently, continue + } + continue; + default: + throw new IllegalStateException("unknown state [" + shard.state() + "]"); + } + } + } + } + abstract static class BaseAsyncTask implements Runnable, Closeable { protected final IndexService indexService; protected final ThreadPool threadPool; @@ -877,6 +933,41 @@ public String toString() { } } + // this setting is intentionally not registered, it is only used in tests + public static final Setting GLOBAL_CHECKPOINT_SYNC_INTERVAL_SETTING = + Setting.timeSetting( + "index.global_checkpoint_sync.interval", + new TimeValue(30, TimeUnit.SECONDS), + new TimeValue(0, TimeUnit.MILLISECONDS), + Property.Dynamic, + Property.IndexScope); + + /** + * Background task that syncs the global checkpoint to replicas. 
+ */ + final class AsyncGlobalCheckpointTask extends BaseAsyncTask { + + AsyncGlobalCheckpointTask(final IndexService indexService) { + // index.global_checkpoint_sync_interval is not a real setting, it is only registered in tests + super(indexService, GLOBAL_CHECKPOINT_SYNC_INTERVAL_SETTING.get(indexService.getIndexSettings().getSettings())); + } + + @Override + protected void runInternal() { + indexService.maybeSyncGlobalCheckpoints(); + } + + @Override + protected String getThreadPool() { + return ThreadPool.Names.GENERIC; + } + + @Override + public String toString() { + return "global_checkpoint_sync"; + } + } + AsyncRefreshTask getRefreshTask() { // for tests return refreshTask; } @@ -885,6 +976,10 @@ AsyncTranslogFSync getFsyncTask() { // for tests return fsyncTask; } + AsyncGlobalCheckpointTask getGlobalCheckpointTask() { + return globalCheckpointTask; + } + /** * Clears the caches for the given shard id if the shard is still allocated on this node */ diff --git a/core/src/main/java/org/elasticsearch/index/IndexSettings.java b/core/src/main/java/org/elasticsearch/index/IndexSettings.java index fc2e476afc3e4..9e390fb5b22cf 100644 --- a/core/src/main/java/org/elasticsearch/index/IndexSettings.java +++ b/core/src/main/java/org/elasticsearch/index/IndexSettings.java @@ -30,11 +30,11 @@ import org.elasticsearch.common.unit.ByteSizeUnit; import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.index.mapper.AllFieldMapper; -import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.translog.Translog; import org.elasticsearch.node.Node; +import java.util.Collections; +import java.util.List; import java.util.Locale; import java.util.concurrent.TimeUnit; import java.util.function.Consumer; @@ -48,9 +48,9 @@ * be called for each settings update. */ public final class IndexSettings { - - public static final Setting DEFAULT_FIELD_SETTING = - new Setting<>("index.query.default_field", AllFieldMapper.NAME, Function.identity(), Property.IndexScope); + public static final Setting> DEFAULT_FIELD_SETTING = + Setting.listSetting("index.query.default_field", Collections.singletonList("*"), + Function.identity(), Property.IndexScope, Property.Dynamic); public static final Setting QUERY_STRING_LENIENT_SETTING = Setting.boolSetting("index.query_string.lenient", false, Property.IndexScope); public static final Setting QUERY_STRING_ANALYZE_WILDCARD = @@ -91,6 +91,49 @@ public final class IndexSettings { */ public static final Setting MAX_RESULT_WINDOW_SETTING = Setting.intSetting("index.max_result_window", 10000, 1, Property.Dynamic, Property.IndexScope); + /** + * Index setting describing the maximum value of from + size on an individual inner hit definition or + * top hits aggregation. The default maximum of 100 is defensive for the reason that the number of inner hit responses + * and number of top hits buckets returned is unbounded. Profile your cluster when increasing this setting. + */ + public static final Setting MAX_INNER_RESULT_WINDOW_SETTING = + Setting.intSetting("index.max_inner_result_window", 100, 1, Property.Dynamic, Property.IndexScope); + + /** + * Index setting describing the maximum value of allowed `script_fields`that can be retrieved + * per search request. The default maximum of 32 is defensive for the reason that retrieving + * script fields is a costly operation. 
+ */ + public static final Setting MAX_SCRIPT_FIELDS_SETTING = + Setting.intSetting("index.max_script_fields", 32, 0, Property.Dynamic, Property.IndexScope); + + /** + * Index setting describing for NGramTokenizer and NGramTokenFilter + * the maximum difference between + * max_gram (maximum length of characters in a gram) and + * min_gram (minimum length of characters in a gram). + * The default value is 1 as this is default difference in NGramTokenizer, + * and is defensive as it prevents generating too many index terms. + */ + public static final Setting MAX_NGRAM_DIFF_SETTING = + Setting.intSetting("index.max_ngram_diff", 1, 0, Property.Dynamic, Property.IndexScope); + + /** + * Index setting describing for ShingleTokenFilter + * the maximum difference between + * max_shingle_size and min_shingle_size. + * The default value is 3 is defensive as it prevents generating too many tokens. + */ + public static final Setting MAX_SHINGLE_DIFF_SETTING = + Setting.intSetting("index.max_shingle_diff", 3, 0, Property.Dynamic, Property.IndexScope); + + /** + * Index setting describing the maximum value of allowed `docvalue_fields`that can be retrieved + * per search request. The default maximum of 100 is defensive for the reason that retrieving + * doc values might incur a per-field per-document seek. + */ + public static final Setting MAX_DOCVALUE_FIELDS_SEARCH_SETTING = + Setting.intSetting("index.max_docvalue_fields_search", 100, 0, Property.Dynamic, Property.IndexScope); /** * Index setting describing the maximum size of the rescore window. Defaults to {@link #MAX_RESULT_WINDOW_SETTING} * because they both do the same thing: control the size of the heap of hits. @@ -192,7 +235,7 @@ public final class IndexSettings { // volatile fields are updated via #updateIndexMetaData(IndexMetaData) under lock private volatile Settings settings; private volatile IndexMetaData indexMetaData; - private final String defaultField; + private volatile List defaultFields; private final boolean queryStringLenient; private final boolean queryStringAnalyzeWildcard; private final boolean queryStringAllowLeadingWildcard; @@ -211,8 +254,13 @@ public final class IndexSettings { private long gcDeletesInMillis = DEFAULT_GC_DELETES.millis(); private volatile boolean warmerEnabled; private volatile int maxResultWindow; + private volatile int maxInnerResultWindow; private volatile int maxAdjacencyMatrixFilters; private volatile int maxRescoreWindow; + private volatile int maxDocvalueFields; + private volatile int maxScriptFields; + private volatile int maxNgramDiff; + private volatile int maxShingleDiff; private volatile boolean TTLPurgeDisabled; /** * The maximum number of refresh listeners allows on this shard. @@ -228,10 +276,14 @@ public final class IndexSettings { private final boolean singleType; /** - * Returns the default search field for this index. + * Returns the default search fields for this index. 
*/ - public String getDefaultField() { - return defaultField; + public List getDefaultFields() { + return defaultFields; + } + + private void setDefaultFields(List defaultFields) { + this.defaultFields = defaultFields; } /** @@ -291,12 +343,12 @@ public IndexSettings(final IndexMetaData indexMetaData, final Settings nodeSetti this.indexMetaData = indexMetaData; numberOfShards = settings.getAsInt(IndexMetaData.SETTING_NUMBER_OF_SHARDS, null); - this.defaultField = DEFAULT_FIELD_SETTING.get(settings); this.queryStringLenient = QUERY_STRING_LENIENT_SETTING.get(settings); this.queryStringAnalyzeWildcard = QUERY_STRING_ANALYZE_WILDCARD.get(nodeSettings); this.queryStringAllowLeadingWildcard = QUERY_STRING_ALLOW_LEADING_WILDCARD.get(nodeSettings); this.defaultAllowUnmappedFields = scopedSettings.get(ALLOW_UNMAPPED); this.durability = scopedSettings.get(INDEX_TRANSLOG_DURABILITY_SETTING); + defaultFields = scopedSettings.get(DEFAULT_FIELD_SETTING); syncInterval = INDEX_TRANSLOG_SYNC_INTERVAL_SETTING.get(settings); refreshInterval = scopedSettings.get(INDEX_REFRESH_INTERVAL_SETTING); flushThresholdSize = scopedSettings.get(INDEX_TRANSLOG_FLUSH_THRESHOLD_SIZE_SETTING); @@ -307,8 +359,13 @@ public IndexSettings(final IndexMetaData indexMetaData, final Settings nodeSetti gcDeletesInMillis = scopedSettings.get(INDEX_GC_DELETES_SETTING).getMillis(); warmerEnabled = scopedSettings.get(INDEX_WARMER_ENABLED_SETTING); maxResultWindow = scopedSettings.get(MAX_RESULT_WINDOW_SETTING); + maxInnerResultWindow = scopedSettings.get(MAX_INNER_RESULT_WINDOW_SETTING); maxAdjacencyMatrixFilters = scopedSettings.get(MAX_ADJACENCY_MATRIX_FILTERS_SETTING); maxRescoreWindow = scopedSettings.get(MAX_RESCORE_WINDOW_SETTING); + maxDocvalueFields = scopedSettings.get(MAX_DOCVALUE_FIELDS_SEARCH_SETTING); + maxScriptFields = scopedSettings.get(MAX_SCRIPT_FIELDS_SETTING); + maxNgramDiff = scopedSettings.get(MAX_NGRAM_DIFF_SETTING); + maxShingleDiff = scopedSettings.get(MAX_SHINGLE_DIFF_SETTING); TTLPurgeDisabled = scopedSettings.get(INDEX_TTL_DISABLE_PURGE_SETTING); maxRefreshListeners = scopedSettings.get(MAX_REFRESH_LISTENERS_PER_SHARD); maxSlicesPerScroll = scopedSettings.get(MAX_SLICES_PER_SCROLL); @@ -335,8 +392,13 @@ public IndexSettings(final IndexMetaData indexMetaData, final Settings nodeSetti scopedSettings.addSettingsUpdateConsumer(INDEX_TRANSLOG_DURABILITY_SETTING, this::setTranslogDurability); scopedSettings.addSettingsUpdateConsumer(INDEX_TTL_DISABLE_PURGE_SETTING, this::setTTLPurgeDisabled); scopedSettings.addSettingsUpdateConsumer(MAX_RESULT_WINDOW_SETTING, this::setMaxResultWindow); + scopedSettings.addSettingsUpdateConsumer(MAX_INNER_RESULT_WINDOW_SETTING, this::setMaxInnerResultWindow); scopedSettings.addSettingsUpdateConsumer(MAX_ADJACENCY_MATRIX_FILTERS_SETTING, this::setMaxAdjacencyMatrixFilters); scopedSettings.addSettingsUpdateConsumer(MAX_RESCORE_WINDOW_SETTING, this::setMaxRescoreWindow); + scopedSettings.addSettingsUpdateConsumer(MAX_DOCVALUE_FIELDS_SEARCH_SETTING, this::setMaxDocvalueFields); + scopedSettings.addSettingsUpdateConsumer(MAX_SCRIPT_FIELDS_SETTING, this::setMaxScriptFields); + scopedSettings.addSettingsUpdateConsumer(MAX_NGRAM_DIFF_SETTING, this::setMaxNgramDiff); + scopedSettings.addSettingsUpdateConsumer(MAX_SHINGLE_DIFF_SETTING, this::setMaxShingleDiff); scopedSettings.addSettingsUpdateConsumer(INDEX_WARMER_ENABLED_SETTING, this::setEnableWarmer); scopedSettings.addSettingsUpdateConsumer(INDEX_GC_DELETES_SETTING, this::setGCDeletes); 
scopedSettings.addSettingsUpdateConsumer(INDEX_TRANSLOG_FLUSH_THRESHOLD_SIZE_SETTING, this::setTranslogFlushThresholdSize); @@ -348,6 +410,7 @@ public IndexSettings(final IndexMetaData indexMetaData, final Settings nodeSetti scopedSettings.addSettingsUpdateConsumer(INDEX_REFRESH_INTERVAL_SETTING, this::setRefreshInterval); scopedSettings.addSettingsUpdateConsumer(MAX_REFRESH_LISTENERS_PER_SHARD, this::setMaxRefreshListeners); scopedSettings.addSettingsUpdateConsumer(MAX_SLICES_PER_SCROLL, this::setMaxSlicesPerScroll); + scopedSettings.addSettingsUpdateConsumer(DEFAULT_FIELD_SETTING, this::setDefaultFields); } private void setTranslogFlushThresholdSize(ByteSizeValue byteSizeValue) { @@ -469,7 +532,8 @@ public synchronized boolean updateIndexMetaData(IndexMetaData indexMetaData) { } this.indexMetaData = indexMetaData; final Settings existingSettings = this.settings; - if (existingSettings.filter(IndexScopedSettings.INDEX_SETTINGS_KEY_PREDICATE).getAsMap().equals(newSettings.filter(IndexScopedSettings.INDEX_SETTINGS_KEY_PREDICATE).getAsMap())) { + if (existingSettings.filter(IndexScopedSettings.INDEX_SETTINGS_KEY_PREDICATE) + .equals(newSettings.filter(IndexScopedSettings.INDEX_SETTINGS_KEY_PREDICATE))) { // nothing to update, same settings return false; } @@ -559,6 +623,17 @@ private void setMaxResultWindow(int maxResultWindow) { this.maxResultWindow = maxResultWindow; } + /** + * Returns the max result window for an individual inner hit definition or top hits aggregation. + */ + public int getMaxInnerResultWindow() { + return maxInnerResultWindow; + } + + private void setMaxInnerResultWindow(int maxInnerResultWindow) { + this.maxInnerResultWindow = maxInnerResultWindow; + } + /** * Returns the max number of filters in adjacency_matrix aggregation search requests */ @@ -581,6 +656,42 @@ private void setMaxRescoreWindow(int maxRescoreWindow) { this.maxRescoreWindow = maxRescoreWindow; } + /** + * Returns the maximum number of allowed docvalue_fields to retrieve in a search request + */ + public int getMaxDocvalueFields() { + return this.maxDocvalueFields; + } + + private void setMaxDocvalueFields(int maxDocvalueFields) { + this.maxDocvalueFields = maxDocvalueFields; + } + + /** + * Returns the maximum allowed difference between max and min length of ngram + */ + public int getMaxNgramDiff() { return this.maxNgramDiff; } + + private void setMaxNgramDiff(int maxNgramDiff) { this.maxNgramDiff = maxNgramDiff; } + + /** + * Returns the maximum allowed difference between max and min shingle_size + */ + public int getMaxShingleDiff() { return this.maxShingleDiff; } + + private void setMaxShingleDiff(int maxShingleDiff) { this.maxShingleDiff = maxShingleDiff; } + + /** + * Returns the maximum number of allowed script_fields to retrieve in a search request + */ + public int getMaxScriptFields() { + return this.maxScriptFields; + } + + private void setMaxScriptFields(int maxScriptFields) { + this.maxScriptFields = maxScriptFields; + } + /** * Returns the GC deletes cycle in milliseconds. 
*/ diff --git a/core/src/main/java/org/elasticsearch/index/IndexingSlowLog.java b/core/src/main/java/org/elasticsearch/index/IndexingSlowLog.java index b1d8ac188626f..94c3892ef361e 100644 --- a/core/src/main/java/org/elasticsearch/index/IndexingSlowLog.java +++ b/core/src/main/java/org/elasticsearch/index/IndexingSlowLog.java @@ -181,9 +181,9 @@ public String toString() { sb.append("type[").append(doc.type()).append("], "); sb.append("id[").append(doc.id()).append("], "); if (doc.routing() == null) { - sb.append("routing[] "); + sb.append("routing[]"); } else { - sb.append("routing[").append(doc.routing()).append("] "); + sb.append("routing[").append(doc.routing()).append("]"); } if (maxSourceCharsToLog == 0 || doc.source() == null || doc.source().length() == 0) { @@ -193,7 +193,7 @@ public String toString() { String source = XContentHelper.convertToJson(doc.source(), reformat, doc.getXContentType()); sb.append(", source[").append(Strings.cleanTruncate(source, maxSourceCharsToLog)).append("]"); } catch (IOException e) { - sb.append(", source[_failed_to_convert_]"); + sb.append(", source[_failed_to_convert_[").append(e.getMessage()).append("]]"); } return sb.toString(); } diff --git a/core/src/main/java/org/elasticsearch/index/MergePolicyConfig.java b/core/src/main/java/org/elasticsearch/index/MergePolicyConfig.java index b3a0224f9834c..0f7305789ecb5 100644 --- a/core/src/main/java/org/elasticsearch/index/MergePolicyConfig.java +++ b/core/src/main/java/org/elasticsearch/index/MergePolicyConfig.java @@ -23,7 +23,6 @@ import org.apache.lucene.index.MergePolicy; import org.apache.lucene.index.NoMergePolicy; import org.apache.lucene.index.TieredMergePolicy; -import org.elasticsearch.common.logging.DeprecationLogger; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.unit.ByteSizeUnit; @@ -165,8 +164,7 @@ public final class MergePolicyConfig { ByteSizeValue maxMergedSegment = indexSettings.getValue(INDEX_MERGE_POLICY_MAX_MERGED_SEGMENT_SETTING); double segmentsPerTier = indexSettings.getValue(INDEX_MERGE_POLICY_SEGMENTS_PER_TIER_SETTING); double reclaimDeletesWeight = indexSettings.getValue(INDEX_MERGE_POLICY_RECLAIM_DELETES_WEIGHT_SETTING); - this.mergesEnabled = indexSettings.getSettings() - .getAsBooleanLenientForPreEs6Indices(indexSettings.getIndexVersionCreated(), INDEX_MERGE_ENABLED, true, new DeprecationLogger(logger)); + this.mergesEnabled = indexSettings.getSettings().getAsBoolean(INDEX_MERGE_ENABLED, true); if (mergesEnabled == false) { logger.warn("[{}] is set to false, this should only be used in tests and can cause serious problems in production environments", INDEX_MERGE_ENABLED); } diff --git a/core/src/main/java/org/elasticsearch/index/analysis/Analysis.java b/core/src/main/java/org/elasticsearch/index/analysis/Analysis.java index fadcdcbd262c4..d736703f6418e 100644 --- a/core/src/main/java/org/elasticsearch/index/analysis/Analysis.java +++ b/core/src/main/java/org/elasticsearch/index/analysis/Analysis.java @@ -23,6 +23,7 @@ import org.apache.lucene.analysis.CharArraySet; import org.apache.lucene.analysis.ar.ArabicAnalyzer; import org.apache.lucene.analysis.bg.BulgarianAnalyzer; +import org.apache.lucene.analysis.bn.BengaliAnalyzer; import org.apache.lucene.analysis.br.BrazilianAnalyzer; import org.apache.lucene.analysis.ca.CatalanAnalyzer; import org.apache.lucene.analysis.ckb.SoraniAnalyzer; @@ -55,9 +56,6 @@ import org.apache.lucene.analysis.tr.TurkishAnalyzer; import 
org.apache.lucene.util.Version; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.io.FileSystemUtils; -import org.elasticsearch.common.logging.DeprecationLogger; -import org.elasticsearch.common.logging.ESLoggerFactory; import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; @@ -70,7 +68,6 @@ import java.nio.file.Files; import java.nio.file.Path; import java.util.ArrayList; -import java.util.Arrays; import java.util.Collection; import java.util.HashMap; import java.util.List; @@ -82,8 +79,6 @@ public class Analysis { - private static final DeprecationLogger deprecationLogger = new DeprecationLogger(ESLoggerFactory.getLogger(Analysis.class)); - public static Version parseAnalysisVersion(Settings indexSettings, Settings settings, Logger logger) { // check for explicit version on the specific analyzer component String sVersion = settings.get("version"); @@ -106,18 +101,13 @@ public static boolean isNoStopwords(Settings settings) { public static CharArraySet parseStemExclusion(Settings settings, CharArraySet defaultStemExclusion) { String value = settings.get("stem_exclusion"); - if (value != null) { - if ("_none_".equals(value)) { - return CharArraySet.EMPTY_SET; - } else { - // LUCENE 4 UPGRADE: Should be settings.getAsBoolean("stem_exclusion_case", false)? - return new CharArraySet(Strings.commaDelimitedListToSet(value), false); - } + if ("_none_".equals(value)) { + return CharArraySet.EMPTY_SET; } - String[] stemExclusion = settings.getAsArray("stem_exclusion", null); + List stemExclusion = settings.getAsList("stem_exclusion", null); if (stemExclusion != null) { // LUCENE 4 UPGRADE: Should be settings.getAsBoolean("stem_exclusion_case", false)? 
- return new CharArraySet(Arrays.asList(stemExclusion), false); + return new CharArraySet(stemExclusion, false); } else { return defaultStemExclusion; } @@ -129,6 +119,7 @@ public static CharArraySet parseStemExclusion(Settings settings, CharArraySet de namedStopWords.put("_arabic_", ArabicAnalyzer.getDefaultStopSet()); namedStopWords.put("_armenian_", ArmenianAnalyzer.getDefaultStopSet()); namedStopWords.put("_basque_", BasqueAnalyzer.getDefaultStopSet()); + namedStopWords.put("_bengali_", BengaliAnalyzer.getDefaultStopSet()); namedStopWords.put("_brazilian_", BrazilianAnalyzer.getDefaultStopSet()); namedStopWords.put("_bulgarian_", BulgarianAnalyzer.getDefaultStopSet()); namedStopWords.put("_catalan_", CatalanAnalyzer.getDefaultStopSet()); @@ -169,7 +160,7 @@ public static CharArraySet parseWords(Environment env, Settings settings, String if ("_none_".equals(value)) { return CharArraySet.EMPTY_SET; } else { - return resolveNamedWords(Strings.commaDelimitedListToSet(value), namedWords, ignoreCase); + return resolveNamedWords(settings.getAsList(name), namedWords, ignoreCase); } } List pathLoadedWords = getWordList(env, settings, name); @@ -183,15 +174,14 @@ public static CharArraySet parseCommonWords(Environment env, Settings settings, return parseWords(env, settings, "common_words", defaultCommonWords, NAMED_STOP_WORDS, ignoreCase); } - public static CharArraySet parseArticles(Environment env, org.elasticsearch.Version indexCreatedVersion, Settings settings) { - boolean articlesCase = settings.getAsBooleanLenientForPreEs6Indices(indexCreatedVersion, "articles_case", false, deprecationLogger); + public static CharArraySet parseArticles(Environment env, Settings settings) { + boolean articlesCase = settings.getAsBoolean("articles_case", false); return parseWords(env, settings, "articles", null, null, articlesCase); } - public static CharArraySet parseStopWords(Environment env, org.elasticsearch.Version indexCreatedVersion, Settings settings, + public static CharArraySet parseStopWords(Environment env, Settings settings, CharArraySet defaultStopWords) { - boolean stopwordsCase = - settings.getAsBooleanLenientForPreEs6Indices(indexCreatedVersion, "stopwords_case", false, deprecationLogger); + boolean stopwordsCase = settings.getAsBoolean("stopwords_case", false); return parseStopWords(env, settings, defaultStopWords, stopwordsCase); } @@ -214,14 +204,12 @@ private static CharArraySet resolveNamedWords(Collection words, Map wordList = getWordList(env, settings, settingsPrefix); if (wordList == null) { return null; } - boolean ignoreCase = - settings.getAsBooleanLenientForPreEs6Indices(indexCreatedVersion, settingsPrefix + "_case", false, deprecationLogger); + boolean ignoreCase = settings.getAsBoolean(settingsPrefix + "_case", false); return new CharArraySet(wordList, ignoreCase); } @@ -236,11 +224,11 @@ public static List getWordList(Environment env, Settings settings, Strin String wordListPath = settings.get(settingPrefix + "_path", null); if (wordListPath == null) { - String[] explicitWordList = settings.getAsArray(settingPrefix, null); + List explicitWordList = settings.getAsList(settingPrefix, null); if (explicitWordList == null) { return null; } else { - return Arrays.asList(explicitWordList); + return explicitWordList; } } diff --git a/core/src/main/java/org/elasticsearch/index/analysis/AnalysisRegistry.java b/core/src/main/java/org/elasticsearch/index/analysis/AnalysisRegistry.java index 334295ef30fb5..039aaba5a2490 100644 --- 
a/core/src/main/java/org/elasticsearch/index/analysis/AnalysisRegistry.java +++ b/core/src/main/java/org/elasticsearch/index/analysis/AnalysisRegistry.java @@ -18,16 +18,12 @@ */ package org.elasticsearch.index.analysis; -import org.apache.logging.log4j.Logger; import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.util.IOUtils; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.Version; import org.elasticsearch.cluster.metadata.IndexMetaData; -import org.elasticsearch.common.logging.DeprecationLogger; -import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.util.set.Sets; import org.elasticsearch.env.Environment; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexSettings; @@ -42,7 +38,6 @@ import java.util.HashMap; import java.util.Locale; import java.util.Map; -import java.util.Set; import java.util.concurrent.ConcurrentHashMap; import java.util.stream.Collectors; @@ -188,9 +183,9 @@ public Map> buildAnalyzerFactories(IndexSettings ind } public Map> buildNormalizerFactories(IndexSettings indexSettings) throws IOException { - final Map noralizersSettings = indexSettings.getSettings().getGroups("index.analysis.normalizer"); + final Map normalizersSettings = indexSettings.getSettings().getGroups("index.analysis.normalizer"); // TODO: Have pre-built normalizers - return buildMapping(Component.NORMALIZER, indexSettings, noralizersSettings, normalizers, Collections.emptyMap()); + return buildMapping(Component.NORMALIZER, indexSettings, normalizersSettings, normalizers, Collections.emptyMap()); } /** @@ -455,33 +450,20 @@ public IndexAnalyzers build(IndexSettings indexSettings, Index index = indexSettings.getIndex(); analyzerProviders = new HashMap<>(analyzerProviders); - Logger logger = Loggers.getLogger(getClass(), indexSettings.getSettings()); - DeprecationLogger deprecationLogger = new DeprecationLogger(logger); - Map analyzerAliases = new HashMap<>(); Map analyzers = new HashMap<>(); Map normalizers = new HashMap<>(); for (Map.Entry> entry : analyzerProviders.entrySet()) { - processAnalyzerFactory(deprecationLogger, indexSettings, entry.getKey(), entry.getValue(), analyzerAliases, analyzers, + processAnalyzerFactory(indexSettings, entry.getKey(), entry.getValue(), analyzers, tokenFilterFactoryFactories, charFilterFactoryFactories, tokenizerFactoryFactories); } for (Map.Entry> entry : normalizerProviders.entrySet()) { - processNormalizerFactory(deprecationLogger, indexSettings, entry.getKey(), entry.getValue(), normalizers, + processNormalizerFactory(entry.getKey(), entry.getValue(), normalizers, tokenizerFactoryFactories.get("keyword"), tokenFilterFactoryFactories, charFilterFactoryFactories); } - for (Map.Entry entry : analyzerAliases.entrySet()) { - String key = entry.getKey(); - if (analyzers.containsKey(key) && - ("default".equals(key) || "default_search".equals(key) || "default_search_quoted".equals(key)) == false) { - throw new IllegalStateException("already registered analyzer with name: " + key); - } else { - NamedAnalyzer configured = entry.getValue(); - analyzers.put(key, configured); - } - } if (!analyzers.containsKey("default")) { - processAnalyzerFactory(deprecationLogger, indexSettings, "default", new StandardAnalyzerProvider(indexSettings, null, "default", Settings.Builder.EMPTY_SETTINGS), - analyzerAliases, analyzers, tokenFilterFactoryFactories, charFilterFactoryFactories, tokenizerFactoryFactories); + 
processAnalyzerFactory(indexSettings, "default", new StandardAnalyzerProvider(indexSettings, null, "default", Settings.Builder.EMPTY_SETTINGS), + analyzers, tokenFilterFactoryFactories, charFilterFactoryFactories, tokenizerFactoryFactories); } if (!analyzers.containsKey("default_search")) { analyzers.put("default_search", analyzers.get("default")); @@ -490,7 +472,6 @@ public IndexAnalyzers build(IndexSettings indexSettings, analyzers.put("default_search_quoted", analyzers.get("default_search")); } - NamedAnalyzer defaultAnalyzer = analyzers.get("default"); if (defaultAnalyzer == null) { throw new IllegalArgumentException("no default analyzer configured"); @@ -498,8 +479,8 @@ public IndexAnalyzers build(IndexSettings indexSettings, if (analyzers.containsKey("default_index")) { throw new IllegalArgumentException("setting [index.analysis.analyzer.default_index] is not supported anymore, use [index.analysis.analyzer.default] instead for index [" + index.getName() + "]"); } - NamedAnalyzer defaultSearchAnalyzer = analyzers.containsKey("default_search") ? analyzers.get("default_search") : defaultAnalyzer; - NamedAnalyzer defaultSearchQuoteAnalyzer = analyzers.containsKey("default_search_quote") ? analyzers.get("default_search_quote") : defaultSearchAnalyzer; + NamedAnalyzer defaultSearchAnalyzer = analyzers.getOrDefault("default_search", defaultAnalyzer); + NamedAnalyzer defaultSearchQuoteAnalyzer = analyzers.getOrDefault("default_search_quote", defaultSearchAnalyzer); for (Map.Entry analyzer : analyzers.entrySet()) { if (analyzer.getKey().startsWith("_")) { @@ -510,11 +491,9 @@ public IndexAnalyzers build(IndexSettings indexSettings, unmodifiableMap(analyzers), unmodifiableMap(normalizers)); } - private void processAnalyzerFactory(DeprecationLogger deprecationLogger, - IndexSettings indexSettings, + private void processAnalyzerFactory(IndexSettings indexSettings, String name, AnalyzerProvider analyzerFactory, - Map analyzerAliases, Map analyzers, Map tokenFilters, Map charFilters, Map tokenizers) { /* @@ -557,25 +536,11 @@ private void processAnalyzerFactory(DeprecationLogger deprecationLogger, // TODO: remove alias support completely when we no longer support pre 5.0 indices final String analyzerAliasKey = "index.analysis.analyzer." 
+ analyzerFactory.name() + ".alias"; if (indexSettings.getSettings().get(analyzerAliasKey) != null) { - if (indexSettings.getIndexVersionCreated().onOrAfter(Version.V_5_0_0_beta1)) { - // do not allow alias creation if the index was created on or after v5.0 alpha6 - throw new IllegalArgumentException("setting [" + analyzerAliasKey + "] is not supported"); - } - - // the setting is now removed but we only support it for loading indices created before v5.0 - deprecationLogger.deprecated("setting [{}] is only allowed on index [{}] because it was created before 5.x; " + - "analyzer aliases can no longer be created on new indices.", analyzerAliasKey, indexSettings.getIndex().getName()); - Set aliases = Sets.newHashSet(indexSettings.getSettings().getAsArray(analyzerAliasKey)); - for (String alias : aliases) { - if (analyzerAliases.putIfAbsent(alias, analyzer) != null) { - throw new IllegalStateException("alias [" + alias + "] is already used by [" + analyzerAliases.get(alias).name() + "]"); - } - } + throw new IllegalArgumentException("setting [" + analyzerAliasKey + "] is not supported"); } } - private void processNormalizerFactory(DeprecationLogger deprecationLogger, - IndexSettings indexSettings, + private void processNormalizerFactory( String name, AnalyzerProvider normalizerFactory, Map normalizers, diff --git a/core/src/main/java/org/elasticsearch/index/analysis/ArabicAnalyzerProvider.java b/core/src/main/java/org/elasticsearch/index/analysis/ArabicAnalyzerProvider.java index 10d8f22bde7e8..8dcc6cc907569 100644 --- a/core/src/main/java/org/elasticsearch/index/analysis/ArabicAnalyzerProvider.java +++ b/core/src/main/java/org/elasticsearch/index/analysis/ArabicAnalyzerProvider.java @@ -32,7 +32,7 @@ public class ArabicAnalyzerProvider extends AbstractIndexAnalyzerProvider { + + private final BengaliAnalyzer analyzer; + + public BengaliAnalyzerProvider(IndexSettings indexSettings, Environment env, String name, Settings settings) { + super(indexSettings, name, settings); + analyzer = new BengaliAnalyzer( + Analysis.parseStopWords(env, settings, BengaliAnalyzer.getDefaultStopSet()), + Analysis.parseStemExclusion(settings, CharArraySet.EMPTY_SET) + ); + analyzer.setVersion(version); + } + + @Override + public BengaliAnalyzer get() { + return this.analyzer; + } +} diff --git a/core/src/main/java/org/elasticsearch/index/analysis/BrazilianAnalyzerProvider.java b/core/src/main/java/org/elasticsearch/index/analysis/BrazilianAnalyzerProvider.java index 7ca11542ac632..36b13e67bf4ee 100644 --- a/core/src/main/java/org/elasticsearch/index/analysis/BrazilianAnalyzerProvider.java +++ b/core/src/main/java/org/elasticsearch/index/analysis/BrazilianAnalyzerProvider.java @@ -32,7 +32,7 @@ public class BrazilianAnalyzerProvider extends AbstractIndexAnalyzerProvider tokenizers, final Map charFiltersList = new ArrayList<>(charFilterNames.length); + List charFilterNames = analyzerSettings.getAsList("char_filter"); + List charFiltersList = new ArrayList<>(charFilterNames.size()); for (String charFilterName : charFilterNames) { CharFilterFactory charFilter = charFilters.get(charFilterName); if (charFilter == null) { @@ -74,8 +74,8 @@ public void build(final Map tokenizers, final Map tokenFilterList = new ArrayList<>(tokenFilterNames.length); + List tokenFilterNames = analyzerSettings.getAsList("filter"); + List tokenFilterList = new ArrayList<>(tokenFilterNames.size()); for (String tokenFilterName : tokenFilterNames) { TokenFilterFactory tokenFilter = tokenFilters.get(tokenFilterName); if (tokenFilter == null) { 
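The analysis hunks above consistently migrate list-valued settings from the array-based accessors (`Settings#getAsArray`, `Settings.Builder#putArray`) to the list-based ones (`getAsList`, `putList`), and size downstream collections with `size()` instead of `length`. A minimal sketch of the new accessors, using a made-up analyzer filter chain purely for illustration (the setting key and values are not taken from this change):

```java
import org.elasticsearch.common.settings.Settings;

import java.util.List;

public class ListSettingSketch {
    public static void main(String[] args) {
        // Write a list value with the new putList builder method (previously putArray).
        Settings settings = Settings.builder()
                .putList("index.analysis.analyzer.my_analyzer.filter", "lowercase", "asciifolding")
                .build();

        // Read it back as a List<String>; previously getAsArray(...) returned a String[].
        List<String> filters = settings.getAsList("index.analysis.analyzer.my_analyzer.filter");
        filters.forEach(System.out::println);
    }
}
```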
diff --git a/core/src/main/java/org/elasticsearch/index/analysis/CustomNormalizerProvider.java b/core/src/main/java/org/elasticsearch/index/analysis/CustomNormalizerProvider.java index a375c1e8e3b9d..a0a7859d50cfd 100644 --- a/core/src/main/java/org/elasticsearch/index/analysis/CustomNormalizerProvider.java +++ b/core/src/main/java/org/elasticsearch/index/analysis/CustomNormalizerProvider.java @@ -50,8 +50,8 @@ public void build(final TokenizerFactory keywordTokenizerFactory, final Map charFiltersList = new ArrayList<>(charFilterNames.length); + List charFilterNames = analyzerSettings.getAsList("char_filter"); + List charFiltersList = new ArrayList<>(charFilterNames.size()); for (String charFilterName : charFilterNames) { CharFilterFactory charFilter = charFilters.get(charFilterName); if (charFilter == null) { @@ -66,8 +66,8 @@ public void build(final TokenizerFactory keywordTokenizerFactory, final Map tokenFilterList = new ArrayList<>(tokenFilterNames.length); + List tokenFilterNames = analyzerSettings.getAsList("filter"); + List tokenFilterList = new ArrayList<>(tokenFilterNames.size()); for (String tokenFilterName : tokenFilterNames) { TokenFilterFactory tokenFilter = tokenFilters.get(tokenFilterName); if (tokenFilter == null) { diff --git a/core/src/main/java/org/elasticsearch/index/analysis/CzechAnalyzerProvider.java b/core/src/main/java/org/elasticsearch/index/analysis/CzechAnalyzerProvider.java index 27d20beef4325..12d2349d9bac5 100644 --- a/core/src/main/java/org/elasticsearch/index/analysis/CzechAnalyzerProvider.java +++ b/core/src/main/java/org/elasticsearch/index/analysis/CzechAnalyzerProvider.java @@ -32,7 +32,7 @@ public class CzechAnalyzerProvider extends AbstractIndexAnalyzerProvider characterClasses) { + if (characterClasses == null || characterClasses.isEmpty()) { return null; } CharMatcher.Builder builder = new CharMatcher.Builder(); @@ -83,9 +85,22 @@ static CharMatcher parseTokenChars(String[] characterClasses) { public NGramTokenizerFactory(IndexSettings indexSettings, Environment environment, String name, Settings settings) { super(indexSettings, name, settings); + int maxAllowedNgramDiff = indexSettings.getMaxNgramDiff(); this.minGram = settings.getAsInt("min_gram", NGramTokenizer.DEFAULT_MIN_NGRAM_SIZE); this.maxGram = settings.getAsInt("max_gram", NGramTokenizer.DEFAULT_MAX_NGRAM_SIZE); - this.matcher = parseTokenChars(settings.getAsArray("token_chars")); + int ngramDiff = maxGram - minGram; + if (ngramDiff > maxAllowedNgramDiff) { + if (indexSettings.getIndexVersionCreated().onOrAfter(Version.V_7_0_0_alpha1)) { + throw new IllegalArgumentException( + "The difference between max_gram and min_gram in NGram Tokenizer must be less than or equal to: [" + + maxAllowedNgramDiff + "] but was [" + ngramDiff + "]. 
This limit can be set by changing the [" + + IndexSettings.MAX_NGRAM_DIFF_SETTING.getKey() + "] index level setting."); + } else { + deprecationLogger.deprecated("Deprecated big difference between max_gram and min_gram in NGram Tokenizer," + + "expected difference must be less than or equal to: [" + maxAllowedNgramDiff + "]"); + } + } + this.matcher = parseTokenChars(settings.getAsList("token_chars")); } @Override diff --git a/core/src/main/java/org/elasticsearch/index/analysis/NorwegianAnalyzerProvider.java b/core/src/main/java/org/elasticsearch/index/analysis/NorwegianAnalyzerProvider.java index ca7d898fb47c0..b98d839f1992d 100644 --- a/core/src/main/java/org/elasticsearch/index/analysis/NorwegianAnalyzerProvider.java +++ b/core/src/main/java/org/elasticsearch/index/analysis/NorwegianAnalyzerProvider.java @@ -32,7 +32,7 @@ public class NorwegianAnalyzerProvider extends AbstractIndexAnalyzerProvider maxAllowedShingleDiff) { + if (indexSettings.getIndexVersionCreated().onOrAfter(Version.V_7_0_0_alpha1)) { + throw new IllegalArgumentException( + "In Shingle TokenFilter the difference between max_shingle_size and min_shingle_size (and +1 if outputting unigrams)" + + " must be less than or equal to: [" + maxAllowedShingleDiff + "] but was [" + shingleDiff + "]. This limit" + + " can be set by changing the [" + IndexSettings.MAX_SHINGLE_DIFF_SETTING.getKey() + "] index level setting."); + } else { + deprecationLogger.deprecated("Deprecated big difference between maxShingleSize and minShingleSize in Shingle TokenFilter," + + "expected difference must be less than or equal to: [" + maxAllowedShingleDiff + "]"); + } + } + + Boolean outputUnigramsIfNoShingles = settings.getAsBoolean("output_unigrams_if_no_shingles", false); String tokenSeparator = settings.get("token_separator", ShingleFilter.DEFAULT_TOKEN_SEPARATOR); String fillerToken = settings.get("filler_token", ShingleFilter.DEFAULT_FILLER_TOKEN); factory = new Factory("shingle", minShingleSize, maxShingleSize, outputUnigrams, outputUnigramsIfNoShingles, tokenSeparator, fillerToken); diff --git a/core/src/main/java/org/elasticsearch/index/analysis/SnowballAnalyzerProvider.java b/core/src/main/java/org/elasticsearch/index/analysis/SnowballAnalyzerProvider.java index bd3201e3c8a54..84f1931633100 100644 --- a/core/src/main/java/org/elasticsearch/index/analysis/SnowballAnalyzerProvider.java +++ b/core/src/main/java/org/elasticsearch/index/analysis/SnowballAnalyzerProvider.java @@ -65,7 +65,7 @@ public SnowballAnalyzerProvider(IndexSettings indexSettings, Environment env, St String language = settings.get("language", settings.get("name", "English")); CharArraySet defaultStopwords = DEFAULT_LANGUAGE_STOPWORDS.getOrDefault(language, CharArraySet.EMPTY_SET); - CharArraySet stopWords = Analysis.parseStopWords(env, indexSettings.getIndexVersionCreated(), settings, defaultStopwords); + CharArraySet stopWords = Analysis.parseStopWords(env, settings, defaultStopwords); analyzer = new SnowballAnalyzer(language, stopWords); analyzer.setVersion(version); diff --git a/core/src/main/java/org/elasticsearch/index/analysis/SoraniAnalyzerProvider.java b/core/src/main/java/org/elasticsearch/index/analysis/SoraniAnalyzerProvider.java index d3b9fcd3f5c47..ea5ab7d885a73 100644 --- a/core/src/main/java/org/elasticsearch/index/analysis/SoraniAnalyzerProvider.java +++ b/core/src/main/java/org/elasticsearch/index/analysis/SoraniAnalyzerProvider.java @@ -35,7 +35,7 @@ public class SoraniAnalyzerProvider extends AbstractIndexAnalyzerProvider tokenizerFactoryFactory = - 
analysisRegistry.getTokenizerProvider(tokenizerName, indexSettings); - if (tokenizerFactoryFactory == null) { - throw new IllegalArgumentException("failed to find tokenizer [" + tokenizerName + "] for synonym token filter"); - } - final TokenizerFactory tokenizerFactory = tokenizerFactoryFactory.get(indexSettings, env, tokenizerName, - AnalysisRegistry.getSettingsFromIndexSettings(indexSettings, - AnalysisRegistry.INDEX_ANALYSIS_TOKENIZER + "." + tokenizerName)); - this.tokenizerFactory = tokenizerFactory; - } else { - this.tokenizerFactory = null; - } - + this.expand = settings.getAsBoolean("expand", true); this.format = settings.get("format", ""); } @@ -91,7 +62,7 @@ public TokenStream create(TokenStream tokenStream) { protected Reader getRulesFromSettings(Environment env) { Reader rulesReader; - if (settings.getAsArray("synonyms", null) != null) { + if (settings.getAsList("synonyms", null) != null) { List rulesList = Analysis.getWordList(env, settings, "synonyms"); StringBuilder sb = new StringBuilder(); for (String line : rulesList) { @@ -110,13 +81,6 @@ Factory createPerAnalyzerSynonymFactory(Analyzer analyzerForParseSynonym, Enviro return new Factory("synonym", analyzerForParseSynonym, getRulesFromSettings(env)); } - // for backward compatibility - /** - * @deprecated This filter tokenize synonyms with whatever tokenizer and token filters appear before it in the chain in 6.0. - */ - @Deprecated - protected final TokenizerFactory tokenizerFactory; - public class Factory implements TokenFilterFactory{ private final String name; @@ -126,36 +90,18 @@ public Factory(String name, Analyzer analyzerForParseSynonym, Reader rulesReader this.name = name; - Analyzer analyzer; - if (tokenizerFactory != null) { - analyzer = new Analyzer() { - @Override - protected TokenStreamComponents createComponents(String fieldName) { - Tokenizer tokenizer = tokenizerFactory.create(); - TokenStream stream = ignoreCase ? new LowerCaseFilter(tokenizer) : tokenizer; - return new TokenStreamComponents(tokenizer, stream); - } - }; - } else { - analyzer = analyzerForParseSynonym; - } - try { SynonymMap.Builder parser; if ("wordnet".equalsIgnoreCase(format)) { - parser = new WordnetSynonymParser(true, expand, analyzer); + parser = new WordnetSynonymParser(true, expand, analyzerForParseSynonym); ((WordnetSynonymParser) parser).parse(rulesReader); } else { - parser = new SolrSynonymParser(true, expand, analyzer); + parser = new SolrSynonymParser(true, expand, analyzerForParseSynonym); ((SolrSynonymParser) parser).parse(rulesReader); } synonymMap = parser.build(); } catch (Exception e) { throw new IllegalArgumentException("failed to build synonyms", e); - } finally { - if (tokenizerFactory != null) { - analyzer.close(); - } } } @@ -167,7 +113,7 @@ public String name() { @Override public TokenStream create(TokenStream tokenStream) { // fst is null means no synonyms - return synonymMap.fst == null ? tokenStream : new SynonymFilter(tokenStream, synonymMap, ignoreCase); + return synonymMap.fst == null ? 
tokenStream : new SynonymFilter(tokenStream, synonymMap, false); } } diff --git a/core/src/main/java/org/elasticsearch/index/analysis/ThaiAnalyzerProvider.java b/core/src/main/java/org/elasticsearch/index/analysis/ThaiAnalyzerProvider.java index 119eb81d7482d..e692e952c5d41 100644 --- a/core/src/main/java/org/elasticsearch/index/analysis/ThaiAnalyzerProvider.java +++ b/core/src/main/java/org/elasticsearch/index/analysis/ThaiAnalyzerProvider.java @@ -31,7 +31,7 @@ public class ThaiAnalyzerProvider extends AbstractIndexAnalyzerProvider searcherFactory) throws EngineException { - final Searcher searcher = searcherFactory.apply("get"); + protected final GetResult getFromSearcher(Get get, BiFunction searcherFactory, + SearcherScope scope) throws EngineException { + final Searcher searcher = searcherFactory.apply("get", scope); final DocIdAndVersion docIdAndVersion; try { docIdAndVersion = VersionsAndSeqNoResolver.loadDocIdAndVersion(searcher.reader(), get.uid()); @@ -487,23 +498,40 @@ protected final GetResult getFromSearcher(Get get, Function se } } - public abstract GetResult get(Get get, Function searcherFactory) throws EngineException; + public abstract GetResult get(Get get, BiFunction searcherFactory) throws EngineException; + /** * Returns a new searcher instance. The consumer of this * API is responsible for releasing the returned searcher in a * safe manner, preferably in a try/finally block. * + * @param source the source API or routing that triggers this searcher acquire + * * @see Searcher#close() */ public final Searcher acquireSearcher(String source) throws EngineException { + return acquireSearcher(source, SearcherScope.EXTERNAL); + } + + /** + * Returns a new searcher instance. The consumer of this + * API is responsible for releasing the returned searcher in a + * safe manner, preferably in a try/finally block. + * + * @param source the source API or routing that triggers this searcher acquire + * @param scope the scope of this searcher ie. if the searcher will be used for get or search purposes + * + * @see Searcher#close() + */ + public final Searcher acquireSearcher(String source, SearcherScope scope) throws EngineException { boolean success = false; /* Acquire order here is store -> manager since we need * to make sure that the store is not closed before * the searcher is acquired. */ store.incRef(); try { - final SearcherManager manager = getSearcherManager(); // can never be null + final SearcherManager manager = getSearcherManager(source, scope); // can never be null /* This might throw NPE but that's fine we will run ensureOpen() * in the catch block and throw the right exception */ final IndexSearcher searcher = manager.acquire(); @@ -529,6 +557,10 @@ public final Searcher acquireSearcher(String source) throws EngineException { } } + public enum SearcherScope { + EXTERNAL, INTERNAL + } + /** returns the translog for this engine */ public abstract Translog getTranslog(); @@ -543,7 +575,11 @@ public CommitStats commitStats() { return new CommitStats(getLastCommittedSegmentInfos()); } - /** get the sequence number service */ + /** + * The sequence number service for this engine. + * + * @return the sequence number service + */ public abstract SequenceNumbersService seqNoService(); /** @@ -674,17 +710,16 @@ protected void writerSegmentStats(SegmentsStats stats) { } /** How much heap is used that would be freed by a refresh. Note that this may throw {@link AlreadyClosedException}. 
*/ - public abstract long getIndexBufferRAMBytesUsed(); + public abstract long getIndexBufferRAMBytesUsed(); protected Segment[] getSegmentInfo(SegmentInfos lastCommittedSegmentInfos, boolean verbose) { ensureOpen(); Map segments = new HashMap<>(); - // first, go over and compute the search ones... - Searcher searcher = acquireSearcher("segments"); - try { + try (Searcher searcher = acquireSearcher("segments")){ for (LeafReaderContext reader : searcher.reader().leaves()) { - SegmentCommitInfo info = segmentReader(reader.reader()).getSegmentInfo(); + final SegmentReader segmentReader = segmentReader(reader.reader()); + SegmentCommitInfo info = segmentReader.getSegmentInfo(); assert !segments.containsKey(info.info.name); Segment segment = new Segment(info.info.name); segment.search = true; @@ -697,17 +732,15 @@ protected Segment[] getSegmentInfo(SegmentInfos lastCommittedSegmentInfos, boole } catch (IOException e) { logger.trace((Supplier) () -> new ParameterizedMessage("failed to get size for [{}]", info.info.name), e); } - final SegmentReader segmentReader = segmentReader(reader.reader()); segment.memoryInBytes = segmentReader.ramBytesUsed(); segment.segmentSort = info.info.getIndexSort(); if (verbose) { segment.ramTree = Accountables.namedAccountable("root", segmentReader); } + segment.attributes = info.info.getAttributes(); // TODO: add more fine grained mem stats values to per segment info here segments.put(info.info.name, segment); } - } finally { - searcher.close(); } // now, correlate or add the committed ones... @@ -760,7 +793,7 @@ public final boolean refreshNeeded() { the store is closed so we need to make sure we increment it here */ try { - return getSearcherManager().isSearcherCurrent() == false; + return getSearcherManager("refresh_needed", SearcherScope.EXTERNAL).isSearcherCurrent() == false; } catch (IOException e) { logger.error("failed to access searcher manager", e); failEngine("failed to access searcher manager", e); @@ -1045,7 +1078,7 @@ public Index(Term uid, ParsedDocument doc) { Index(Term uid, ParsedDocument doc, long version) { // use a primary term of 2 to allow tests to reduce it to a valid >0 term - this(uid, doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 2, version, VersionType.INTERNAL, + this(uid, doc, SequenceNumbers.UNASSIGNED_SEQ_NO, 2, version, VersionType.INTERNAL, Origin.PRIMARY, System.nanoTime(), -1, false); } // TEST ONLY @@ -1121,7 +1154,7 @@ public Delete(String type, String id, Term uid, long seqNo, long primaryTerm, lo } public Delete(String type, String id, Term uid) { - this(type, id, uid, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_ANY, VersionType.INTERNAL, Origin.PRIMARY, System.nanoTime()); + this(type, id, uid, SequenceNumbers.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_ANY, VersionType.INTERNAL, Origin.PRIMARY, System.nanoTime()); } public Delete(Delete template, VersionType versionType) { @@ -1298,7 +1331,7 @@ public void release() { } } - protected abstract SearcherManager getSearcherManager(); + protected abstract SearcherManager getSearcherManager(String source, SearcherScope scope); /** * Method to close the engine while the write lock is held. 
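The `Engine` changes above split searcher acquisition into an external and an internal scope (`SearcherScope.EXTERNAL` vs. `INTERNAL`) and thread the `source` label and scope through `acquireSearcher` and `getSearcherManager`. A minimal sketch of how a caller would use the new overload, assuming an `Engine` instance is already available (the `"example"` source label and the helper class are made up for illustration):

```java
import org.apache.lucene.index.IndexReader;
import org.elasticsearch.index.engine.Engine;

final class SearcherScopeSketch {

    // Sketch only: in practice the engine comes from the owning IndexShard.
    static int countDocs(Engine engine) {
        // INTERNAL targets the engine-private reader; the one-argument
        // acquireSearcher(source) overload keeps defaulting to EXTERNAL for
        // user-facing searches.
        try (Engine.Searcher searcher = engine.acquireSearcher("example", Engine.SearcherScope.INTERNAL)) {
            // Engine.Searcher is releasable, so try-with-resources mirrors the
            // pattern used in getSegmentInfo above and releases the store reference.
            IndexReader reader = searcher.reader();
            return reader.numDocs();
        }
    }
}
```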
diff --git a/core/src/main/java/org/elasticsearch/index/engine/EngineConfig.java b/core/src/main/java/org/elasticsearch/index/engine/EngineConfig.java index d7019c77321da..fbc87f2279b3d 100644 --- a/core/src/main/java/org/elasticsearch/index/engine/EngineConfig.java +++ b/core/src/main/java/org/elasticsearch/index/engine/EngineConfig.java @@ -51,6 +51,7 @@ */ public final class EngineConfig { private final ShardId shardId; + private final String allocationId; private final IndexSettings indexSettings; private final ByteSizeValue indexingBufferSize; private volatile boolean enableGcDeletes = true; @@ -70,6 +71,7 @@ public final class EngineConfig { private final List refreshListeners; @Nullable private final Sort indexSort; + private final boolean forceNewHistoryUUID; private final TranslogRecoveryRunner translogRecoveryRunner; /** @@ -109,17 +111,19 @@ public final class EngineConfig { /** * Creates a new {@link org.elasticsearch.index.engine.EngineConfig} */ - public EngineConfig(OpenMode openMode, ShardId shardId, ThreadPool threadPool, + public EngineConfig(OpenMode openMode, ShardId shardId, String allocationId, ThreadPool threadPool, IndexSettings indexSettings, Engine.Warmer warmer, Store store, MergePolicy mergePolicy, Analyzer analyzer, Similarity similarity, CodecService codecService, Engine.EventListener eventListener, QueryCache queryCache, QueryCachingPolicy queryCachingPolicy, - TranslogConfig translogConfig, TimeValue flushMergesAfter, List refreshListeners, - Sort indexSort, TranslogRecoveryRunner translogRecoveryRunner) { + boolean forceNewHistoryUUID, TranslogConfig translogConfig, TimeValue flushMergesAfter, + List refreshListeners, Sort indexSort, + TranslogRecoveryRunner translogRecoveryRunner) { if (openMode == null) { throw new IllegalArgumentException("openMode must not be null"); } this.shardId = shardId; + this.allocationId = allocationId; this.indexSettings = indexSettings; this.threadPool = threadPool; this.warmer = warmer == null ? (a) -> {} : warmer; @@ -139,6 +143,7 @@ public EngineConfig(OpenMode openMode, ShardId shardId, ThreadPool threadPool, this.translogConfig = translogConfig; this.flushMergesAfter = flushMergesAfter; this.openMode = openMode; + this.forceNewHistoryUUID = forceNewHistoryUUID; this.refreshListeners = refreshListeners; this.indexSort = indexSort; this.translogRecoveryRunner = translogRecoveryRunner; @@ -240,6 +245,15 @@ public IndexSettings getIndexSettings() { */ public ShardId getShardId() { return shardId; } + /** + * Returns the allocation ID for the shard. + * + * @return the allocation ID + */ + public String getAllocationId() { + return allocationId; + } + /** * Returns the analyzer as the default analyzer in the engines {@link org.apache.lucene.index.IndexWriter} */ @@ -289,6 +303,15 @@ public OpenMode getOpenMode() { return openMode; } + + /** + * Returns true if a new history uuid must be generated. If false, a new uuid will only be generated if no existing + * one is found. 
+ */ + public boolean getForceNewHistoryUUID() { + return forceNewHistoryUUID; + } + @FunctionalInterface public interface TranslogRecoveryRunner { int run(Engine engine, Translog.Snapshot snapshot) throws IOException; diff --git a/core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java b/core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java index 5ede2ff872c21..cb07cf5e6966a 100644 --- a/core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java +++ b/core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java @@ -48,6 +48,7 @@ import org.elasticsearch.Version; import org.elasticsearch.action.index.IndexRequest; import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.UUIDs; import org.elasticsearch.common.lease.Releasable; import org.elasticsearch.common.lucene.LoggerInfoStream; import org.elasticsearch.common.lucene.Lucene; @@ -92,7 +93,7 @@ import java.util.concurrent.atomic.AtomicLong; import java.util.concurrent.locks.Lock; import java.util.concurrent.locks.ReentrantLock; -import java.util.function.Function; +import java.util.function.BiFunction; import java.util.function.LongSupplier; public class InternalEngine extends Engine { @@ -107,20 +108,18 @@ public class InternalEngine extends Engine { private final IndexWriter indexWriter; - private final SearcherFactory searcherFactory; - private final SearcherManager searcherManager; + private final SearcherManager externalSearcherManager; + private final SearcherManager internalSearcherManager; private final Lock flushLock = new ReentrantLock(); private final ReentrantLock optimizeLock = new ReentrantLock(); // A uid (in the form of BytesRef) to the version map // we use the hashed variant since we iterate over it and check removal and additions on existing keys - private final LiveVersionMap versionMap; + private final LiveVersionMap versionMap = new LiveVersionMap(); private final KeyedLock keyedLock = new KeyedLock<>(); - private final AtomicBoolean versionMapRefreshPending = new AtomicBoolean(); - private volatile SegmentInfos lastCommittedSegmentInfos; private final IndexThrottle throttle; @@ -141,26 +140,40 @@ public class InternalEngine extends Engine { private final AtomicLong maxUnsafeAutoIdTimestamp = new AtomicLong(-1); private final CounterMetric numVersionLookups = new CounterMetric(); private final CounterMetric numIndexVersionsLookups = new CounterMetric(); + /** + * How many bytes we are currently moving to disk, via either IndexWriter.flush or refresh. IndexingMemoryController polls this + * across all shards to decide if throttling is necessary because moving bytes to disk is falling behind vs incoming documents + * being indexed/deleted. + */ + private final AtomicLong writingBytes = new AtomicLong(); + @Nullable + private final String historyUUID; - public InternalEngine(EngineConfig engineConfig) throws EngineException { + public InternalEngine(EngineConfig engineConfig) { + this(engineConfig, InternalEngine::sequenceNumberService); + } + + InternalEngine( + final EngineConfig engineConfig, + final BiFunction seqNoServiceSupplier) { super(engineConfig); openMode = engineConfig.getOpenMode(); if (engineConfig.isAutoGeneratedIDsOptimizationEnabled() == false) { maxUnsafeAutoIdTimestamp.set(Long.MAX_VALUE); } this.uidField = engineConfig.getIndexSettings().isSingleType() ? 
IdFieldMapper.NAME : UidFieldMapper.NAME; - this.versionMap = new LiveVersionMap(); final TranslogDeletionPolicy translogDeletionPolicy = new TranslogDeletionPolicy( - engineConfig.getIndexSettings().getTranslogRetentionSize().getBytes(), - engineConfig.getIndexSettings().getTranslogRetentionAge().getMillis() + engineConfig.getIndexSettings().getTranslogRetentionSize().getBytes(), + engineConfig.getIndexSettings().getTranslogRetentionAge().getMillis() ); this.deletionPolicy = new CombinedDeletionPolicy( - new SnapshotDeletionPolicy(new KeepOnlyLastCommitDeletionPolicy()), translogDeletionPolicy, openMode); + new SnapshotDeletionPolicy(new KeepOnlyLastCommitDeletionPolicy()), translogDeletionPolicy, openMode); store.incRef(); IndexWriter writer = null; Translog translog = null; - SearcherManager manager = null; + SearcherManager externalSearcherManager = null; + SearcherManager internalSearcherManager = null; EngineMergeScheduler scheduler = null; boolean success = false; try { @@ -168,7 +181,6 @@ public InternalEngine(EngineConfig engineConfig) throws EngineException { mergeScheduler = scheduler = new EngineMergeScheduler(engineConfig.getShardId(), engineConfig.getIndexSettings()); throttle = new IndexThrottle(); - this.searcherFactory = new SearchFactory(logger, isClosed, engineConfig); try { final SeqNoStats seqNoStats; switch (openMode) { @@ -179,24 +191,28 @@ public InternalEngine(EngineConfig engineConfig) throws EngineException { break; case OPEN_INDEX_CREATE_TRANSLOG: writer = createWriter(false); - seqNoStats = store.loadSeqNoStats(SequenceNumbersService.UNASSIGNED_SEQ_NO); + seqNoStats = store.loadSeqNoStats(SequenceNumbers.UNASSIGNED_SEQ_NO); break; case CREATE_INDEX_AND_TRANSLOG: writer = createWriter(true); seqNoStats = new SeqNoStats( - SequenceNumbersService.NO_OPS_PERFORMED, - SequenceNumbersService.NO_OPS_PERFORMED, - SequenceNumbersService.UNASSIGNED_SEQ_NO); + SequenceNumbers.NO_OPS_PERFORMED, + SequenceNumbers.NO_OPS_PERFORMED, + SequenceNumbers.UNASSIGNED_SEQ_NO); break; default: throw new IllegalArgumentException(openMode.toString()); } logger.trace("recovered [{}]", seqNoStats); - seqNoService = sequenceNumberService(shardId, engineConfig.getIndexSettings(), seqNoStats); + seqNoService = seqNoServiceSupplier.apply(engineConfig, seqNoStats); updateMaxUnsafeAutoIdTimestampFromWriter(writer); + historyUUID = loadOrGenerateHistoryUUID(writer, engineConfig.getForceNewHistoryUUID()); + Objects.requireNonNull(historyUUID, "history uuid should not be null"); indexWriter = writer; - translog = openTranslog(engineConfig, writer, translogDeletionPolicy, () -> seqNoService().getGlobalCheckpoint()); + translog = openTranslog(engineConfig, writer, translogDeletionPolicy, () -> seqNoService.getGlobalCheckpoint()); assert translog.getGeneration() != null; + this.translog = translog; + updateWriterOnOpen(); } catch (IOException | TranslogCorruptedException e) { throw new EngineCreationFailureException(shardId, "failed to create engine", e); } catch (AssertionError e) { @@ -208,22 +224,21 @@ public InternalEngine(EngineConfig engineConfig) throws EngineException { throw e; } } - - this.translog = translog; - manager = createSearcherManager(); - this.searcherManager = manager; - this.versionMap.setManager(searcherManager); + internalSearcherManager = createSearcherManager(new SearcherFactory(), false); + externalSearcherManager = createSearcherManager(new SearchFactory(logger, isClosed, engineConfig), true); + this.internalSearcherManager = internalSearcherManager; + 
this.externalSearcherManager = externalSearcherManager; + internalSearcherManager.addListener(versionMap); assert pendingTranslogRecovery.get() == false : "translog recovery can't be pending before we set it"; // don't allow commits until we are done with recovering pendingTranslogRecovery.set(openMode == EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG); for (ReferenceManager.RefreshListener listener: engineConfig.getRefreshListeners()) { - searcherManager.addListener(listener); + this.externalSearcherManager.addListener(listener); } success = true; } finally { if (success == false) { - IOUtils.closeWhileHandlingException(writer, translog, manager, scheduler); - versionMap.clear(); + IOUtils.closeWhileHandlingException(writer, translog, externalSearcherManager, internalSearcherManager, scheduler); if (isClosed.get() == false) { // failure we need to dec the store reference store.decRef(); @@ -237,12 +252,12 @@ public InternalEngine(EngineConfig engineConfig) throws EngineException { public void restoreLocalCheckpointFromTranslog() throws IOException { try (ReleasableLock ignored = writeLock.acquire()) { ensureOpen(); - final long localCheckpoint = seqNoService().getLocalCheckpoint(); + final long localCheckpoint = seqNoService.getLocalCheckpoint(); try (Translog.Snapshot snapshot = getTranslog().newSnapshotFromMinSeqNo(localCheckpoint + 1)) { Translog.Operation operation; while ((operation = snapshot.next()) != null) { if (operation.seqNo() > localCheckpoint) { - seqNoService().markSeqNoAsCompleted(operation.seqNo()); + seqNoService.markSeqNoAsCompleted(operation.seqNo()); } } } @@ -253,17 +268,17 @@ public void restoreLocalCheckpointFromTranslog() throws IOException { public int fillSeqNoGaps(long primaryTerm) throws IOException { try (ReleasableLock ignored = writeLock.acquire()) { ensureOpen(); - final long localCheckpoint = seqNoService().getLocalCheckpoint(); - final long maxSeqNo = seqNoService().getMaxSeqNo(); + final long localCheckpoint = seqNoService.getLocalCheckpoint(); + final long maxSeqNo = seqNoService.getMaxSeqNo(); int numNoOpsAdded = 0; for ( long seqNo = localCheckpoint + 1; seqNo <= maxSeqNo; - seqNo = seqNoService().getLocalCheckpoint() + 1 /* the local checkpoint might have advanced so we leap-frog */) { + seqNo = seqNoService.getLocalCheckpoint() + 1 /* the local checkpoint might have advanced so we leap-frog */) { innerNoOp(new NoOp(seqNo, primaryTerm, Operation.Origin.PRIMARY, System.nanoTime(), "filling gaps")); numNoOpsAdded++; - assert seqNo <= seqNoService().getLocalCheckpoint() - : "local checkpoint did not advance; was [" + seqNo + "], now [" + seqNoService().getLocalCheckpoint() + "]"; + assert seqNo <= seqNoService.getLocalCheckpoint() + : "local checkpoint did not advance; was [" + seqNo + "], now [" + seqNoService.getLocalCheckpoint() + "]"; } return numNoOpsAdded; @@ -281,13 +296,13 @@ private void updateMaxUnsafeAutoIdTimestampFromWriter(IndexWriter writer) { maxUnsafeAutoIdTimestamp.set(Math.max(maxUnsafeAutoIdTimestamp.get(), commitMaxUnsafeAutoIdTimestamp)); } - private static SequenceNumbersService sequenceNumberService( - final ShardId shardId, - final IndexSettings indexSettings, + static SequenceNumbersService sequenceNumberService( + final EngineConfig engineConfig, final SeqNoStats seqNoStats) { return new SequenceNumbersService( - shardId, - indexSettings, + engineConfig.getShardId(), + engineConfig.getAllocationId(), + engineConfig.getIndexSettings(), seqNoStats.getMaxSeqNo(), seqNoStats.getLocalCheckpoint(), seqNoStats.getGlobalCheckpoint()); 
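To make the two-searcher-manager setup in the constructor above concrete, the following self-contained sketch uses plain Lucene only (RAMDirectory, SearcherManager, StandardAnalyzer are assumptions of the sketch, not code from this change). Refreshing the internal manager makes newly indexed documents visible to engine-internal readers (version map pruning, realtime get) while the external, search-facing manager keeps exposing its last refreshed view until it is refreshed as well.

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.SearcherManager;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

// Illustration of one writer feeding two independently refreshed searcher managers.
public final class TwoSearcherManagersSketch {
    public static void main(String[] args) throws Exception {
        try (Directory dir = new RAMDirectory();
             IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));
             SearcherManager internal = new SearcherManager(writer, null);   // bookkeeping view
             SearcherManager external = new SearcherManager(writer, null)) { // search-visible view

            Document doc = new Document();
            doc.add(new StringField("_id", "1", Field.Store.NO));
            writer.addDocument(doc);

            internal.maybeRefreshBlocking(); // only the internal view advances

            IndexSearcher internalSearcher = internal.acquire();
            IndexSearcher externalSearcher = external.acquire();
            try {
                // internal sees the new document, external still sees its older view
                System.out.println("internal: " + internalSearcher.getIndexReader().numDocs()); // 1
                System.out.println("external: " + externalSearcher.getIndexReader().numDocs()); // 0
            } finally {
                internal.release(internalSearcher);
                external.release(externalSearcher);
            }
        }
    }
}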
@@ -338,8 +353,15 @@ private void recoverFromTranslogInternal() throws IOException { logger.trace("flushing post recovery from translog. ops recovered [{}]. committed translog id [{}]. current id [{}]", opsRecovered, translogGeneration == null ? null : translogGeneration.translogFileGeneration, translog.currentFileGeneration()); flush(true, true); + refresh("translog_recovery"); } else if (translog.isCurrent(translogGeneration) == false) { commitIndexWriter(indexWriter, translog, lastCommittedSegmentInfos.getUserData().get(Engine.SYNC_COMMIT_ID)); + refreshLastCommittedSegmentInfos(); + } else if (lastCommittedSegmentInfos.getUserData().containsKey(HISTORY_UUID_KEY) == false) { + assert historyUUID != null; + // put the history uuid into the index + commitIndexWriter(indexWriter, translog, lastCommittedSegmentInfos.getUserData().get(Engine.SYNC_COMMIT_ID)); + refreshLastCommittedSegmentInfos(); } // clean up what's not needed translog.trimUnreferencedReaders(); @@ -356,30 +378,49 @@ private Translog openTranslog(EngineConfig engineConfig, IndexWriter writer, Tra throw new IndexFormatTooOldException("translog", "translog has no generation nor a UUID - this might be an index from a previous version consider upgrading to N-1 first"); } } - final Translog translog = new Translog(translogConfig, translogUUID, translogDeletionPolicy, globalCheckpointSupplier); - if (translogUUID == null) { - assert openMode != EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG : "OpenMode must not be " - + EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG; - boolean success = false; - try { - commitIndexWriter(writer, translog, openMode == EngineConfig.OpenMode.OPEN_INDEX_CREATE_TRANSLOG - ? commitDataAsMap(writer).get(SYNC_COMMIT_ID) : null); - success = true; - } finally { - if (success == false) { - IOUtils.closeWhileHandlingException(translog); - } - } + return new Translog(translogConfig, translogUUID, translogDeletionPolicy, globalCheckpointSupplier); + } + + /** If needed, updates the metadata in the index writer to match the potentially new translog and history uuid */ + private void updateWriterOnOpen() throws IOException { + Objects.requireNonNull(historyUUID); + final Map commitUserData = commitDataAsMap(indexWriter); + boolean needsCommit = false; + if (historyUUID.equals(commitUserData.get(HISTORY_UUID_KEY)) == false) { + needsCommit = true; + } else { + assert config().getForceNewHistoryUUID() == false : "config forced a new history uuid but it didn't change"; + assert openMode != EngineConfig.OpenMode.CREATE_INDEX_AND_TRANSLOG : "new index but it already has an existing history uuid"; + } + if (translog.getTranslogUUID().equals(commitUserData.get(Translog.TRANSLOG_UUID_KEY)) == false) { + needsCommit = true; + } else { + assert openMode == EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG : "translog uuid didn't change but open mode is " + openMode; + } + if (needsCommit) { + commitIndexWriter(indexWriter, translog, openMode == EngineConfig.OpenMode.OPEN_INDEX_CREATE_TRANSLOG + ? commitUserData.get(SYNC_COMMIT_ID) : null); } - return translog; } + @Override public Translog getTranslog() { ensureOpen(); return translog; } + @Override + public String getHistoryUUID() { + return historyUUID; + } + + /** Returns how many bytes we are currently moving from indexing buffer to segments on disk */ + @Override + public long getWritingBytes() { + return writingBytes.get(); + } + /** * Reads the current stored translog ID from the IW commit data. 
If the id is not found, recommits the current * translog id into lucene and returns null. @@ -399,14 +440,32 @@ private String loadTranslogUUIDFromCommit(IndexWriter writer) throws IOException } } - private SearcherManager createSearcherManager() throws EngineException { + /** + * Reads the current stored history ID from the IW commit data. Generates a new UUID if not found or if generation is forced. + */ + private String loadOrGenerateHistoryUUID(final IndexWriter writer, boolean forceNew) throws IOException { + String uuid = commitDataAsMap(writer).get(HISTORY_UUID_KEY); + if (uuid == null || forceNew) { + assert + forceNew || // recovery from a local store creates an index that doesn't have yet a history_uuid + openMode == EngineConfig.OpenMode.CREATE_INDEX_AND_TRANSLOG || + config().getIndexSettings().getIndexVersionCreated().before(Version.V_6_0_0_rc1) : + "existing index was created after 6_0_0_rc1 but has no history uuid"; + uuid = UUIDs.randomBase64UUID(); + } + return uuid; + } + + private SearcherManager createSearcherManager(SearcherFactory searcherFactory, boolean readSegmentsInfo) throws EngineException { boolean success = false; SearcherManager searcherManager = null; try { try { final DirectoryReader directoryReader = ElasticsearchDirectoryReader.wrap(DirectoryReader.open(indexWriter), shardId); searcherManager = new SearcherManager(directoryReader, searcherFactory); - lastCommittedSegmentInfos = readLastCommittedSegmentInfos(searcherManager, store); + if (readSegmentsInfo) { + lastCommittedSegmentInfos = readLastCommittedSegmentInfos(searcherManager, store); + } success = true; return searcherManager; } catch (IOException e) { @@ -426,10 +485,11 @@ private SearcherManager createSearcherManager() throws EngineException { } @Override - public GetResult get(Get get, Function searcherFactory) throws EngineException { + public GetResult get(Get get, BiFunction searcherFactory) throws EngineException { assert Objects.equals(get.uid().field(), uidField) : get.uid().field(); try (ReleasableLock ignored = readLock.acquire()) { ensureOpen(); + SearcherScope scope; if (get.realtime()) { VersionValue versionValue = versionMap.getUnderLock(get.uid()); if (versionValue != null) { @@ -440,12 +500,16 @@ public GetResult get(Get get, Function searcherFactory) throws throw new VersionConflictEngineException(shardId, get.type(), get.id(), get.versionType().explainConflictForReads(versionValue.version, get.version())); } - refresh("realtime_get"); + refresh("realtime_get", SearcherScope.INTERNAL); } + scope = SearcherScope.INTERNAL; + } else { + // we expose what has been externally expose in a point in time snapshot via an explicit refresh + scope = SearcherScope.EXTERNAL; } // no version, get the version from the index, we know that we refresh on flush - return getFromSearcher(get, searcherFactory); + return getFromSearcher(get, searcherFactory, scope); } } @@ -463,7 +527,7 @@ enum OpVsLuceneDocStatus { } private OpVsLuceneDocStatus compareOpToLuceneDocBasedOnSeqNo(final Operation op) throws IOException { - assert op.seqNo() != SequenceNumbersService.UNASSIGNED_SEQ_NO : "resolving ops based on seq# but no seqNo is found"; + assert op.seqNo() != SequenceNumbers.UNASSIGNED_SEQ_NO : "resolving ops based on seq# but no seqNo is found"; final OpVsLuceneDocStatus status; final VersionValue versionValue = versionMap.getUnderLock(op.uid()); assert incrementVersionLookup(); @@ -477,7 +541,7 @@ private OpVsLuceneDocStatus compareOpToLuceneDocBasedOnSeqNo(final Operation op) } else { // load from 
index assert incrementIndexVersionLookup(); - try (Searcher searcher = acquireSearcher("load_seq_no")) { + try (Searcher searcher = acquireSearcher("load_seq_no", SearcherScope.INTERNAL)) { DocIdAndSeqNo docAndSeqNo = VersionsAndSeqNoResolver.loadDocIdAndSeqNo(searcher.reader(), op.uid()); if (docAndSeqNo == null) { status = OpVsLuceneDocStatus.LUCENE_DOC_NOT_FOUND; @@ -507,7 +571,7 @@ private VersionValue resolveDocVersion(final Operation op) throws IOException { assert incrementIndexVersionLookup(); // used for asserting in tests final long currentVersion = loadCurrentVersionFromIndex(op.uid()); if (currentVersion != Versions.NOT_FOUND) { - versionValue = new VersionValue(currentVersion, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0L); + versionValue = new VersionValue(currentVersion, SequenceNumbers.UNASSIGNED_SEQ_NO, 0L); } } else if (engineConfig.isEnableGcDeletes() && versionValue.isDelete() && (engineConfig.getThreadPool().relativeTimeInMillis() - ((DeleteVersionValue)versionValue).time) > getGcDeletesInMillis()) { @@ -518,7 +582,7 @@ private VersionValue resolveDocVersion(final Operation op) throws IOException { private OpVsLuceneDocStatus compareOpToLuceneDocBasedOnVersions(final Operation op) throws IOException { - assert op.seqNo() == SequenceNumbersService.UNASSIGNED_SEQ_NO : "op is resolved based on versions but have a seq#"; + assert op.seqNo() == SequenceNumbers.UNASSIGNED_SEQ_NO : "op is resolved based on versions but have a seq#"; assert op.version() >= 0 : "versions should be non-negative. got " + op.version(); final VersionValue versionValue = resolveDocVersion(op); if (versionValue == null) { @@ -570,11 +634,10 @@ private boolean assertVersionType(final Engine.Operation operation) { private boolean assertIncomingSequenceNumber(final Engine.Operation.Origin origin, final long seqNo) { if (engineConfig.getIndexSettings().getIndexVersionCreated().before(Version.V_6_0_0_alpha1) && origin == Operation.Origin.LOCAL_TRANSLOG_RECOVERY) { // legacy support - assert seqNo == SequenceNumbersService.UNASSIGNED_SEQ_NO : "old op recovering but it already has a seq no.;" + + assert seqNo == SequenceNumbers.UNASSIGNED_SEQ_NO : "old op recovering but it already has a seq no.;" + " index version: " + engineConfig.getIndexSettings().getIndexVersionCreated() + ", seqNo: " + seqNo; } else if (origin == Operation.Origin.PRIMARY) { - // sequence number should not be set when operation origin is primary - assert seqNo == SequenceNumbersService.UNASSIGNED_SEQ_NO : "primary ops should never have an assigned seq no.; seqNo: " + seqNo; + assert assertOriginPrimarySequenceNumber(seqNo); } else if (engineConfig.getIndexSettings().getIndexVersionCreated().onOrAfter(Version.V_6_0_0_alpha1)) { // sequence number should be set when operation origin is not primary assert seqNo >= 0 : "recovery or replica ops should have an assigned seq no.; origin: " + origin; @@ -582,6 +645,13 @@ private boolean assertIncomingSequenceNumber(final Engine.Operation.Origin origi return true; } + protected boolean assertOriginPrimarySequenceNumber(final long seqNo) { + // sequence number should not be set when operation origin is primary + assert seqNo == SequenceNumbers.UNASSIGNED_SEQ_NO + : "primary operations must never have an assigned sequence number but was [" + seqNo + "]"; + return true; + } + private boolean assertSequenceNumberBeforeIndexing(final Engine.Operation.Origin origin, final long seqNo) { if (engineConfig.getIndexSettings().getIndexVersionCreated().onOrAfter(Version.V_6_0_0_alpha1) || origin == 
Operation.Origin.PRIMARY) { @@ -591,6 +661,20 @@ private boolean assertSequenceNumberBeforeIndexing(final Engine.Operation.Origin return true; } + private long generateSeqNoForOperation(final Operation operation) { + assert operation.origin() == Operation.Origin.PRIMARY; + return doGenerateSeqNoForOperation(operation); + } + + /** + * Generate the sequence number for the specified operation. + * + * @param operation the operation + * @return the sequence number + */ + protected long doGenerateSeqNoForOperation(final Operation operation) { + return seqNoService.generateSeqNo(); + } @Override public IndexResult index(Index index) throws IOException { @@ -651,7 +735,7 @@ public IndexResult index(Index index) throws IOException { final Translog.Location location; if (indexResult.hasFailure() == false) { location = translog.add(new Translog.Index(index, indexResult)); - } else if (indexResult.getSeqNo() != SequenceNumbersService.UNASSIGNED_SEQ_NO) { + } else if (indexResult.getSeqNo() != SequenceNumbers.UNASSIGNED_SEQ_NO) { // if we have document failure, record it as a no-op in the translog with the generated seq_no location = translog.add(new Translog.NoOp(indexResult.getSeqNo(), index.primaryTerm(), indexResult.getFailure().getMessage())); } else { @@ -659,8 +743,8 @@ public IndexResult index(Index index) throws IOException { } indexResult.setTranslogLocation(location); } - if (indexResult.getSeqNo() != SequenceNumbersService.UNASSIGNED_SEQ_NO) { - seqNoService().markSeqNoAsCompleted(indexResult.getSeqNo()); + if (indexResult.getSeqNo() != SequenceNumbers.UNASSIGNED_SEQ_NO) { + seqNoService.markSeqNoAsCompleted(indexResult.getSeqNo()); } indexResult.setTook(System.nanoTime() - index.startTime()); indexResult.freeze(); @@ -680,19 +764,17 @@ private IndexingStrategy planIndexingAsNonPrimary(Index index) throws IOExceptio final IndexingStrategy plan; if (canOptimizeAddDocument(index) && mayHaveBeenIndexedBefore(index) == false) { // no need to deal with out of order delivery - we never saw this one - assert index.version() == 1L : - "can optimize on replicas but incoming version is [" + index.version() + "]"; + assert index.version() == 1L : "can optimize on replicas but incoming version is [" + index.version() + "]"; plan = IndexingStrategy.optimizedAppendOnly(index.seqNo()); } else { // drop out of order operations assert index.versionType().versionTypeForReplicationAndRecovery() == index.versionType() : - "resolving out of order delivery based on versioning but version type isn't fit for it. got [" - + index.versionType() + "]"; + "resolving out of order delivery based on versioning but version type isn't fit for it. got [" + index.versionType() + "]"; // unlike the primary, replicas don't really care to about creation status of documents // this allows to ignore the case where a document was found in the live version maps in // a delete state and return false for the created flag in favor of code simplicity final OpVsLuceneDocStatus opVsLucene; - if (index.seqNo() == SequenceNumbersService.UNASSIGNED_SEQ_NO) { + if (index.seqNo() == SequenceNumbers.UNASSIGNED_SEQ_NO) { // This can happen if the primary is still on an old node and send traffic without seq# or we recover from translog // created by an old version. 
assert config().getIndexSettings().getIndexVersionCreated().before(Version.V_6_0_0_alpha1) : @@ -722,15 +804,14 @@ assert config().getIndexSettings().getIndexVersionCreated().before(Version.V_6_0 } private IndexingStrategy planIndexingAsPrimary(Index index) throws IOException { - assert index.origin() == Operation.Origin.PRIMARY : - "planing as primary but origin isn't. got " + index.origin(); + assert index.origin() == Operation.Origin.PRIMARY : "planing as primary but origin isn't. got " + index.origin(); final IndexingStrategy plan; // resolve an external operation into an internal one which is safe to replay if (canOptimizeAddDocument(index)) { if (mayHaveBeenIndexedBefore(index)) { - plan = IndexingStrategy.overrideExistingAsIfNotThere(seqNoService().generateSeqNo(), 1L); + plan = IndexingStrategy.overrideExistingAsIfNotThere(generateSeqNoForOperation(index), 1L); } else { - plan = IndexingStrategy.optimizedAppendOnly(seqNoService().generateSeqNo()); + plan = IndexingStrategy.optimizedAppendOnly(generateSeqNoForOperation(index)); } } else { // resolves incoming version @@ -751,7 +832,7 @@ private IndexingStrategy planIndexingAsPrimary(Index index) throws IOException { plan = IndexingStrategy.skipDueToVersionConflict(e, currentNotFoundOrDeleted, currentVersion); } else { plan = IndexingStrategy.processNormally(currentNotFoundOrDeleted, - seqNoService().generateSeqNo(), + generateSeqNoForOperation(index), index.versionType().updateVersion(currentVersion, index.version()) ); } @@ -873,7 +954,7 @@ static IndexingStrategy skipDueToVersionConflict( VersionConflictEngineException e, boolean currentNotFoundOrDeleted, long currentVersion) { final IndexResult result = new IndexResult(e, currentVersion); return new IndexingStrategy( - currentNotFoundOrDeleted, false, false, SequenceNumbersService.UNASSIGNED_SEQ_NO, Versions.NOT_FOUND, result); + currentNotFoundOrDeleted, false, false, SequenceNumbers.UNASSIGNED_SEQ_NO, Versions.NOT_FOUND, result); } static IndexingStrategy processNormally(boolean currentNotFoundOrDeleted, @@ -904,7 +985,7 @@ private boolean assertDocDoesNotExist(final Index index, final boolean allowDele throw new AssertionError("doc [" + index.type() + "][" + index.id() + "] exists in version map (version " + versionValue + ")"); } } else { - try (Searcher searcher = acquireSearcher("assert doc doesn't exist")) { + try (Searcher searcher = acquireSearcher("assert doc doesn't exist", SearcherScope.INTERNAL)) { final long docsWithId = searcher.searcher().count(new TermQuery(index.uid())); if (docsWithId > 0) { throw new AssertionError("doc [" + index.type() + "][" + index.id() + "] exists [" + docsWithId + "] times in index"); @@ -951,7 +1032,7 @@ public DeleteResult delete(Delete delete) throws IOException { final Translog.Location location; if (deleteResult.hasFailure() == false) { location = translog.add(new Translog.Delete(delete, deleteResult)); - } else if (deleteResult.getSeqNo() != SequenceNumbersService.UNASSIGNED_SEQ_NO) { + } else if (deleteResult.getSeqNo() != SequenceNumbers.UNASSIGNED_SEQ_NO) { location = translog.add(new Translog.NoOp(deleteResult.getSeqNo(), delete.primaryTerm(), deleteResult.getFailure().getMessage())); } else { @@ -959,8 +1040,8 @@ public DeleteResult delete(Delete delete) throws IOException { } deleteResult.setTranslogLocation(location); } - if (deleteResult.getSeqNo() != SequenceNumbersService.UNASSIGNED_SEQ_NO) { - seqNoService().markSeqNoAsCompleted(deleteResult.getSeqNo()); + if (deleteResult.getSeqNo() != SequenceNumbers.UNASSIGNED_SEQ_NO) 
{ + seqNoService.markSeqNoAsCompleted(deleteResult.getSeqNo()); } deleteResult.setTook(System.nanoTime() - delete.startTime()); deleteResult.freeze(); @@ -977,8 +1058,7 @@ public DeleteResult delete(Delete delete) throws IOException { } private DeletionStrategy planDeletionAsNonPrimary(Delete delete) throws IOException { - assert delete.origin() != Operation.Origin.PRIMARY : "planing as primary but got " - + delete.origin(); + assert delete.origin() != Operation.Origin.PRIMARY : "planing as primary but got " + delete.origin(); // drop out of order operations assert delete.versionType().versionTypeForReplicationAndRecovery() == delete.versionType() : "resolving out of order delivery based on versioning but version type isn't fit for it. got [" @@ -987,7 +1067,7 @@ private DeletionStrategy planDeletionAsNonPrimary(Delete delete) throws IOExcept // this allows to ignore the case where a document was found in the live version maps in // a delete state and return true for the found flag in favor of code simplicity final OpVsLuceneDocStatus opVsLucene; - if (delete.seqNo() == SequenceNumbersService.UNASSIGNED_SEQ_NO) { + if (delete.seqNo() == SequenceNumbers.UNASSIGNED_SEQ_NO) { assert config().getIndexSettings().getIndexVersionCreated().before(Version.V_6_0_0_alpha1) : "index is newly created but op has no sequence numbers. op: " + delete; opVsLucene = compareOpToLuceneDocBasedOnVersions(delete); @@ -1016,8 +1096,7 @@ assert config().getIndexSettings().getIndexVersionCreated().before(Version.V_6_0 } private DeletionStrategy planDeletionAsPrimary(Delete delete) throws IOException { - assert delete.origin() == Operation.Origin.PRIMARY : "planing as primary but got " - + delete.origin(); + assert delete.origin() == Operation.Origin.PRIMARY : "planing as primary but got " + delete.origin(); // resolve operation from external to internal final VersionValue versionValue = resolveDocVersion(delete); assert incrementVersionLookup(); @@ -1035,9 +1114,10 @@ private DeletionStrategy planDeletionAsPrimary(Delete delete) throws IOException final VersionConflictEngineException e = new VersionConflictEngineException(shardId, delete, currentVersion, currentlyDeleted); plan = DeletionStrategy.skipDueToVersionConflict(e, currentVersion, currentlyDeleted); } else { - plan = DeletionStrategy.processNormally(currentlyDeleted, - seqNoService().generateSeqNo(), - delete.versionType().updateVersion(currentVersion, delete.version())); + plan = DeletionStrategy.processNormally( + currentlyDeleted, + generateSeqNoForOperation(delete), + delete.versionType().updateVersion(currentVersion, delete.version())); } return plan; } @@ -1091,7 +1171,7 @@ private DeletionStrategy(boolean deleteFromLucene, boolean currentlyDeleted, static DeletionStrategy skipDueToVersionConflict( VersionConflictEngineException e, long currentVersion, boolean currentlyDeleted) { - final long unassignedSeqNo = SequenceNumbersService.UNASSIGNED_SEQ_NO; + final long unassignedSeqNo = SequenceNumbers.UNASSIGNED_SEQ_NO; final DeleteResult deleteResult = new DeleteResult(e, currentVersion, unassignedSeqNo, currentlyDeleted == false); return new DeletionStrategy(false, currentlyDeleted, unassignedSeqNo, Versions.NOT_FOUND, deleteResult); } @@ -1127,7 +1207,7 @@ public NoOpResult noOp(final NoOp noOp) { private NoOpResult innerNoOp(final NoOp noOp) throws IOException { assert readLock.isHeldByCurrentThread() || writeLock.isHeldByCurrentThread(); - assert noOp.seqNo() > SequenceNumbersService.NO_OPS_PERFORMED; + assert noOp.seqNo() > 
SequenceNumbers.NO_OPS_PERFORMED; final long seqNo = noOp.seqNo(); try { final NoOpResult noOpResult = new NoOpResult(noOp.seqNo()); @@ -1137,76 +1217,67 @@ private NoOpResult innerNoOp(final NoOp noOp) throws IOException { noOpResult.freeze(); return noOpResult; } finally { - if (seqNo != SequenceNumbersService.UNASSIGNED_SEQ_NO) { - seqNoService().markSeqNoAsCompleted(seqNo); + if (seqNo != SequenceNumbers.UNASSIGNED_SEQ_NO) { + seqNoService.markSeqNoAsCompleted(seqNo); } } } @Override public void refresh(String source) throws EngineException { + refresh(source, SearcherScope.EXTERNAL); + } + + final void refresh(String source, SearcherScope scope) throws EngineException { + long bytes = 0; // we obtain a read lock here, since we don't want a flush to happen while we are refreshing // since it flushes the index as well (though, in terms of concurrency, we are allowed to do it) try (ReleasableLock lock = readLock.acquire()) { ensureOpen(); - searcherManager.maybeRefreshBlocking(); + bytes = indexWriter.ramBytesUsed(); + switch (scope) { + case EXTERNAL: + // even though we maintain 2 managers we really do the heavy-lifting only once. + // the second refresh will only do the extra work we have to do for warming caches etc. + writingBytes.addAndGet(bytes); + externalSearcherManager.maybeRefreshBlocking(); + // the break here is intentional we never refresh both internal / external together + break; + case INTERNAL: + final long versionMapBytes = versionMap.ramBytesUsedForRefresh(); + bytes += versionMapBytes; + writingBytes.addAndGet(bytes); + internalSearcherManager.maybeRefreshBlocking(); + break; + default: + throw new IllegalArgumentException("unknown scope: " + scope); + } } catch (AlreadyClosedException e) { failOnTragicEvent(e); throw e; } catch (Exception e) { try { - failEngine("refresh failed", e); + failEngine("refresh failed source[" + source + "]", e); } catch (Exception inner) { e.addSuppressed(inner); } throw new RefreshFailedEngineException(shardId, e); + } finally { + writingBytes.addAndGet(-bytes); } // TODO: maybe we should just put a scheduled job in threadPool? // We check for pruning in each delete request, but we also prune here e.g. in case a delete burst comes in and then no more deletes // for a long time: maybePruneDeletedTombstones(); - versionMapRefreshPending.set(false); mergeScheduler.refreshConfig(); } @Override public void writeIndexingBuffer() throws EngineException { - // we obtain a read lock here, since we don't want a flush to happen while we are writing // since it flushes the index as well (though, in terms of concurrency, we are allowed to do it) - try (ReleasableLock lock = readLock.acquire()) { - ensureOpen(); - - // TODO: it's not great that we secretly tie searcher visibility to "freeing up heap" here... really we should keep two - // searcher managers, one for searching which is only refreshed by the schedule the user requested (refresh_interval, or invoking - // refresh API), and another for version map interactions. See #15768. 
- final long versionMapBytes = versionMap.ramBytesUsedForRefresh(); - final long indexingBufferBytes = indexWriter.ramBytesUsed(); - - final boolean useRefresh = versionMapRefreshPending.get() || (indexingBufferBytes / 4 < versionMapBytes); - if (useRefresh) { - // The version map is using > 25% of the indexing buffer, so we do a refresh so the version map also clears - logger.debug("use refresh to write indexing buffer (heap size=[{}]), to also clear version map (heap size=[{}])", - new ByteSizeValue(indexingBufferBytes), new ByteSizeValue(versionMapBytes)); - refresh("write indexing buffer"); - } else { - // Most of our heap is used by the indexing buffer, so we do a cheaper (just writes segments, doesn't open a new searcher) IW.flush: - logger.debug("use IndexWriter.flush to write indexing buffer (heap size=[{}]) since version map is small (heap size=[{}])", - new ByteSizeValue(indexingBufferBytes), new ByteSizeValue(versionMapBytes)); - indexWriter.flush(); - } - } catch (AlreadyClosedException e) { - failOnTragicEvent(e); - throw e; - } catch (Exception e) { - try { - failEngine("writeIndexingBuffer failed", e); - } catch (Exception inner) { - e.addSuppressed(inner); - } - throw new RefreshFailedEngineException(shardId, e); - } + refresh("write indexing buffer", SearcherScope.INTERNAL); } @Override @@ -1260,10 +1331,11 @@ final boolean tryRenewSyncCommit() { maybeFailEngine("renew sync commit", ex); throw new EngineException(shardId, "failed to renew sync commit", ex); } - if (renewed) { // refresh outside of the write lock - refresh("renew sync commit"); + if (renewed) { + // refresh outside of the write lock + // we have to refresh internal searcher here to ensure we release unreferenced segments. + refresh("renew sync commit", SearcherScope.INTERNAL); } - return renewed; } @@ -1305,35 +1377,13 @@ public CommitId flush(boolean force, boolean waitIfOngoing) throws EngineExcepti commitIndexWriter(indexWriter, translog, null); logger.trace("finished commit for flush"); // we need to refresh in order to clear older version values - refresh("version_table_flush"); + refresh("version_table_flush", SearcherScope.INTERNAL); translog.trimUnreferencedReaders(); } catch (Exception e) { throw new FlushFailedEngineException(shardId, e); } - /* - * we have to inc-ref the store here since if the engine is closed by a tragic event - * we don't acquire the write lock and wait until we have exclusive access. This might also - * dec the store reference which can essentially close the store and unless we can inc the reference - * we can't use it. - */ - store.incRef(); - try { - // reread the last committed segment infos - lastCommittedSegmentInfos = store.readLastCommittedSegmentsInfo(); - } catch (Exception e) { - if (isClosed.get() == false) { - try { - logger.warn("failed to read latest segment infos on flush", e); - } catch (Exception inner) { - e.addSuppressed(inner); - } - if (Lucene.isCorruptionException(e)) { - throw new FlushFailedEngineException(shardId, e); - } - } - } finally { - store.decRef(); - } + refreshLastCommittedSegmentInfos(); + } newCommitId = lastCommittedSegmentInfos.getId(); } catch (FlushFailedEngineException ex) { @@ -1351,6 +1401,33 @@ public CommitId flush(boolean force, boolean waitIfOngoing) throws EngineExcepti return new CommitId(newCommitId); } + private void refreshLastCommittedSegmentInfos() { + /* + * we have to inc-ref the store here since if the engine is closed by a tragic event + * we don't acquire the write lock and wait until we have exclusive access. 
This might also + * dec the store reference which can essentially close the store and unless we can inc the reference + * we can't use it. + */ + store.incRef(); + try { + // reread the last committed segment infos + lastCommittedSegmentInfos = store.readLastCommittedSegmentsInfo(); + } catch (Exception e) { + if (isClosed.get() == false) { + try { + logger.warn("failed to read latest segment infos on flush", e); + } catch (Exception inner) { + e.addSuppressed(inner); + } + if (Lucene.isCorruptionException(e)) { + throw new FlushFailedEngineException(shardId, e); + } + } + } finally { + store.decRef(); + } + } + @Override public void rollTranslogGeneration() throws EngineException { try (ReleasableLock ignored = readLock.acquire()) { @@ -1502,23 +1579,15 @@ public IndexCommitRef acquireIndexCommit(final boolean flushFirst) throws Engine } } - @SuppressWarnings("finally") private boolean failOnTragicEvent(AlreadyClosedException ex) { final boolean engineFailed; // if we are already closed due to some tragic exception // we need to fail the engine. it might have already been failed before // but we are double-checking it's failed and closed if (indexWriter.isOpen() == false && indexWriter.getTragicException() != null) { - if (indexWriter.getTragicException() instanceof Error) { - try { - logger.error("tragic event in index writer", ex); - } finally { - throw (Error) indexWriter.getTragicException(); - } - } else { - failEngine("already closed by tragic event on the index writer", (Exception) indexWriter.getTragicException()); - engineFailed = true; - } + maybeDie("tragic event in index writer", indexWriter.getTragicException()); + failEngine("already closed by tragic event on the index writer", (Exception) indexWriter.getTragicException()); + engineFailed = true; } else if (translog.isOpen() == false && translog.getTragicException() != null) { failEngine("already closed by tragic event on the translog", translog.getTragicException()); engineFailed = true; @@ -1604,8 +1673,11 @@ protected final void closeNoLock(String reason, CountDownLatch closedLatch) { assert rwl.isWriteLockedByCurrentThread() || failEngineLock.isHeldByCurrentThread() : "Either the write lock must be held or the engine must be currently be failing itself"; try { this.versionMap.clear(); + if (internalSearcherManager != null) { + internalSearcherManager.removeListener(versionMap); + } try { - IOUtils.close(searcherManager); + IOUtils.close(externalSearcherManager, internalSearcherManager); } catch (Exception e) { logger.warn("Failed to close SearcherManager", e); } @@ -1637,8 +1709,15 @@ protected final void closeNoLock(String reason, CountDownLatch closedLatch) { } @Override - protected SearcherManager getSearcherManager() { - return searcherManager; + protected SearcherManager getSearcherManager(String source, SearcherScope scope) { + switch (scope) { + case INTERNAL: + return internalSearcherManager; + case EXTERNAL: + return externalSearcherManager; + default: + throw new IllegalStateException("unknown scope: " + scope); + } } private Releasable acquireLock(BytesRef uid) { @@ -1651,7 +1730,7 @@ private Releasable acquireLock(Term uid) { private long loadCurrentVersionFromIndex(Term uid) throws IOException { assert incrementIndexVersionLookup(); - try (Searcher searcher = acquireSearcher("load_version")) { + try (Searcher searcher = acquireSearcher("load_version", SearcherScope.INTERNAL)) { return VersionsAndSeqNoResolver.loadVersion(searcher.reader(), uid); } } @@ -1829,7 +1908,6 @@ protected void doRun() throws 
Exception { @Override protected void handleMergeException(final Directory dir, final Throwable exc) { - logger.error("failed to merge", exc); engineConfig.getThreadPool().generic().execute(new AbstractRunnable() { @Override public void onFailure(Exception e) { @@ -1838,13 +1916,39 @@ public void onFailure(Exception e) { @Override protected void doRun() throws Exception { - MergePolicy.MergeException e = new MergePolicy.MergeException(exc, dir); - failEngine("merge failed", e); + /* + * We do this on another thread rather than the merge thread that we are initially called on so that we have complete + * confidence that the call stack does not contain catch statements that would cause the error that might be thrown + * here from being caught and never reaching the uncaught exception handler. + */ + maybeDie("fatal error while merging", exc); + logger.error("failed to merge", exc); + failEngine("merge failed", new MergePolicy.MergeException(exc, dir)); } }); } } + /** + * If the specified throwable is a fatal error, this throwable will be thrown. Callers should ensure that there are no catch statements + * that would catch an error in the stack as the fatal error here should go uncaught and be handled by the uncaught exception handler + * that we install during bootstrap. If the specified throwable is indeed a fatal error, the specified message will attempt to be logged + * before throwing the fatal error. If the specified throwable is not a fatal error, this method is a no-op. + * + * @param maybeMessage the message to maybe log + * @param maybeFatal the throwable that is maybe fatal + */ + @SuppressWarnings("finally") + private void maybeDie(final String maybeMessage, final Throwable maybeFatal) { + if (maybeFatal instanceof Error) { + try { + logger.error(maybeMessage, maybeFatal); + } finally { + throw (Error) maybeFatal; + } + } + } + /** * Commits the specified index writer. * @@ -1856,7 +1960,7 @@ protected void doRun() throws Exception { protected void commitIndexWriter(final IndexWriter writer, final Translog translog, @Nullable final String syncId) throws IOException { ensureCanFlush(); try { - final long localCheckpoint = seqNoService().getLocalCheckpoint(); + final long localCheckpoint = seqNoService.getLocalCheckpoint(); final Translog.TranslogGeneration translogGeneration = translog.getMinGenerationForSeqNo(localCheckpoint + 1); final String translogFileGeneration = Long.toString(translogGeneration.translogFileGeneration); final String translogUUID = translogGeneration.translogUUID; @@ -1872,15 +1976,16 @@ protected void commitIndexWriter(final IndexWriter writer, final Translog transl * {@link IndexWriter#commit()} call flushes all documents, we defer computation of the maximum sequence number to the time * of invocation of the commit data iterator (which occurs after all documents have been flushed to Lucene). 
*/ - final Map commitData = new HashMap<>(5); + final Map commitData = new HashMap<>(6); commitData.put(Translog.TRANSLOG_GENERATION_KEY, translogFileGeneration); commitData.put(Translog.TRANSLOG_UUID_KEY, translogUUID); commitData.put(SequenceNumbers.LOCAL_CHECKPOINT_KEY, localCheckpointValue); if (syncId != null) { commitData.put(Engine.SYNC_COMMIT_ID, syncId); } - commitData.put(SequenceNumbers.MAX_SEQ_NO, Long.toString(seqNoService().getMaxSeqNo())); + commitData.put(SequenceNumbers.MAX_SEQ_NO, Long.toString(seqNoService.getMaxSeqNo())); commitData.put(MAX_UNSAFE_AUTO_ID_TIMESTAMP_COMMIT_ID, Long.toString(maxUnsafeAutoIdTimestamp.get())); + commitData.put(HISTORY_UUID_KEY, historyUUID); logger.trace("committing writer with commit data [{}]", commitData); return commitData.entrySet().iterator(); }); @@ -1942,8 +2047,7 @@ public MergeStats getMergeStats() { return mergeScheduler.stats(); } - @Override - public SequenceNumbersService seqNoService() { + public final SequenceNumbersService seqNoService() { return seqNoService; } @@ -1990,7 +2094,7 @@ public boolean isRecovering() { * Gets the commit data from {@link IndexWriter} as a map. */ private static Map commitDataAsMap(final IndexWriter indexWriter) { - Map commitData = new HashMap<>(5); + Map commitData = new HashMap<>(6); for (Map.Entry entry : indexWriter.getLiveCommitData()) { commitData.put(entry.getKey(), entry.getValue()); } diff --git a/core/src/main/java/org/elasticsearch/index/engine/LiveVersionMap.java b/core/src/main/java/org/elasticsearch/index/engine/LiveVersionMap.java index 9ee4bd43c2129..7396c3143c651 100644 --- a/core/src/main/java/org/elasticsearch/index/engine/LiveVersionMap.java +++ b/core/src/main/java/org/elasticsearch/index/engine/LiveVersionMap.java @@ -59,8 +59,6 @@ private static class Maps { private volatile Maps maps = new Maps(); - private ReferenceManager mgr; - /** Bytes consumed for each BytesRef UID: * In this base value, we account for the {@link BytesRef} object itself as * well as the header of the byte[] array it holds, and some lost bytes due @@ -98,21 +96,6 @@ private static class Maps { /** Tracks bytes used by tombstones (deletes) */ final AtomicLong ramBytesUsedTombstones = new AtomicLong(); - /** Sync'd because we replace old mgr. */ - synchronized void setManager(ReferenceManager newMgr) { - if (mgr != null) { - mgr.removeListener(this); - } - mgr = newMgr; - - // In case InternalEngine closes & opens a new IndexWriter/SearcherManager, all deletes are made visible, so we clear old and - // current here. This is safe because caller holds writeLock here (so no concurrent adds/deletes can be happeninge): - maps = new Maps(); - - // So we are notified when reopen starts and finishes - mgr.addListener(this); - } - @Override public void beforeRefresh() throws IOException { // Start sending all updates after this point to the new @@ -249,11 +232,6 @@ synchronized void clear() { // and this will lead to an assert trip. 
Presumably it's fine if our ramBytesUsedTombstones is non-zero after clear since the index // is being closed: //ramBytesUsedTombstones.set(0); - - if (mgr != null) { - mgr.removeListener(this); - mgr = null; - } } @Override diff --git a/core/src/main/java/org/elasticsearch/index/engine/Segment.java b/core/src/main/java/org/elasticsearch/index/engine/Segment.java index b9df309970462..fa15e7dc09e8b 100644 --- a/core/src/main/java/org/elasticsearch/index/engine/Segment.java +++ b/core/src/main/java/org/elasticsearch/index/engine/Segment.java @@ -39,6 +39,7 @@ import java.util.ArrayList; import java.util.Collection; import java.util.List; +import java.util.Map; public class Segment implements Streamable { @@ -55,6 +56,7 @@ public class Segment implements Streamable { public long memoryInBytes; public Sort segmentSort; public Accountable ramTree = null; + public Map attributes; Segment() { } @@ -128,6 +130,14 @@ public Sort getSegmentSort() { return segmentSort; } + /** + * Return segment attributes. + * @see org.apache.lucene.index.SegmentInfo#getAttributes() + */ + public Map getAttributes() { + return attributes; + } + @Override public boolean equals(Object o) { if (this == o) return true; @@ -173,6 +183,11 @@ public void readFrom(StreamInput in) throws IOException { } else { segmentSort = null; } + if (in.getVersion().onOrAfter(Version.V_6_1_0) && in.readBoolean()) { + attributes = in.readMap(StreamInput::readString, StreamInput::readString); + } else { + attributes = null; + } } @Override @@ -196,6 +211,13 @@ public void writeTo(StreamOutput out) throws IOException { if (out.getVersion().onOrAfter(Version.V_6_0_0_alpha1)) { writeSegmentSort(out, segmentSort); } + if (out.getVersion().onOrAfter(Version.V_6_1_0)) { + boolean hasAttributes = attributes != null; + out.writeBoolean(hasAttributes); + if (hasAttributes) { + out.writeMap(attributes, StreamOutput::writeString, StreamOutput::writeString); + } + } } Sort readSegmentSort(StreamInput in) throws IOException { @@ -329,6 +351,7 @@ public String toString() { ", mergeId='" + mergeId + '\'' + ", memoryInBytes=" + memoryInBytes + (segmentSort != null ? 
", sort=" + segmentSort : "") + + ", attributes=" + attributes + '}'; } } diff --git a/core/src/main/java/org/elasticsearch/index/engine/SegmentsStats.java b/core/src/main/java/org/elasticsearch/index/engine/SegmentsStats.java index ed8e150cd6c18..c99b9dacd31b7 100644 --- a/core/src/main/java/org/elasticsearch/index/engine/SegmentsStats.java +++ b/core/src/main/java/org/elasticsearch/index/engine/SegmentsStats.java @@ -26,13 +26,14 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; import org.elasticsearch.common.unit.ByteSizeValue; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; import java.util.Iterator; -public class SegmentsStats implements Streamable, ToXContent { +public class SegmentsStats implements Streamable, ToXContentFragment { private long count; private long memoryInBytes; diff --git a/core/src/main/java/org/elasticsearch/index/engine/VersionValue.java b/core/src/main/java/org/elasticsearch/index/engine/VersionValue.java index 1c2fa3005207d..f3d9618838f7b 100644 --- a/core/src/main/java/org/elasticsearch/index/engine/VersionValue.java +++ b/core/src/main/java/org/elasticsearch/index/engine/VersionValue.java @@ -34,7 +34,7 @@ class VersionValue implements Accountable { /** the seq number of the operation that last changed the associated uuid */ final long seqNo; - /** the the term of the operation that last changed the associated uuid */ + /** the term of the operation that last changed the associated uuid */ final long term; VersionValue(long version, long seqNo, long term) { diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/IndexFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/IndexFieldData.java index 0b63dfb8df80a..6519e8297bf4b 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/IndexFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/IndexFieldData.java @@ -36,6 +36,7 @@ import org.apache.lucene.util.BitSet; import org.apache.lucene.util.BytesRef; import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.lucene.search.Queries; import org.elasticsearch.index.IndexComponent; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested; @@ -99,6 +100,24 @@ public static MemoryStorageFormat fromString(String string) { // on another node (we don't have the custom source them...) abstract class XFieldComparatorSource extends FieldComparatorSource { + protected final MultiValueMode sortMode; + protected final Object missingValue; + protected final Nested nested; + + public XFieldComparatorSource(Object missingValue, MultiValueMode sortMode, Nested nested) { + this.sortMode = sortMode; + this.missingValue = missingValue; + this.nested = nested; + } + + public MultiValueMode sortMode() { + return this.sortMode; + } + + public Nested nested() { + return this.nested; + } + /** * Simple wrapper class around a filter that matches parent documents * and a filter that matches child documents. 
For every root document R, @@ -116,6 +135,14 @@ public Nested(BitSetProducer rootFilter, Query innerQuery) { this.innerQuery = innerQuery; } + public Query getInnerQuery() { + return innerQuery; + } + + public BitSetProducer getRootFilter() { + return rootFilter; + } + /** * Get a {@link BitDocIdSet} that matches the root documents. */ diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/IndexOrdinalsFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/IndexOrdinalsFieldData.java index 2e714fc80a12b..8a9fabc9e1354 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/IndexOrdinalsFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/IndexOrdinalsFieldData.java @@ -21,7 +21,7 @@ import org.apache.lucene.index.DirectoryReader; import org.apache.lucene.index.IndexReader; -import org.apache.lucene.index.MultiDocValues; +import org.apache.lucene.index.OrdinalMap; /** @@ -43,8 +43,8 @@ public interface IndexOrdinalsFieldData extends IndexFieldData.Global indexFieldData; - private final MultiValueMode sortMode; - private final Object missingValue; - private final Nested nested; public BytesRefFieldComparatorSource(IndexFieldData indexFieldData, Object missingValue, MultiValueMode sortMode, Nested nested) { + super(missingValue, sortMode, nested); this.indexFieldData = indexFieldData; - this.sortMode = sortMode; - this.missingValue = missingValue; - this.nested = nested; } @Override diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/DoubleValuesComparatorSource.java b/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/DoubleValuesComparatorSource.java index 390a5493e273d..338d903a390b3 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/DoubleValuesComparatorSource.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/DoubleValuesComparatorSource.java @@ -41,15 +41,10 @@ public class DoubleValuesComparatorSource extends IndexFieldData.XFieldComparatorSource { private final IndexNumericFieldData indexFieldData; - private final Object missingValue; - private final MultiValueMode sortMode; - private final Nested nested; public DoubleValuesComparatorSource(IndexNumericFieldData indexFieldData, @Nullable Object missingValue, MultiValueMode sortMode, Nested nested) { + super(missingValue, sortMode, nested); this.indexFieldData = indexFieldData; - this.missingValue = missingValue; - this.sortMode = sortMode; - this.nested = nested; } @Override diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/FloatValuesComparatorSource.java b/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/FloatValuesComparatorSource.java index 0546a5e5e8ea7..a61a715547a24 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/FloatValuesComparatorSource.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/FloatValuesComparatorSource.java @@ -39,15 +39,10 @@ public class FloatValuesComparatorSource extends IndexFieldData.XFieldComparatorSource { private final IndexNumericFieldData indexFieldData; - private final Object missingValue; - private final MultiValueMode sortMode; - private final Nested nested; public FloatValuesComparatorSource(IndexNumericFieldData indexFieldData, @Nullable Object missingValue, MultiValueMode sortMode, Nested nested) { + super(missingValue, sortMode, nested); this.indexFieldData = indexFieldData; - this.missingValue = missingValue; - 
this.sortMode = sortMode; - this.nested = nested; } @Override diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/LongValuesComparatorSource.java b/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/LongValuesComparatorSource.java index d652673308594..aa206fe1baebe 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/LongValuesComparatorSource.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/LongValuesComparatorSource.java @@ -38,15 +38,10 @@ public class LongValuesComparatorSource extends IndexFieldData.XFieldComparatorSource { private final IndexNumericFieldData indexFieldData; - private final Object missingValue; - private final MultiValueMode sortMode; - private final Nested nested; public LongValuesComparatorSource(IndexNumericFieldData indexFieldData, @Nullable Object missingValue, MultiValueMode sortMode, Nested nested) { + super(missingValue, sortMode, nested); this.indexFieldData = indexFieldData; - this.missingValue = missingValue; - this.sortMode = sortMode; - this.nested = nested; } @Override diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/GlobalOrdinalMapping.java b/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/GlobalOrdinalMapping.java index 3eda112674dfc..3b6b206c212d6 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/GlobalOrdinalMapping.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/GlobalOrdinalMapping.java @@ -19,7 +19,7 @@ package org.elasticsearch.index.fielddata.ordinals; -import org.apache.lucene.index.MultiDocValues.OrdinalMap; +import org.apache.lucene.index.OrdinalMap; import org.apache.lucene.index.SortedSetDocValues; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.LongValues; diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/GlobalOrdinalsBuilder.java b/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/GlobalOrdinalsBuilder.java index 0a8ab3ccd4b86..646c7adb40408 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/GlobalOrdinalsBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/GlobalOrdinalsBuilder.java @@ -22,7 +22,7 @@ import org.apache.logging.log4j.Logger; import org.apache.lucene.index.DocValues; import org.apache.lucene.index.IndexReader; -import org.apache.lucene.index.MultiDocValues.OrdinalMap; +import org.apache.lucene.index.OrdinalMap; import org.apache.lucene.index.SortedSetDocValues; import org.apache.lucene.util.Accountable; import org.apache.lucene.util.packed.PackedInts; diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/GlobalOrdinalsIndexFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/GlobalOrdinalsIndexFieldData.java index 795e4b992023f..a869e5a40d4ed 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/GlobalOrdinalsIndexFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/ordinals/GlobalOrdinalsIndexFieldData.java @@ -20,7 +20,7 @@ import org.apache.lucene.index.DirectoryReader; import org.apache.lucene.index.LeafReaderContext; -import org.apache.lucene.index.MultiDocValues; +import org.apache.lucene.index.OrdinalMap; import org.apache.lucene.index.SortedSetDocValues; import org.apache.lucene.search.SortField; import org.apache.lucene.util.Accountable; @@ -47,13 +47,13 @@ public class GlobalOrdinalsIndexFieldData extends 
AbstractIndexComponent impleme private final String fieldName; private final long memorySizeInBytes; - private final MultiDocValues.OrdinalMap ordinalMap; + private final OrdinalMap ordinalMap; private final Atomic[] atomicReaders; private final Function> scriptFunction; protected GlobalOrdinalsIndexFieldData(IndexSettings indexSettings, String fieldName, AtomicOrdinalsFieldData[] segmentAfd, - MultiDocValues.OrdinalMap ordinalMap, long memorySizeInBytes, Function> scriptFunction) { super(indexSettings); this.fieldName = fieldName; @@ -113,17 +113,17 @@ public AtomicOrdinalsFieldData load(LeafReaderContext context) { } @Override - public MultiDocValues.OrdinalMap getOrdinalMap() { + public OrdinalMap getOrdinalMap() { return ordinalMap; } private final class Atomic extends AbstractAtomicOrdinalsFieldData { private final AtomicOrdinalsFieldData afd; - private final MultiDocValues.OrdinalMap ordinalMap; + private final OrdinalMap ordinalMap; private final int segmentIndex; - private Atomic(AtomicOrdinalsFieldData afd, MultiDocValues.OrdinalMap ordinalMap, int segmentIndex) { + private Atomic(AtomicOrdinalsFieldData afd, OrdinalMap ordinalMap, int segmentIndex) { super(scriptFunction); this.afd = afd; this.ordinalMap = ordinalMap; diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractIndexOrdinalsFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractIndexOrdinalsFieldData.java index 1dbd082f93bc8..d89c6d64d4915 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractIndexOrdinalsFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractIndexOrdinalsFieldData.java @@ -22,7 +22,7 @@ import org.apache.lucene.index.FilteredTermsEnum; import org.apache.lucene.index.LeafReader; import org.apache.lucene.index.LeafReaderContext; -import org.apache.lucene.index.MultiDocValues; +import org.apache.lucene.index.OrdinalMap; import org.apache.lucene.index.Terms; import org.apache.lucene.index.TermsEnum; import org.apache.lucene.util.BytesRef; @@ -53,7 +53,7 @@ protected AbstractIndexOrdinalsFieldData(IndexSettings indexSettings, String fie } @Override - public MultiDocValues.OrdinalMap getOrdinalMap() { + public OrdinalMap getOrdinalMap() { return null; } diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/plain/SortedSetDVOrdinalsIndexFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/plain/SortedSetDVOrdinalsIndexFieldData.java index 9e6e2e994c9d6..0834d2479f072 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/plain/SortedSetDVOrdinalsIndexFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/plain/SortedSetDVOrdinalsIndexFieldData.java @@ -21,7 +21,7 @@ import org.apache.lucene.index.DirectoryReader; import org.apache.lucene.index.LeafReaderContext; -import org.apache.lucene.index.MultiDocValues; +import org.apache.lucene.index.OrdinalMap; import org.apache.lucene.index.SortedSetDocValues; import org.apache.lucene.search.SortField; import org.apache.lucene.search.SortedSetSelector; @@ -128,7 +128,7 @@ public IndexOrdinalsFieldData localGlobalDirect(DirectoryReader indexReader) thr } @Override - public MultiDocValues.OrdinalMap getOrdinalMap() { + public OrdinalMap getOrdinalMap() { return null; } } diff --git a/core/src/main/java/org/elasticsearch/index/get/GetResult.java b/core/src/main/java/org/elasticsearch/index/get/GetResult.java index a47bb8be89e37..75e283b4191b1 100644 --- 
a/core/src/main/java/org/elasticsearch/index/get/GetResult.java +++ b/core/src/main/java/org/elasticsearch/index/get/GetResult.java @@ -82,7 +82,7 @@ public GetResult(String index, String type, String id, long version, boolean exi } /** - * Does the document exists. + * Does the document exist. */ public boolean isExists() { return exists; diff --git a/core/src/main/java/org/elasticsearch/index/mapper/AllFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/AllFieldMapper.java deleted file mode 100644 index a13cbca17053d..0000000000000 --- a/core/src/main/java/org/elasticsearch/index/mapper/AllFieldMapper.java +++ /dev/null @@ -1,319 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.index.mapper; - -import org.apache.lucene.index.IndexOptions; -import org.apache.lucene.index.IndexableField; -import org.apache.lucene.index.Term; -import org.apache.lucene.search.Query; -import org.elasticsearch.Version; -import org.elasticsearch.common.io.stream.BytesStreamOutput; -import org.elasticsearch.common.lucene.all.AllEntries; -import org.elasticsearch.common.lucene.all.AllField; -import org.elasticsearch.common.lucene.all.AllTermQuery; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.index.analysis.NamedAnalyzer; -import org.elasticsearch.index.query.QueryShardContext; -import org.elasticsearch.index.similarity.SimilarityService; - -import java.io.IOException; -import java.util.Collections; -import java.util.Iterator; -import java.util.List; -import java.util.Map; - -import static org.elasticsearch.common.xcontent.support.XContentMapValues.nodeMapValue; -import static org.elasticsearch.index.mapper.TypeParsers.parseTextField; - -public class AllFieldMapper extends MetadataFieldMapper { - - public static final String NAME = "_all"; - - public static final String CONTENT_TYPE = "_all"; - - public static class Defaults { - public static final String NAME = AllFieldMapper.NAME; - public static final String INDEX_NAME = AllFieldMapper.NAME; - public static final EnabledAttributeMapper ENABLED = EnabledAttributeMapper.UNSET_DISABLED; - public static final int POSITION_INCREMENT_GAP = 100; - - public static final MappedFieldType FIELD_TYPE = new AllFieldType(); - - static { - FIELD_TYPE.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS); - FIELD_TYPE.setTokenized(true); - FIELD_TYPE.setName(NAME); - FIELD_TYPE.freeze(); - } - } - - public static class Builder extends MetadataFieldMapper.Builder { - - private EnabledAttributeMapper enabled = Defaults.ENABLED; - - public Builder(MappedFieldType existing) { - super(Defaults.NAME, existing == null ? 
Defaults.FIELD_TYPE : existing, Defaults.FIELD_TYPE); - builder = this; - } - - public Builder enabled(EnabledAttributeMapper enabled) { - this.enabled = enabled; - return this; - } - - @Override - public AllFieldMapper build(BuilderContext context) { - // In case the mapping overrides these - // TODO: this should be an exception! it doesnt make sense to not index this field - if (fieldType.indexOptions() == IndexOptions.NONE) { - fieldType.setIndexOptions(Defaults.FIELD_TYPE.indexOptions()); - } else { - fieldType.setIndexAnalyzer(new NamedAnalyzer(fieldType.indexAnalyzer(), - Defaults.POSITION_INCREMENT_GAP)); - fieldType.setSearchAnalyzer(new NamedAnalyzer(fieldType.searchAnalyzer(), - Defaults.POSITION_INCREMENT_GAP)); - fieldType.setSearchQuoteAnalyzer(new NamedAnalyzer(fieldType.searchQuoteAnalyzer(), - Defaults.POSITION_INCREMENT_GAP)); - } - fieldType.setTokenized(true); - - return new AllFieldMapper(fieldType, enabled, context.indexSettings()); - } - } - - public static class TypeParser implements MetadataFieldMapper.TypeParser { - @Override - public MetadataFieldMapper.Builder parse(String name, Map node, - ParserContext parserContext) throws MapperParsingException { - if (node.isEmpty() == false && - parserContext.indexVersionCreated().onOrAfter(Version.V_6_0_0_alpha1)) { - throw new IllegalArgumentException("[_all] is disabled in 6.0. As a replacement, you can use an [copy_to] " + - "on mapping fields to create your own catch all field."); - } - Builder builder = new Builder(parserContext.mapperService().fullName(NAME)); - builder.fieldType().setIndexAnalyzer(parserContext.getIndexAnalyzers().getDefaultIndexAnalyzer()); - builder.fieldType().setSearchAnalyzer(parserContext.getIndexAnalyzers().getDefaultSearchAnalyzer()); - builder.fieldType().setSearchQuoteAnalyzer(parserContext.getIndexAnalyzers().getDefaultSearchQuoteAnalyzer()); - - // parseField below will happily parse the doc_values setting, but it is then never passed to - // the AllFieldMapper ctor in the builder since it is not valid. Here we validate - // the doc values settings (old and new) are rejected - Object docValues = node.get("doc_values"); - if (docValues != null && TypeParsers.nodeBooleanValueLenient(name, "doc_values", docValues)) { - throw new MapperParsingException("Field [" + name + - "] is always tokenized and cannot have doc values"); - } - // convoluted way of specifying doc values - Object fielddata = node.get("fielddata"); - if (fielddata != null) { - Map fielddataMap = nodeMapValue(fielddata, "fielddata"); - Object format = fielddataMap.get("format"); - if ("doc_values".equals(format)) { - throw new MapperParsingException("Field [" + name + - "] is always tokenized and cannot have doc values"); - } - } - - parseTextField(builder, builder.name, node, parserContext); - boolean enabledSet = false; - for (Iterator> iterator = node.entrySet().iterator(); iterator.hasNext();) { - Map.Entry entry = iterator.next(); - String fieldName = entry.getKey(); - Object fieldNode = entry.getValue(); - if (fieldName.equals("enabled")) { - boolean enabled = TypeParsers.nodeBooleanValueLenient(name, "enabled", fieldNode); - builder.enabled(enabled ? 
EnabledAttributeMapper.ENABLED : EnabledAttributeMapper.DISABLED); - enabledSet = true; - iterator.remove(); - } - } - if (enabledSet == false && parserContext.indexVersionCreated().before(Version.V_6_0_0_alpha1)) { - // So there is no "enabled" field, however, the index was created prior to 6.0, - // and therefore the default for this particular index should be "true" for - // enabling _all - builder.enabled(EnabledAttributeMapper.ENABLED); - } - return builder; - } - - @Override - public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { - final Settings indexSettings = context.mapperService().getIndexSettings().getSettings(); - if (fieldType != null) { - if (context.indexVersionCreated().before(Version.V_6_0_0_alpha1)) { - // The index was created prior to 6.0, and therefore the default for this - // particular index should be "true" for enabling _all - return new AllFieldMapper(fieldType.clone(), EnabledAttributeMapper.ENABLED, indexSettings); - } else { - return new AllFieldMapper(indexSettings, fieldType); - } - } else { - return parse(NAME, Collections.emptyMap(), context) - .build(new BuilderContext(indexSettings, new ContentPath(1))); - } - } - } - - static final class AllFieldType extends StringFieldType { - - AllFieldType() { - } - - protected AllFieldType(AllFieldType ref) { - super(ref); - } - - @Override - public MappedFieldType clone() { - return new AllFieldType(this); - } - - @Override - public String typeName() { - return CONTENT_TYPE; - } - - @Override - public Query queryStringTermQuery(Term term) { - return new AllTermQuery(term); - } - - @Override - public Query termQuery(Object value, QueryShardContext context) { - return queryStringTermQuery(new Term(name(), indexedValueForSearch(value))); - } - } - - private EnabledAttributeMapper enabledState; - - private AllFieldMapper(Settings indexSettings, MappedFieldType existing) { - this(existing.clone(), Defaults.ENABLED, indexSettings); - } - - private AllFieldMapper(MappedFieldType fieldType, EnabledAttributeMapper enabled, Settings indexSettings) { - super(NAME, fieldType, Defaults.FIELD_TYPE, indexSettings); - this.enabledState = enabled; - } - - public boolean enabled() { - return this.enabledState.enabled; - } - - @Override - public void preParse(ParseContext context) throws IOException { - } - - @Override - public void postParse(ParseContext context) throws IOException { - super.parse(context); - } - - @Override - public Mapper parse(ParseContext context) throws IOException { - // we parse in post parse - return null; - } - - @Override - protected void parseCreateField(ParseContext context, List fields) throws IOException { - if (!enabledState.enabled) { - return; - } - for (AllEntries.Entry entry : context.allEntries().entries()) { - fields.add(new AllField(fieldType().name(), entry.value(), entry.boost(), fieldType())); - } - } - - @Override - protected String contentType() { - return CONTENT_TYPE; - } - - @Override - public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - boolean includeDefaults = params.paramAsBoolean("include_defaults", false); - if (!includeDefaults) { - // simulate the generation to make sure we don't add unnecessary content if all is default - // if all are defaults, no need to write it at all - generating is twice is ok though - BytesStreamOutput bytesStreamOutput = new BytesStreamOutput(0); - XContentBuilder b = new XContentBuilder(builder.contentType().xContent(), bytesStreamOutput); - b.startObject().flush(); - long 
pos = bytesStreamOutput.position(); - innerToXContent(b, false); - b.flush(); - if (pos == bytesStreamOutput.position()) { - return builder; - } - } - builder.startObject(CONTENT_TYPE); - innerToXContent(builder, includeDefaults); - builder.endObject(); - return builder; - } - - private void innerToXContent(XContentBuilder builder, boolean includeDefaults) throws IOException { - if (includeDefaults || enabledState != Defaults.ENABLED) { - builder.field("enabled", enabledState.enabled); - } - if (enabled() == false) { - return; - } - if (includeDefaults || fieldType().stored() != Defaults.FIELD_TYPE.stored()) { - builder.field("store", fieldType().stored()); - } - if (includeDefaults || fieldType().storeTermVectors() != Defaults.FIELD_TYPE.storeTermVectors()) { - builder.field("store_term_vectors", fieldType().storeTermVectors()); - } - if (includeDefaults || fieldType().storeTermVectorOffsets() != Defaults.FIELD_TYPE.storeTermVectorOffsets()) { - builder.field("store_term_vector_offsets", fieldType().storeTermVectorOffsets()); - } - if (includeDefaults || - fieldType().storeTermVectorPositions() != Defaults.FIELD_TYPE.storeTermVectorPositions()) { - builder.field("store_term_vector_positions", fieldType().storeTermVectorPositions()); - } - if (includeDefaults || - fieldType().storeTermVectorPayloads() != Defaults.FIELD_TYPE.storeTermVectorPayloads()) { - builder.field("store_term_vector_payloads", fieldType().storeTermVectorPayloads()); - } - if (includeDefaults || fieldType().omitNorms() != Defaults.FIELD_TYPE.omitNorms()) { - builder.field("norms", !fieldType().omitNorms()); - } - - doXContentAnalyzers(builder, includeDefaults); - - if (fieldType().similarity() != null) { - builder.field("similarity", fieldType().similarity().name()); - } else if (includeDefaults) { - builder.field("similarity", SimilarityService.DEFAULT_SIMILARITY); - } - } - - @Override - protected void doMerge(Mapper mergeWith, boolean updateAllTypes) { - if (((AllFieldMapper)mergeWith).enabled() != this.enabled() && - ((AllFieldMapper)mergeWith).enabledState != Defaults.ENABLED) { - throw new IllegalArgumentException("mapper [" + fieldType().name() + - "] enabled is " + this.enabled() + " now encountering "+ ((AllFieldMapper)mergeWith).enabled()); - } - super.doMerge(mergeWith, updateAllTypes); - } - -} diff --git a/core/src/main/java/org/elasticsearch/index/mapper/BinaryFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/BinaryFieldMapper.java index 024e0439ac5d8..1838b60050e4c 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/BinaryFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/BinaryFieldMapper.java @@ -20,10 +20,14 @@ package org.elasticsearch.index.mapper; import com.carrotsearch.hppc.ObjectArrayList; + import org.apache.lucene.document.Field; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; +import org.apache.lucene.index.Term; +import org.apache.lucene.search.DocValuesFieldExistsQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermQuery; import org.apache.lucene.store.ByteArrayDataOutput; import org.apache.lucene.util.BytesRef; import org.elasticsearch.ElasticsearchException; @@ -126,6 +130,15 @@ public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) { return new BytesBinaryDVIndexFieldData.Builder(); } + @Override + public Query existsQuery(QueryShardContext context) { + if (hasDocValues()) { + return new DocValuesFieldExistsQuery(name()); + } else { + return 
new TermQuery(new Term(FieldNamesFieldMapper.NAME, name())); + } + } + @Override public Query termQuery(Object value, QueryShardContext context) { throw new QueryShardException(context, "Binary fields do not support searching"); @@ -165,6 +178,11 @@ protected void parseCreateField(ParseContext context, List field } else { field.add(value); } + } else { + // Only add an entry to the field names field if the field is stored + // but has no doc values so exists query will work on a field with + // no doc values + createFieldNamesField(context, fields); } } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/BinaryRangeUtil.java b/core/src/main/java/org/elasticsearch/index/mapper/BinaryRangeUtil.java deleted file mode 100644 index e2b618bc222e7..0000000000000 --- a/core/src/main/java/org/elasticsearch/index/mapper/BinaryRangeUtil.java +++ /dev/null @@ -1,146 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ -package org.elasticsearch.index.mapper; - -import org.apache.lucene.store.ByteArrayDataOutput; -import org.apache.lucene.util.BytesRef; - -import java.io.IOException; -import java.util.ArrayList; -import java.util.List; -import java.util.Set; - -enum BinaryRangeUtil { - - ; - - static BytesRef encodeLongRanges(Set ranges) throws IOException { - List sortedRanges = new ArrayList<>(ranges); - sortedRanges.sort((r1, r2) -> { - long r1From = ((Number) r1.from).longValue(); - long r2From = ((Number) r2.from).longValue(); - int cmp = Long.compare(r1From, r2From); - if (cmp != 0) { - return cmp; - } else { - long r1To = ((Number) r1.from).longValue(); - long r2To = ((Number) r2.from).longValue(); - return Long.compare(r1To, r2To); - } - }); - - final byte[] encoded = new byte[5 + ((5 + 9) * 2) * sortedRanges.size()]; - ByteArrayDataOutput out = new ByteArrayDataOutput(encoded); - out.writeVInt(sortedRanges.size()); - for (RangeFieldMapper.Range range : sortedRanges) { - byte[] encodedFrom = encode(((Number) range.from).longValue()); - out.writeVInt(encodedFrom.length); - out.writeBytes(encodedFrom, encodedFrom.length); - byte[] encodedTo = encode(((Number) range.to).longValue()); - out.writeVInt(encodedTo.length); - out.writeBytes(encodedTo, encodedTo.length); - } - return new BytesRef(encoded, 0, out.getPosition()); - } - - static BytesRef encodeDoubleRanges(Set ranges) throws IOException { - List sortedRanges = new ArrayList<>(ranges); - sortedRanges.sort((r1, r2) -> { - double r1From = ((Number) r1.from).doubleValue(); - double r2From = ((Number) r2.from).doubleValue(); - int cmp = Double.compare(r1From, r2From); - if (cmp != 0) { - return cmp; - } else { - double r1To = ((Number) r1.from).doubleValue(); - double r2To = ((Number) r2.from).doubleValue(); - return Double.compare(r1To, r2To); - } - }); - - final byte[] encoded = new byte[5 
+ ((5 + 9) * 2) * sortedRanges.size()]; - ByteArrayDataOutput out = new ByteArrayDataOutput(encoded); - out.writeVInt(sortedRanges.size()); - for (RangeFieldMapper.Range range : sortedRanges) { - byte[] encodedFrom = BinaryRangeUtil.encode(((Number) range.from).doubleValue()); - out.writeVInt(encodedFrom.length); - out.writeBytes(encodedFrom, encodedFrom.length); - byte[] encodedTo = BinaryRangeUtil.encode(((Number) range.to).doubleValue()); - out.writeVInt(encodedTo.length); - out.writeBytes(encodedTo, encodedTo.length); - } - return new BytesRef(encoded, 0, out.getPosition()); - } - - /** - * Encodes the specified number of type long in a variable-length byte format. - * The byte format preserves ordering, which means the returned byte array can be used for comparing as is. - */ - static byte[] encode(long number) { - int sign = 1; // means positive - if (number < 0) { - number = -1 - number; - sign = 0; - } - return encode(number, sign); - } - - /** - * Encodes the specified number of type double in a variable-length byte format. - * The byte format preserves ordering, which means the returned byte array can be used for comparing as is. - */ - static byte[] encode(double number) { - long l; - int sign; - if (number < 0.0) { - l = Double.doubleToRawLongBits(-0d - number); - sign = 0; - } else { - l = Double.doubleToRawLongBits(number); - sign = 1; // means positive - } - return encode(l, sign); - } - - private static byte[] encode(long l, int sign) { - assert l >= 0; - int bits = 64 - Long.numberOfLeadingZeros(l); - - int numBytes = (bits + 7) / 8; // between 0 and 8 - byte[] encoded = new byte[1 + numBytes]; - // encode the sign first to make sure positive values compare greater than negative values - // and then the number of bytes, to make sure that large values compare greater than low values - if (sign > 0) { - encoded[0] = (byte) ((sign << 4) | numBytes); - } else { - encoded[0] = (byte) ((sign << 4) | (8 - numBytes)); - } - for (int b = 0; b < numBytes; ++b) { - if (sign == 1) { - encoded[encoded.length - 1 - b] = (byte) (l >>> (8 * b)); - } else if (sign == 0) { - encoded[encoded.length - 1 - b] = (byte) (0xFF - ((l >>> (8 * b)) & 0xFF)); - } else { - throw new AssertionError(); - } - } - return encoded; - } - -} diff --git a/core/src/main/java/org/elasticsearch/index/mapper/BooleanFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/BooleanFieldMapper.java index 6fe8a37a46ee9..45cd9e17ad119 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/BooleanFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/BooleanFieldMapper.java @@ -23,7 +23,10 @@ import org.apache.lucene.document.SortedNumericDocValuesField; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; +import org.apache.lucene.index.Term; +import org.apache.lucene.search.DocValuesFieldExistsQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermQuery; import org.apache.lucene.search.TermRangeQuery; import org.apache.lucene.util.BytesRef; import org.elasticsearch.Version; @@ -136,6 +139,15 @@ public String typeName() { return CONTENT_TYPE; } + @Override + public Query existsQuery(QueryShardContext context) { + if (hasDocValues()) { + return new DocValuesFieldExistsQuery(name()); + } else { + return new TermQuery(new Term(FieldNamesFieldMapper.NAME, name())); + } + } + @Override public Boolean nullValue() { return (Boolean)super.nullValue(); @@ -253,6 +265,8 @@ protected void parseCreateField(ParseContext context, 
List field } if (fieldType().hasDocValues()) { fields.add(new SortedNumericDocValuesField(fieldType().name(), value ? 1 : 0)); + } else { + createFieldNamesField(context, fields); } } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/CompletionFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/CompletionFieldMapper.java index 1ab84eda6393d..1c92150676c19 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/CompletionFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/CompletionFieldMapper.java @@ -21,6 +21,8 @@ import org.apache.lucene.codecs.PostingsFormat; import org.apache.lucene.index.IndexableField; import org.apache.lucene.index.Term; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermQuery; import org.apache.lucene.search.suggest.document.Completion50PostingsFormat; import org.apache.lucene.search.suggest.document.CompletionAnalyzer; import org.apache.lucene.search.suggest.document.CompletionQuery; @@ -40,11 +42,13 @@ import org.elasticsearch.common.xcontent.XContentParser.Token; import org.elasticsearch.index.analysis.AnalyzerScope; import org.elasticsearch.index.analysis.NamedAnalyzer; +import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.search.suggest.completion.CompletionSuggester; import org.elasticsearch.search.suggest.completion.context.ContextMapping; import org.elasticsearch.search.suggest.completion.context.ContextMappings; import java.io.IOException; +import java.util.ArrayList; import java.util.Collections; import java.util.HashMap; import java.util.HashSet; @@ -257,6 +261,11 @@ public static synchronized PostingsFormat postingsFormat() { return postingsFormat; } + @Override + public Query existsQuery(QueryShardContext context) { + return new TermQuery(new Term(FieldNamesFieldMapper.NAME, name())); + } + /** * Completion prefix query */ @@ -456,6 +465,11 @@ public Mapper parse(ParseContext context) throws IOException { context.doc().add(new SuggestField(fieldType().name(), input, metaData.weight)); } } + List fields = new ArrayList<>(1); + createFieldNamesField(context, fields); + for (IndexableField field : fields) { + context.doc().add(field); + } multiFields.parse(this, context); return null; } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/CustomDocValuesField.java b/core/src/main/java/org/elasticsearch/index/mapper/CustomDocValuesField.java index 60fbfc0698cda..f77d480e72298 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/CustomDocValuesField.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/CustomDocValuesField.java @@ -28,8 +28,8 @@ import java.io.Reader; -// used for binary and geo fields -abstract class CustomDocValuesField implements IndexableField { +// used for binary, geo and range fields +public abstract class CustomDocValuesField implements IndexableField { public static final FieldType TYPE = new FieldType(); static { @@ -39,7 +39,7 @@ abstract class CustomDocValuesField implements IndexableField { private final String name; - CustomDocValuesField(String name) { + protected CustomDocValuesField(String name) { this.name = name; } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/DateFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/DateFieldMapper.java index 10c2df7eef06c..36e7a73aa9a5c 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/DateFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/DateFieldMapper.java @@ -26,12 +26,17 @@ 
import org.apache.lucene.index.IndexReader; import org.apache.lucene.index.IndexableField; import org.apache.lucene.index.PointValues; +import org.apache.lucene.index.Term; import org.apache.lucene.search.BoostQuery; +import org.apache.lucene.search.DocValuesFieldExistsQuery; import org.apache.lucene.search.IndexOrDocValuesQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermQuery; import org.apache.lucene.util.BytesRef; +import org.elasticsearch.Version; import org.elasticsearch.common.Explicit; import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.geo.ShapeRelation; import org.elasticsearch.common.joda.DateMathParser; import org.elasticsearch.common.joda.FormatDateTimeFormatter; import org.elasticsearch.common.joda.Joda; @@ -127,7 +132,7 @@ protected void setupFieldType(BuilderContext context) { public DateFieldMapper build(BuilderContext context) { setupFieldType(context); return new DateFieldMapper(name, fieldType, defaultFieldType, ignoreMalformed(context), - includeInAll, context.indexSettings(), multiFieldsBuilder.build(this, context), copyTo); + context.indexSettings(), multiFieldsBuilder.build(this, context), copyTo); } } @@ -154,7 +159,13 @@ public Mapper.Builder parse(String name, Map node, ParserCo builder.ignoreMalformed(TypeParsers.nodeBooleanValue(name, "ignore_malformed", propNode, parserContext)); iterator.remove(); } else if (propName.equals("locale")) { - builder.locale(LocaleUtils.parse(propNode.toString())); + Locale locale; + if (parserContext.indexVersionCreated().onOrAfter(Version.V_6_0_0_beta2)) { + locale = LocaleUtils.parse(propNode.toString()); + } else { + locale = LocaleUtils.parse5x(propNode.toString()); + } + builder.locale(locale); iterator.remove(); } else if (propName.equals("format")) { builder.dateTimeFormatter(parseDateTimeFormatter(propNode)); @@ -237,9 +248,18 @@ long parse(String value) { return dateTimeFormatter().parser().parseMillis(value); } + @Override + public Query existsQuery(QueryShardContext context) { + if (hasDocValues()) { + return new DocValuesFieldExistsQuery(name()); + } else { + return new TermQuery(new Term(FieldNamesFieldMapper.NAME, name())); + } + } + @Override public Query termQuery(Object value, @Nullable QueryShardContext context) { - Query query = innerRangeQuery(value, value, true, true, null, null, context); + Query query = rangeQuery(value, value, true, true, ShapeRelation.INTERSECTS, null, null, context); if (boost() != 1f) { query = new BoostQuery(query, boost()); } @@ -247,20 +267,13 @@ public Query termQuery(Object value, @Nullable QueryShardContext context) { } @Override - public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, QueryShardContext context) { - failIfNotIndexed(); - return rangeQuery(lowerTerm, upperTerm, includeLower, includeUpper, null, null, context); - } - - public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, - @Nullable DateTimeZone timeZone, @Nullable DateMathParser forcedDateParser, QueryShardContext context) { - failIfNotIndexed(); - return innerRangeQuery(lowerTerm, upperTerm, includeLower, includeUpper, timeZone, forcedDateParser, context); - } - - Query innerRangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, + public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, ShapeRelation relation, @Nullable DateTimeZone timeZone, @Nullable DateMathParser 
forcedDateParser, QueryShardContext context) { failIfNotIndexed(); + if (relation == ShapeRelation.DISJOINT) { + throw new IllegalArgumentException("Field [" + name() + "] of type [" + typeName() + + "] does not support DISJOINT ranges"); + } DateMathParser parser = forcedDateParser == null ? dateMathParser : forcedDateParser; @@ -383,8 +396,6 @@ public DocValueFormat docValueFormat(@Nullable String format, DateTimeZone timeZ } } - private Boolean includeInAll; - private Explicit ignoreMalformed; private DateFieldMapper( @@ -392,13 +403,11 @@ private DateFieldMapper( MappedFieldType fieldType, MappedFieldType defaultFieldType, Explicit ignoreMalformed, - Boolean includeInAll, Settings indexSettings, MultiFields multiFields, CopyTo copyTo) { super(simpleName, fieldType, defaultFieldType, indexSettings, multiFields, copyTo); this.ignoreMalformed = ignoreMalformed; - this.includeInAll = includeInAll; } @Override @@ -449,15 +458,13 @@ protected void parseCreateField(ParseContext context, List field } } - if (context.includeInAll(includeInAll, this)) { - context.allEntries().addText(fieldType().name(), dateAsString, fieldType().boost()); - } - if (fieldType().indexOptions() != IndexOptions.NONE) { fields.add(new LongPoint(fieldType().name(), timestamp)); } if (fieldType().hasDocValues()) { fields.add(new SortedNumericDocValuesField(fieldType().name(), timestamp)); + } else if (fieldType().stored() || fieldType().indexOptions() != IndexOptions.NONE) { + createFieldNamesField(context, fields); } if (fieldType().stored()) { fields.add(new StoredField(fieldType().name(), timestamp)); @@ -468,7 +475,6 @@ protected void parseCreateField(ParseContext context, List field protected void doMerge(Mapper mergeWith, boolean updateAllTypes) { final DateFieldMapper other = (DateFieldMapper) mergeWith; super.doMerge(mergeWith, updateAllTypes); - this.includeInAll = other.includeInAll; if (other.ignoreMalformed.explicit()) { this.ignoreMalformed = other.ignoreMalformed; } @@ -486,11 +492,6 @@ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults, builder.field("null_value", fieldType().nullValueAsString()); } - if (includeInAll != null) { - builder.field("include_in_all", includeInAll); - } else if (includeDefaults) { - builder.field("include_in_all", false); - } if (includeDefaults || fieldType().dateTimeFormatter().format().equals(DEFAULT_DATE_TIME_FORMATTER.format()) == false) { builder.field("format", fieldType().dateTimeFormatter().format()); diff --git a/core/src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java index 3654103c1899c..c4de559d1d956 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java @@ -221,10 +221,6 @@ public SourceFieldMapper sourceMapper() { return metadataMapper(SourceFieldMapper.class); } - public AllFieldMapper allFieldMapper() { - return metadataMapper(AllFieldMapper.class); - } - public IdFieldMapper idFieldMapper() { return metadataMapper(IdFieldMapper.class); } @@ -296,21 +292,6 @@ public ObjectMapper findNestedObjectMapper(int nestedDocId, SearchContext sc, Le return nestedObjectMapper; } - /** - * Returns the parent {@link ObjectMapper} instance of the specified object mapper or null if there - * isn't any. 
- */ - // TODO: We should add: ObjectMapper#getParentObjectMapper() - public ObjectMapper findParentObjectMapper(ObjectMapper objectMapper) { - int indexOfLastDot = objectMapper.fullPath().lastIndexOf('.'); - if (indexOfLastDot != -1) { - String parentNestObjectPath = objectMapper.fullPath().substring(0, indexOfLastDot); - return objectMappers().get(parentNestObjectPath); - } else { - return null; - } - } - public boolean isParent(String type) { return mapperService.getParentTypes().contains(type); } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java b/core/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java index 85367723c546e..dd3aa69315d47 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java @@ -27,6 +27,7 @@ import org.elasticsearch.common.joda.FormatDateTimeFormatter; import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.mapper.DynamicTemplate.XContentFieldType; import org.elasticsearch.index.mapper.KeywordFieldMapper.KeywordFieldType; @@ -58,9 +59,10 @@ ParsedDocument parseDocument(SourceToParse source) throws MapperParsingException final Mapping mapping = docMapper.mapping(); final ParseContext.InternalParseContext context; - try (XContentParser parser = XContentHelper.createParser(docMapperParser.getXContentRegistry(), source.source())) { - context = new ParseContext.InternalParseContext(indexSettings.getSettings(), - docMapperParser, docMapper, source, parser); + final XContentType xContentType = source.getXContentType(); + + try (XContentParser parser = XContentHelper.createParser(docMapperParser.getXContentRegistry(), source.source(), xContentType)) { + context = new ParseContext.InternalParseContext(indexSettings.getSettings(), docMapperParser, docMapper, source, parser); validateStart(parser); internalParseDocument(mapping, context, parser); validateEnd(parser); @@ -74,8 +76,7 @@ ParsedDocument parseDocument(SourceToParse source) throws MapperParsingException reverseOrder(context); - ParsedDocument doc = parsedDocument(source, context, createDynamicUpdate(mapping, docMapper, context.getDynamicMappers())); - return doc; + return parsedDocument(source, context, createDynamicUpdate(mapping, docMapper, context.getDynamicMappers())); } private static void internalParseDocument(Mapping mapping, ParseContext.InternalParseContext context, XContentParser parser) throws IOException { @@ -89,7 +90,7 @@ private static void internalParseDocument(Mapping mapping, ParseContext.Internal // entire type is disabled parser.skipChildren(); } else if (emptyDoc == false) { - parseObjectOrNested(context, mapping.root, true); + parseObjectOrNested(context, mapping.root); } for (MetadataFieldMapper metadataMapper : mapping.metadataMappers) { @@ -175,14 +176,21 @@ private static MapperParsingException wrapInMapperParsingException(SourceToParse } private static String[] splitAndValidatePath(String fullFieldPath) { - String[] parts = fullFieldPath.split("\\."); - for (String part : parts) { - if (Strings.hasText(part) == false) { - throw new IllegalArgumentException( - "object field starting or ending with a [.] 
makes object resolution ambiguous: [" + fullFieldPath + "]"); + if (fullFieldPath.contains(".")) { + String[] parts = fullFieldPath.split("\\."); + for (String part : parts) { + if (Strings.hasText(part) == false) { + throw new IllegalArgumentException( + "object field starting or ending with a [.] makes object resolution ambiguous: [" + fullFieldPath + "]"); + } + } + return parts; + } else { + if (Strings.isEmpty(fullFieldPath)) { + throw new IllegalArgumentException("field name cannot be an empty string"); } + return new String[] {fullFieldPath}; } - return parts; } /** Creates a Mapping containing any dynamically added fields, or returns null if there were no dynamic mappings. */ @@ -331,7 +339,7 @@ private static ObjectMapper createUpdate(ObjectMapper parent, String[] nameParts return parent.mappingUpdate(mapper); } - static void parseObjectOrNested(ParseContext context, ObjectMapper mapper, boolean atRoot) throws IOException { + static void parseObjectOrNested(ParseContext context, ObjectMapper mapper) throws IOException { if (mapper.isEnabled() == false) { context.parser().skipChildren(); return; @@ -353,12 +361,6 @@ static void parseObjectOrNested(ParseContext context, ObjectMapper mapper, boole context = nestedContext(context, mapper); } - // update the default value of include_in_all if necessary - Boolean includeInAll = mapper.includeInAll(); - if (includeInAll != null) { - context = context.setIncludeInAllDefault(includeInAll); - } - // if we are at the end of the previous object, advance if (token == XContentParser.Token.END_OBJECT) { token = parser.nextToken(); @@ -466,7 +468,7 @@ private static ParseContext nestedContext(ParseContext context, ObjectMapper map private static void parseObjectOrField(ParseContext context, Mapper mapper) throws IOException { if (mapper instanceof ObjectMapper) { - parseObjectOrNested(context, (ObjectMapper) mapper, false); + parseObjectOrNested(context, (ObjectMapper) mapper); } else { FieldMapper fieldMapper = (FieldMapper)mapper; Mapper update = fieldMapper.parse(context); @@ -480,14 +482,13 @@ private static void parseObjectOrField(ParseContext context, Mapper mapper) thro private static void parseObject(final ParseContext context, ObjectMapper mapper, String currentFieldName) throws IOException { assert currentFieldName != null; - Mapper objectMapper = getMapper(mapper, currentFieldName); + final String[] paths = splitAndValidatePath(currentFieldName); + Mapper objectMapper = getMapper(mapper, currentFieldName, paths); if (objectMapper != null) { context.path().add(currentFieldName); parseObjectOrField(context, objectMapper); context.path().remove(); } else { - - final String[] paths = splitAndValidatePath(currentFieldName); currentFieldName = paths[paths.length - 1]; Tuple parentMapperTuple = getDynamicParentMapper(context, paths, mapper); ObjectMapper parentMapper = parentMapperTuple.v2(); @@ -517,7 +518,9 @@ private static void parseObject(final ParseContext context, ObjectMapper mapper, private static void parseArray(ParseContext context, ObjectMapper parentMapper, String lastFieldName) throws IOException { String arrayFieldName = lastFieldName; - Mapper mapper = getMapper(parentMapper, lastFieldName); + + final String[] paths = splitAndValidatePath(arrayFieldName); + Mapper mapper = getMapper(parentMapper, lastFieldName, paths); if (mapper != null) { // There is a concrete mapper for this field already. 
Need to check if the mapper // expects an array, if so we pass the context straight to the mapper and if not @@ -528,8 +531,6 @@ private static void parseArray(ParseContext context, ObjectMapper parentMapper, parseNonDynamicArray(context, parentMapper, lastFieldName, arrayFieldName); } } else { - - final String[] paths = splitAndValidatePath(arrayFieldName); arrayFieldName = paths[paths.length - 1]; lastFieldName = arrayFieldName; Tuple parentMapperTuple = getDynamicParentMapper(context, paths, parentMapper); @@ -588,12 +589,12 @@ private static void parseValue(final ParseContext context, ObjectMapper parentMa if (currentFieldName == null) { throw new MapperParsingException("object mapping [" + parentMapper.name() + "] trying to serialize a value with no field associated with it, current value [" + context.parser().textOrNull() + "]"); } - Mapper mapper = getMapper(parentMapper, currentFieldName); + + final String[] paths = splitAndValidatePath(currentFieldName); + Mapper mapper = getMapper(parentMapper, currentFieldName, paths); if (mapper != null) { parseObjectOrField(context, mapper); } else { - - final String[] paths = splitAndValidatePath(currentFieldName); currentFieldName = paths[paths.length - 1]; Tuple parentMapperTuple = getDynamicParentMapper(context, paths, parentMapper); parentMapper = parentMapperTuple.v2(); @@ -606,7 +607,7 @@ private static void parseValue(final ParseContext context, ObjectMapper parentMa private static void parseNullValue(ParseContext context, ObjectMapper parentMapper, String lastFieldName) throws IOException { // we can only handle null values if we have mappings for them - Mapper mapper = getMapper(parentMapper, lastFieldName); + Mapper mapper = getMapper(parentMapper, lastFieldName, splitAndValidatePath(lastFieldName)); if (mapper != null) { // TODO: passing null to an object seems bogus? 
parseObjectOrField(context, mapper); @@ -892,7 +893,7 @@ private static Tuple getDynamicParentMapper(ParseContext break; case FALSE: // Should not dynamically create any more mappers so return the last mapper - return new Tuple(pathsAdded, parent); + return new Tuple<>(pathsAdded, parent); } } @@ -900,7 +901,7 @@ private static Tuple getDynamicParentMapper(ParseContext pathsAdded++; parent = mapper; } - return new Tuple(pathsAdded, mapper); + return new Tuple<>(pathsAdded, mapper); } // find what the dynamic setting is given the current parse context and parent @@ -928,8 +929,7 @@ private static ObjectMapper.Dynamic dynamicOrDefault(ObjectMapper parentMapper, } // looks up a child mapper, but takes into account field names that expand to objects - static Mapper getMapper(ObjectMapper objectMapper, String fieldName) { - String[] subfields = splitAndValidatePath(fieldName); + private static Mapper getMapper(ObjectMapper objectMapper, String fieldName, String[] subfields) { for (int i = 0; i < subfields.length - 1; ++i) { Mapper mapper = objectMapper.getMapper(subfields[i]); if (mapper == null || (mapper instanceof ObjectMapper) == false) { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/DynamicTemplate.java b/core/src/main/java/org/elasticsearch/index/mapper/DynamicTemplate.java index 8f3634e0132af..8da1915b8ca56 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/DynamicTemplate.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/DynamicTemplate.java @@ -186,7 +186,7 @@ public static DynamicTemplate parse(String name, Map conf, matchPattern = entry.getValue().toString(); } else if ("mapping".equals(propName)) { mapping = (Map) entry.getValue(); - } else if (indexVersionCreated.onOrAfter(Version.V_5_0_0_alpha1)) { + } else { // unknown parameters were ignored before but still carried through serialization // so we need to ignore them at parsing time for old indices throw new IllegalArgumentException("Illegal dynamic template parameter: [" + propName + "]"); diff --git a/core/src/main/java/org/elasticsearch/index/mapper/FieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/FieldMapper.java index c2af9f715dd45..c6e0dd9c00b72 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/FieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/FieldMapper.java @@ -21,6 +21,8 @@ import com.carrotsearch.hppc.cursors.ObjectCursor; import com.carrotsearch.hppc.cursors.ObjectObjectCursor; + +import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; @@ -32,6 +34,7 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.index.analysis.NamedAnalyzer; +import org.elasticsearch.index.mapper.FieldNamesFieldMapper.FieldNamesFieldType; import org.elasticsearch.index.similarity.SimilarityProvider; import org.elasticsearch.index.similarity.SimilarityService; @@ -57,7 +60,6 @@ public abstract static class Builder e protected final MappedFieldType defaultFieldType; private final IndexOptions defaultOptions; protected boolean omitNormsSet = false; - protected Boolean includeInAll; protected boolean indexOptionsSet = false; protected boolean docValuesSet = false; protected final MultiFields.Builder multiFieldsBuilder; @@ -182,11 +184,6 @@ public T searchQuoteAnalyzer(NamedAnalyzer searchQuoteAnalyzer) { return builder; } - public T 
includeInAll(Boolean includeInAll) { - this.includeInAll = includeInAll; - return builder; - } - public T similarity(SimilarityProvider similarity) { this.fieldType.setSimilarity(similarity); return builder; @@ -212,19 +209,11 @@ protected String buildFullName(BuilderContext context) { } protected boolean defaultDocValues(Version indexCreated) { - if (indexCreated.onOrAfter(Version.V_5_0_0_alpha1)) { - // add doc values by default to keyword (boolean, numerics, etc.) fields - return fieldType.tokenized() == false; - } else { - return fieldType.tokenized() == false && fieldType.indexOptions() != IndexOptions.NONE; - } + return fieldType.tokenized() == false; } protected void setupFieldType(BuilderContext context) { fieldType.setName(buildFullName(context)); - if (context.indexCreatedVersion().before(Version.V_5_0_0_alpha1)) { - fieldType.setOmitNorms(fieldType.omitNorms() && fieldType.boost() == 1.0f); - } if (fieldType.indexAnalyzer() == null && fieldType.tokenized() == false && fieldType.indexOptions() != IndexOptions.NONE) { fieldType.setIndexAnalyzer(Lucene.KEYWORD_ANALYZER); fieldType.setSearchAnalyzer(Lucene.KEYWORD_ANALYZER); @@ -247,10 +236,8 @@ protected FieldMapper(String simpleName, MappedFieldType fieldType, MappedFieldT super(simpleName); assert indexSettings != null; this.indexCreatedVersion = Version.indexCreated(indexSettings); - if (indexCreatedVersion.onOrAfter(Version.V_5_0_0_beta1)) { - if (simpleName.isEmpty()) { - throw new IllegalArgumentException("name cannot be empty string"); - } + if (simpleName.isEmpty()) { + throw new IllegalArgumentException("name cannot be empty string"); } fieldType.freeze(); this.fieldType = fieldType; @@ -300,6 +287,16 @@ public Mapper parse(ParseContext context) throws IOException { */ protected abstract void parseCreateField(ParseContext context, List fields) throws IOException; + protected void createFieldNamesField(ParseContext context, List fields) { + FieldNamesFieldType fieldNamesFieldType = (FieldNamesFieldMapper.FieldNamesFieldType) context.docMapper() + .metadataMapper(FieldNamesFieldMapper.class).fieldType(); + if (fieldNamesFieldType != null && fieldNamesFieldType.isEnabled()) { + for (String fieldName : FieldNamesFieldMapper.extractFieldNames(fieldType().name())) { + fields.add(new Field(FieldNamesFieldMapper.NAME, fieldName, fieldNamesFieldType)); + } + } + } + @Override public Iterator iterator() { return multiFields.iterator(); diff --git a/core/src/main/java/org/elasticsearch/index/mapper/FieldNamesFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/FieldNamesFieldMapper.java index 4396cba58cc28..8482a94cfc74c 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/FieldNamesFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/FieldNamesFieldMapper.java @@ -23,6 +23,10 @@ import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; import org.apache.lucene.search.Query; +import org.elasticsearch.Version; +import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.ESLoggerFactory; import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -44,6 +48,9 @@ */ public class FieldNamesFieldMapper extends MetadataFieldMapper { + private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger( + 
ESLoggerFactory.getLogger(FieldNamesFieldMapper.class)); + public static final String NAME = "_field_names"; public static final String CONTENT_TYPE = "_field_names"; @@ -178,11 +185,18 @@ public boolean isEnabled() { return enabled; } + @Override + public Query existsQuery(QueryShardContext context) { + throw new UnsupportedOperationException("Cannot run exists query on _field_names"); + } + @Override public Query termQuery(Object value, QueryShardContext context) { if (isEnabled() == false) { throw new IllegalStateException("Cannot run [exists] queries if the [_field_names] field is disabled"); } + DEPRECATION_LOGGER.deprecated( + "terms query on the _field_names field is deprecated and will be removed, use exists query instead"); return super.termQuery(value, context); } } @@ -206,12 +220,14 @@ public void preParse(ParseContext context) throws IOException { @Override public void postParse(ParseContext context) throws IOException { - super.parse(context); + if (context.indexSettings().getAsVersion(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).before(Version.V_6_1_0)) { + super.parse(context); + } } @Override public Mapper parse(ParseContext context) throws IOException { - // we parse in post parse + // Adding values to the _field_names field is handled by the mappers for each field type return null; } @@ -258,9 +274,19 @@ protected void parseCreateField(ParseContext context, List field return; } for (ParseContext.Document document : context.docs()) { - final List paths = new ArrayList<>(); + final List paths = new ArrayList<>(document.getFields().size()); + String previousPath = ""; // used as a sentinel - field names can't be empty for (IndexableField field : document.getFields()) { - paths.add(field.name()); + final String path = field.name(); + if (path.equals(previousPath)) { + // Sometimes mappers create multiple Lucene fields, eg. one for indexing, + // one for doc values and one for storing. Deduplicating is not required + // for correctness but this simple check helps save utf-8 conversions and + // gives Lucene fewer values to deal with. 
+ continue; + } + paths.add(path); + previousPath = path; } for (String path : paths) { for (String fieldName : extractFieldNames(path)) { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/GeoPointFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/GeoPointFieldMapper.java index 836d01e1a13d6..45237eb572d2c 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/GeoPointFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/GeoPointFieldMapper.java @@ -23,7 +23,10 @@ import org.apache.lucene.document.StoredField; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; +import org.apache.lucene.index.Term; +import org.apache.lucene.search.DocValuesFieldExistsQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermQuery; import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.Explicit; import org.elasticsearch.common.geo.GeoPoint; @@ -37,6 +40,7 @@ import org.elasticsearch.index.query.QueryShardException; import java.io.IOException; +import java.util.ArrayList; import java.util.Iterator; import java.util.List; import java.util.Map; @@ -180,6 +184,15 @@ public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) { return new AbstractLatLonPointDVIndexFieldData.Builder(); } + @Override + public Query existsQuery(QueryShardContext context) { + if (hasDocValues()) { + return new DocValuesFieldExistsQuery(name()); + } else { + return new TermQuery(new Term(FieldNamesFieldMapper.NAME, name())); + } + } + @Override public Query termQuery(Object value, QueryShardContext context) { throw new QueryShardException(context, "Geo fields do not support exact searching, use dedicated geo queries instead: [" @@ -187,9 +200,7 @@ public Query termQuery(Object value, QueryShardContext context) { } } - protected void parse(ParseContext originalContext, GeoPoint point) throws IOException { - // Geopoint fields, by default, will not be included in _all - final ParseContext context = originalContext.setIncludeInAllDefault(false); + protected void parse(ParseContext context, GeoPoint point) throws IOException { if (ignoreMalformed.value() == false) { if (point.lat() > 90.0 || point.lat() < -90.0) { @@ -209,6 +220,12 @@ protected void parse(ParseContext originalContext, GeoPoint point) throws IOExce } if (fieldType.hasDocValues()) { context.doc().add(new LatLonDocValuesField(fieldType().name(), point.lat(), point.lon())); + } else if (fieldType().stored() || fieldType().indexOptions() != IndexOptions.NONE) { + List fields = new ArrayList<>(1); + createFieldNamesField(context, fields); + for (IndexableField field : fields) { + context.doc().add(field); + } } // if the mapping contains multifields then use the geohash string if (multiFields.iterator().hasNext()) { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/GeoShapeFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/GeoShapeFieldMapper.java index 72bb35668bd5e..c605b8d093644 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/GeoShapeFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/GeoShapeFieldMapper.java @@ -18,10 +18,12 @@ */ package org.elasticsearch.index.mapper; -import org.apache.lucene.document.Field; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; +import org.apache.lucene.index.Term; +import org.apache.lucene.search.DocValuesFieldExistsQuery; import org.apache.lucene.search.Query; 
+import org.apache.lucene.search.TermQuery; import org.apache.lucene.spatial.prefix.PrefixTreeStrategy; import org.apache.lucene.spatial.prefix.RecursivePrefixTreeStrategy; import org.apache.lucene.spatial.prefix.TermQueryPrefixTreeStrategy; @@ -29,6 +31,7 @@ import org.apache.lucene.spatial.prefix.tree.PackedQuadPrefixTree; import org.apache.lucene.spatial.prefix.tree.QuadPrefixTree; import org.apache.lucene.spatial.prefix.tree.SpatialPrefixTree; +import org.elasticsearch.Version; import org.elasticsearch.common.Explicit; import org.elasticsearch.common.geo.GeoUtils; import org.elasticsearch.common.geo.SpatialStrategy; @@ -44,6 +47,8 @@ import org.locationtech.spatial4j.shape.jts.JtsGeometry; import java.io.IOException; +import java.util.ArrayList; +import java.util.Arrays; import java.util.Iterator; import java.util.List; import java.util.Map; @@ -125,6 +130,11 @@ public Builder coerce(boolean coerce) { return builder; } + @Override + protected boolean defaultDocValues(Version indexCreated) { + return false; + } + protected Explicit coerce(BuilderContext context) { if (coerce != null) { return new Explicit<>(coerce, true); @@ -406,6 +416,11 @@ public PrefixTreeStrategy resolveStrategy(String strategyName) { throw new IllegalArgumentException("Unknown prefix tree strategy [" + strategyName + "]"); } + @Override + public Query existsQuery(QueryShardContext context) { + return new TermQuery(new Term(FieldNamesFieldMapper.NAME, name())); + } + @Override public Query termQuery(Object value, QueryShardContext context) { throw new QueryShardException(context, "Geo fields do not support exact searching, use dedicated geo queries instead"); @@ -440,11 +455,9 @@ public Mapper parse(ParseContext context) throws IOException { throw new MapperParsingException("[{" + fieldType().name() + "}] is configured for points only but a " + ((shape instanceof JtsGeometry) ? 
((JtsGeometry)shape).getGeom().getGeometryType() : shape.getClass()) + " was found"); } - Field[] fields = fieldType().defaultStrategy().createIndexableFields(shape); - if (fields == null || fields.length == 0) { - return null; - } - for (Field field : fields) { + List fields = new ArrayList<>(Arrays.asList(fieldType().defaultStrategy().createIndexableFields(shape))); + createFieldNamesField(context, fields); + for (IndexableField field : fields) { context.doc().add(field); } } catch (Exception e) { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/IdFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/IdFieldMapper.java index 55898e5f96c03..41256d3a5bb58 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/IdFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/IdFieldMapper.java @@ -23,6 +23,7 @@ import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.search.MatchAllDocsQuery; import org.apache.lucene.search.Query; import org.apache.lucene.search.SortField; import org.apache.lucene.search.TermInSetQuery; @@ -36,10 +37,10 @@ import org.elasticsearch.index.fielddata.AtomicFieldData; import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested; -import org.elasticsearch.index.fielddata.fieldcomparator.BytesRefFieldComparatorSource; import org.elasticsearch.index.fielddata.IndexFieldDataCache; import org.elasticsearch.index.fielddata.ScriptDocValues; import org.elasticsearch.index.fielddata.SortedBinaryDocValues; +import org.elasticsearch.index.fielddata.fieldcomparator.BytesRefFieldComparatorSource; import org.elasticsearch.index.fielddata.plain.PagedBytesIndexFieldData; import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.indices.breaker.CircuitBreakerService; @@ -126,6 +127,11 @@ public Query termQuery(Object value, QueryShardContext context) { return termsQuery(Arrays.asList(value), context); } + @Override + public Query existsQuery(QueryShardContext context) { + return new MatchAllDocsQuery(); + } + @Override public Query termsQuery(List values, QueryShardContext context) { if (indexOptions() != IndexOptions.NONE) { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/IndexFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/IndexFieldMapper.java index 347ec4d679bcb..0010211a95514 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/IndexFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/IndexFieldMapper.java @@ -21,9 +21,9 @@ import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; +import org.apache.lucene.search.MatchAllDocsQuery; import org.apache.lucene.search.Query; import org.apache.lucene.util.BytesRef; -import org.elasticsearch.Version; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.lucene.search.Queries; @@ -77,10 +77,7 @@ public IndexFieldMapper build(BuilderContext context) { public static class TypeParser implements MetadataFieldMapper.TypeParser { @Override public MetadataFieldMapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { - if (parserContext.indexVersionCreated().onOrAfter(Version.V_5_0_0_alpha3)) { - throw new MapperParsingException(NAME + " is not configurable"); - } - 
return new Builder(parserContext.mapperService().fullName(NAME)); + throw new MapperParsingException(NAME + " is not configurable"); } @Override @@ -114,6 +111,11 @@ public boolean isSearchable() { return true; } + @Override + public Query existsQuery(QueryShardContext context) { + return new MatchAllDocsQuery(); + } + /** * This termQuery impl looks at the context to determine the index that * is being queried and then returns a MATCH_ALL_QUERY or MATCH_NO_QUERY diff --git a/core/src/main/java/org/elasticsearch/index/mapper/IpFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/IpFieldMapper.java index 7b45e5f05b1a2..bc811d041e313 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/IpFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/IpFieldMapper.java @@ -25,12 +25,16 @@ import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; import org.apache.lucene.index.SortedSetDocValues; +import org.apache.lucene.index.Term; +import org.apache.lucene.search.DocValuesFieldExistsQuery; import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermQuery; import org.apache.lucene.util.ArrayUtil; import org.apache.lucene.util.BytesRef; import org.elasticsearch.common.Explicit; import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.network.InetAddresses; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -85,7 +89,7 @@ protected Explicit ignoreMalformed(BuilderContext context) { public IpFieldMapper build(BuilderContext context) { setupFieldType(context); return new IpFieldMapper(name, fieldType, defaultFieldType, ignoreMalformed(context), - includeInAll, context.indexSettings(), multiFieldsBuilder.build(this, context), copyTo); + context.indexSettings(), multiFieldsBuilder.build(this, context), copyTo); } } @@ -119,7 +123,7 @@ public Mapper.Builder parse(String name, Map node, ParserCo } } - public static final class IpFieldType extends MappedFieldType { + public static final class IpFieldType extends SimpleMappedFieldType { public IpFieldType() { super(); @@ -152,6 +156,15 @@ private InetAddress parse(Object value) { } } + @Override + public Query existsQuery(QueryShardContext context) { + if (hasDocValues()) { + return new DocValuesFieldExistsQuery(name()); + } else { + return new TermQuery(new Term(FieldNamesFieldMapper.NAME, name())); + } + } + @Override public Query termQuery(Object value, @Nullable QueryShardContext context) { failIfNotIndexed(); @@ -163,14 +176,8 @@ public Query termQuery(Object value, @Nullable QueryShardContext context) { } String term = value.toString(); if (term.contains("/")) { - String[] fields = term.split("/"); - if (fields.length == 2) { - InetAddress address = InetAddresses.forString(fields[0]); - int prefixLength = Integer.parseInt(fields[1]); - return InetAddressPoint.newPrefixQuery(name(), address, prefixLength); - } else { - throw new IllegalArgumentException("Expected [ip/prefix] but was [" + term + "]"); - } + final Tuple cidr = InetAddresses.parseCidr(term); + return InetAddressPoint.newPrefixQuery(name(), cidr.v1(), cidr.v2()); } InetAddress address = InetAddresses.forString(term); return InetAddressPoint.newExactQuery(name(), address); @@ -307,8 +314,6 @@ public DocValueFormat docValueFormat(@Nullable String format, DateTimeZone timeZ } } - private Boolean includeInAll; - private Explicit 
ignoreMalformed; private IpFieldMapper( @@ -316,13 +321,11 @@ private IpFieldMapper( MappedFieldType fieldType, MappedFieldType defaultFieldType, Explicit ignoreMalformed, - Boolean includeInAll, Settings indexSettings, MultiFields multiFields, CopyTo copyTo) { super(simpleName, fieldType, defaultFieldType, indexSettings, multiFields, copyTo); this.ignoreMalformed = ignoreMalformed; - this.includeInAll = includeInAll; } @Override @@ -373,15 +376,13 @@ protected void parseCreateField(ParseContext context, List field } } - if (context.includeInAll(includeInAll, this)) { - context.allEntries().addText(fieldType().name(), addressAsString, fieldType().boost()); - } - if (fieldType().indexOptions() != IndexOptions.NONE) { fields.add(new InetAddressPoint(fieldType().name(), address)); } if (fieldType().hasDocValues()) { fields.add(new SortedSetDocValuesField(fieldType().name(), new BytesRef(InetAddressPoint.encode(address)))); + } else if (fieldType().stored() || fieldType().indexOptions() != IndexOptions.NONE) { + createFieldNamesField(context, fields); } if (fieldType().stored()) { fields.add(new StoredField(fieldType().name(), new BytesRef(InetAddressPoint.encode(address)))); @@ -392,7 +393,6 @@ protected void parseCreateField(ParseContext context, List field protected void doMerge(Mapper mergeWith, boolean updateAllTypes) { super.doMerge(mergeWith, updateAllTypes); IpFieldMapper other = (IpFieldMapper) mergeWith; - this.includeInAll = other.includeInAll; if (other.ignoreMalformed.explicit()) { this.ignoreMalformed = other.ignoreMalformed; } @@ -413,10 +413,5 @@ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults, if (includeDefaults || ignoreMalformed.explicit()) { builder.field("ignore_malformed", ignoreMalformed.value()); } - if (includeInAll != null) { - builder.field("include_in_all", includeInAll); - } else if (includeDefaults) { - builder.field("include_in_all", false); - } } } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/KeywordFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/KeywordFieldMapper.java index f73acae7d2bbf..cb2c4b6b6fddf 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/KeywordFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/KeywordFieldMapper.java @@ -25,7 +25,10 @@ import org.apache.lucene.document.SortedSetDocValuesField; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; +import org.apache.lucene.index.Term; +import org.apache.lucene.search.DocValuesFieldExistsQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermQuery; import org.apache.lucene.util.BytesRef; import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.settings.Settings; @@ -35,6 +38,7 @@ import org.elasticsearch.index.analysis.NamedAnalyzer; import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.plain.DocValuesIndexFieldData; +import org.elasticsearch.index.query.QueryShardContext; import java.io.IOException; import java.util.Iterator; @@ -112,7 +116,7 @@ public Builder normalizer(NamedAnalyzer normalizer) { public KeywordFieldMapper build(BuilderContext context) { setupFieldType(context); return new KeywordFieldMapper( - name, fieldType, defaultFieldType, ignoreAbove, includeInAll, + name, fieldType, defaultFieldType, ignoreAbove, context.indexSettings(), multiFieldsBuilder.build(this, context), copyTo); } } @@ -210,6 +214,15 @@ public void setNormalizer(NamedAnalyzer 
normalizer) { this.normalizer = normalizer; } + @Override + public Query existsQuery(QueryShardContext context) { + if (hasDocValues()) { + return new DocValuesFieldExistsQuery(name()); + } else { + return new TermQuery(new Term(FieldNamesFieldMapper.NAME, name())); + } + } + @Override public Query nullValueQuery() { if (nullValue() == null) { @@ -255,16 +268,13 @@ protected BytesRef indexedValueForSearch(Object value) { } } - private Boolean includeInAll; private int ignoreAbove; protected KeywordFieldMapper(String simpleName, MappedFieldType fieldType, MappedFieldType defaultFieldType, - int ignoreAbove, Boolean includeInAll, - Settings indexSettings, MultiFields multiFields, CopyTo copyTo) { + int ignoreAbove, Settings indexSettings, MultiFields multiFields, CopyTo copyTo) { super(simpleName, fieldType, defaultFieldType, indexSettings, multiFields, copyTo); assert fieldType.indexOptions().compareTo(IndexOptions.DOCS_AND_FREQS) <= 0; this.ignoreAbove = ignoreAbove; - this.includeInAll = includeInAll; } /** Values that have more chars than the return value of this method will @@ -284,11 +294,6 @@ public KeywordFieldType fieldType() { return (KeywordFieldType) super.fieldType(); } - // pkg-private for testing - Boolean includeInAll() { - return includeInAll; - } - @Override protected void parseCreateField(ParseContext context, List fields) throws IOException { String value; @@ -328,10 +333,6 @@ protected void parseCreateField(ParseContext context, List field } } - if (context.includeInAll(includeInAll, this)) { - context.allEntries().addText(fieldType().name(), value, fieldType().boost()); - } - // convert to utf8 only once before feeding postings/dv/stored fields final BytesRef binaryValue = new BytesRef(value); if (fieldType().indexOptions() != IndexOptions.NONE || fieldType().stored()) { @@ -340,6 +341,8 @@ protected void parseCreateField(ParseContext context, List field } if (fieldType().hasDocValues()) { fields.add(new SortedSetDocValuesField(fieldType().name(), binaryValue)); + } else if (fieldType().stored() || fieldType().indexOptions() != IndexOptions.NONE) { + createFieldNamesField(context, fields); } } @@ -351,7 +354,6 @@ protected String contentType() { @Override protected void doMerge(Mapper mergeWith, boolean updateAllTypes) { super.doMerge(mergeWith, updateAllTypes); - this.includeInAll = ((KeywordFieldMapper) mergeWith).includeInAll; this.ignoreAbove = ((KeywordFieldMapper) mergeWith).ignoreAbove; } @@ -363,12 +365,6 @@ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults, builder.field("null_value", fieldType().nullValue()); } - if (includeInAll != null) { - builder.field("include_in_all", includeInAll); - } else if (includeDefaults) { - builder.field("include_in_all", true); - } - if (includeDefaults || ignoreAbove != Defaults.IGNORE_ABOVE) { builder.field("ignore_above", ignoreAbove); } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/MappedFieldType.java b/core/src/main/java/org/elasticsearch/index/mapper/MappedFieldType.java index 6cd6c91157680..6eab90875345b 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/MappedFieldType.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/MappedFieldType.java @@ -35,8 +35,8 @@ import org.apache.lucene.search.TermQuery; import org.apache.lucene.util.BytesRef; import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.geo.ShapeRelation; import org.elasticsearch.common.joda.DateMathParser; -import org.elasticsearch.common.lucene.all.AllTermQuery; import 
org.elasticsearch.common.unit.Fuzziness; import org.elasticsearch.index.analysis.NamedAnalyzer; import org.elasticsearch.index.fielddata.IndexFieldData; @@ -293,7 +293,7 @@ public Object nullValue() { return nullValue; } - /** Returns the null value stringified, so it can be used for e.g. _all field, or null if there is no null value */ + /** Returns the null value stringified or null if there is no null value */ public String nullValueAsString() { return nullValueAsString; } @@ -348,7 +348,15 @@ public Query termsQuery(List values, @Nullable QueryShardContext context) { return new ConstantScoreQuery(builder.build()); } - public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, QueryShardContext context) { + /** + * Factory method for range queries. + * @param relation the relation, nulls should be interpreted like INTERSECTS + */ + public Query rangeQuery( + Object lowerTerm, Object upperTerm, + boolean includeLower, boolean includeUpper, + ShapeRelation relation, DateTimeZone timeZone, DateMathParser parser, + QueryShardContext context) { throw new IllegalArgumentException("Field [" + name + "] of type [" + typeName() + "] does not support range queries"); } @@ -371,6 +379,8 @@ public Query nullValueQuery() { return new ConstantScoreQuery(termQuery(nullValue, null)); } + public abstract Query existsQuery(QueryShardContext context); + /** * An enum used to describe the relation between the range of terms in a * shard when compared with a query range @@ -449,9 +459,7 @@ public static Term extractTerm(Query termQuery) { while (termQuery instanceof BoostQuery) { termQuery = ((BoostQuery) termQuery).getQuery(); } - if (termQuery instanceof AllTermQuery) { - return ((AllTermQuery) termQuery).getTerm(); - } else if (termQuery instanceof TypeFieldMapper.TypesQuery) { + if (termQuery instanceof TypeFieldMapper.TypesQuery) { assert ((TypeFieldMapper.TypesQuery) termQuery).getTerms().length == 1; return new Term(TypeFieldMapper.NAME, ((TypeFieldMapper.TypesQuery) termQuery).getTerms()[0]); } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/MapperService.java b/core/src/main/java/org/elasticsearch/index/mapper/MapperService.java index 85cdc7243a221..93d7f4b13380a 100755 --- a/core/src/main/java/org/elasticsearch/index/mapper/MapperService.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/MapperService.java @@ -50,7 +50,6 @@ import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.index.similarity.SimilarityService; import org.elasticsearch.indices.InvalidTypeNameException; -import org.elasticsearch.indices.TypeMissingException; import org.elasticsearch.indices.mapper.MapperRegistry; import java.io.Closeable; @@ -98,11 +97,13 @@ public enum MergeReason { public static final Setting INDEX_MAPPING_DEPTH_LIMIT_SETTING = Setting.longSetting("index.mapping.depth.limit", 20L, 1, Property.Dynamic, Property.IndexScope); public static final boolean INDEX_MAPPER_DYNAMIC_DEFAULT = true; + @Deprecated public static final Setting INDEX_MAPPER_DYNAMIC_SETTING = - Setting.boolSetting("index.mapper.dynamic", INDEX_MAPPER_DYNAMIC_DEFAULT, Property.Dynamic, Property.IndexScope); + Setting.boolSetting("index.mapper.dynamic", INDEX_MAPPER_DYNAMIC_DEFAULT, + Property.Dynamic, Property.IndexScope, Property.Deprecated); private static ObjectHashSet META_FIELDS = ObjectHashSet.from( - "_uid", "_id", "_type", "_all", "_parent", "_routing", "_index", + "_uid", "_id", "_type", "_parent", "_routing", "_index", "_size", "_timestamp", 
"_ttl" ); @@ -110,11 +111,6 @@ public enum MergeReason { private final IndexAnalyzers indexAnalyzers; - /** - * Will create types automatically if they do not exists in the mapping definition yet - */ - private final boolean dynamic; - private volatile String defaultMappingSource; private volatile Map mappers = emptyMap(); @@ -122,7 +118,6 @@ public enum MergeReason { private volatile FieldTypeLookup fieldTypes; private volatile Map fullPathObjectMappers = emptyMap(); private boolean hasNested = false; // updated dynamically to true when a nested object is added - private boolean allEnabled = false; // updated dynamically to true when _all is enabled private final DocumentMapperParser documentParser; @@ -149,13 +144,15 @@ public MapperService(IndexSettings indexSettings, IndexAnalyzers indexAnalyzers, this.searchQuoteAnalyzer = new MapperAnalyzerWrapper(indexAnalyzers.getDefaultSearchQuoteAnalyzer(), p -> p.searchQuoteAnalyzer()); this.mapperRegistry = mapperRegistry; - this.dynamic = this.indexSettings.getValue(INDEX_MAPPER_DYNAMIC_SETTING); + if (INDEX_MAPPER_DYNAMIC_SETTING.exists(indexSettings.getSettings()) && + indexSettings.getIndexVersionCreated().onOrAfter(Version.V_7_0_0_alpha1)) { + throw new IllegalArgumentException("Setting " + INDEX_MAPPER_DYNAMIC_SETTING.getKey() + " was removed after version 6.0.0"); + } + defaultMappingSource = "{\"_default_\":{}}"; if (logger.isTraceEnabled()) { - logger.trace("using dynamic[{}], default mapping source[{}]", dynamic, defaultMappingSource); - } else if (logger.isDebugEnabled()) { - logger.debug("using dynamic[{}]", dynamic); + logger.trace("default mapping source[{}]", defaultMappingSource); } } @@ -163,13 +160,6 @@ public boolean hasNested() { return this.hasNested; } - /** - * Returns true if the "_all" field is enabled on any type. - */ - public boolean allEnabled() { - return this.allEnabled; - } - /** * returns an immutable iterator over current document mappers. 
* @@ -346,7 +336,6 @@ private synchronized Map internalMerge(Map internalMerge(@Nullable DocumentMapper defaultMapper, @Nullable String defaultMappingSource, List documentMappers, MergeReason reason, boolean updateAllTypes) { boolean hasNested = this.hasNested; - boolean allEnabled = this.allEnabled; Map fullPathObjectMappers = this.fullPathObjectMappers; FieldTypeLookup fieldTypes = this.fieldTypes; Set parentTypes = this.parentTypes; @@ -424,7 +413,7 @@ private synchronized Map internalMerge(@Nullable Documen } if (indexSettings.getIndexVersionCreated().onOrAfter(Version.V_6_0_0_beta1)) { - validateCopyTo(fieldMappers, fullPathObjectMappers); + validateCopyTo(fieldMappers, fullPathObjectMappers, fieldTypes); } if (reason == MergeReason.MAPPING_UPDATE) { @@ -444,10 +433,6 @@ private synchronized Map internalMerge(@Nullable Documen parentTypes.add(mapper.parentFieldMapper().type()); } - // this is only correct because types cannot be removed and we do not - // allow to disable an existing _all field - allEnabled |= mapper.allFieldMapper().enabled(); - results.put(newMapper.type(), newMapper); mappers.put(newMapper.type(), newMapper); } @@ -510,7 +495,6 @@ private synchronized Map internalMerge(@Nullable Documen this.hasNested = hasNested; this.fullPathObjectMappers = fullPathObjectMappers; this.parentTypes = parentTypes; - this.allEnabled = allEnabled; assert assertMappersShareSameFieldType(); assert results.values().stream().allMatch(this::assertSerialization); @@ -658,14 +642,26 @@ private static void checkIndexSortCompatibility(IndexSortConfig sortConfig, bool } } - private static void validateCopyTo(List fieldMappers, Map fullPathObjectMappers) { + private static void validateCopyTo(List fieldMappers, Map fullPathObjectMappers, + FieldTypeLookup fieldTypes) { for (FieldMapper mapper : fieldMappers) { if (mapper.copyTo() != null && mapper.copyTo().copyToFields().isEmpty() == false) { + String sourceParent = parentObject(mapper.name()); + if (sourceParent != null && fieldTypes.get(sourceParent) != null) { + throw new IllegalArgumentException("[copy_to] may not be used to copy from a multi-field: [" + mapper.name() + "]"); + } + final String sourceScope = getNestedScope(mapper.name(), fullPathObjectMappers); for (String copyTo : mapper.copyTo().copyToFields()) { + String copyToParent = parentObject(copyTo); + if (copyToParent != null && fieldTypes.get(copyToParent) != null) { + throw new IllegalArgumentException("[copy_to] may not be used to copy to a multi-field: [" + copyTo + "]"); + } + if (fullPathObjectMappers.containsKey(copyTo)) { throw new IllegalArgumentException("Cannot copy to field [" + copyTo + "] since it is mapped as an object"); } + final String targetScope = getNestedScope(copyTo, fullPathObjectMappers); checkNestedScopeCompatibility(sourceScope, targetScope); } @@ -688,7 +684,7 @@ private static void checkNestedScopeCompatibility(String source, String target) if (source == null || target == null) { targetIsParentOfSource = target == null; } else { - targetIsParentOfSource = source.startsWith(target + "."); + targetIsParentOfSource = source.equals(target) || source.startsWith(target + "."); } if (targetIsParentOfSource == false) { throw new IllegalArgumentException( @@ -742,10 +738,6 @@ public DocumentMapperForType documentMapperWithAutoCreate(String type) { if (mapper != null) { return new DocumentMapperForType(mapper, null); } - if (!dynamic) { - throw new TypeMissingException(index(), - new IllegalStateException("trying to auto create mapping, but dynamic mapping is 
disabled"), type); - } mapper = parse(type, null, true); return new DocumentMapperForType(mapper, mapper.mapping()); } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/NumberFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/NumberFieldMapper.java index 8d8845be531da..a44611d6406a1 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/NumberFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/NumberFieldMapper.java @@ -29,13 +29,17 @@ import org.apache.lucene.document.StoredField; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; +import org.apache.lucene.index.Term; import org.apache.lucene.search.BoostQuery; +import org.apache.lucene.search.DocValuesFieldExistsQuery; import org.apache.lucene.search.IndexOrDocValuesQuery; import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermQuery; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.NumericUtils; import org.elasticsearch.common.Explicit; +import org.elasticsearch.common.Numbers; import org.elasticsearch.common.lucene.search.Queries; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; @@ -61,8 +65,7 @@ /** A {@link FieldMapper} for numeric types: byte, short, int, long, float and double. */ public class NumberFieldMapper extends FieldMapper { - // this is private since it has a different default - static final Setting COERCE_SETTING = + public static final Setting COERCE_SETTING = Setting.boolSetting("index.mapping.coerce", true, Property.IndexScope); public static class Defaults { @@ -85,6 +88,12 @@ public Builder ignoreMalformed(boolean ignoreMalformed) { return builder; } + @Override + public Builder indexOptions(IndexOptions indexOptions) { + throw new MapperParsingException( + "index_options not allowed in field [" + name + "] of type [" + builder.fieldType().typeName() + "]"); + } + protected Explicit ignoreMalformed(BuilderContext context) { if (ignoreMalformed != null) { return new Explicit<>(ignoreMalformed, true); @@ -119,7 +128,7 @@ protected void setupFieldType(BuilderContext context) { public NumberFieldMapper build(BuilderContext context) { setupFieldType(context); return new NumberFieldMapper(name, fieldType, defaultFieldType, ignoreMalformed(context), - coerce(context), includeInAll, context.indexSettings(), multiFieldsBuilder.build(this, context), copyTo); + coerce(context), context.indexSettings(), multiFieldsBuilder.build(this, context), copyTo); } } @@ -161,7 +170,7 @@ public Mapper.Builder parse(String name, Map node, public enum NumberType { HALF_FLOAT("half_float", NumericType.HALF_FLOAT) { @Override - Float parse(Object value, boolean coerce) { + public Float parse(Object value, boolean coerce) { final float result; if (value instanceof Number) { @@ -177,20 +186,20 @@ Float parse(Object value, boolean coerce) { } @Override - Float parse(XContentParser parser, boolean coerce) throws IOException { + public Float parse(XContentParser parser, boolean coerce) throws IOException { float parsed = parser.floatValue(coerce); validateParsed(parsed); return parsed; } @Override - Query termQuery(String field, Object value) { + public Query termQuery(String field, Object value) { float v = parse(value, false); return HalfFloatPoint.newExactQuery(field, v); } @Override - Query termsQuery(String field, List values) { + public Query termsQuery(String field, List values) { float[] v = 
new float[values.size()]; for (int i = 0; i < values.size(); ++i) { v[i] = parse(values.get(i), false); @@ -199,7 +208,7 @@ Query termsQuery(String field, List values) { } @Override - Query rangeQuery(String field, Object lowerTerm, Object upperTerm, + public Query rangeQuery(String field, Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, boolean hasDocValues) { float l = Float.NEGATIVE_INFINITY; @@ -253,7 +262,7 @@ private void validateParsed(float value) { }, FLOAT("float", NumericType.FLOAT) { @Override - Float parse(Object value, boolean coerce) { + public Float parse(Object value, boolean coerce) { final float result; if (value instanceof Number) { @@ -269,20 +278,20 @@ Float parse(Object value, boolean coerce) { } @Override - Float parse(XContentParser parser, boolean coerce) throws IOException { + public Float parse(XContentParser parser, boolean coerce) throws IOException { float parsed = parser.floatValue(coerce); validateParsed(parsed); return parsed; } @Override - Query termQuery(String field, Object value) { + public Query termQuery(String field, Object value) { float v = parse(value, false); return FloatPoint.newExactQuery(field, v); } @Override - Query termsQuery(String field, List values) { + public Query termsQuery(String field, List values) { float[] v = new float[values.size()]; for (int i = 0; i < values.size(); ++i) { v[i] = parse(values.get(i), false); @@ -291,7 +300,7 @@ Query termsQuery(String field, List values) { } @Override - Query rangeQuery(String field, Object lowerTerm, Object upperTerm, + public Query rangeQuery(String field, Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, boolean hasDocValues) { float l = Float.NEGATIVE_INFINITY; @@ -343,27 +352,27 @@ private void validateParsed(float value) { }, DOUBLE("double", NumericType.DOUBLE) { @Override - Double parse(Object value, boolean coerce) { + public Double parse(Object value, boolean coerce) { double parsed = objectToDouble(value); validateParsed(parsed); return parsed; } @Override - Double parse(XContentParser parser, boolean coerce) throws IOException { + public Double parse(XContentParser parser, boolean coerce) throws IOException { double parsed = parser.doubleValue(coerce); validateParsed(parsed); return parsed; } @Override - Query termQuery(String field, Object value) { + public Query termQuery(String field, Object value) { double v = parse(value, false); return DoublePoint.newExactQuery(field, v); } @Override - Query termsQuery(String field, List values) { + public Query termsQuery(String field, List values) { double[] v = new double[values.size()]; for (int i = 0; i < values.size(); ++i) { v[i] = parse(values.get(i), false); @@ -372,7 +381,7 @@ Query termsQuery(String field, List values) { } @Override - Query rangeQuery(String field, Object lowerTerm, Object upperTerm, + public Query rangeQuery(String field, Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, boolean hasDocValues) { double l = Double.NEGATIVE_INFINITY; @@ -424,7 +433,7 @@ private void validateParsed(double value) { }, BYTE("byte", NumericType.BYTE) { @Override - Byte parse(Object value, boolean coerce) { + public Byte parse(Object value, boolean coerce) { double doubleValue = objectToDouble(value); if (doubleValue < Byte.MIN_VALUE || doubleValue > Byte.MAX_VALUE) { @@ -442,7 +451,7 @@ Byte parse(Object value, boolean coerce) { } @Override - Short parse(XContentParser parser, boolean coerce) throws IOException { + public Short 
parse(XContentParser parser, boolean coerce) throws IOException { int value = parser.intValue(coerce); if (value < Byte.MIN_VALUE || value > Byte.MAX_VALUE) { throw new IllegalArgumentException("Value [" + value + "] is out of range for a byte"); @@ -451,17 +460,17 @@ Short parse(XContentParser parser, boolean coerce) throws IOException { } @Override - Query termQuery(String field, Object value) { + public Query termQuery(String field, Object value) { return INTEGER.termQuery(field, value); } @Override - Query termsQuery(String field, List values) { + public Query termsQuery(String field, List values) { return INTEGER.termsQuery(field, values); } @Override - Query rangeQuery(String field, Object lowerTerm, Object upperTerm, + public Query rangeQuery(String field, Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, boolean hasDocValues) { return INTEGER.rangeQuery(field, lowerTerm, upperTerm, includeLower, includeUpper, hasDocValues); @@ -480,7 +489,7 @@ Number valueForSearch(Number value) { }, SHORT("short", NumericType.SHORT) { @Override - Short parse(Object value, boolean coerce) { + public Short parse(Object value, boolean coerce) { double doubleValue = objectToDouble(value); if (doubleValue < Short.MIN_VALUE || doubleValue > Short.MAX_VALUE) { @@ -498,22 +507,22 @@ Short parse(Object value, boolean coerce) { } @Override - Short parse(XContentParser parser, boolean coerce) throws IOException { + public Short parse(XContentParser parser, boolean coerce) throws IOException { return parser.shortValue(coerce); } @Override - Query termQuery(String field, Object value) { + public Query termQuery(String field, Object value) { return INTEGER.termQuery(field, value); } @Override - Query termsQuery(String field, List values) { + public Query termsQuery(String field, List values) { return INTEGER.termsQuery(field, values); } @Override - Query rangeQuery(String field, Object lowerTerm, Object upperTerm, + public Query rangeQuery(String field, Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, boolean hasDocValues) { return INTEGER.rangeQuery(field, lowerTerm, upperTerm, includeLower, includeUpper, hasDocValues); @@ -532,7 +541,7 @@ Number valueForSearch(Number value) { }, INTEGER("integer", NumericType.INT) { @Override - Integer parse(Object value, boolean coerce) { + public Integer parse(Object value, boolean coerce) { double doubleValue = objectToDouble(value); if (doubleValue < Integer.MIN_VALUE || doubleValue > Integer.MAX_VALUE) { @@ -550,12 +559,12 @@ Integer parse(Object value, boolean coerce) { } @Override - Integer parse(XContentParser parser, boolean coerce) throws IOException { + public Integer parse(XContentParser parser, boolean coerce) throws IOException { return parser.intValue(coerce); } @Override - Query termQuery(String field, Object value) { + public Query termQuery(String field, Object value) { if (hasDecimalPart(value)) { return Queries.newMatchNoDocsQuery("Value [" + value + "] has a decimal part"); } @@ -564,7 +573,7 @@ Query termQuery(String field, Object value) { } @Override - Query termsQuery(String field, List values) { + public Query termsQuery(String field, List values) { int[] v = new int[values.size()]; int upTo = 0; @@ -585,7 +594,7 @@ Query termsQuery(String field, List values) { } @Override - Query rangeQuery(String field, Object lowerTerm, Object upperTerm, + public Query rangeQuery(String field, Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, boolean hasDocValues) { int l = 
Integer.MIN_VALUE; @@ -643,9 +652,14 @@ public List createFields(String name, Number value, }, LONG("long", NumericType.LONG) { @Override - Long parse(Object value, boolean coerce) { - double doubleValue = objectToDouble(value); + public Long parse(Object value, boolean coerce) { + if (value instanceof Long) { + return (Long)value; + } + double doubleValue = objectToDouble(value); + // this check does not guarantee that value is inside MIN_VALUE/MAX_VALUE because values up to 9223372036854776832 will + // be equal to Long.MAX_VALUE after conversion to double. More checks ahead. if (doubleValue < Long.MIN_VALUE || doubleValue > Long.MAX_VALUE) { throw new IllegalArgumentException("Value [" + value + "] is out of range for a long"); } @@ -653,27 +667,18 @@ Long parse(Object value, boolean coerce) { throw new IllegalArgumentException("Value [" + value + "] has a decimal part"); } - if (value instanceof Number) { - return ((Number) value).longValue(); - } - // longs need special handling so we don't lose precision while parsing String stringValue = (value instanceof BytesRef) ? ((BytesRef) value).utf8ToString() : value.toString(); - - try { - return Long.parseLong(stringValue); - } catch (NumberFormatException e) { - return (long) Double.parseDouble(stringValue); - } + return Numbers.toLong(stringValue, coerce); } @Override - Long parse(XContentParser parser, boolean coerce) throws IOException { + public Long parse(XContentParser parser, boolean coerce) throws IOException { return parser.longValue(coerce); } @Override - Query termQuery(String field, Object value) { + public Query termQuery(String field, Object value) { if (hasDecimalPart(value)) { return Queries.newMatchNoDocsQuery("Value [" + value + "] has a decimal part"); } @@ -682,7 +687,7 @@ Query termQuery(String field, Object value) { } @Override - Query termsQuery(String field, List values) { + public Query termsQuery(String field, List values) { long[] v = new long[values.size()]; int upTo = 0; @@ -703,7 +708,7 @@ Query termsQuery(String field, List values) { } @Override - Query rangeQuery(String field, Object lowerTerm, Object upperTerm, + public Query rangeQuery(String field, Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, boolean hasDocValues) { long l = Long.MIN_VALUE; @@ -776,13 +781,13 @@ public final String typeName() { final NumericType numericType() { return numericType; } - abstract Query termQuery(String field, Object value); - abstract Query termsQuery(String field, List values); - abstract Query rangeQuery(String field, Object lowerTerm, Object upperTerm, + public abstract Query termQuery(String field, Object value); + public abstract Query termsQuery(String field, List values); + public abstract Query rangeQuery(String field, Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, boolean hasDocValues); - abstract Number parse(XContentParser parser, boolean coerce) throws IOException; - abstract Number parse(Object value, boolean coerce); + public abstract Number parse(XContentParser parser, boolean coerce) throws IOException; + public abstract Number parse(Object value, boolean coerce); public abstract List createFields(String name, Number value, boolean indexed, boolean docValued, boolean stored); Number valueForSearch(Number value) { @@ -836,10 +841,9 @@ private static double objectToDouble(Object value) { return doubleValue; } - } - public static final class NumberFieldType extends MappedFieldType { + public static final class NumberFieldType extends 
SimpleMappedFieldType { NumberType type; @@ -866,6 +870,15 @@ public String typeName() { return type.name; } + @Override + public Query existsQuery(QueryShardContext context) { + if (hasDocValues()) { + return new DocValuesFieldExistsQuery(name()); + } else { + return new TermQuery(new Term(FieldNamesFieldMapper.NAME, name())); + } + } + @Override public Query termQuery(Object value, QueryShardContext context) { failIfNotIndexed(); @@ -924,8 +937,6 @@ public DocValueFormat docValueFormat(String format, DateTimeZone timeZone) { } } - private Boolean includeInAll; - private Explicit ignoreMalformed; private Explicit coerce; @@ -936,14 +947,12 @@ private NumberFieldMapper( MappedFieldType defaultFieldType, Explicit ignoreMalformed, Explicit coerce, - Boolean includeInAll, Settings indexSettings, MultiFields multiFields, CopyTo copyTo) { super(simpleName, fieldType, defaultFieldType, indexSettings, multiFields, copyTo); this.ignoreMalformed = ignoreMalformed; this.coerce = coerce; - this.includeInAll = includeInAll; } @Override @@ -963,7 +972,6 @@ protected NumberFieldMapper clone() { @Override protected void parseCreateField(ParseContext context, List fields) throws IOException { - final boolean includeInAll = context.includeInAll(this.includeInAll, this); XContentParser parser = context.parser(); Object value; @@ -986,11 +994,7 @@ protected void parseCreateField(ParseContext context, List field throw e; } } - if (includeInAll) { - value = parser.textOrNull(); // preserve formatting - } else { - value = numericValue; - } + value = numericValue; } if (value == null) { @@ -1005,21 +1009,19 @@ protected void parseCreateField(ParseContext context, List field numericValue = fieldType().type.parse(value, coerce.value()); } - if (includeInAll) { - context.allEntries().addText(fieldType().name(), value.toString(), fieldType().boost()); - } - boolean indexed = fieldType().indexOptions() != IndexOptions.NONE; boolean docValued = fieldType().hasDocValues(); boolean stored = fieldType().stored(); fields.addAll(fieldType().type.createFields(fieldType().name(), numericValue, indexed, docValued, stored)); + if (docValued == false && (stored || indexed)) { + createFieldNamesField(context, fields); + } } @Override protected void doMerge(Mapper mergeWith, boolean updateAllTypes) { super.doMerge(mergeWith, updateAllTypes); NumberFieldMapper other = (NumberFieldMapper) mergeWith; - this.includeInAll = other.includeInAll; if (other.ignoreMalformed.explicit()) { this.ignoreMalformed = other.ignoreMalformed; } @@ -1042,11 +1044,5 @@ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults, if (includeDefaults || fieldType().nullValue() != null) { builder.field("null_value", fieldType().nullValue()); } - - if (includeInAll != null) { - builder.field("include_in_all", includeInAll); - } else if (includeDefaults) { - builder.field("include_in_all", false); - } } } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/ObjectMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/ObjectMapper.java index 1d3c4791b65dd..d83ce173d6896 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/ObjectMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/ObjectMapper.java @@ -24,12 +24,14 @@ import org.apache.lucene.search.TermQuery; import org.apache.lucene.util.BytesRef; import org.elasticsearch.ElasticsearchParseException; -import org.elasticsearch.Version; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.collect.CopyOnWriteHashMap; +import 
org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.ESLoggerFactory; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.index.fielddata.ScriptDocValues; import java.io.IOException; import java.util.ArrayList; @@ -43,6 +45,7 @@ import java.util.Map; public class ObjectMapper extends Mapper implements Cloneable { + private static final DeprecationLogger deprecationLogger = new DeprecationLogger(ESLoggerFactory.getLogger(ObjectMapper.class)); public static final String CONTENT_TYPE = "object"; public static final String NESTED_CONTENT_TYPE = "nested"; @@ -100,8 +103,6 @@ public static class Builder extends M protected Dynamic dynamic = Defaults.DYNAMIC; - protected Boolean includeInAll; - protected final List mappersBuilders = new ArrayList<>(); public Builder(String name) { @@ -124,11 +125,6 @@ public T nested(Nested nested) { return builder; } - public T includeInAll(boolean includeInAll) { - this.includeInAll = includeInAll; - return builder; - } - public T add(Mapper.Builder builder) { mappersBuilders.add(builder); return this.builder; @@ -150,14 +146,14 @@ public Y build(BuilderContext context) { context.path().remove(); ObjectMapper objectMapper = createMapper(name, context.path().pathAsText(name), enabled, nested, dynamic, - includeInAll, mappers, context.indexSettings()); + mappers, context.indexSettings()); return (Y) objectMapper; } protected ObjectMapper createMapper(String name, String fullPath, boolean enabled, Nested nested, Dynamic dynamic, - Boolean includeInAll, Map mappers, @Nullable Settings settings) { - return new ObjectMapper(name, fullPath, enabled, nested, dynamic, includeInAll, mappers, settings); + Map mappers, @Nullable Settings settings) { + return new ObjectMapper(name, fullPath, enabled, nested, dynamic, mappers, settings); } } @@ -200,7 +196,7 @@ protected static boolean parseObjectOrDocumentTypeProperties(String fieldName, O } return true; } else if (fieldName.equals("include_in_all")) { - builder.includeInAll(TypeParsers.nodeBooleanValue(fieldName, "include_in_all", fieldNode, parserContext)); + deprecationLogger.deprecated("[include_in_all] is deprecated, the _all field have been removed in this version"); return true; } return false; @@ -313,25 +309,19 @@ protected static void parseProperties(ObjectMapper.Builder objBuilder, Map mappers; ObjectMapper(String name, String fullPath, boolean enabled, Nested nested, Dynamic dynamic, - Boolean includeInAll, Map mappers, Settings settings) { + Map mappers, Settings settings) { super(name); assert settings != null; - Version indexCreatedVersion = Version.indexCreated(settings); - if (indexCreatedVersion.onOrAfter(Version.V_5_0_0_beta1)) { - if (name.isEmpty()) { - throw new IllegalArgumentException("name cannot be empty string"); - } + if (name.isEmpty()) { + throw new IllegalArgumentException("name cannot be empty string"); } this.fullPath = fullPath; this.enabled = enabled; this.nested = nested; this.dynamic = dynamic; - this.includeInAll = includeInAll; if (mappers == null) { this.mappers = new CopyOnWriteHashMap<>(); } else { @@ -381,10 +371,6 @@ public Nested nested() { return this.nested; } - public Boolean includeInAll() { - return includeInAll; - } - public Query nestedTypeFilter() { return this.nestedTypeFilter; } @@ -410,6 +396,35 @@ public final Dynamic dynamic() { return dynamic; } + /** + * Returns the parent {@link 
ObjectMapper} instance of the specified object mapper or null if there + * isn't any. + */ + public ObjectMapper getParentObjectMapper(MapperService mapperService) { + int indexOfLastDot = fullPath().lastIndexOf('.'); + if (indexOfLastDot != -1) { + String parentNestObjectPath = fullPath().substring(0, indexOfLastDot); + return mapperService.getObjectMapper(parentNestObjectPath); + } else { + return null; + } + } + + /** + * Returns whether all parent objects fields are nested too. + */ + public boolean parentObjectMapperAreNested(MapperService mapperService) { + for (ObjectMapper parent = getParentObjectMapper(mapperService); + parent != null; + parent = parent.getParentObjectMapper(mapperService)) { + + if (parent.nested().isNested() == false) { + return false; + } + } + return true; + } + @Override public ObjectMapper merge(Mapper mergeWith, boolean updateAllTypes) { if (!(mergeWith instanceof ObjectMapper)) { @@ -432,7 +447,6 @@ protected void doMerge(final ObjectMapper mergeWith, boolean updateAllTypes) { } } - this.includeInAll = mergeWith.includeInAll; if (mergeWith.dynamic != null) { this.dynamic = mergeWith.dynamic; } @@ -498,9 +512,6 @@ public void toXContent(XContentBuilder builder, Params params, ToXContent custom if (enabled != Defaults.ENABLED) { builder.field("enabled", enabled); } - if (includeInAll != null) { - builder.field("include_in_all", includeInAll); - } if (custom != null) { custom.toXContent(builder, params); diff --git a/core/src/main/java/org/elasticsearch/index/mapper/ParentFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/ParentFieldMapper.java index ed57e060e4723..73109a3ecd8f9 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/ParentFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/ParentFieldMapper.java @@ -25,6 +25,7 @@ import org.apache.lucene.index.Term; import org.apache.lucene.search.BooleanClause; import org.apache.lucene.search.BooleanQuery; +import org.apache.lucene.search.DocValuesFieldExistsQuery; import org.apache.lucene.search.DocValuesTermsQuery; import org.apache.lucene.search.Query; import org.apache.lucene.search.TermQuery; @@ -33,7 +34,6 @@ import org.elasticsearch.common.ParseField; import org.elasticsearch.common.lucene.BytesRefs; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.settings.loader.SettingsLoader; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.support.XContentMapValues; import org.elasticsearch.index.fielddata.IndexFieldData; @@ -118,7 +118,7 @@ public MetadataFieldMapper.Builder parse(String name, Map node, iterator.remove(); } else if (FIELDDATA.match(fieldName)) { // for bw compat only - Map fieldDataSettings = SettingsLoader.Helper.loadNestedFromMap(nodeMapValue(fieldNode, "fielddata")); + Map fieldDataSettings = nodeMapValue(fieldNode, "fielddata"); if (fieldDataSettings.containsKey("loading")) { builder.eagerGlobalOrdinals("eager_global_ordinals".equals(fieldDataSettings.get("loading"))); } @@ -179,6 +179,11 @@ public String typeName() { return CONTENT_TYPE; } + @Override + public Query existsQuery(QueryShardContext context) { + return new DocValuesFieldExistsQuery(name()); + } + @Override public Query termQuery(Object value, @Nullable QueryShardContext context) { return termsQuery(Collections.singletonList(value), context); diff --git a/core/src/main/java/org/elasticsearch/index/mapper/ParseContext.java b/core/src/main/java/org/elasticsearch/index/mapper/ParseContext.java index 
64c4932e47017..c6ed6b315a015 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/ParseContext.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/ParseContext.java @@ -22,11 +22,9 @@ import com.carrotsearch.hppc.ObjectObjectHashMap; import com.carrotsearch.hppc.ObjectObjectMap; import org.apache.lucene.document.Field; -import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; import org.apache.lucene.util.BytesRef; import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.lucene.all.AllEntries; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentParser; @@ -263,11 +261,6 @@ public void seqID(SeqNoFieldMapper.SequenceIDFields seqID) { in.seqID(seqID); } - @Override - public AllEntries allEntries() { - return in.allEntries(); - } - @Override public boolean externalValueSet() { return in.externalValueSet(); @@ -312,7 +305,6 @@ public static class InternalParseContext extends ParseContext { private SeqNoFieldMapper.SequenceIDFields seqID; - private final AllEntries allEntries; private final List dynamicMappers; @@ -328,7 +320,6 @@ public InternalParseContext(@Nullable Settings indexSettings, DocumentMapperPars this.documents.add(document); this.version = null; this.sourceToParse = source; - this.allEntries = new AllEntries(); this.dynamicMappers = new ArrayList<>(); } @@ -413,11 +404,6 @@ public void seqID(SeqNoFieldMapper.SequenceIDFields seqID) { this.seqID = seqID; } - @Override - public AllEntries allEntries() { - return this.allEntries; - } - @Override public void addDynamicMapper(Mapper mapper) { dynamicMappers.add(mapper); @@ -431,22 +417,6 @@ public List getDynamicMappers() { public abstract DocumentMapperParser docMapperParser(); - /** Return a view of this {@link ParseContext} that changes the return - * value of {@link #getIncludeInAllDefault()}. */ - public final ParseContext setIncludeInAllDefault(boolean includeInAll) { - return new FilterParseContext(this) { - @Override - public Boolean getIncludeInAllDefault() { - return includeInAll; - } - }; - } - - /** Whether field values should be added to the _all field by default. */ - public Boolean getIncludeInAllDefault() { - return null; - } - /** * Return a new context that will be within a copy-to operation. */ @@ -543,37 +513,6 @@ public boolean isWithinMultiFields() { public abstract void seqID(SeqNoFieldMapper.SequenceIDFields seqID); - public final boolean includeInAll(Boolean includeInAll, FieldMapper mapper) { - return includeInAll(includeInAll, mapper.fieldType().indexOptions() != IndexOptions.NONE); - } - - /** - * Is all included or not. Will always disable it if {@link org.elasticsearch.index.mapper.AllFieldMapper#enabled()} - * is false. If its enabled, then will return true only if the specific flag is null or - * its actual value (so, if not set, defaults to "true") and the field is indexed. - */ - private boolean includeInAll(Boolean includeInAll, boolean indexed) { - if (isWithinCopyTo()) { - return false; - } - if (isWithinMultiFields()) { - return false; - } - if (!docMapper().allFieldMapper().enabled()) { - return false; - } - if (includeInAll == null) { - includeInAll = getIncludeInAllDefault(); - } - // not explicitly set - if (includeInAll == null) { - return indexed; - } - return includeInAll; - } - - public abstract AllEntries allEntries(); - /** * Return a new context that will have the external value set. 
*/ diff --git a/core/src/main/java/org/elasticsearch/index/mapper/ParsedDocument.java b/core/src/main/java/org/elasticsearch/index/mapper/ParsedDocument.java index 14b3291f441a9..11804c2e88e1d 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/ParsedDocument.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/ParsedDocument.java @@ -20,7 +20,6 @@ package org.elasticsearch.index.mapper; import org.apache.lucene.document.Field; -import org.apache.lucene.util.BytesRef; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.mapper.ParseContext.Document; @@ -35,7 +34,6 @@ public class ParsedDocument { private final Field version; private final String id, type; - private final BytesRef uid; private final SeqNoFieldMapper.SequenceIDFields seqID; private final String routing; @@ -62,7 +60,6 @@ public ParsedDocument(Field version, this.seqID = seqID; this.id = id; this.type = type; - this.uid = Uid.createUidAsBytes(type, id); this.routing = routing; this.documents = documents; this.source = source; @@ -140,9 +137,7 @@ public void addDynamicMappingsUpdate(Mapping update) { @Override public String toString() { - StringBuilder sb = new StringBuilder(); - sb.append("Document ").append("uid[").append(uid).append("] doc [").append(documents).append("]"); - return sb.toString(); + return "Document uid[" + Uid.createUidAsBytes(type, id) + "] doc [" + documents + ']'; } } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/RootObjectMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/RootObjectMapper.java index c51f29072c188..42341bfb96b2d 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/RootObjectMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/RootObjectMapper.java @@ -74,11 +74,43 @@ public Builder dynamicTemplates(Collection templates) { return this; } + @Override + public RootObjectMapper build(BuilderContext context) { + fixRedundantIncludes(this, true); + return super.build(context); + } + + /** + * Removes redundant root includes in {@link ObjectMapper.Nested} trees to avoid duplicate + * fields on the root mapper when {@code isIncludeInRoot} is {@code true} for a node that is + * itself included into a parent node, for which either {@code isIncludeInRoot} is + * {@code true} or which is transitively included in root by a chain of nodes with + * {@code isIncludeInParent} returning {@code true}. + * @param omb Builder whose children to check. 
+ * @param parentIncluded True iff node is a child of root or a node that is included in + * root + */ + private static void fixRedundantIncludes(ObjectMapper.Builder omb, boolean parentIncluded) { + for (Object mapper : omb.mappersBuilders) { + if (mapper instanceof ObjectMapper.Builder) { + ObjectMapper.Builder child = (ObjectMapper.Builder) mapper; + Nested nested = child.nested; + boolean isNested = nested.isNested(); + boolean includeInRootViaParent = parentIncluded && isNested && nested.isIncludeInParent(); + boolean includedInRoot = isNested && nested.isIncludeInRoot(); + if (includeInRootViaParent && includedInRoot) { + child.nested = Nested.newNested(true, false); + } + fixRedundantIncludes(child, includeInRootViaParent || includedInRoot); + } + } + } + @Override protected ObjectMapper createMapper(String name, String fullPath, boolean enabled, Nested nested, Dynamic dynamic, - Boolean includeInAll, Map mappers, @Nullable Settings settings) { + Map mappers, @Nullable Settings settings) { assert !nested.isNested(); - return new RootObjectMapper(name, enabled, dynamic, includeInAll, mappers, + return new RootObjectMapper(name, enabled, dynamic, mappers, dynamicDateTimeFormatters, dynamicTemplates, dateDetection, numericDetection, settings); @@ -165,10 +197,10 @@ protected boolean processField(RootObjectMapper.Builder builder, String fieldNam private Explicit numericDetection; private Explicit dynamicTemplates; - RootObjectMapper(String name, boolean enabled, Dynamic dynamic, Boolean includeInAll, Map mappers, + RootObjectMapper(String name, boolean enabled, Dynamic dynamic, Map mappers, Explicit dynamicDateTimeFormatters, Explicit dynamicTemplates, Explicit dateDetection, Explicit numericDetection, Settings settings) { - super(name, name, enabled, Nested.NO, dynamic, includeInAll, mappers, settings); + super(name, name, enabled, Nested.NO, dynamic, mappers, settings); this.dynamicTemplates = dynamicTemplates; this.dynamicDateTimeFormatters = dynamicDateTimeFormatters; this.dateDetection = dateDetection; diff --git a/core/src/main/java/org/elasticsearch/index/mapper/RoutingFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/RoutingFieldMapper.java index 88679d910b19a..a4b009f9f1fa3 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/RoutingFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/RoutingFieldMapper.java @@ -22,9 +22,13 @@ import org.apache.lucene.document.Field; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; +import org.apache.lucene.index.Term; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermQuery; import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.index.query.QueryShardContext; import java.io.IOException; import java.util.Collections; @@ -121,6 +125,11 @@ public MappedFieldType clone() { public String typeName() { return CONTENT_TYPE; } + + @Override + public Query existsQuery(QueryShardContext context) { + return new TermQuery(new Term(FieldNamesFieldMapper.NAME, name())); + } } private boolean required; @@ -165,6 +174,7 @@ protected void parseCreateField(ParseContext context, List field if (routing != null) { if (fieldType().indexOptions() != IndexOptions.NONE || fieldType().stored()) { fields.add(new Field(fieldType().name(), routing, fieldType())); + createFieldNamesField(context, fields); } } } diff --git 
a/core/src/main/java/org/elasticsearch/index/mapper/SeqNoFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/SeqNoFieldMapper.java index ec5b718a94b97..7d74f9e52aa4a 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/SeqNoFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/SeqNoFieldMapper.java @@ -24,6 +24,7 @@ import org.apache.lucene.document.NumericDocValuesField; import org.apache.lucene.index.DocValuesType; import org.apache.lucene.index.IndexableField; +import org.apache.lucene.search.DocValuesFieldExistsQuery; import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.Query; import org.apache.lucene.util.BytesRef; @@ -35,7 +36,7 @@ import org.elasticsearch.index.fielddata.plain.DocValuesIndexFieldData; import org.elasticsearch.index.mapper.ParseContext.Document; import org.elasticsearch.index.query.QueryShardContext; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.seqno.SequenceNumbers; import java.io.IOException; import java.util.List; @@ -78,8 +79,8 @@ public SequenceIDFields(Field seqNo, Field seqNoDocValue, Field primaryTerm) { } public static SequenceIDFields emptySeqID() { - return new SequenceIDFields(new LongPoint(NAME, SequenceNumbersService.UNASSIGNED_SEQ_NO), - new NumericDocValuesField(NAME, SequenceNumbersService.UNASSIGNED_SEQ_NO), + return new SequenceIDFields(new LongPoint(NAME, SequenceNumbers.UNASSIGNED_SEQ_NO), + new NumericDocValuesField(NAME, SequenceNumbers.UNASSIGNED_SEQ_NO), new NumericDocValuesField(PRIMARY_TERM_NAME, 0)); } } @@ -126,7 +127,7 @@ public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext c } } - static final class SeqNoFieldType extends MappedFieldType { + static final class SeqNoFieldType extends SimpleMappedFieldType { SeqNoFieldType() { } @@ -162,6 +163,11 @@ private long parse(Object value) { return Long.parseLong(value.toString()); } + @Override + public Query existsQuery(QueryShardContext context) { + return new DocValuesFieldExistsQuery(name()); + } + @Override public Query termQuery(Object value, @Nullable QueryShardContext context) { long v = parse(value); diff --git a/core/src/main/java/org/elasticsearch/index/mapper/SimpleMappedFieldType.java b/core/src/main/java/org/elasticsearch/index/mapper/SimpleMappedFieldType.java new file mode 100644 index 0000000000000..b91be82cd6b26 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/mapper/SimpleMappedFieldType.java @@ -0,0 +1,63 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.index.mapper; + +import org.apache.lucene.search.Query; +import org.elasticsearch.common.geo.ShapeRelation; +import org.elasticsearch.common.joda.DateMathParser; +import org.elasticsearch.index.query.QueryShardContext; +import org.joda.time.DateTimeZone; + +/** + * {@link MappedFieldType} base impl for field types that are neither dates nor ranges. + */ +public abstract class SimpleMappedFieldType extends MappedFieldType { + + protected SimpleMappedFieldType() { + super(); + } + + protected SimpleMappedFieldType(MappedFieldType ref) { + super(ref); + } + + @Override + public final Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, + ShapeRelation relation, DateTimeZone timeZone, DateMathParser parser, QueryShardContext context) { + if (relation == ShapeRelation.DISJOINT) { + throw new IllegalArgumentException("Field [" + name() + "] of type [" + typeName() + + "] does not support DISJOINT ranges"); + } + // We do not fail on non-null time zones and date parsers + // The reasoning is that on query parsers, you might want to set a time zone or format for date fields + // but then the API has no way to know which fields are dates and which fields are not dates + return rangeQuery(lowerTerm, upperTerm, includeLower, includeUpper, context); + } + + /** + * Same as {@link #rangeQuery(Object, Object, boolean, boolean, ShapeRelation, DateTimeZone, DateMathParser, QueryShardContext)} + * but without the trouble of relations or date-specific options. + */ + protected Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, + QueryShardContext context) { + throw new IllegalArgumentException("Field [" + name() + "] of type [" + typeName() + "] does not support range queries"); + } + +} diff --git a/core/src/main/java/org/elasticsearch/index/mapper/SourceFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/SourceFieldMapper.java index 5e1b6843940f0..47d5e64438e57 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/SourceFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/SourceFieldMapper.java @@ -24,7 +24,6 @@ import org.apache.lucene.index.IndexableField; import org.apache.lucene.search.Query; import org.apache.lucene.util.BytesRef; -import org.elasticsearch.Version; import org.elasticsearch.common.Strings; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.collect.Tuple; @@ -116,9 +115,6 @@ public MetadataFieldMapper.Builder parse(String name, Map n if (fieldName.equals("enabled")) { builder.enabled(TypeParsers.nodeBooleanValue(name, "enabled", fieldNode, parserContext)); iterator.remove(); - } else if ("format".equals(fieldName) && parserContext.indexVersionCreated().before(Version.V_5_0_0_alpha1)) { - // ignore on old indices, reject on and after 5.0 - iterator.remove(); } else if (fieldName.equals("includes")) { List values = (List) fieldNode; String[] includes = new String[values.size()]; @@ -165,6 +161,11 @@ public String typeName() { return CONTENT_TYPE; } + @Override + public Query existsQuery(QueryShardContext context) { + throw new QueryShardException(context, "The _source field is not searchable"); + } + @Override public Query termQuery(Object value, QueryShardContext context) { throw new QueryShardException(context, "The _source field is not searchable"); diff --git a/core/src/main/java/org/elasticsearch/index/mapper/TermBasedFieldType.java 
b/core/src/main/java/org/elasticsearch/index/mapper/TermBasedFieldType.java index 89b09cc068e63..9638fecb982e4 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/TermBasedFieldType.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/TermBasedFieldType.java @@ -27,13 +27,12 @@ import org.apache.lucene.search.TermInSetQuery; import org.apache.lucene.search.TermQuery; import org.apache.lucene.util.BytesRef; -import org.elasticsearch.Version; import org.elasticsearch.common.lucene.BytesRefs; import org.elasticsearch.index.query.QueryShardContext; /** Base {@link MappedFieldType} implementation for a field that is indexed * with the inverted index. */ -abstract class TermBasedFieldType extends MappedFieldType { +abstract class TermBasedFieldType extends SimpleMappedFieldType { TermBasedFieldType() {} @@ -51,12 +50,11 @@ protected BytesRef indexedValueForSearch(Object value) { @Override public Query termQuery(Object value, QueryShardContext context) { failIfNotIndexed(); - TermQuery query = new TermQuery(new Term(name(), indexedValueForSearch(value))); - if (boost() == 1f || - (context != null && context.indexVersionCreated().before(Version.V_5_0_0_alpha1))) { - return query; + Query query = new TermQuery(new Term(name(), indexedValueForSearch(value))); + if (boost() != 1f) { + query = new BoostQuery(query, boost()); } - return new BoostQuery(query, boost()); + return query; } @Override diff --git a/core/src/main/java/org/elasticsearch/index/mapper/TextFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/TextFieldMapper.java index 2ca86f48c40c4..ae99f743fe57f 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/TextFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/TextFieldMapper.java @@ -22,13 +22,17 @@ import org.apache.lucene.document.Field; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; +import org.apache.lucene.index.Term; +import org.apache.lucene.search.NormsFieldExistsQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermQuery; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.support.XContentMapValues; import org.elasticsearch.index.analysis.NamedAnalyzer; import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.plain.PagedBytesIndexFieldData; +import org.elasticsearch.index.query.QueryShardContext; import java.io.IOException; import java.util.Iterator; @@ -122,7 +126,7 @@ public TextFieldMapper build(BuilderContext context) { } setupFieldType(context); return new TextFieldMapper( - name, fieldType, defaultFieldType, positionIncrementGap, includeInAll, + name, fieldType, defaultFieldType, positionIncrementGap, context.indexSettings(), multiFieldsBuilder.build(this, context), copyTo); } } @@ -274,6 +278,15 @@ public String typeName() { return CONTENT_TYPE; } + @Override + public Query existsQuery(QueryShardContext context) { + if (omitNorms()) { + return new TermQuery(new Term(FieldNamesFieldMapper.NAME, name())); + } else { + return new NormsFieldExistsQuery(name()); + } + } + @Override public Query nullValueQuery() { if (nullValue() == null) { @@ -293,11 +306,10 @@ public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) { } } - private Boolean includeInAll; private int positionIncrementGap; protected TextFieldMapper(String simpleName, MappedFieldType fieldType, MappedFieldType 
defaultFieldType, - int positionIncrementGap, Boolean includeInAll, + int positionIncrementGap, Settings indexSettings, MultiFields multiFields, CopyTo copyTo) { super(simpleName, fieldType, defaultFieldType, indexSettings, multiFields, copyTo); assert fieldType.tokenized(); @@ -306,7 +318,6 @@ protected TextFieldMapper(String simpleName, MappedFieldType fieldType, MappedFi throw new IllegalArgumentException("Cannot enable fielddata on a [text] field that is not indexed: [" + name() + "]"); } this.positionIncrementGap = positionIncrementGap; - this.includeInAll = includeInAll; } @Override @@ -314,11 +325,6 @@ protected TextFieldMapper clone() { return (TextFieldMapper) super.clone(); } - // pkg-private for testing - Boolean includeInAll() { - return includeInAll; - } - public int getPositionIncrementGap() { return this.positionIncrementGap; } @@ -336,13 +342,12 @@ protected void parseCreateField(ParseContext context, List field return; } - if (context.includeInAll(includeInAll, this)) { - context.allEntries().addText(fieldType().name(), value, fieldType().boost()); - } - if (fieldType().indexOptions() != IndexOptions.NONE || fieldType().stored()) { Field field = new Field(fieldType().name(), value, fieldType()); fields.add(field); + if (fieldType().omitNorms()) { + createFieldNamesField(context, fields); + } } } @@ -354,7 +359,6 @@ protected String contentType() { @Override protected void doMerge(Mapper mergeWith, boolean updateAllTypes) { super.doMerge(mergeWith, updateAllTypes); - this.includeInAll = ((TextFieldMapper) mergeWith).includeInAll; } @Override @@ -367,12 +371,6 @@ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults, super.doXContentBody(builder, includeDefaults, params); doXContentAnalyzers(builder, includeDefaults); - if (includeInAll != null) { - builder.field("include_in_all", includeInAll); - } else if (includeDefaults) { - builder.field("include_in_all", true); - } - if (includeDefaults || positionIncrementGap != POSITION_INCREMENT_GAP_USE_ANALYZER) { builder.field("position_increment_gap", positionIncrementGap); } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/TypeFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/TypeFieldMapper.java index 09ef33f0795ee..d0e30e77c9e16 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/TypeFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/TypeFieldMapper.java @@ -132,6 +132,11 @@ public boolean isSearchable() { return true; } + @Override + public Query existsQuery(QueryShardContext context) { + return new MatchAllDocsQuery(); + } + @Override public Query termQuery(Object value, QueryShardContext context) { return termsQuery(Arrays.asList(value), context); diff --git a/core/src/main/java/org/elasticsearch/index/mapper/TypeParsers.java b/core/src/main/java/org/elasticsearch/index/mapper/TypeParsers.java index 18928246772f6..37fd1203622b1 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/TypeParsers.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/TypeParsers.java @@ -35,11 +35,9 @@ import java.util.Iterator; import java.util.List; import java.util.Map; -import java.util.Map.Entry; import static org.elasticsearch.common.xcontent.support.XContentMapValues.isArray; import static org.elasticsearch.common.xcontent.support.XContentMapValues.nodeFloatValue; -import static org.elasticsearch.common.xcontent.support.XContentMapValues.nodeMapValue; import static 
org.elasticsearch.common.xcontent.support.XContentMapValues.nodeStringValue; public class TypeParsers { @@ -158,37 +156,9 @@ private static void parseAnalyzersAndTermVectors(FieldMapper.Builder builder, St } } - public static boolean parseNorms(FieldMapper.Builder builder, String fieldName, String propName, Object propNode, + public static void parseNorms(FieldMapper.Builder builder, String fieldName, Object propNode, Mapper.TypeParser.ParserContext parserContext) { - if (propName.equals("norms")) { - if (propNode instanceof Map) { - final Map properties = nodeMapValue(propNode, "norms"); - for (Iterator> propsIterator = properties.entrySet().iterator(); propsIterator.hasNext(); ) { - Entry entry2 = propsIterator.next(); - final String propName2 = entry2.getKey(); - final Object propNode2 = entry2.getValue(); - if (propName2.equals("enabled")) { - builder.omitNorms(nodeBooleanValue(fieldName, "enabled", propNode2, parserContext) == false); - propsIterator.remove(); - } else if (propName2.equals("loading")) { - // ignore for bw compat - propsIterator.remove(); - } - } - DocumentMapperParser.checkNoRemainingFields(propName, properties, parserContext.indexVersionCreated()); - DEPRECATION_LOGGER.deprecated("The [norms{enabled:true/false}] way of specifying norms is deprecated, please use " + - "[norms:true/false] instead"); - } else { - builder.omitNorms(nodeBooleanValue(fieldName,"norms", propNode, parserContext) == false); - } - return true; - } else if (propName.equals("omit_norms")) { - builder.omitNorms(nodeBooleanValue(fieldName,"norms", propNode, parserContext)); - DEPRECATION_LOGGER.deprecated("[omit_norms] is deprecated, please use [norms] instead with the opposite boolean value"); - return true; - } else { - return false; - } + builder.omitNorms(nodeBooleanValue(fieldName, "norms", propNode, parserContext) == false); } /** @@ -203,7 +173,8 @@ public static void parseTextField(FieldMapper.Builder builder, String name, Map< Map.Entry entry = iterator.next(); final String propName = entry.getKey(); final Object propNode = entry.getValue(); - if (parseNorms(builder, name, propName, propNode, parserContext)) { + if ("norms".equals(propName)) { + parseNorms(builder, name, propNode, parserContext); iterator.remove(); } } @@ -237,33 +208,13 @@ public static void parseField(FieldMapper.Builder builder, String name, Map= '0' && c <= '9') || + (c >= '0' && c <= '9') || (c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z') || c == '-' || c == '_'; @@ -244,16 +244,16 @@ public static BytesRef encodeId(String id) { } } - private static String decodeNumericId(byte[] idBytes) { - assert Byte.toUnsignedInt(idBytes[0]) == NUMERIC; - int length = (idBytes.length - 1) * 2; + private static String decodeNumericId(byte[] idBytes, int offset, int len) { + assert Byte.toUnsignedInt(idBytes[offset]) == NUMERIC; + int length = (len - 1) * 2; char[] chars = new char[length]; - for (int i = 1; i < idBytes.length; ++i) { - final int b = Byte.toUnsignedInt(idBytes[i]); + for (int i = 1; i < len; ++i) { + final int b = Byte.toUnsignedInt(idBytes[offset + i]); final int b1 = (b >>> 4); final int b2 = b & 0x0f; chars[(i - 1) * 2] = (char) (b1 + '0'); - if (i == idBytes.length - 1 && b2 == 0x0f) { + if (i == len - 1 && b2 == 0x0f) { length--; break; } @@ -262,15 +262,17 @@ private static String decodeNumericId(byte[] idBytes) { return new String(chars, 0, length); } - private static String decodeUtf8Id(byte[] idBytes) { - assert Byte.toUnsignedInt(idBytes[0]) == UTF8; - return new BytesRef(idBytes, 1, idBytes.length - 
1).utf8ToString(); + private static String decodeUtf8Id(byte[] idBytes, int offset, int length) { + assert Byte.toUnsignedInt(idBytes[offset]) == UTF8; + return new BytesRef(idBytes, offset + 1, length - 1).utf8ToString(); } - private static String decodeBase64Id(byte[] idBytes) { - assert Byte.toUnsignedInt(idBytes[0]) <= BASE64_ESCAPE; - if (Byte.toUnsignedInt(idBytes[0]) == BASE64_ESCAPE) { - idBytes = Arrays.copyOfRange(idBytes, 1, idBytes.length); + private static String decodeBase64Id(byte[] idBytes, int offset, int length) { + assert Byte.toUnsignedInt(idBytes[offset]) <= BASE64_ESCAPE; + if (Byte.toUnsignedInt(idBytes[offset]) == BASE64_ESCAPE) { + idBytes = Arrays.copyOfRange(idBytes, offset + 1, offset + length); + } else if ((idBytes.length == length && offset == 0) == false) { // no need to copy if it's not a slice + idBytes = Arrays.copyOfRange(idBytes, offset, offset + length); } return Base64.getUrlEncoder().withoutPadding().encodeToString(idBytes); } @@ -278,17 +280,23 @@ private static String decodeBase64Id(byte[] idBytes) { /** Decode an indexed id back to its original form. * @see #encodeId */ public static String decodeId(byte[] idBytes) { - if (idBytes.length == 0) { + return decodeId(idBytes, 0, idBytes.length); + } + + /** Decode an indexed id back to its original form. + * @see #encodeId */ + public static String decodeId(byte[] idBytes, int offset, int length) { + if (length == 0) { throw new IllegalArgumentException("Ids can't be empty"); } - final int magicChar = Byte.toUnsignedInt(idBytes[0]); + final int magicChar = Byte.toUnsignedInt(idBytes[offset]); switch (magicChar) { - case NUMERIC: - return decodeNumericId(idBytes); - case UTF8: - return decodeUtf8Id(idBytes); - default: - return decodeBase64Id(idBytes); + case NUMERIC: + return decodeNumericId(idBytes, offset, length); + case UTF8: + return decodeUtf8Id(idBytes, offset, length); + default: + return decodeBase64Id(idBytes, offset, length); } } } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/UidFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/UidFieldMapper.java index f981fc94b2d01..95dc40bca637a 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/UidFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/UidFieldMapper.java @@ -22,6 +22,7 @@ import org.apache.lucene.document.Field; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; +import org.apache.lucene.search.MatchAllDocsQuery; import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.Query; import org.apache.lucene.search.TermInSetQuery; @@ -133,6 +134,11 @@ public IndexFieldData build(IndexSettings indexSettings, MappedFieldType fiel } } + @Override + public Query existsQuery(QueryShardContext context) { + return new MatchAllDocsQuery(); + } + @Override public Query termQuery(Object value, @Nullable QueryShardContext context) { return termsQuery(Arrays.asList(value), context); diff --git a/core/src/main/java/org/elasticsearch/index/mapper/VersionFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/VersionFieldMapper.java index 1d2e997acba1d..90ea85024c119 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/VersionFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/VersionFieldMapper.java @@ -24,6 +24,7 @@ import org.apache.lucene.index.DocValuesType; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; +import 
org.apache.lucene.search.DocValuesFieldExistsQuery; import org.apache.lucene.search.Query; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -87,6 +88,11 @@ public String typeName() { return CONTENT_TYPE; } + @Override + public Query existsQuery(QueryShardContext context) { + return new DocValuesFieldExistsQuery(name()); + } + @Override public Query termQuery(Object value, QueryShardContext context) { throw new QueryShardException(context, "The _version field is not searchable"); diff --git a/core/src/main/java/org/elasticsearch/index/query/ConstantScoreQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/ConstantScoreQueryBuilder.java index f1e3f7aea7780..df4e31c5955a9 100644 --- a/core/src/main/java/org/elasticsearch/index/query/ConstantScoreQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/ConstantScoreQueryBuilder.java @@ -39,7 +39,7 @@ public class ConstantScoreQueryBuilder extends AbstractQueryBuilder { public static final String NAME = "constant_score"; - private static final ParseField INNER_QUERY_FIELD = new ParseField("filter", "query"); + private static final ParseField INNER_QUERY_FIELD = new ParseField("filter"); private final QueryBuilder filterBuilder; diff --git a/core/src/main/java/org/elasticsearch/index/query/ExistsQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/ExistsQueryBuilder.java index 799998e2c9fde..97378e01236fb 100644 --- a/core/src/main/java/org/elasticsearch/index/query/ExistsQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/ExistsQueryBuilder.java @@ -19,10 +19,14 @@ package org.elasticsearch.index.query; +import org.apache.lucene.index.Term; import org.apache.lucene.search.BooleanClause; +import org.apache.lucene.search.BooleanClause.Occur; import org.apache.lucene.search.BooleanQuery; import org.apache.lucene.search.ConstantScoreQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermQuery; +import org.elasticsearch.Version; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.Strings; @@ -32,6 +36,7 @@ import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.mapper.FieldNamesFieldMapper; +import org.elasticsearch.index.mapper.MappedFieldType; import java.io.IOException; import java.util.Collection; @@ -126,8 +131,9 @@ protected Query doToQuery(QueryShardContext context) throws IOException { } public static Query newFilter(QueryShardContext context, String fieldPattern) { - final FieldNamesFieldMapper.FieldNamesFieldType fieldNamesFieldType = - (FieldNamesFieldMapper.FieldNamesFieldType) context.getMapperService().fullName(FieldNamesFieldMapper.NAME); + + final FieldNamesFieldMapper.FieldNamesFieldType fieldNamesFieldType = (FieldNamesFieldMapper.FieldNamesFieldType) context + .getMapperService().fullName(FieldNamesFieldMapper.NAME); if (fieldNamesFieldType == null) { // can only happen when no types exist, so no docs exist either return Queries.newMatchNoDocsQuery("Missing types in \"" + NAME + "\" query."); @@ -142,19 +148,62 @@ public static Query newFilter(QueryShardContext context, String fieldPattern) { fields = context.simpleMatchToIndexNames(fieldPattern); } + if (context.indexVersionCreated().before(Version.V_6_1_0)) { + return newLegacyExistsQuery(fields); + } + if (fields.size() == 1) { - Query filter = 
fieldNamesFieldType.termQuery(fields.iterator().next(), context); + String field = fields.iterator().next(); + return newFieldExistsQuery(context, field); + } + + BooleanQuery.Builder boolFilterBuilder = new BooleanQuery.Builder(); + for (String field : fields) { + boolFilterBuilder.add(newFieldExistsQuery(context, field), BooleanClause.Occur.SHOULD); + } + return new ConstantScoreQuery(boolFilterBuilder.build()); + } + + private static Query newLegacyExistsQuery(Collection fields) { + // We create TermsQuery directly here rather than using FieldNamesFieldType.termsQuery() + // so we don't end up with deprecation warnings + if (fields.size() == 1) { + Query filter = new TermQuery(new Term(FieldNamesFieldMapper.NAME, fields.iterator().next())); return new ConstantScoreQuery(filter); } BooleanQuery.Builder boolFilterBuilder = new BooleanQuery.Builder(); for (String field : fields) { - Query filter = fieldNamesFieldType.termQuery(field, context); + Query filter = new TermQuery(new Term(FieldNamesFieldMapper.NAME, field)); boolFilterBuilder.add(filter, BooleanClause.Occur.SHOULD); } return new ConstantScoreQuery(boolFilterBuilder.build()); } + private static Query newFieldExistsQuery(QueryShardContext context, String field) { + MappedFieldType fieldType = context.getMapperService().fullName(field); + if (fieldType == null) { + // The field does not exist as a leaf but could be an object so + // check for an object mapper + if (context.getObjectMapper(field) != null) { + return newObjectFieldExistsQuery(context, field); + } + return Queries.newMatchNoDocsQuery("No field \"" + field + "\" exists in mappings."); + } + Query filter = fieldType.existsQuery(context); + return new ConstantScoreQuery(filter); + } + + private static Query newObjectFieldExistsQuery(QueryShardContext context, String objField) { + BooleanQuery.Builder booleanQuery = new BooleanQuery.Builder(); + Collection fields = context.simpleMatchToIndexNames(objField + ".*"); + for (String field : fields) { + Query existsQuery = context.getMapperService().fullName(field).existsQuery(context); + booleanQuery.add(existsQuery, Occur.SHOULD); + } + return new ConstantScoreQuery(booleanQuery.build()); + } + @Override protected int doHashCode() { return Objects.hash(fieldName); diff --git a/core/src/main/java/org/elasticsearch/index/query/FuzzyQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/FuzzyQueryBuilder.java index 1f43944c8aa9a..ba6b4dd0450d6 100644 --- a/core/src/main/java/org/elasticsearch/index/query/FuzzyQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/FuzzyQueryBuilder.java @@ -54,8 +54,8 @@ public class FuzzyQueryBuilder extends AbstractQueryBuilder i public static final int DEFAULT_MAX_EXPANSIONS = FuzzyQuery.defaultMaxExpansions; /** Default as to whether transpositions should be treated as a primitive edit operation, - * instead of classic Levenshtein algorithm. Defaults to false. */ - public static final boolean DEFAULT_TRANSPOSITIONS = false; + * instead of classic Levenshtein algorithm. Defaults to true. 
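The ExistsQueryBuilder hunk above stops querying `_field_names` directly and instead asks each field type for its own `existsQuery(QueryShardContext)`. Below is an illustrative sketch of the dispatch that the new implementations in this patch follow, using only Lucene query classes already imported elsewhere in the patch; `FIELD_NAMES_FIELD` stands in for `FieldNamesFieldMapper.NAME`, and the boolean flags stand in for the per-mapper knowledge of doc values and norms. It is a sketch, not the mappers' actual code.

-------------------------------------------------
import org.apache.lucene.index.Term;
import org.apache.lucene.search.DocValuesFieldExistsQuery;
import org.apache.lucene.search.NormsFieldExistsQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public final class ExistsQuerySketch {

    private static final String FIELD_NAMES_FIELD = "_field_names"; // stand-in for FieldNamesFieldMapper.NAME

    /** Doc-values fields and norms-bearing text fields no longer need _field_names to answer "exists". */
    static Query existsQuery(String field, boolean hasDocValues, boolean hasNorms) {
        if (hasDocValues) {
            return new DocValuesFieldExistsQuery(field);
        } else if (hasNorms) {
            return new NormsFieldExistsQuery(field);
        }
        return new TermQuery(new Term(FIELD_NAMES_FIELD, field)); // legacy fallback via _field_names
    }

    public static void main(String[] args) {
        System.out.println(existsQuery("seq_no", true, false));   // DocValuesFieldExistsQuery
        System.out.println(existsQuery("title", false, true));    // NormsFieldExistsQuery
        System.out.println(existsQuery("routing", false, false)); // TermQuery on _field_names
    }
}
-------------------------------------------------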
*/ + public static final boolean DEFAULT_TRANSPOSITIONS = FuzzyQuery.defaultTranspositions; private static final ParseField TERM_FIELD = new ParseField("term"); private static final ParseField VALUE_FIELD = new ParseField("value"); @@ -74,7 +74,6 @@ public class FuzzyQueryBuilder extends AbstractQueryBuilder i private int maxExpansions = DEFAULT_MAX_EXPANSIONS; - //LUCENE 4 UPGRADE we need a testcase for this + documentation private boolean transpositions = DEFAULT_TRANSPOSITIONS; private String rewrite; diff --git a/core/src/main/java/org/elasticsearch/index/query/IdsQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/IdsQueryBuilder.java index 127a444b9100a..9f9b508267224 100644 --- a/core/src/main/java/org/elasticsearch/index/query/IdsQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/IdsQueryBuilder.java @@ -51,7 +51,7 @@ public class IdsQueryBuilder extends AbstractQueryBuilder { public static final String NAME = "ids"; - private static final ParseField TYPE_FIELD = new ParseField("type", "types", "_type"); + private static final ParseField TYPE_FIELD = new ParseField("type"); private static final ParseField VALUES_FIELD = new ParseField("values"); private final Set ids = new HashSet<>(); diff --git a/core/src/main/java/org/elasticsearch/index/query/InnerHitBuilder.java b/core/src/main/java/org/elasticsearch/index/query/InnerHitBuilder.java index 484431726e296..6bb8e0259fb15 100644 --- a/core/src/main/java/org/elasticsearch/index/query/InnerHitBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/InnerHitBuilder.java @@ -19,13 +19,14 @@ package org.elasticsearch.index.query; import org.elasticsearch.Version; -import org.elasticsearch.action.support.ToXContentToBytes; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.ParsingException; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.script.Script; @@ -47,7 +48,7 @@ import static org.elasticsearch.common.xcontent.XContentParser.Token.END_OBJECT; -public final class InnerHitBuilder extends ToXContentToBytes implements Writeable { +public final class InnerHitBuilder implements Writeable, ToXContentObject { public static final ParseField NAME_FIELD = new ParseField("name"); public static final ParseField IGNORE_UNMAPPED = new ParseField("ignore_unmapped"); @@ -544,4 +545,9 @@ public int hashCode() { public static InnerHitBuilder fromXContent(XContentParser parser) throws IOException { return PARSER.parse(parser, new InnerHitBuilder(), null); } + + @Override + public String toString() { + return Strings.toString(this, true, true); + } } diff --git a/core/src/main/java/org/elasticsearch/index/query/InnerHitContextBuilder.java b/core/src/main/java/org/elasticsearch/index/query/InnerHitContextBuilder.java index f13aa22f7d914..58d271bb8206c 100644 --- a/core/src/main/java/org/elasticsearch/index/query/InnerHitContextBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/InnerHitContextBuilder.java @@ -19,7 +19,7 @@ package org.elasticsearch.index.query; -import org.elasticsearch.script.ScriptContext; +import org.elasticsearch.index.IndexSettings; 
import org.elasticsearch.script.SearchScript; import org.elasticsearch.search.builder.SearchSourceBuilder; import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext; @@ -47,8 +47,21 @@ protected InnerHitContextBuilder(QueryBuilder query, InnerHitBuilder innerHitBui this.query = query; } - public abstract void build(SearchContext parentSearchContext, - InnerHitsContext innerHitsContext) throws IOException; + public final void build(SearchContext parentSearchContext, InnerHitsContext innerHitsContext) throws IOException { + long innerResultWindow = innerHitBuilder.getFrom() + innerHitBuilder.getSize(); + int maxInnerResultWindow = parentSearchContext.mapperService().getIndexSettings().getMaxInnerResultWindow(); + if (innerResultWindow > maxInnerResultWindow) { + throw new IllegalArgumentException( + "Inner result window is too large, the inner hit definition's [" + innerHitBuilder.getName() + + "]'s from + size must be less than or equal to: [" + maxInnerResultWindow + "] but was [" + innerResultWindow + + "]. This limit can be set by changing the [" + IndexSettings.MAX_INNER_RESULT_WINDOW_SETTING.getKey() + + "] index level setting." + ); + } + doBuild(parentSearchContext, innerHitsContext); + } + + protected abstract void doBuild(SearchContext parentSearchContext, InnerHitsContext innerHitsContext) throws IOException; public static void extractInnerHits(QueryBuilder query, Map innerHitBuilders) { if (query instanceof AbstractQueryBuilder) { diff --git a/core/src/main/java/org/elasticsearch/index/query/MatchPhraseQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/MatchPhraseQueryBuilder.java index 1865e30744302..1bdab8d78a81d 100644 --- a/core/src/main/java/org/elasticsearch/index/query/MatchPhraseQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/MatchPhraseQueryBuilder.java @@ -38,7 +38,7 @@ */ public class MatchPhraseQueryBuilder extends AbstractQueryBuilder { public static final String NAME = "match_phrase"; - public static final ParseField SLOP_FIELD = new ParseField("slop", "phrase_slop"); + public static final ParseField SLOP_FIELD = new ParseField("slop"); private final String fieldName; diff --git a/core/src/main/java/org/elasticsearch/index/query/MatchQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/MatchQueryBuilder.java index 029bb8ab64e0b..cc19603ea64d8 100644 --- a/core/src/main/java/org/elasticsearch/index/query/MatchQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/MatchQueryBuilder.java @@ -35,7 +35,6 @@ import org.elasticsearch.index.search.MatchQuery.ZeroTermsQuery; import java.io.IOException; -import java.util.Locale; import java.util.Objects; /** @@ -43,7 +42,6 @@ * result of the analysis. 
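The InnerHitContextBuilder change above caps `from + size` of every inner hit definition at the index's max inner result window (`IndexSettings.MAX_INNER_RESULT_WINDOW_SETTING`). Below is an illustrative sketch of that guard and of raising the limit per index; the setting key `index.max_inner_result_window` and the default of `100` are assumptions, not taken from this section.

-------------------------------------------------
import org.elasticsearch.common.settings.Settings;

public class InnerResultWindowSketch {

    /** Mirrors the guard added to InnerHitContextBuilder#build: from + size may not exceed the index limit. */
    static void checkInnerResultWindow(String innerHitName, int from, int size, int maxInnerResultWindow) {
        long innerResultWindow = (long) from + size;
        if (innerResultWindow > maxInnerResultWindow) {
            throw new IllegalArgumentException("Inner result window is too large, the inner hit definition's ["
                    + innerHitName + "]'s from + size must be less than or equal to: [" + maxInnerResultWindow
                    + "] but was [" + innerResultWindow + "]");
        }
    }

    public static void main(String[] args) {
        Settings indexSettings = Settings.builder()
                .put("index.max_inner_result_window", 200) // assumed setting key
                .build();
        int max = indexSettings.getAsInt("index.max_inner_result_window", 100); // assumed default
        checkInnerResultWindow("comments", 0, 150, max);       // ok: 150 <= 200
        try {
            checkInnerResultWindow("comments", 100, 150, max); // fails: 250 > 200
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
-------------------------------------------------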
*/ public class MatchQueryBuilder extends AbstractQueryBuilder { - public static final ParseField SLOP_FIELD = new ParseField("slop", "phrase_slop").withAllDeprecated("match_phrase query"); public static final ParseField ZERO_TERMS_QUERY_FIELD = new ParseField("zero_terms_query"); public static final ParseField CUTOFF_FREQUENCY_FIELD = new ParseField("cutoff_frequency"); public static final ParseField LENIENT_FIELD = new ParseField("lenient"); @@ -54,7 +52,6 @@ public class MatchQueryBuilder extends AbstractQueryBuilder { public static final ParseField MAX_EXPANSIONS_FIELD = new ParseField("max_expansions"); public static final ParseField PREFIX_LENGTH_FIELD = new ParseField("prefix_length"); public static final ParseField ANALYZER_FIELD = new ParseField("analyzer"); - public static final ParseField TYPE_FIELD = new ParseField("type").withAllDeprecated("match_phrase and match_phrase_prefix query"); public static final ParseField QUERY_FIELD = new ParseField("query"); public static final ParseField GENERATE_SYNONYMS_PHRASE_QUERY = new ParseField("auto_generate_synonyms_phrase_query"); @@ -64,24 +61,14 @@ public class MatchQueryBuilder extends AbstractQueryBuilder { /** The default mode terms are combined in a match query */ public static final Operator DEFAULT_OPERATOR = Operator.OR; - /** The default mode match query type */ - @Deprecated - public static final MatchQuery.Type DEFAULT_TYPE = MatchQuery.Type.BOOLEAN; - private final String fieldName; private final Object value; - @Deprecated - private MatchQuery.Type type = DEFAULT_TYPE; - private Operator operator = DEFAULT_OPERATOR; private String analyzer; - @Deprecated - private int slop = MatchQuery.DEFAULT_PHRASE_SLOP; - private Fuzziness fuzziness = null; private int prefixLength = FuzzyQuery.defaultPrefixLength; @@ -123,9 +110,13 @@ public MatchQueryBuilder(StreamInput in) throws IOException { super(in); fieldName = in.readString(); value = in.readGenericValue(); - type = MatchQuery.Type.readFromStream(in); + if (in.getVersion().before(Version.V_6_0_0_rc1)) { + MatchQuery.Type.readFromStream(in); // deprecated type + } operator = Operator.readFromStream(in); - slop = in.readVInt(); + if (in.getVersion().before(Version.V_6_0_0_rc1)) { + in.readVInt(); // deprecated slop + } prefixLength = in.readVInt(); maxExpansions = in.readVInt(); fuzzyTranspositions = in.readBoolean(); @@ -146,9 +137,13 @@ public MatchQueryBuilder(StreamInput in) throws IOException { protected void doWriteTo(StreamOutput out) throws IOException { out.writeString(fieldName); out.writeGenericValue(value); - type.writeTo(out); + if (out.getVersion().before(Version.V_6_0_0_rc1)) { + MatchQuery.Type.BOOLEAN.writeTo(out); // deprecated type + } operator.writeTo(out); - out.writeVInt(slop); + if (out.getVersion().before(Version.V_6_0_0_rc1)) { + out.writeVInt(MatchQuery.DEFAULT_PHRASE_SLOP); // deprecated slop + } out.writeVInt(prefixLength); out.writeVInt(maxExpansions); out.writeBoolean(fuzzyTranspositions); @@ -175,34 +170,6 @@ public Object value() { return this.value; } - /** - * Sets the type of the text query. - * - * @deprecated Use {@link MatchPhraseQueryBuilder} for phrase - * queries and {@link MatchPhrasePrefixQueryBuilder} for - * phrase_prefix queries - */ - @Deprecated - public MatchQueryBuilder type(MatchQuery.Type type) { - if (type == null) { - throw new IllegalArgumentException("[" + NAME + "] requires type to be non-null"); - } - this.type = type; - return this; - } - - /** - * Get the type of the query. 
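The MatchQueryBuilder hunk above drops the deprecated `type` and `slop` options but stays wire compatible with pre-6.0.0-rc1 nodes by writing and reading placeholder defaults when the stream version is older. Below is a sketch of that version-gated serialization pattern for a hypothetical removed option, using the same `StreamInput`/`StreamOutput`/`Version` classes the patch itself uses; it is not taken from the patch.

-------------------------------------------------
import java.io.IOException;

import org.elasticsearch.Version;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;

final class RemovedOptionBwcSketch {

    private static final int REMOVED_OPTION_DEFAULT = 0; // hypothetical default of a removed option

    /** Older nodes still expect the value on the wire, so send them the old default. */
    static void writeTo(StreamOutput out) throws IOException {
        if (out.getVersion().before(Version.V_6_0_0_rc1)) {
            out.writeVInt(REMOVED_OPTION_DEFAULT); // removed option, only written for old nodes
        }
    }

    /** Older nodes still send the value, so read it and throw it away. */
    static void readFrom(StreamInput in) throws IOException {
        if (in.getVersion().before(Version.V_6_0_0_rc1)) {
            in.readVInt(); // removed option, ignored
        }
    }
}
-------------------------------------------------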
- * - * @deprecated Use {@link MatchPhraseQueryBuilder} for phrase - * queries and {@link MatchPhrasePrefixQueryBuilder} for - * phrase_prefix queries - */ - @Deprecated - public MatchQuery.Type type() { - return this.type; - } - /** Sets the operator to use when using a boolean query. Defaults to OR. */ public MatchQueryBuilder operator(Operator operator) { if (operator == null) { @@ -231,30 +198,6 @@ public String analyzer() { return this.analyzer; } - /** - * Sets a slop factor for phrase queries - * - * @deprecated for phrase queries use {@link MatchPhraseQueryBuilder} - */ - @Deprecated - public MatchQueryBuilder slop(int slop) { - if (slop < 0 ) { - throw new IllegalArgumentException("No negative slop allowed."); - } - this.slop = slop; - return this; - } - - /** - * Get the slop factor for phrase queries. - * - * @deprecated for phrase queries use {@link MatchPhraseQueryBuilder} - */ - @Deprecated - public int slop() { - return this.slop; - } - /** Sets the fuzziness used when evaluated to a fuzzy query type. Defaults to "AUTO". */ public MatchQueryBuilder fuzziness(Object fuzziness) { this.fuzziness = Fuzziness.build(fuzziness); @@ -425,18 +368,10 @@ public void doXContent(XContentBuilder builder, Params params) throws IOExceptio builder.startObject(fieldName); builder.field(QUERY_FIELD.getPreferredName(), value); - // this is deprecated so only output the value if its not the default value (for bwc) - if (type != MatchQuery.Type.BOOLEAN) { - builder.field(TYPE_FIELD.getPreferredName(), type.toString().toLowerCase(Locale.ENGLISH)); - } builder.field(OPERATOR_FIELD.getPreferredName(), operator.toString()); if (analyzer != null) { builder.field(ANALYZER_FIELD.getPreferredName(), analyzer); } - // this is deprecated so only output the value if its not the default value (for bwc) - if (slop != MatchQuery.DEFAULT_PHRASE_SLOP) { - builder.field(SLOP_FIELD.getPreferredName(), slop); - } if (fuzziness != null) { fuzziness.toXContent(builder, params); } @@ -473,7 +408,6 @@ protected Query doToQuery(QueryShardContext context) throws IOException { if (analyzer != null) { matchQuery.setAnalyzer(analyzer); } - matchQuery.setPhraseSlop(slop); matchQuery.setFuzziness(fuzziness); matchQuery.setFuzzyPrefixLength(prefixLength); matchQuery.setMaxExpansions(maxExpansions); @@ -484,7 +418,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException { matchQuery.setZeroTermsQuery(zeroTermsQuery); matchQuery.setAutoGenerateSynonymsPhraseQuery(autoGenerateSynonymsPhraseQuery); - Query query = matchQuery.parse(type, fieldName, value); + Query query = matchQuery.parse(MatchQuery.Type.BOOLEAN, fieldName, value); return Queries.maybeApplyMinimumShouldMatch(query, minimumShouldMatch); } @@ -492,10 +426,8 @@ protected Query doToQuery(QueryShardContext context) throws IOException { protected boolean doEquals(MatchQueryBuilder other) { return Objects.equals(fieldName, other.fieldName) && Objects.equals(value, other.value) && - Objects.equals(type, other.type) && Objects.equals(operator, other.operator) && Objects.equals(analyzer, other.analyzer) && - Objects.equals(slop, other.slop) && Objects.equals(fuzziness, other.fuzziness) && Objects.equals(prefixLength, other.prefixLength) && Objects.equals(maxExpansions, other.maxExpansions) && @@ -510,7 +442,7 @@ protected boolean doEquals(MatchQueryBuilder other) { @Override protected int doHashCode() { - return Objects.hash(fieldName, value, type, operator, analyzer, slop, + return Objects.hash(fieldName, value, operator, analyzer, fuzziness, 
prefixLength, maxExpansions, minimumShouldMatch, fuzzyRewrite, lenient, fuzzyTranspositions, zeroTermsQuery, cutoffFrequency, autoGenerateSynonymsPhraseQuery); } @@ -522,13 +454,11 @@ public String getWriteableName() { public static MatchQueryBuilder fromXContent(XContentParser parser) throws IOException { String fieldName = null; - MatchQuery.Type type = MatchQuery.Type.BOOLEAN; Object value = null; float boost = AbstractQueryBuilder.DEFAULT_BOOST; String minimumShouldMatch = null; String analyzer = null; Operator operator = MatchQueryBuilder.DEFAULT_OPERATOR; - int slop = MatchQuery.DEFAULT_PHRASE_SLOP; Fuzziness fuzziness = null; int prefixLength = FuzzyQuery.defaultPrefixLength; int maxExpansion = FuzzyQuery.defaultMaxExpansions; @@ -553,23 +483,10 @@ public static MatchQueryBuilder fromXContent(XContentParser parser) throws IOExc } else if (token.isValue()) { if (QUERY_FIELD.match(currentFieldName)) { value = parser.objectText(); - } else if (TYPE_FIELD.match(currentFieldName)) { - String tStr = parser.text(); - if ("boolean".equals(tStr)) { - type = MatchQuery.Type.BOOLEAN; - } else if ("phrase".equals(tStr)) { - type = MatchQuery.Type.PHRASE; - } else if ("phrase_prefix".equals(tStr) || ("phrasePrefix".equals(tStr))) { - type = MatchQuery.Type.PHRASE_PREFIX; - } else { - throw new ParsingException(parser.getTokenLocation(), "[" + NAME + "] query does not support type " + tStr); - } } else if (ANALYZER_FIELD.match(currentFieldName)) { analyzer = parser.text(); } else if (AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { boost = parser.floatValue(); - } else if (SLOP_FIELD.match(currentFieldName)) { - slop = parser.intValue(); } else if (Fuzziness.FIELD.match(currentFieldName)) { fuzziness = Fuzziness.parse(parser); } else if (PREFIX_LENGTH_FIELD.match(currentFieldName)) { @@ -624,9 +541,7 @@ public static MatchQueryBuilder fromXContent(XContentParser parser) throws IOExc MatchQueryBuilder matchQuery = new MatchQueryBuilder(fieldName, value); matchQuery.operator(operator); - matchQuery.type(type); matchQuery.analyzer(analyzer); - matchQuery.slop(slop); matchQuery.minimumShouldMatch(minimumShouldMatch); if (fuzziness != null) { matchQuery.fuzziness(fuzziness); diff --git a/core/src/main/java/org/elasticsearch/index/query/MoreLikeThisQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/MoreLikeThisQueryBuilder.java index 172e96cb34da0..34411d669ec3b 100644 --- a/core/src/main/java/org/elasticsearch/index/query/MoreLikeThisQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/MoreLikeThisQueryBuilder.java @@ -59,7 +59,6 @@ import java.io.IOException; import java.util.ArrayList; import java.util.Arrays; -import java.util.Collections; import java.util.HashSet; import java.util.List; import java.util.Locale; @@ -97,15 +96,12 @@ private interface Field { ParseField FIELDS = new ParseField("fields"); ParseField LIKE = new ParseField("like"); ParseField UNLIKE = new ParseField("unlike"); - ParseField LIKE_TEXT = new ParseField("like_text").withAllDeprecated("like"); - ParseField IDS = new ParseField("ids").withAllDeprecated("like"); - ParseField DOCS = new ParseField("docs").withAllDeprecated("like"); ParseField MAX_QUERY_TERMS = new ParseField("max_query_terms"); ParseField MIN_TERM_FREQ = new ParseField("min_term_freq"); ParseField MIN_DOC_FREQ = new ParseField("min_doc_freq"); ParseField MAX_DOC_FREQ = new ParseField("max_doc_freq"); - ParseField MIN_WORD_LENGTH = new ParseField("min_word_length", "min_word_len"); - ParseField MAX_WORD_LENGTH = 
new ParseField("max_word_length", "max_word_len"); + ParseField MIN_WORD_LENGTH = new ParseField("min_word_length"); + ParseField MAX_WORD_LENGTH = new ParseField("max_word_length"); ParseField STOP_WORDS = new ParseField("stop_words"); ParseField ANALYZER = new ParseField("analyzer"); ParseField MINIMUM_SHOULD_MATCH = new ParseField("minimum_should_match"); @@ -489,7 +485,7 @@ public boolean equals(Object o) { } /** - * Constructs a new more like this query which uses the "_all" field. + * Constructs a new more like this query which uses the default search field. * @param likeTexts the text to use when generating the 'More Like This' query. * @param likeItems the documents to use when generating the 'More Like This' query. */ @@ -847,8 +843,6 @@ public static MoreLikeThisQueryBuilder fromXContent(XContentParser parser) throw parseLikeField(parser, likeTexts, likeItems); } else if (Field.UNLIKE.match(currentFieldName)) { parseLikeField(parser, unlikeTexts, unlikeItems); - } else if (Field.LIKE_TEXT.match(currentFieldName)) { - likeTexts.add(parser.text()); } else if (Field.MAX_QUERY_TERMS.match(currentFieldName)) { maxQueryTerms = parser.intValue(); } else if (Field.MIN_TERM_FREQ.match(currentFieldName)) { @@ -892,20 +886,6 @@ public static MoreLikeThisQueryBuilder fromXContent(XContentParser parser) throw while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { parseLikeField(parser, unlikeTexts, unlikeItems); } - } else if (Field.IDS.match(currentFieldName)) { - while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { - if (!token.isValue()) { - throw new IllegalArgumentException("ids array element should only contain ids"); - } - likeItems.add(new Item(null, null, parser.text())); - } - } else if (Field.DOCS.match(currentFieldName)) { - while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { - if (token != XContentParser.Token.START_OBJECT) { - throw new IllegalArgumentException("docs array element should include an object"); - } - likeItems.add(Item.parse(parser, new Item())); - } } else if (Field.STOP_WORDS.match(currentFieldName)) { stopWords = new ArrayList<>(); while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { @@ -1033,7 +1013,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException { boolean useDefaultField = (fields == null); List moreLikeFields = new ArrayList<>(); if (useDefaultField) { - moreLikeFields = Collections.singletonList(context.defaultField()); + moreLikeFields = context.defaultFields(); } else { for (String field : fields) { MappedFieldType fieldType = context.fieldMapper(field); diff --git a/core/src/main/java/org/elasticsearch/index/query/MultiMatchQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/MultiMatchQueryBuilder.java index f81474f3416f4..6063b8a120491 100644 --- a/core/src/main/java/org/elasticsearch/index/query/MultiMatchQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/MultiMatchQueryBuilder.java @@ -35,7 +35,7 @@ import org.elasticsearch.index.query.support.QueryParsers; import org.elasticsearch.index.search.MatchQuery; import org.elasticsearch.index.search.MultiMatchQuery; -import org.elasticsearch.index.search.QueryStringQueryParser; +import org.elasticsearch.index.search.QueryParserHelper; import java.io.IOException; import java.util.HashMap; @@ -57,8 +57,9 @@ public class MultiMatchQueryBuilder extends AbstractQueryBuilder + * The default metric used by fuzzy queries to determine a match is the Damerau-Levenshtein + * 
distance formula which supports transpositions. Setting transposition to false will + * switch to classic Levenshtein distance.
+ * If not set, Damerau-Levenshtein distance metric will be used. + */ + public MultiMatchQueryBuilder fuzzyTranspositions(boolean fuzzyTranspositions) { + this.fuzzyTranspositions = fuzzyTranspositions; + return this; + } + @Override public void doXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(NAME); @@ -573,6 +594,7 @@ public void doXContent(XContentBuilder builder, Params params) throws IOExceptio } builder.field(ZERO_TERMS_QUERY_FIELD.getPreferredName(), zeroTermsQuery.toString()); builder.field(GENERATE_SYNONYMS_PHRASE_QUERY.getPreferredName(), autoGenerateSynonymsPhraseQuery); + builder.field(FUZZY_TRANSPOSITIONS_FIELD.getPreferredName(), fuzzyTranspositions); printBoostAndQueryName(builder); builder.endObject(); } @@ -595,6 +617,7 @@ public static MultiMatchQueryBuilder fromXContent(XContentParser parser) throws boolean lenient = DEFAULT_LENIENCY; MatchQuery.ZeroTermsQuery zeroTermsQuery = DEFAULT_ZERO_TERMS_QUERY; boolean autoGenerateSynonymsPhraseQuery = true; + boolean fuzzyTranspositions = DEFAULT_FUZZY_TRANSPOSITIONS; float boost = AbstractQueryBuilder.DEFAULT_BOOST; String queryName = null; @@ -659,6 +682,8 @@ public static MultiMatchQueryBuilder fromXContent(XContentParser parser) throws queryName = parser.text(); } else if (GENERATE_SYNONYMS_PHRASE_QUERY.match(currentFieldName)) { autoGenerateSynonymsPhraseQuery = parser.booleanValue(); + } else if (FUZZY_TRANSPOSITIONS_FIELD.match(currentFieldName)) { + fuzzyTranspositions = parser.booleanValue(); } else { throw new ParsingException(parser.getTokenLocation(), "[" + NAME + "] query does not support [" + currentFieldName + "]"); @@ -700,7 +725,8 @@ public static MultiMatchQueryBuilder fromXContent(XContentParser parser) throws .zeroTermsQuery(zeroTermsQuery) .autoGenerateSynonymsPhraseQuery(autoGenerateSynonymsPhraseQuery) .boost(boost) - .queryName(queryName); + .queryName(queryName) + .fuzzyTranspositions(fuzzyTranspositions); } private static void parseFieldAndBoost(XContentParser parser, Map fieldsBoosts) throws IOException { @@ -755,6 +781,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException { multiMatchQuery.setLenient(lenient); multiMatchQuery.setZeroTermsQuery(zeroTermsQuery); multiMatchQuery.setAutoGenerateSynonymsPhraseQuery(autoGenerateSynonymsPhraseQuery); + multiMatchQuery.setTranspositions(fuzzyTranspositions); if (useDisMax != null) { // backwards foobar boolean typeUsesDismax = type.tieBreaker() != 1.0f; @@ -767,7 +794,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException { } } - Map newFieldsBoosts = QueryStringQueryParser.resolveMappingFields(context, fieldsBoosts); + Map newFieldsBoosts = QueryParserHelper.resolveMappingFields(context, fieldsBoosts); return multiMatchQuery.parse(type, newFieldsBoosts, value, minimumShouldMatch); } @@ -775,7 +802,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException { protected int doHashCode() { return Objects.hash(value, fieldsBoosts, type, operator, analyzer, slop, fuzziness, prefixLength, maxExpansions, minimumShouldMatch, fuzzyRewrite, useDisMax, tieBreaker, lenient, - cutoffFrequency, zeroTermsQuery, autoGenerateSynonymsPhraseQuery); + cutoffFrequency, zeroTermsQuery, autoGenerateSynonymsPhraseQuery, fuzzyTranspositions); } @Override @@ -796,6 +823,7 @@ protected boolean doEquals(MultiMatchQueryBuilder other) { Objects.equals(lenient, other.lenient) && Objects.equals(cutoffFrequency, other.cutoffFrequency) && Objects.equals(zeroTermsQuery, 
other.zeroTermsQuery) && - Objects.equals(autoGenerateSynonymsPhraseQuery, other.autoGenerateSynonymsPhraseQuery); + Objects.equals(autoGenerateSynonymsPhraseQuery, other.autoGenerateSynonymsPhraseQuery) && + Objects.equals(fuzzyTranspositions, other.fuzzyTranspositions); } } diff --git a/core/src/main/java/org/elasticsearch/index/query/NestedQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/NestedQueryBuilder.java index b9037110b1c59..4e3429e1a2088 100644 --- a/core/src/main/java/org/elasticsearch/index/query/NestedQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/NestedQueryBuilder.java @@ -336,7 +336,7 @@ static class NestedInnerHitContextBuilder extends InnerHitContextBuilder { } @Override - public void build(SearchContext parentSearchContext, + protected void doBuild(SearchContext parentSearchContext, InnerHitsContext innerHitsContext) throws IOException { QueryShardContext queryShardContext = parentSearchContext.getQueryShardContext(); ObjectMapper nestedObjectMapper = queryShardContext.getObjectMapper(path); diff --git a/core/src/main/java/org/elasticsearch/index/query/PrefixQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/PrefixQueryBuilder.java index 0392f0e7ff6c4..fcc688d191a36 100644 --- a/core/src/main/java/org/elasticsearch/index/query/PrefixQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/PrefixQueryBuilder.java @@ -43,7 +43,7 @@ public class PrefixQueryBuilder extends AbstractQueryBuilder implements MultiTermQueryBuilder { public static final String NAME = "prefix"; - private static final ParseField PREFIX_FIELD = new ParseField("value", "prefix"); + private static final ParseField PREFIX_FIELD = new ParseField("value"); private static final ParseField REWRITE_FIELD = new ParseField("rewrite"); private final String fieldName; diff --git a/core/src/main/java/org/elasticsearch/index/query/QueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/QueryBuilder.java index b6ee368b488be..f8f5b68be9afb 100644 --- a/core/src/main/java/org/elasticsearch/index/query/QueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/QueryBuilder.java @@ -83,6 +83,7 @@ public interface QueryBuilder extends NamedWriteable, ToXContentObject, Rewritea * Rewrites this query builder into its primitive form. By default this method return the builder itself. If the builder * did not change the identity reference must be returned otherwise the builder will be rewritten infinitely. 
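The MultiMatchQueryBuilder changes above expose `fuzzy_transpositions`, which defaults to `true` (Damerau-Levenshtein). Below is a sketch of a caller turning it off to fall back to classic Levenshtein distance; the constructor and the `fuzziness` setter are existing builder methods, while `fuzzyTranspositions` is the one added by this patch.

-------------------------------------------------
import org.elasticsearch.index.query.MultiMatchQueryBuilder;

public class FuzzyTranspositionsSketch {
    public static void main(String[] args) {
        // With transpositions enabled (the default), "ab" -> "ba" counts as a single edit.
        MultiMatchQueryBuilder query = new MultiMatchQueryBuilder("quikc brwn", "title", "body")
                .fuzziness("AUTO")
                .fuzzyTranspositions(false); // classic Levenshtein: a transposition costs two edits
        System.out.println(query);
    }
}
-------------------------------------------------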
*/ + @Override default QueryBuilder rewrite(QueryRewriteContext queryShardContext) throws IOException { return this; } diff --git a/core/src/main/java/org/elasticsearch/index/query/QueryShardContext.java b/core/src/main/java/org/elasticsearch/index/query/QueryShardContext.java index c637aa4ba31e6..32a1f64d37b33 100644 --- a/core/src/main/java/org/elasticsearch/index/query/QueryShardContext.java +++ b/core/src/main/java/org/elasticsearch/index/query/QueryShardContext.java @@ -40,14 +40,11 @@ import org.elasticsearch.index.analysis.IndexAnalyzers; import org.elasticsearch.index.cache.bitset.BitsetFilterCache; import org.elasticsearch.index.fielddata.IndexFieldData; -import org.elasticsearch.index.fielddata.plain.ConstantIndexFieldData; import org.elasticsearch.index.mapper.ContentPath; import org.elasticsearch.index.mapper.DocumentMapper; -import org.elasticsearch.index.mapper.IndexFieldMapper; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.mapper.Mapper; import org.elasticsearch.index.mapper.MapperService; -import org.elasticsearch.index.mapper.MetadataFieldMapper; import org.elasticsearch.index.mapper.ObjectMapper; import org.elasticsearch.index.mapper.TextFieldMapper; import org.elasticsearch.index.query.support.NestedScope; @@ -60,10 +57,10 @@ import java.util.Arrays; import java.util.Collection; import java.util.HashMap; +import java.util.List; import java.util.Map; import java.util.function.BiConsumer; import java.util.function.BiFunction; -import java.util.function.Function; import java.util.function.LongSupplier; import static java.util.Collections.unmodifiableMap; @@ -144,8 +141,8 @@ public Similarity getSearchSimilarity() { return similarityService != null ? similarityService.similarity(mapperService) : null; } - public String defaultField() { - return indexSettings.getDefaultField(); + public List defaultFields() { + return indexSettings.getDefaultFields(); } public boolean queryStringLenient() { diff --git a/core/src/main/java/org/elasticsearch/index/query/QueryStringQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/QueryStringQueryBuilder.java index 9cb635d6d481f..154060ec1a5b0 100644 --- a/core/src/main/java/org/elasticsearch/index/query/QueryStringQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/QueryStringQueryBuilder.java @@ -35,12 +35,13 @@ import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.analysis.NamedAnalyzer; import org.elasticsearch.index.query.support.QueryParsers; +import org.elasticsearch.index.search.QueryParserHelper; import org.elasticsearch.index.search.QueryStringQueryParser; import org.joda.time.DateTimeZone; import java.io.IOException; import java.util.ArrayList; -import java.util.HashMap; +import java.util.Collections; import java.util.List; import java.util.Locale; import java.util.Map; @@ -67,6 +68,7 @@ public class QueryStringQueryBuilder extends AbstractQueryBuilder + * The default metric used by fuzzy queries to determine a match is the Damerau-Levenshtein + * distance formula which supports transpositions. Setting transposition to false will + * switch to classic Levenshtein distance.
+ * If not set, Damerau-Levenshtein distance metric will be used. + */ + public QueryStringQueryBuilder fuzzyTranspositions(boolean fuzzyTranspositions) { + this.fuzzyTranspositions = fuzzyTranspositions; + return this; + } + @Override protected void doXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(NAME); @@ -705,6 +728,7 @@ protected void doXContent(XContentBuilder builder, Params params) throws IOExcep } builder.field(ESCAPE_FIELD.getPreferredName(), this.escape); builder.field(GENERATE_SYNONYMS_PHRASE_QUERY.getPreferredName(), autoGenerateSynonymsPhraseQuery); + builder.field(FUZZY_TRANSPOSITIONS_FIELD.getPreferredName(), fuzzyTranspositions); printBoostAndQueryName(builder); builder.endObject(); } @@ -736,31 +760,20 @@ public static QueryStringQueryBuilder fromXContent(XContentParser parser) throws Fuzziness fuzziness = QueryStringQueryBuilder.DEFAULT_FUZZINESS; String fuzzyRewrite = null; String rewrite = null; - Map fieldsAndWeights = new HashMap<>(); + Map fieldsAndWeights = null; boolean autoGenerateSynonymsPhraseQuery = true; + boolean fuzzyTranspositions = DEFAULT_FUZZY_TRANSPOSITIONS; + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.START_ARRAY) { if (FIELDS_FIELD.match(currentFieldName)) { - while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { - String fField = null; - float fBoost = AbstractQueryBuilder.DEFAULT_BOOST; - char[] text = parser.textCharacters(); - int end = parser.textOffset() + parser.textLength(); - for (int i = parser.textOffset(); i < end; i++) { - if (text[i] == '^') { - int relativeLocation = i - parser.textOffset(); - fField = new String(text, parser.textOffset(), relativeLocation); - fBoost = Float.parseFloat(new String(text, i + 1, parser.textLength() - relativeLocation - 1)); - break; - } - } - if (fField == null) { - fField = parser.text(); - } - fieldsAndWeights.put(fField, fBoost); + List fields = new ArrayList<>(); + while (parser.nextToken() != XContentParser.Token.END_ARRAY) { + fields.add(parser.text()); } + fieldsAndWeights = QueryParserHelper.parseFieldsAndWeights(fields); } else { throw new ParsingException(parser.getTokenLocation(), "[" + QueryStringQueryBuilder.NAME + "] query does not support [" + currentFieldName + "]"); @@ -825,6 +838,8 @@ public static QueryStringQueryBuilder fromXContent(XContentParser parser) throws queryName = parser.text(); } else if (GENERATE_SYNONYMS_PHRASE_QUERY.match(currentFieldName)) { autoGenerateSynonymsPhraseQuery = parser.booleanValue(); + } else if (FUZZY_TRANSPOSITIONS_FIELD.match(currentFieldName)) { + fuzzyTranspositions = parser.booleanValue(); } else if (AUTO_GENERATE_PHRASE_QUERIES_FIELD.match(currentFieldName)) { // ignore, deprecated setting } else if (LOWERCASE_EXPANDED_TERMS_FIELD.match(currentFieldName)) { @@ -849,7 +864,9 @@ public static QueryStringQueryBuilder fromXContent(XContentParser parser) throws } QueryStringQueryBuilder queryStringQuery = new QueryStringQueryBuilder(queryString); - queryStringQuery.fields(fieldsAndWeights); + if (fieldsAndWeights != null) { + queryStringQuery.fields(fieldsAndWeights); + } queryStringQuery.defaultField(defaultField); queryStringQuery.defaultOperator(defaultOperator); queryStringQuery.analyzer(analyzer); @@ -876,6 +893,7 @@ public static QueryStringQueryBuilder fromXContent(XContentParser parser) throws queryStringQuery.boost(boost); 
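A minimal usage sketch of the new fuzzy_transpositions option that the QueryStringQueryBuilder changes above expose; the query text and the "message" field are hypothetical examples, and only setters that appear in this patch are used:

    // Sketch only: fall back to classic Levenshtein distance for fuzzy terms
    // instead of the default Damerau-Levenshtein metric (which counts transpositions).
    QueryStringQueryBuilder query = new QueryStringQueryBuilder("quikc~ brown fox")
            .defaultField("message")        // hypothetical field name
            .fuzzyTranspositions(false);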
queryStringQuery.queryName(queryName); queryStringQuery.autoGenerateSynonymsPhraseQuery(autoGenerateSynonymsPhraseQuery); + queryStringQuery.fuzzyTranspositions(fuzzyTranspositions); return queryStringQuery; } @@ -910,7 +928,8 @@ protected boolean doEquals(QueryStringQueryBuilder other) { Objects.equals(timeZone.getID(), other.timeZone.getID()) && Objects.equals(escape, other.escape) && Objects.equals(maxDeterminizedStates, other.maxDeterminizedStates) && - Objects.equals(autoGenerateSynonymsPhraseQuery, other.autoGenerateSynonymsPhraseQuery); + Objects.equals(autoGenerateSynonymsPhraseQuery, other.autoGenerateSynonymsPhraseQuery) && + Objects.equals(fuzzyTranspositions, other.fuzzyTranspositions); } @Override @@ -919,7 +938,8 @@ protected int doHashCode() { quoteFieldSuffix, allowLeadingWildcard, analyzeWildcard, enablePositionIncrements, fuzziness, fuzzyPrefixLength, fuzzyMaxExpansions, fuzzyRewrite, phraseSlop, type, tieBreaker, rewrite, minimumShouldMatch, lenient, - timeZone == null ? 0 : timeZone.getID(), escape, maxDeterminizedStates, autoGenerateSynonymsPhraseQuery); + timeZone == null ? 0 : timeZone.getID(), escape, maxDeterminizedStates, autoGenerateSynonymsPhraseQuery, + fuzzyTranspositions); } @Override @@ -938,20 +958,17 @@ protected Query doToQuery(QueryShardContext context) throws IOException { queryParser = new QueryStringQueryParser(context, defaultField, isLenient); } } else if (fieldsAndWeights.size() > 0) { - final Map resolvedFields = QueryStringQueryParser.resolveMappingFields(context, fieldsAndWeights); + final Map resolvedFields = QueryParserHelper.resolveMappingFields(context, fieldsAndWeights); queryParser = new QueryStringQueryParser(context, resolvedFields, isLenient); } else { - // Expand to all fields if: - // - The index default search field is "*" - // - The index default search field is "_all" and _all is disabled - // TODO the index default search field should be "*" for new indices. - if (Regex.isMatchAllPattern(context.defaultField()) || - (context.getMapperService().allEnabled() == false && "_all".equals(context.defaultField()))) { - // Automatically determine the fields from the index mapping. - // Automatically set leniency to "true" if unset so mismatched fields don't cause exceptions; + List defaultFields = context.defaultFields(); + boolean isAllField = defaultFields.size() == 1 && Regex.isMatchAllPattern(defaultFields.get(0)); + if (isAllField) { queryParser = new QueryStringQueryParser(context, lenient == null ? 
true : lenient); } else { - queryParser = new QueryStringQueryParser(context, context.defaultField(), isLenient); + final Map resolvedFields = QueryParserHelper.resolveMappingFields(context, + QueryParserHelper.parseFieldsAndWeights(defaultFields)); + queryParser = new QueryStringQueryParser(context, resolvedFields, isLenient); } } @@ -992,6 +1009,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException { queryParser.setTimeZone(timeZone); queryParser.setMaxDeterminizedStates(maxDeterminizedStates); queryParser.setAutoGenerateMultiTermSynonymsPhraseQuery(autoGenerateSynonymsPhraseQuery); + queryParser.setFuzzyTranspositions(fuzzyTranspositions); Query query; try { diff --git a/core/src/main/java/org/elasticsearch/index/query/RangeQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/RangeQueryBuilder.java index 0156710520da8..14f1e16f39cbf 100644 --- a/core/src/main/java/org/elasticsearch/index/query/RangeQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/RangeQueryBuilder.java @@ -36,11 +36,9 @@ import org.elasticsearch.common.lucene.BytesRefs; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.index.mapper.DateFieldMapper; import org.elasticsearch.index.mapper.FieldNamesFieldMapper; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.mapper.MapperService; -import org.elasticsearch.index.mapper.RangeFieldMapper; import org.joda.time.DateTimeZone; import java.io.IOException; @@ -55,11 +53,8 @@ public class RangeQueryBuilder extends AbstractQueryBuilder i public static final boolean DEFAULT_INCLUDE_UPPER = true; public static final boolean DEFAULT_INCLUDE_LOWER = true; - private static final ParseField FIELDDATA_FIELD = new ParseField("fielddata").withAllDeprecated("[no replacement]"); - private static final ParseField NAME_FIELD = new ParseField("_name") - .withAllDeprecated("query name is not supported in short version of range query"); - public static final ParseField LTE_FIELD = new ParseField("lte", "le"); - public static final ParseField GTE_FIELD = new ParseField("gte", "ge"); + public static final ParseField LTE_FIELD = new ParseField("lte"); + public static final ParseField GTE_FIELD = new ParseField("gte"); public static final ParseField FROM_FIELD = new ParseField("from"); public static final ParseField TO_FIELD = new ParseField("to"); private static final ParseField INCLUDE_LOWER_FIELD = new ParseField("include_lower"); @@ -117,10 +112,20 @@ public RangeQueryBuilder(StreamInput in) throws IOException { String relationString = in.readOptionalString(); if (relationString != null) { relation = ShapeRelation.getRelationByName(relationString); + if (relation != null && !isRelationAllowed(relation)) { + throw new IllegalArgumentException( + "[range] query does not support relation [" + relationString + "]"); + } } } } + private boolean isRelationAllowed(ShapeRelation relation) { + return relation == ShapeRelation.INTERSECTS + || relation == ShapeRelation.CONTAINS + || relation == ShapeRelation.WITHIN; + } + @Override protected void doWriteTo(StreamOutput out) throws IOException { out.writeString(this.fieldName); @@ -319,6 +324,9 @@ public RangeQueryBuilder relation(String relation) { if (this.relation == null) { throw new IllegalArgumentException(relation + " is not a valid relation"); } + if (!isRelationAllowed(this.relation)) { + throw new IllegalArgumentException("[range] query does not support relation 
[" + relation + "]"); + } return this; } @@ -405,13 +413,7 @@ public static RangeQueryBuilder fromXContent(XContentParser parser) throws IOExc } } } else if (token.isValue()) { - if (NAME_FIELD.match(currentFieldName)) { - queryName = parser.text(); - } else if (FIELDDATA_FIELD.match(currentFieldName)) { - // ignore - } else { throw new ParsingException(parser.getTokenLocation(), "[range] query does not support [" + currentFieldName + "]"); - } } } @@ -453,7 +455,7 @@ protected MappedFieldType.Relation getRelation(QueryRewriteContext queryRewriteC // no field means we have no values return MappedFieldType.Relation.DISJOINT; } else { - DateMathParser dateMathParser = format == null ? null : new DateMathParser(format); + DateMathParser dateMathParser = getForceDateParser(); return fieldType.isFieldWithinQuery(shardContext.getIndexReader(), from, to, includeLower, includeUpper, timeZone, dateMathParser, queryRewriteContext); } @@ -503,25 +505,10 @@ protected Query doToQuery(QueryShardContext context) throws IOException { Query query = null; MappedFieldType mapper = context.fieldMapper(this.fieldName); if (mapper != null) { - if (mapper instanceof DateFieldMapper.DateFieldType) { - - query = ((DateFieldMapper.DateFieldType) mapper).rangeQuery(from, to, includeLower, includeUpper, - timeZone, getForceDateParser(), context); - } else if (mapper instanceof RangeFieldMapper.RangeFieldType) { - DateMathParser forcedDateParser = null; - if (mapper.typeName() == RangeFieldMapper.RangeType.DATE.name && this.format != null) { - forcedDateParser = new DateMathParser(this.format); - } - query = ((RangeFieldMapper.RangeFieldType) mapper).rangeQuery(from, to, includeLower, includeUpper, + DateMathParser forcedDateParser = getForceDateParser(); + query = mapper.rangeQuery( + from, to, includeLower, includeUpper, relation, timeZone, forcedDateParser, context); - } else { - if (timeZone != null) { - throw new QueryShardException(context, "[range] time_zone can not be applied to non date field [" - + fieldName + "]"); - } - //LUCENE 4 UPGRADE Mapper#rangeQuery should use bytesref as well? 
- query = mapper.rangeQuery(from, to, includeLower, includeUpper, context); - } } else { if (timeZone != null) { throw new QueryShardException(context, "[range] time_zone can not be applied to non unmapped field [" @@ -530,7 +517,9 @@ protected Query doToQuery(QueryShardContext context) throws IOException { } if (query == null) { - query = new TermRangeQuery(this.fieldName, BytesRefs.toBytesRef(from), BytesRefs.toBytesRef(to), includeLower, includeUpper); + query = new TermRangeQuery(this.fieldName, + BytesRefs.toBytesRef(from), BytesRefs.toBytesRef(to), + includeLower, includeUpper); } return query; } diff --git a/core/src/main/java/org/elasticsearch/index/query/RegexpQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/RegexpQueryBuilder.java index e6ae1b7d63a93..96290d9291259 100644 --- a/core/src/main/java/org/elasticsearch/index/query/RegexpQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/RegexpQueryBuilder.java @@ -47,8 +47,6 @@ public class RegexpQueryBuilder extends AbstractQueryBuilder public static final int DEFAULT_FLAGS_VALUE = RegexpFlag.ALL.value(); public static final int DEFAULT_MAX_DETERMINIZED_STATES = Operations.DEFAULT_MAX_DETERMINIZED_STATES; - private static final ParseField NAME_FIELD = new ParseField("_name") - .withAllDeprecated("query name is not supported in short version of regexp query"); private static final ParseField FLAGS_VALUE_FIELD = new ParseField("flags_value"); private static final ParseField MAX_DETERMINIZED_STATES_FIELD = new ParseField("max_determinized_states"); private static final ParseField FLAGS_FIELD = new ParseField("flags"); @@ -219,13 +217,9 @@ public static RegexpQueryBuilder fromXContent(XContentParser parser) throws IOEx } } } else { - if (NAME_FIELD.match(currentFieldName)) { - queryName = parser.text(); - } else { - throwParsingExceptionOnMultipleFields(NAME, parser.getTokenLocation(), fieldName, parser.currentName()); - fieldName = currentFieldName; - value = parser.textOrNull(); - } + throwParsingExceptionOnMultipleFields(NAME, parser.getTokenLocation(), fieldName, parser.currentName()); + fieldName = currentFieldName; + value = parser.textOrNull(); } } diff --git a/core/src/main/java/org/elasticsearch/index/query/ScriptQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/ScriptQueryBuilder.java index 7272316499d0a..91d5bc9a0275a 100644 --- a/core/src/main/java/org/elasticsearch/index/query/ScriptQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/ScriptQueryBuilder.java @@ -34,6 +34,7 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.script.FilterScript; import org.elasticsearch.script.Script; import org.elasticsearch.script.SearchScript; @@ -126,25 +127,25 @@ public static ScriptQueryBuilder fromXContent(XContentParser parser) throws IOEx @Override protected Query doToQuery(QueryShardContext context) throws IOException { - SearchScript.Factory factory = context.getScriptService().compile(script, SearchScript.CONTEXT); - SearchScript.LeafFactory searchScript = factory.newFactory(script.getParams(), context.lookup()); - return new ScriptQuery(script, searchScript); + FilterScript.Factory factory = context.getScriptService().compile(script, FilterScript.CONTEXT); + FilterScript.LeafFactory filterScript = factory.newFactory(script.getParams(), context.lookup()); + return new ScriptQuery(script, filterScript); } 
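As a usage sketch for the ScriptQueryBuilder change above, where the script is now compiled against the FilterScript context and must evaluate to a boolean rather than returning an arbitrary Object; the field name and Painless source are hypothetical, and the single-argument inline Script constructor is assumed:

    // Sketch only: under FilterScript the script result is a boolean; null,
    // Boolean and Number return values are no longer interpreted by the query.
    ScriptQueryBuilder scriptFilter = new ScriptQueryBuilder(
            new Script("doc['num_likes'].value > 10"));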
static class ScriptQuery extends Query { final Script script; - final SearchScript.LeafFactory searchScript; + final FilterScript.LeafFactory filterScript; - ScriptQuery(Script script, SearchScript.LeafFactory searchScript) { + ScriptQuery(Script script, FilterScript.LeafFactory filterScript) { this.script = script; - this.searchScript = searchScript; + this.filterScript = filterScript; } @Override public String toString(String field) { StringBuilder buffer = new StringBuilder(); - buffer.append("ScriptFilter("); + buffer.append("ScriptQuery("); buffer.append(script); buffer.append(")"); return buffer.toString(); @@ -178,23 +179,13 @@ public Weight createWeight(IndexSearcher searcher, boolean needsScores, float bo @Override public Scorer scorer(LeafReaderContext context) throws IOException { DocIdSetIterator approximation = DocIdSetIterator.all(context.reader().maxDoc()); - final SearchScript leafScript = searchScript.newInstance(context); + final FilterScript leafScript = filterScript.newInstance(context); TwoPhaseIterator twoPhase = new TwoPhaseIterator(approximation) { @Override public boolean matches() throws IOException { leafScript.setDocument(approximation.docID()); - Object val = leafScript.run(); - if (val == null) { - return false; - } - if (val instanceof Boolean) { - return (Boolean) val; - } - if (val instanceof Number) { - return ((Number) val).longValue() != 0; - } - throw new IllegalArgumentException("Can't handle type [" + val + "] in script filter"); + return leafScript.execute(); } @Override diff --git a/core/src/main/java/org/elasticsearch/index/query/SimpleQueryStringBuilder.java b/core/src/main/java/org/elasticsearch/index/query/SimpleQueryStringBuilder.java index 17e7418fbacc5..3a9a0c3736b93 100644 --- a/core/src/main/java/org/elasticsearch/index/query/SimpleQueryStringBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/SimpleQueryStringBuilder.java @@ -20,6 +20,7 @@ package org.elasticsearch.index.query; import org.apache.lucene.analysis.Analyzer; +import org.apache.lucene.search.FuzzyQuery; import org.apache.lucene.search.Query; import org.elasticsearch.Version; import org.elasticsearch.common.ParseField; @@ -31,16 +32,18 @@ import org.elasticsearch.common.regex.Regex; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.index.mapper.MappedFieldType; -import org.elasticsearch.index.query.SimpleQueryParser.Settings; -import org.elasticsearch.index.search.QueryStringQueryParser; +import org.elasticsearch.index.search.QueryParserHelper; +import org.elasticsearch.index.search.SimpleQueryStringQueryParser; +import org.elasticsearch.index.search.SimpleQueryStringQueryParser.Settings; import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; import java.util.HashMap; +import java.util.List; import java.util.Locale; import java.util.Map; import java.util.Objects; -import java.util.TreeMap; /** * SimpleQuery is a query parser that acts similar to a query_string query, but @@ -57,7 +60,7 @@ *
<li>'{@code ~}N' at the end of phrases specifies near/slop query: "term1 term2"~5 * *

    - * See: {@link SimpleQueryParser} for more information. + * See: {@link SimpleQueryStringQueryParser} for more information. *

    * This query supports these options: *

    @@ -87,6 +90,12 @@ public class SimpleQueryStringBuilder extends AbstractQueryBuilder fieldsAndWeights = new TreeMap<>(); + private Map fieldsAndWeights = new HashMap<>(); /** If specified, analyzer to use to parse the query text, defaults to registered default in toQuery. */ private String analyzer; /** Default operator to use for linking boolean clauses. Defaults to OR according to docs. */ @@ -126,8 +137,6 @@ public class SimpleQueryStringBuilder extends AbstractQueryBuilder resolvedFieldsAndWeights = new TreeMap<>(); + public int fuzzyPrefixLength() { + return settings.fuzzyPrefixLength(); + } - if ((useAllFields != null && useAllFields) && (fieldsAndWeights.size() != 0)) { - throw addValidationError("cannot use [all_fields] parameter in conjunction with [fields]", null); - } + public SimpleQueryStringBuilder fuzzyMaxExpansions(int fuzzyMaxExpansions) { + this.settings.fuzzyMaxExpansions(fuzzyMaxExpansions); + return this; + } + + public int fuzzyMaxExpansions() { + return settings.fuzzyMaxExpansions(); + } + + public boolean fuzzyTranspositions() { + return settings.fuzzyTranspositions(); + } + + /** + * Sets whether transpositions are supported in fuzzy queries.

+ * The default metric used by fuzzy queries to determine a match is the Damerau-Levenshtein + * distance formula which supports transpositions. Setting transpositions to false will + * switch to classic Levenshtein distance.
    + * If not set, Damerau-Levenshtein distance metric will be used. + */ + public SimpleQueryStringBuilder fuzzyTranspositions(boolean fuzzyTranspositions) { + this.settings.fuzzyTranspositions(fuzzyTranspositions); + return this; + } - // If explicitly required to use all fields, use all fields, OR: - // Automatically determine the fields (to replace the _all field) if all of the following are true: - // - The _all field is disabled, - // - and the default_field has not been changed in the settings - // - and no fields are specified in the request + @Override + protected Query doToQuery(QueryShardContext context) throws IOException { Settings newSettings = new Settings(settings); - if ((this.useAllFields != null && this.useAllFields) || - (context.getMapperService().allEnabled() == false && - "_all".equals(context.defaultField()) && - this.fieldsAndWeights.isEmpty())) { - resolvedFieldsAndWeights = QueryStringQueryParser.resolveMappingField(context, "*", 1.0f, - false, false); - // Need to use lenient mode when using "all-mode" so exceptions aren't thrown due to mismatched types - newSettings.lenient(lenientSet ? settings.lenient() : true); + final Map resolvedFieldsAndWeights; + if (fieldsAndWeights.isEmpty() == false) { + resolvedFieldsAndWeights = QueryParserHelper.resolveMappingFields(context, fieldsAndWeights); } else { - // Use the default field if no fields specified - if (fieldsAndWeights.isEmpty()) { - resolvedFieldsAndWeights.put(resolveIndexName(context.defaultField(), context), AbstractQueryBuilder.DEFAULT_BOOST); - } else { - for (Map.Entry fieldEntry : fieldsAndWeights.entrySet()) { - if (Regex.isSimpleMatchPattern(fieldEntry.getKey())) { - for (String fieldName : context.getMapperService().simpleMatchToIndexNames(fieldEntry.getKey())) { - resolvedFieldsAndWeights.put(fieldName, fieldEntry.getValue()); - } - } else { - resolvedFieldsAndWeights.put(resolveIndexName(fieldEntry.getKey(), context), fieldEntry.getValue()); - } - } + List defaultFields = context.defaultFields(); + boolean isAllField = defaultFields.size() == 1 && Regex.isMatchAllPattern(defaultFields.get(0)); + if (isAllField) { + newSettings.lenient(lenientSet ? 
settings.lenient() : true); } + resolvedFieldsAndWeights = QueryParserHelper.resolveMappingFields(context, + QueryParserHelper.parseFieldsAndWeights(defaultFields)); } - // Use standard analyzer by default if none specified - Analyzer luceneAnalyzer; + final SimpleQueryStringQueryParser sqp; if (analyzer == null) { - luceneAnalyzer = context.getMapperService().searchAnalyzer(); + sqp = new SimpleQueryStringQueryParser(resolvedFieldsAndWeights, flags, newSettings, context); } else { - luceneAnalyzer = context.getIndexAnalyzers().get(analyzer); + Analyzer luceneAnalyzer = context.getIndexAnalyzers().get(analyzer); if (luceneAnalyzer == null) { throw new QueryShardException(context, "[" + SimpleQueryStringBuilder.NAME + "] analyzer [" + analyzer + "] not found"); } - + sqp = new SimpleQueryStringQueryParser(luceneAnalyzer, resolvedFieldsAndWeights, flags, newSettings, context); } - - SimpleQueryParser sqp = new SimpleQueryParser(luceneAnalyzer, resolvedFieldsAndWeights, flags, newSettings, context); sqp.setDefaultOperator(defaultOperator.toBooleanClauseOccur()); Query query = sqp.parse(queryText); return Queries.maybeApplyMinimumShouldMatch(query, minimumShouldMatch); } - private static String resolveIndexName(String fieldName, QueryShardContext context) { - MappedFieldType fieldType = context.fieldMapper(fieldName); - if (fieldType != null) { - return fieldType.name(); - } - return fieldName; - } - @Override protected void doXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(NAME); @@ -477,10 +508,10 @@ protected void doXContent(XContentBuilder builder, Params params) throws IOExcep if (minimumShouldMatch != null) { builder.field(MINIMUM_SHOULD_MATCH_FIELD.getPreferredName(), minimumShouldMatch); } - if (useAllFields != null) { - builder.field(ALL_FIELDS_FIELD.getPreferredName(), useAllFields); - } builder.field(GENERATE_SYNONYMS_PHRASE_QUERY.getPreferredName(), settings.autoGenerateSynonymsPhraseQuery()); + builder.field(FUZZY_PREFIX_LENGTH_FIELD.getPreferredName(), settings.fuzzyPrefixLength()); + builder.field(FUZZY_MAX_EXPANSIONS_FIELD.getPreferredName(), settings.fuzzyMaxExpansions()); + builder.field(FUZZY_TRANSPOSITIONS_FIELD.getPreferredName(), settings.fuzzyTranspositions()); printBoostAndQueryName(builder); builder.endObject(); } @@ -491,15 +522,17 @@ public static SimpleQueryStringBuilder fromXContent(XContentParser parser) throw float boost = AbstractQueryBuilder.DEFAULT_BOOST; String queryName = null; String minimumShouldMatch = null; - Map fieldsAndWeights = new HashMap<>(); + Map fieldsAndWeights = null; Operator defaultOperator = null; String analyzerName = null; int flags = SimpleQueryStringFlag.ALL.value(); Boolean lenient = null; boolean analyzeWildcard = SimpleQueryStringBuilder.DEFAULT_ANALYZE_WILDCARD; String quoteFieldSuffix = null; - Boolean useAllFields = null; boolean autoGenerateSynonymsPhraseQuery = true; + int fuzzyPrefixLenght = SimpleQueryStringBuilder.DEFAULT_FUZZY_PREFIX_LENGTH; + int fuzzyMaxExpansions = SimpleQueryStringBuilder.DEFAULT_FUZZY_MAX_EXPANSIONS; + boolean fuzzyTranspositions = SimpleQueryStringBuilder.DEFAULT_FUZZY_TRANSPOSITIONS; XContentParser.Token token; while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { @@ -507,24 +540,11 @@ public static SimpleQueryStringBuilder fromXContent(XContentParser parser) throw currentFieldName = parser.currentName(); } else if (token == XContentParser.Token.START_ARRAY) { if (FIELDS_FIELD.match(currentFieldName)) { - while ((token = parser.nextToken()) 
!= XContentParser.Token.END_ARRAY) { - String fField = null; - float fBoost = 1; - char[] text = parser.textCharacters(); - int end = parser.textOffset() + parser.textLength(); - for (int i = parser.textOffset(); i < end; i++) { - if (text[i] == '^') { - int relativeLocation = i - parser.textOffset(); - fField = new String(text, parser.textOffset(), relativeLocation); - fBoost = Float.parseFloat(new String(text, i + 1, parser.textLength() - relativeLocation - 1)); - break; - } - } - if (fField == null) { - fField = parser.text(); - } - fieldsAndWeights.put(fField, fBoost); + List fields = new ArrayList<>(); + while (parser.nextToken() != XContentParser.Token.END_ARRAY) { + fields.add(parser.text()); } + fieldsAndWeights = QueryParserHelper.parseFieldsAndWeights(fields); } else { throw new ParsingException(parser.getTokenLocation(), "[" + SimpleQueryStringBuilder.NAME + "] query does not support [" + currentFieldName + "]"); @@ -564,9 +584,15 @@ public static SimpleQueryStringBuilder fromXContent(XContentParser parser) throw } else if (QUOTE_FIELD_SUFFIX_FIELD.match(currentFieldName)) { quoteFieldSuffix = parser.textOrNull(); } else if (ALL_FIELDS_FIELD.match(currentFieldName)) { - useAllFields = parser.booleanValue(); + // Ignore deprecated option } else if (GENERATE_SYNONYMS_PHRASE_QUERY.match(currentFieldName)) { autoGenerateSynonymsPhraseQuery = parser.booleanValue(); + } else if (FUZZY_PREFIX_LENGTH_FIELD.match(currentFieldName)) { + fuzzyPrefixLenght = parser.intValue(); + } else if (FUZZY_MAX_EXPANSIONS_FIELD.match(currentFieldName)) { + fuzzyMaxExpansions = parser.intValue(); + } else if (FUZZY_TRANSPOSITIONS_FIELD.match(currentFieldName)) { + fuzzyTranspositions = parser.booleanValue(); } else { throw new ParsingException(parser.getTokenLocation(), "[" + SimpleQueryStringBuilder.NAME + "] unsupported field [" + parser.currentName() + "]"); @@ -582,20 +608,20 @@ public static SimpleQueryStringBuilder fromXContent(XContentParser parser) throw throw new ParsingException(parser.getTokenLocation(), "[" + SimpleQueryStringBuilder.NAME + "] query text missing"); } - if ((useAllFields != null && useAllFields) && (fieldsAndWeights.size() != 0)) { - throw new ParsingException(parser.getTokenLocation(), - "cannot use [all_fields] parameter in conjunction with [fields]"); - } - SimpleQueryStringBuilder qb = new SimpleQueryStringBuilder(queryBody); - qb.boost(boost).fields(fieldsAndWeights).analyzer(analyzerName).queryName(queryName).minimumShouldMatch(minimumShouldMatch); + if (fieldsAndWeights != null) { + qb.fields(fieldsAndWeights); + } + qb.boost(boost).analyzer(analyzerName).queryName(queryName).minimumShouldMatch(minimumShouldMatch); qb.flags(flags).defaultOperator(defaultOperator); if (lenient != null) { qb.lenient(lenient); } qb.analyzeWildcard(analyzeWildcard).boost(boost).quoteFieldSuffix(quoteFieldSuffix); - qb.useAllFields(useAllFields); qb.autoGenerateSynonymsPhraseQuery(autoGenerateSynonymsPhraseQuery); + qb.fuzzyPrefixLength(fuzzyPrefixLenght); + qb.fuzzyMaxExpansions(fuzzyMaxExpansions); + qb.fuzzyTranspositions(fuzzyTranspositions); return qb; } @@ -606,7 +632,7 @@ public String getWriteableName() { @Override protected int doHashCode() { - return Objects.hash(fieldsAndWeights, analyzer, defaultOperator, queryText, minimumShouldMatch, settings, flags, useAllFields); + return Objects.hash(fieldsAndWeights, analyzer, defaultOperator, queryText, minimumShouldMatch, settings, flags); } @Override @@ -615,8 +641,6 @@ protected boolean doEquals(SimpleQueryStringBuilder other) { && 
Objects.equals(defaultOperator, other.defaultOperator) && Objects.equals(queryText, other.queryText) && Objects.equals(minimumShouldMatch, other.minimumShouldMatch) && Objects.equals(settings, other.settings) - && (flags == other.flags) - && (useAllFields == other.useAllFields); + && (flags == other.flags); } - } diff --git a/core/src/main/java/org/elasticsearch/index/query/SimpleQueryStringFlag.java b/core/src/main/java/org/elasticsearch/index/query/SimpleQueryStringFlag.java index e8cbe035c90cd..77f21125fc83f 100644 --- a/core/src/main/java/org/elasticsearch/index/query/SimpleQueryStringFlag.java +++ b/core/src/main/java/org/elasticsearch/index/query/SimpleQueryStringFlag.java @@ -19,6 +19,7 @@ package org.elasticsearch.index.query; import org.elasticsearch.common.Strings; +import org.elasticsearch.index.search.SimpleQueryStringQueryParser; import java.util.Locale; @@ -28,18 +29,18 @@ public enum SimpleQueryStringFlag { ALL(-1), NONE(0), - AND(SimpleQueryParser.AND_OPERATOR), - NOT(SimpleQueryParser.NOT_OPERATOR), - OR(SimpleQueryParser.OR_OPERATOR), - PREFIX(SimpleQueryParser.PREFIX_OPERATOR), - PHRASE(SimpleQueryParser.PHRASE_OPERATOR), - PRECEDENCE(SimpleQueryParser.PRECEDENCE_OPERATORS), - ESCAPE(SimpleQueryParser.ESCAPE_OPERATOR), - WHITESPACE(SimpleQueryParser.WHITESPACE_OPERATOR), - FUZZY(SimpleQueryParser.FUZZY_OPERATOR), + AND(SimpleQueryStringQueryParser.AND_OPERATOR), + NOT(SimpleQueryStringQueryParser.NOT_OPERATOR), + OR(SimpleQueryStringQueryParser.OR_OPERATOR), + PREFIX(SimpleQueryStringQueryParser.PREFIX_OPERATOR), + PHRASE(SimpleQueryStringQueryParser.PHRASE_OPERATOR), + PRECEDENCE(SimpleQueryStringQueryParser.PRECEDENCE_OPERATORS), + ESCAPE(SimpleQueryStringQueryParser.ESCAPE_OPERATOR), + WHITESPACE(SimpleQueryStringQueryParser.WHITESPACE_OPERATOR), + FUZZY(SimpleQueryStringQueryParser.FUZZY_OPERATOR), // NEAR and SLOP are synonymous, since "slop" is a more familiar term than "near" - NEAR(SimpleQueryParser.NEAR_OPERATOR), - SLOP(SimpleQueryParser.NEAR_OPERATOR); + NEAR(SimpleQueryStringQueryParser.NEAR_OPERATOR), + SLOP(SimpleQueryStringQueryParser.NEAR_OPERATOR); final int value; diff --git a/core/src/main/java/org/elasticsearch/index/query/TermsQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/TermsQueryBuilder.java index 1eefbb158a6d1..ffb7e9d607f8a 100644 --- a/core/src/main/java/org/elasticsearch/index/query/TermsQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/TermsQueryBuilder.java @@ -391,7 +391,7 @@ public static TermsQueryBuilder fromXContent(XContentParser parser) throws IOExc .queryName(queryName); } - private static List parseValues(XContentParser parser) throws IOException { + static List parseValues(XContentParser parser) throws IOException { List values = new ArrayList<>(); while (parser.nextToken() != XContentParser.Token.END_ARRAY) { Object value = parser.objectBytes(); diff --git a/core/src/main/java/org/elasticsearch/index/query/TermsSetQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/TermsSetQueryBuilder.java new file mode 100644 index 0000000000000..0947a67212d77 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/query/TermsSetQueryBuilder.java @@ -0,0 +1,369 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. 
Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.index.query; + +import org.apache.lucene.index.DocValues; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.index.NumericDocValues; +import org.apache.lucene.index.SortedNumericDocValues; +import org.apache.lucene.index.Term; +import org.apache.lucene.search.BooleanQuery; +import org.apache.lucene.search.CoveringQuery; +import org.apache.lucene.search.DoubleValues; +import org.apache.lucene.search.LongValues; +import org.apache.lucene.search.LongValuesSource; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermQuery; +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.ParsingException; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.lucene.BytesRefs; +import org.elasticsearch.common.lucene.search.Queries; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.index.fielddata.IndexNumericFieldData; +import org.elasticsearch.index.mapper.MappedFieldType; +import org.elasticsearch.script.Script; +import org.elasticsearch.script.SearchScript; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Objects; + +public final class TermsSetQueryBuilder extends AbstractQueryBuilder { + + public static final String NAME = "terms_set"; + + static final ParseField TERMS_FIELD = new ParseField("terms"); + static final ParseField MINIMUM_SHOULD_MATCH_FIELD = new ParseField("minimum_should_match_field"); + static final ParseField MINIMUM_SHOULD_MATCH_SCRIPT = new ParseField("minimum_should_match_script"); + + private final String fieldName; + private final List values; + + private String minimumShouldMatchField; + private Script minimumShouldMatchScript; + + public TermsSetQueryBuilder(String fieldName, List values) { + this.fieldName = Objects.requireNonNull(fieldName); + this.values = TermsQueryBuilder.convert(Objects.requireNonNull(values)); + } + + public TermsSetQueryBuilder(StreamInput in) throws IOException { + super(in); + this.fieldName = in.readString(); + this.values = (List) in.readGenericValue(); + this.minimumShouldMatchField = in.readOptionalString(); + this.minimumShouldMatchScript = in.readOptionalWriteable(Script::new); + } + + @Override + protected void doWriteTo(StreamOutput out) throws IOException { + out.writeString(fieldName); + out.writeGenericValue(values); + out.writeOptionalString(minimumShouldMatchField); + out.writeOptionalWriteable(minimumShouldMatchScript); + } + + public List getValues() { + return values; + } + + public String getMinimumShouldMatchField() { + return minimumShouldMatchField; + } + + public TermsSetQueryBuilder setMinimumShouldMatchField(String minimumShouldMatchField) { + if 
(minimumShouldMatchScript != null) { + throw new IllegalArgumentException("A script has already been specified. Cannot specify both a field and script"); + } + this.minimumShouldMatchField = minimumShouldMatchField; + return this; + } + + public Script getMinimumShouldMatchScript() { + return minimumShouldMatchScript; + } + + public TermsSetQueryBuilder setMinimumShouldMatchScript(Script minimumShouldMatchScript) { + if (minimumShouldMatchField != null) { + throw new IllegalArgumentException("A field has already been specified. Cannot specify both a field and script"); + } + this.minimumShouldMatchScript = minimumShouldMatchScript; + return this; + } + + @Override + protected boolean doEquals(TermsSetQueryBuilder other) { + return Objects.equals(fieldName, this.fieldName) && Objects.equals(values, this.values) && + Objects.equals(minimumShouldMatchField, this.minimumShouldMatchField) && + Objects.equals(minimumShouldMatchScript, this.minimumShouldMatchScript); + } + + @Override + protected int doHashCode() { + return Objects.hash(fieldName, values, minimumShouldMatchField, minimumShouldMatchScript); + } + + @Override + public String getWriteableName() { + return NAME; + } + + @Override + protected void doXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(NAME); + builder.startObject(fieldName); + builder.field(TERMS_FIELD.getPreferredName(), TermsQueryBuilder.convertBack(values)); + if (minimumShouldMatchField != null) { + builder.field(MINIMUM_SHOULD_MATCH_FIELD.getPreferredName(), minimumShouldMatchField); + } + if (minimumShouldMatchScript != null) { + builder.field(MINIMUM_SHOULD_MATCH_SCRIPT.getPreferredName(), minimumShouldMatchScript); + } + printBoostAndQueryName(builder); + builder.endObject(); + builder.endObject(); + } + + public static TermsSetQueryBuilder fromXContent(XContentParser parser) throws IOException { + XContentParser.Token token = parser.nextToken(); + if (token != XContentParser.Token.FIELD_NAME) { + throw new ParsingException(parser.getTokenLocation(), "[" + NAME + "] unknown token [" + token + "]"); + } + String currentFieldName = parser.currentName(); + String fieldName = currentFieldName; + + token = parser.nextToken(); + if (token != XContentParser.Token.START_OBJECT) { + throw new ParsingException(parser.getTokenLocation(), "[" + NAME + "] unknown token [" + token + "]"); + } + + List values = new ArrayList<>(); + String minimumShouldMatchField = null; + Script minimumShouldMatchScript = null; + String queryName = null; + float boost = AbstractQueryBuilder.DEFAULT_BOOST; + + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token == XContentParser.Token.START_ARRAY) { + if (TERMS_FIELD.match(currentFieldName)) { + values = TermsQueryBuilder.parseValues(parser); + } else { + throw new ParsingException(parser.getTokenLocation(), "[" + NAME + "] query does not support [" + + currentFieldName + "]"); + } + } else if (token == XContentParser.Token.START_OBJECT) { + if (MINIMUM_SHOULD_MATCH_SCRIPT.match(currentFieldName)) { + minimumShouldMatchScript = Script.parse(parser); + } else { + throw new ParsingException(parser.getTokenLocation(), "[" + NAME + "] query does not support [" + + currentFieldName + "]"); + } + } else if (token.isValue()) { + if (MINIMUM_SHOULD_MATCH_FIELD.match(currentFieldName)) { + minimumShouldMatchField = parser.text(); + } else if 
(AbstractQueryBuilder.BOOST_FIELD.match(currentFieldName)) { + boost = parser.floatValue(); + } else if (AbstractQueryBuilder.NAME_FIELD.match(currentFieldName)) { + queryName = parser.text(); + } else { + throw new ParsingException(parser.getTokenLocation(), "[" + NAME + "] query does not support [" + + currentFieldName + "]"); + } + } else { + throw new ParsingException(parser.getTokenLocation(), "[" + NAME + "] unknown token [" + token + + "] after [" + currentFieldName + "]"); + } + } + + token = parser.nextToken(); + if (token != XContentParser.Token.END_OBJECT) { + throw new ParsingException(parser.getTokenLocation(), "[" + NAME + "] unknown token [" + token + "]"); + } + + TermsSetQueryBuilder queryBuilder = new TermsSetQueryBuilder(fieldName, values) + .queryName(queryName).boost(boost); + if (minimumShouldMatchField != null) { + queryBuilder.setMinimumShouldMatchField(minimumShouldMatchField); + } + if (minimumShouldMatchScript != null) { + queryBuilder.setMinimumShouldMatchScript(minimumShouldMatchScript); + } + return queryBuilder; + } + + @Override + protected Query doToQuery(QueryShardContext context) throws IOException { + if (values.isEmpty()) { + return Queries.newMatchNoDocsQuery("No terms supplied for \"" + getName() + "\" query."); + } + // Fail before we attempt to create the term queries: + if (values.size() > BooleanQuery.getMaxClauseCount()) { + throw new BooleanQuery.TooManyClauses(); + } + + final MappedFieldType fieldType = context.fieldMapper(fieldName); + final List queries = new ArrayList<>(values.size()); + for (Object value : values) { + if (fieldType != null) { + queries.add(fieldType.termQuery(value, context)); + } else { + queries.add(new TermQuery(new Term(fieldName, BytesRefs.toBytesRef(value)))); + } + } + final LongValuesSource longValuesSource; + if (minimumShouldMatchField != null) { + MappedFieldType msmFieldType = context.fieldMapper(minimumShouldMatchField); + if (msmFieldType == null) { + throw new QueryShardException(context, "failed to find minimum_should_match field [" + minimumShouldMatchField + "]"); + } + + IndexNumericFieldData fieldData = context.getForField(msmFieldType); + longValuesSource = new FieldValuesSource(fieldData); + } else if (minimumShouldMatchScript != null) { + SearchScript.Factory factory = context.getScriptService().compile(minimumShouldMatchScript, SearchScript.CONTEXT); + Map params = new HashMap<>(); + params.putAll(minimumShouldMatchScript.getParams()); + params.put("num_terms", queries.size()); + SearchScript.LeafFactory leafFactory = factory.newFactory(params, context.lookup()); + longValuesSource = new ScriptLongValueSource(minimumShouldMatchScript, leafFactory); + } else { + throw new IllegalStateException("No minimum should match has been specified"); + } + return new CoveringQuery(queries, longValuesSource); + } + + static final class ScriptLongValueSource extends LongValuesSource { + + private final Script script; + private final SearchScript.LeafFactory leafFactory; + + ScriptLongValueSource(Script script, SearchScript.LeafFactory leafFactory) { + this.script = script; + this.leafFactory = leafFactory; + } + + @Override + public LongValues getValues(LeafReaderContext ctx, DoubleValues scores) throws IOException { + SearchScript searchScript = leafFactory.newInstance(ctx); + return new LongValues() { + @Override + public long longValue() throws IOException { + return searchScript.runAsLong(); + } + + @Override + public boolean advanceExact(int doc) throws IOException { + searchScript.setDocument(doc); + return 
searchScript.run() != null; + } + }; + } + + @Override + public boolean needsScores() { + return false; + } + + @Override + public int hashCode() { + // CoveringQuery with this field value source cannot be cachable + return System.identityHashCode(this); + } + + @Override + public boolean equals(Object obj) { + return this == obj; + } + + @Override + public String toString() { + return "script(" + script.toString() + ")"; + } + + } + + // Forked from LongValuesSource.FieldValuesSource and changed getValues() method to always use sorted numeric + // doc values, because that is what is being used in NumberFieldMapper. + static class FieldValuesSource extends LongValuesSource { + + private final IndexNumericFieldData field; + + FieldValuesSource(IndexNumericFieldData field) { + this.field = field; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + FieldValuesSource that = (FieldValuesSource) o; + return Objects.equals(field, that.field); + } + + @Override + public String toString() { + return "long(" + field + ")"; + } + + @Override + public int hashCode() { + return Objects.hash(field); + } + + @Override + public LongValues getValues(LeafReaderContext ctx, DoubleValues scores) throws IOException { + SortedNumericDocValues values = field.load(ctx).getLongValues(); + return new LongValues() { + + long current = -1; + + @Override + public long longValue() throws IOException { + return current; + } + + @Override + public boolean advanceExact(int doc) throws IOException { + boolean hasValue = values.advanceExact(doc); + if (hasValue) { + assert values.docValueCount() == 1; + current = values.nextValue(); + return true; + } else { + return false; + } + } + }; + } + + @Override + public boolean needsScores() { + return false; + } + } + +} diff --git a/core/src/main/java/org/elasticsearch/index/query/functionscore/DecayFunctionBuilder.java b/core/src/main/java/org/elasticsearch/index/query/functionscore/DecayFunctionBuilder.java index dcd1399bf5159..fe7d097638f31 100644 --- a/core/src/main/java/org/elasticsearch/index/query/functionscore/DecayFunctionBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/functionscore/DecayFunctionBuilder.java @@ -43,8 +43,9 @@ import org.elasticsearch.index.fielddata.MultiGeoPointValues; import org.elasticsearch.index.fielddata.NumericDoubleValues; import org.elasticsearch.index.fielddata.SortedNumericDoubleValues; -import org.elasticsearch.index.mapper.GeoPointFieldMapper.GeoPointFieldType; +import org.elasticsearch.index.fielddata.SortingNumericDoubleValues; import org.elasticsearch.index.mapper.DateFieldMapper; +import org.elasticsearch.index.mapper.GeoPointFieldMapper.GeoPointFieldType; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.mapper.NumberFieldMapper; import org.elasticsearch.index.query.QueryShardContext; @@ -346,22 +347,23 @@ public boolean needsScores() { @Override protected NumericDoubleValues distance(LeafReaderContext context) { final MultiGeoPointValues geoPointValues = fieldData.load(context).getGeoPointValues(); - return mode.select(new MultiValueMode.UnsortedNumericDoubleValues() { - @Override - public int docValueCount() { - return geoPointValues.docValueCount(); - } - + return mode.select(new SortingNumericDoubleValues() { @Override public boolean advanceExact(int docId) throws IOException { - return geoPointValues.advanceExact(docId); - } - - @Override - public double nextValue() throws 
IOException { - GeoPoint other = geoPointValues.nextValue(); - return Math.max(0.0d, - distFunction.calculate(origin.lat(), origin.lon(), other.lat(), other.lon(), DistanceUnit.METERS) - offset); + if (geoPointValues.advanceExact(docId)) { + int n = geoPointValues.docValueCount(); + resize(n); + for (int i = 0; i < n; i++) { + GeoPoint other = geoPointValues.nextValue(); + double distance = distFunction.calculate( + origin.lat(), origin.lon(), other.lat(), other.lon(), DistanceUnit.METERS); + values[i] = Math.max(0.0d, distance - offset); + } + sort(); + return true; + } else { + return false; + } } }, 0.0); } @@ -427,20 +429,20 @@ public boolean needsScores() { @Override protected NumericDoubleValues distance(LeafReaderContext context) { final SortedNumericDoubleValues doubleValues = fieldData.load(context).getDoubleValues(); - return mode.select(new MultiValueMode.UnsortedNumericDoubleValues() { + return mode.select(new SortingNumericDoubleValues() { @Override - public int docValueCount() { - return doubleValues.docValueCount(); - } - - @Override - public boolean advanceExact(int doc) throws IOException { - return doubleValues.advanceExact(doc); - } - - @Override - public double nextValue() throws IOException { - return Math.max(0.0d, Math.abs(doubleValues.nextValue() - origin) - offset); + public boolean advanceExact(int docId) throws IOException { + if (doubleValues.advanceExact(docId)) { + int n = doubleValues.docValueCount(); + resize(n); + for (int i = 0; i < n; i++) { + values[i] = Math.max(0.0d, Math.abs(doubleValues.nextValue() - origin) - offset); + } + sort(); + return true; + } else { + return false; + } } }, 0.0); } @@ -542,10 +544,11 @@ public Explanation explainScore(int docId, Explanation subQueryScore) throws IOE if (distance.advanceExact(docId) == false) { return Explanation.noMatch("No value for the distance"); } + double value = distance.doubleValue(); return Explanation.match( (float) score(docId, subQueryScore.getValue()), "Function for field " + getFieldName() + ":", - func.explainFunction(getDistanceString(ctx, docId), distance.doubleValue(), scale)); + func.explainFunction(getDistanceString(ctx, docId), value, scale)); } }; } diff --git a/core/src/main/java/org/elasticsearch/index/query/functionscore/ScoreFunctionBuilder.java b/core/src/main/java/org/elasticsearch/index/query/functionscore/ScoreFunctionBuilder.java index 3088a39f942ed..c64f1b1e403c8 100644 --- a/core/src/main/java/org/elasticsearch/index/query/functionscore/ScoreFunctionBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/functionscore/ScoreFunctionBuilder.java @@ -24,14 +24,15 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.lucene.search.function.ScoreFunction; import org.elasticsearch.common.lucene.search.function.WeightFactorFunction; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.index.query.QueryShardContext; import java.io.IOException; import java.util.Objects; -public abstract class ScoreFunctionBuilder> implements ToXContent, NamedWriteable { +public abstract class ScoreFunctionBuilder> implements ToXContentFragment, NamedWriteable { private Float weight; diff --git a/core/src/main/java/org/elasticsearch/index/query/support/QueryParsers.java b/core/src/main/java/org/elasticsearch/index/query/support/QueryParsers.java index 
62750d275b1c6..036efa75bdaa3 100644 --- a/core/src/main/java/org/elasticsearch/index/query/support/QueryParsers.java +++ b/core/src/main/java/org/elasticsearch/index/query/support/QueryParsers.java @@ -25,7 +25,7 @@ public final class QueryParsers { - public static final ParseField CONSTANT_SCORE = new ParseField("constant_score", "constant_score_auto", "constant_score_filter"); + public static final ParseField CONSTANT_SCORE = new ParseField("constant_score"); public static final ParseField SCORING_BOOLEAN = new ParseField("scoring_boolean"); public static final ParseField CONSTANT_SCORE_BOOLEAN = new ParseField("constant_score_boolean"); public static final ParseField TOP_TERMS = new ParseField("top_terms_"); diff --git a/core/src/main/java/org/elasticsearch/index/recovery/RecoveryStats.java b/core/src/main/java/org/elasticsearch/index/recovery/RecoveryStats.java index 750e77092ceeb..4e3d71cce6299 100644 --- a/core/src/main/java/org/elasticsearch/index/recovery/RecoveryStats.java +++ b/core/src/main/java/org/elasticsearch/index/recovery/RecoveryStats.java @@ -22,7 +22,8 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; @@ -33,7 +34,7 @@ * Recovery related statistics, starting at the shard level and allowing aggregation to * indices and node level */ -public class RecoveryStats implements ToXContent, Streamable { +public class RecoveryStats implements ToXContentFragment, Streamable { private final AtomicInteger currentAsSource = new AtomicInteger(); private final AtomicInteger currentAsTarget = new AtomicInteger(); diff --git a/core/src/main/java/org/elasticsearch/index/refresh/RefreshStats.java b/core/src/main/java/org/elasticsearch/index/refresh/RefreshStats.java index 729bb47802d3f..1235aad885f35 100644 --- a/core/src/main/java/org/elasticsearch/index/refresh/RefreshStats.java +++ b/core/src/main/java/org/elasticsearch/index/refresh/RefreshStats.java @@ -23,13 +23,14 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; import java.util.Objects; -public class RefreshStats implements Streamable, ToXContent { +public class RefreshStats implements Streamable, ToXContentFragment { private long total; diff --git a/core/src/main/java/org/elasticsearch/index/reindex/AbstractBulkByScrollRequest.java b/core/src/main/java/org/elasticsearch/index/reindex/AbstractBulkByScrollRequest.java index 0355eeaee4788..62dbd5b131234 100644 --- a/core/src/main/java/org/elasticsearch/index/reindex/AbstractBulkByScrollRequest.java +++ b/core/src/main/java/org/elasticsearch/index/reindex/AbstractBulkByScrollRequest.java @@ -43,7 +43,7 @@ public abstract class AbstractBulkByScrollRequest= 0; + } + public Query parse(Type type, String fieldName, Object value) throws IOException { MappedFieldType fieldType = context.fieldMapper(fieldName); if (fieldType == null) { @@ -264,7 +267,11 @@ 
public Query parse(Type type, String fieldName, Object value) throws IOException assert analyzer != null; MatchQueryBuilder builder = new MatchQueryBuilder(analyzer, fieldType); builder.setEnablePositionIncrements(this.enablePositionIncrements); - builder.setAutoGenerateMultiTermSynonymsPhraseQuery(this.autoGenerateSynonymsPhraseQuery); + if (hasPositions(fieldType)) { + builder.setAutoGenerateMultiTermSynonymsPhraseQuery(this.autoGenerateSynonymsPhraseQuery); + } else { + builder.setAutoGenerateMultiTermSynonymsPhraseQuery(false); + } Query query = null; switch (type) { @@ -332,6 +339,20 @@ protected Query newSynonymQuery(Term[] terms) { return blendTermsQuery(terms, mapper); } + @Override + protected Query analyzePhrase(String field, TokenStream stream, int slop) throws IOException { + if (hasPositions(mapper) == false) { + IllegalStateException exc = + new IllegalStateException("field:[" + field + "] was indexed without position data; cannot run PhraseQuery"); + if (lenient) { + return newLenientFieldQuery(field, exc); + } else { + throw exc; + } + } + return super.analyzePhrase(field, stream, slop); + } + /** * Checks if graph analysis should be enabled for the field depending * on the provided {@link Analyzer} @@ -396,9 +417,6 @@ private Query toMultiPhrasePrefix(final Query query, int phraseSlop, int maxExpa } else if (innerQuery instanceof TermQuery) { prefixQuery.add(((TermQuery) innerQuery).getTerm()); return boost == 1 ? prefixQuery : new BoostQuery(prefixQuery, boost); - } else if (innerQuery instanceof AllTermQuery) { - prefixQuery.add(((AllTermQuery) innerQuery).getTerm()); - return boost == 1 ? prefixQuery : new BoostQuery(prefixQuery, boost); } return query; } diff --git a/core/src/main/java/org/elasticsearch/index/search/MultiMatchQuery.java b/core/src/main/java/org/elasticsearch/index/search/MultiMatchQuery.java index 198930b243e76..61029f70e8f19 100644 --- a/core/src/main/java/org/elasticsearch/index/search/MultiMatchQuery.java +++ b/core/src/main/java/org/elasticsearch/index/search/MultiMatchQuery.java @@ -29,7 +29,6 @@ import org.apache.lucene.search.TermQuery; import org.apache.lucene.util.BytesRef; import org.elasticsearch.ElasticsearchParseException; -import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.lucene.search.Queries; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.query.AbstractQueryBuilder; @@ -44,6 +43,8 @@ import java.util.Map; import java.util.Objects; +import static org.elasticsearch.common.lucene.search.Queries.newLenientFieldQuery; + public class MultiMatchQuery extends MatchQuery { private Float groupTieBreaker = null; @@ -59,7 +60,7 @@ public MultiMatchQuery(QueryShardContext context) { private Query parseAndApply(Type type, String fieldName, Object value, String minimumShouldMatch, Float boostValue) throws IOException { Query query = parse(type, fieldName, value); query = Queries.maybeApplyMinimumShouldMatch(query, minimumShouldMatch); - if (query != null && boostValue != null && boostValue != AbstractQueryBuilder.DEFAULT_BOOST) { + if (query != null && boostValue != null && boostValue != AbstractQueryBuilder.DEFAULT_BOOST && query instanceof MatchNoDocsQuery == false) { query = new BoostQuery(query, boostValue); } return query; @@ -205,7 +206,7 @@ public Query blendTerms(Term[] terms, MappedFieldType fieldType) { for (int i = 0; i < terms.length; i++) { values[i] = terms[i].bytes(); } - return MultiMatchQuery.blendTerms(context, values, commonTermsCutoff, tieBreaker, blendedFields); + 
return MultiMatchQuery.blendTerms(context, values, commonTermsCutoff, tieBreaker, lenient, blendedFields); } @Override @@ -213,7 +214,7 @@ public Query blendTerm(Term term, MappedFieldType fieldType) { if (blendedFields == null) { return super.blendTerm(term, fieldType); } - return MultiMatchQuery.blendTerm(context, term.bytes(), commonTermsCutoff, tieBreaker, blendedFields); + return MultiMatchQuery.blendTerm(context, term.bytes(), commonTermsCutoff, tieBreaker, lenient, blendedFields); } @Override @@ -228,12 +229,12 @@ public Query termQuery(MappedFieldType fieldType, BytesRef value) { } static Query blendTerm(QueryShardContext context, BytesRef value, Float commonTermsCutoff, float tieBreaker, - FieldAndFieldType... blendedFields) { - return blendTerms(context, new BytesRef[] {value}, commonTermsCutoff, tieBreaker, blendedFields); + boolean lenient, FieldAndFieldType... blendedFields) { + return blendTerms(context, new BytesRef[] {value}, commonTermsCutoff, tieBreaker, lenient, blendedFields); } static Query blendTerms(QueryShardContext context, BytesRef[] values, Float commonTermsCutoff, float tieBreaker, - FieldAndFieldType... blendedFields) { + boolean lenient, FieldAndFieldType... blendedFields) { List queries = new ArrayList<>(); Term[] terms = new Term[blendedFields.length * values.length]; float[] blendedBoost = new float[blendedFields.length * values.length]; @@ -243,19 +244,12 @@ static Query blendTerms(QueryShardContext context, BytesRef[] values, Float comm Query query; try { query = ft.fieldType.termQuery(term, context); - } catch (IllegalArgumentException e) { - // the query expects a certain class of values such as numbers - // of ip addresses and the value can't be parsed, so ignore this - // field - continue; - } catch (ElasticsearchParseException parseException) { - // date fields throw an ElasticsearchParseException with the - // underlying IAE as the cause, ignore this field if that is - // the case - if (parseException.getCause() instanceof IllegalArgumentException) { - continue; + } catch (RuntimeException e) { + if (lenient) { + query = newLenientFieldQuery(ft.fieldType.name(), e); + } else { + throw e; } - throw parseException; } float boost = ft.boost; while (query instanceof BoostQuery) { @@ -268,7 +262,7 @@ static Query blendTerms(QueryShardContext context, BytesRef[] values, Float comm blendedBoost[i] = boost; i++; } else { - if (boost != 1f) { + if (boost != 1f && query instanceof MatchNoDocsQuery == false) { query = new BoostQuery(query, boost); } queries.add(query); diff --git a/core/src/main/java/org/elasticsearch/index/search/QueryParserHelper.java b/core/src/main/java/org/elasticsearch/index/search/QueryParserHelper.java new file mode 100644 index 0000000000000..18a124d86b35c --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/search/QueryParserHelper.java @@ -0,0 +1,197 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.index.search; + +import org.elasticsearch.common.regex.Regex; +import org.elasticsearch.index.mapper.DateFieldMapper; +import org.elasticsearch.index.mapper.DocumentMapper; +import org.elasticsearch.index.mapper.FieldMapper; +import org.elasticsearch.index.mapper.IpFieldMapper; +import org.elasticsearch.index.mapper.KeywordFieldMapper; +import org.elasticsearch.index.mapper.MapperService; +import org.elasticsearch.index.mapper.MetadataFieldMapper; +import org.elasticsearch.index.mapper.NumberFieldMapper; +import org.elasticsearch.index.mapper.TextFieldMapper; +import org.elasticsearch.index.query.QueryShardContext; + +import java.util.Collection; +import java.util.HashMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; + +/** + * Helpers to extract and expand field names and boosts + */ +public final class QueryParserHelper { + // Mapping types the "all-ish" query can be executed against + // TODO: Fix the API so that we don't need a hardcoded list of types + private static final Set ALLOWED_QUERY_MAPPER_TYPES; + + static { + ALLOWED_QUERY_MAPPER_TYPES = new HashSet<>(); + ALLOWED_QUERY_MAPPER_TYPES.add(DateFieldMapper.CONTENT_TYPE); + ALLOWED_QUERY_MAPPER_TYPES.add(IpFieldMapper.CONTENT_TYPE); + ALLOWED_QUERY_MAPPER_TYPES.add(KeywordFieldMapper.CONTENT_TYPE); + for (NumberFieldMapper.NumberType nt : NumberFieldMapper.NumberType.values()) { + ALLOWED_QUERY_MAPPER_TYPES.add(nt.typeName()); + } + ALLOWED_QUERY_MAPPER_TYPES.add("scaled_float"); + ALLOWED_QUERY_MAPPER_TYPES.add(TextFieldMapper.CONTENT_TYPE); + } + + private QueryParserHelper() {} + + /** + * Convert a list of field names encoded with optional boosts to a map that associates + * the field name and its boost. + * @param fields The list of fields encoded with optional boosts (e.g. ^0.35). + * @return The converted map with field names and associated boosts. + */ + public static Map parseFieldsAndWeights(List fields) { + final Map fieldsAndWeights = new HashMap<>(); + for (String field : fields) { + int boostIndex = field.indexOf('^'); + String fieldName; + float boost = 1.0f; + if (boostIndex != -1) { + fieldName = field.substring(0, boostIndex); + boost = Float.parseFloat(field.substring(boostIndex+1, field.length())); + } else { + fieldName = field; + } + fieldsAndWeights.put(fieldName, boost); + } + return fieldsAndWeights; + } + + /** + * Get a {@link FieldMapper} associated with a field name or null. + * @param mapperService The mapper service where to find the mapping. + * @param field The field name to search. 
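parseFieldsAndWeights converts the request-level `field^boost` syntax into a field-to-boost map, defaulting the boost to 1.0 when no `^` is present. The plain-JDK example below mirrors that parsing for illustration; FieldsAndWeightsExample and its parse method are made-up names, not part of the codebase.
-------------------------------------------------
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Stand-alone illustration of the "field^boost" parsing done by parseFieldsAndWeights.
public class FieldsAndWeightsExample {

    static Map<String, Float> parse(List<String> fields) {
        Map<String, Float> fieldsAndWeights = new HashMap<>();
        for (String field : fields) {
            int boostIndex = field.indexOf('^');
            String fieldName = boostIndex == -1 ? field : field.substring(0, boostIndex);
            float boost = boostIndex == -1 ? 1.0f : Float.parseFloat(field.substring(boostIndex + 1));
            fieldsAndWeights.put(fieldName, boost);
        }
        return fieldsAndWeights;
    }

    public static void main(String[] args) {
        // "title^2" carries an explicit boost, "body" falls back to the default of 1.0
        System.out.println(parse(Arrays.asList("title^2", "body"))); // {title=2.0, body=1.0} (order may vary)
    }
}
-------------------------------------------------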
+ */ + public static FieldMapper getFieldMapper(MapperService mapperService, String field) { + for (DocumentMapper mapper : mapperService.docMappers(true)) { + FieldMapper fieldMapper = mapper.mappers().smartNameFieldMapper(field); + if (fieldMapper != null) { + return fieldMapper; + } + } + return null; + } + + public static Map resolveMappingFields(QueryShardContext context, + Map fieldsAndWeights) { + return resolveMappingFields(context, fieldsAndWeights, null); + } + + /** + * Resolve all the field names and patterns present in the provided map with the + * {@link QueryShardContext} and returns a new map containing all the expanded fields with their original boost. + * @param context The context of the query. + * @param fieldsAndWeights The map of fields and weights to expand. + * @param fieldSuffix The suffix name to add to the expanded field names if a mapping exists for that name. + * The original name of the field is kept if adding the suffix to the field name does not point to a valid field + * in the mapping. + */ + public static Map resolveMappingFields(QueryShardContext context, + Map fieldsAndWeights, + String fieldSuffix) { + Map resolvedFields = new HashMap<>(); + for (Map.Entry fieldEntry : fieldsAndWeights.entrySet()) { + boolean allField = Regex.isMatchAllPattern(fieldEntry.getKey()); + boolean multiField = Regex.isSimpleMatchPattern(fieldEntry.getKey()); + float weight = fieldEntry.getValue() == null ? 1.0f : fieldEntry.getValue(); + Map fieldMap = resolveMappingField(context, fieldEntry.getKey(), weight, + !multiField, !allField, fieldSuffix); + resolvedFields.putAll(fieldMap); + } + checkForTooManyFields(resolvedFields); + return resolvedFields; + } + + /** + * Resolves the provided pattern or field name from the {@link QueryShardContext} and return a map of + * the expanded fields with their original boost. + * @param context The context of the query + * @param fieldOrPattern The field name or the pattern to resolve + * @param weight The weight for the field + * @param acceptAllTypes Whether all field type should be added when a pattern is expanded. + * If false, only {@link #ALLOWED_QUERY_MAPPER_TYPES} are accepted and other field types + * are discarded from the query. + * @param acceptMetadataField Whether metadata fields should be added when a pattern is expanded. + */ + public static Map resolveMappingField(QueryShardContext context, String fieldOrPattern, float weight, + boolean acceptAllTypes, boolean acceptMetadataField) { + return resolveMappingField(context, fieldOrPattern, weight, acceptAllTypes, acceptMetadataField, null); + } + + /** + * Resolves the provided pattern or field name from the {@link QueryShardContext} and return a map of + * the expanded fields with their original boost. + * @param context The context of the query + * @param fieldOrPattern The field name or the pattern to resolve + * @param weight The weight for the field + * @param acceptAllTypes Whether all field type should be added when a pattern is expanded. + * If false, only {@link #ALLOWED_QUERY_MAPPER_TYPES} are accepted and other field types + * are discarded from the query. + * @param acceptMetadataField Whether metadata fields should be added when a pattern is expanded. + * @param fieldSuffix The suffix name to add to the expanded field names if a mapping exists for that name. + * The original name of the field is kept if adding the suffix to the field name does not point to a valid field + * in the mapping. 
+ */ + public static Map resolveMappingField(QueryShardContext context, String fieldOrPattern, float weight, + boolean acceptAllTypes, boolean acceptMetadataField, String fieldSuffix) { + Collection allFields = context.simpleMatchToIndexNames(fieldOrPattern); + Map fields = new HashMap<>(); + for (String fieldName : allFields) { + if (fieldSuffix != null && context.fieldMapper(fieldName + fieldSuffix) != null) { + fieldName = fieldName + fieldSuffix; + } + FieldMapper mapper = getFieldMapper(context.getMapperService(), fieldName); + if (mapper == null) { + // Unmapped fields are not ignored + fields.put(fieldOrPattern, weight); + continue; + } + if (acceptMetadataField == false && mapper instanceof MetadataFieldMapper) { + // Ignore metadata fields + continue; + } + // Ignore fields that are not in the allowed mapper types. Some + // types do not support term queries, and thus we cannot generate + // a special query for them. + String mappingType = mapper.fieldType().typeName(); + if (acceptAllTypes == false && ALLOWED_QUERY_MAPPER_TYPES.contains(mappingType) == false) { + continue; + } + fields.put(fieldName, weight); + } + checkForTooManyFields(fields); + return fields; + } + + private static void checkForTooManyFields(Map fields) { + if (fields.size() > 1024) { + throw new IllegalArgumentException("field expansion matches too many fields, limit: 1024, got: " + fields.size()); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/index/search/QueryStringQueryParser.java b/core/src/main/java/org/elasticsearch/index/search/QueryStringQueryParser.java index b6537f6deb50c..5f453d49ab5cc 100644 --- a/core/src/main/java/org/elasticsearch/index/search/QueryStringQueryParser.java +++ b/core/src/main/java/org/elasticsearch/index/search/QueryStringQueryParser.java @@ -21,7 +21,6 @@ import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.analysis.TokenStream; -import org.apache.lucene.analysis.miscellaneous.DisableGraphAttribute; import org.apache.lucene.analysis.tokenattributes.CharTermAttribute; import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute; import org.apache.lucene.index.Term; @@ -47,20 +46,10 @@ import org.elasticsearch.common.lucene.search.Queries; import org.elasticsearch.common.regex.Regex; import org.elasticsearch.common.unit.Fuzziness; -import org.elasticsearch.index.mapper.AllFieldMapper; -import org.elasticsearch.index.mapper.DateFieldMapper; -import org.elasticsearch.index.mapper.DocumentMapper; -import org.elasticsearch.index.mapper.FieldMapper; import org.elasticsearch.index.mapper.FieldNamesFieldMapper; -import org.elasticsearch.index.mapper.IpFieldMapper; -import org.elasticsearch.index.mapper.KeywordFieldMapper; import org.elasticsearch.index.mapper.MappedFieldType; -import org.elasticsearch.index.mapper.MetadataFieldMapper; -import org.elasticsearch.index.mapper.NumberFieldMapper; -import org.elasticsearch.index.mapper.ScaledFloatFieldMapper; -import org.elasticsearch.index.mapper.StringFieldType; import org.elasticsearch.index.mapper.MapperService; -import org.elasticsearch.index.mapper.TextFieldMapper; +import org.elasticsearch.index.mapper.StringFieldType; import org.elasticsearch.index.query.ExistsQueryBuilder; import org.elasticsearch.index.query.MultiMatchQueryBuilder; import org.elasticsearch.index.query.QueryShardContext; @@ -69,17 +58,14 @@ import java.io.IOException; import java.util.ArrayList; -import java.util.Collection; import java.util.Collections; -import java.util.HashMap; -import java.util.HashSet; import 
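resolveMappingField expands a field name or pattern, keeps unmapped names as-is, drops metadata fields and field types outside the allowed set, and caps the expansion at 1024 fields. The sketch below condenses that filter-and-cap shape; the predicates stand in for the MapperService lookups and the class name is hypothetical.
-------------------------------------------------
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Hypothetical sketch of the expansion filter; the predicates stand in for the MapperService
// lookups (field is mapped, field is a metadata field, field type supports term queries).
final class FieldExpansionSketch {

    static Map<String, Float> expand(List<String> expandedNames, float weight,
                                     Predicate<String> isMapped,
                                     Predicate<String> isMetadataField,
                                     Predicate<String> isAllowedType) {
        Map<String, Float> fields = new HashMap<>();
        for (String name : expandedNames) {
            if (isMapped.test(name) == false) {
                fields.put(name, weight);   // unmapped fields are kept, not ignored
                continue;
            }
            if (isMetadataField.test(name)) {
                continue;                   // metadata fields are skipped
            }
            if (isAllowedType.test(name) == false) {
                continue;                   // types without term-query support are skipped
            }
            fields.put(name, weight);
        }
        if (fields.size() > 1024) {
            throw new IllegalArgumentException(
                "field expansion matches too many fields, limit: 1024, got: " + fields.size());
        }
        return fields;
    }
}
-------------------------------------------------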
java.util.List; import java.util.Map; -import java.util.Set; import static org.elasticsearch.common.lucene.search.Queries.fixNegativeQueryIfNeeded; import static org.elasticsearch.common.lucene.search.Queries.newLenientFieldQuery; import static org.elasticsearch.common.lucene.search.Queries.newUnmappedFieldQuery; +import static org.elasticsearch.index.search.QueryParserHelper.resolveMappingField; /** * A {@link XQueryParser} that uses the {@link MapperService} in order to build smarter @@ -88,22 +74,8 @@ * to assemble the result logically. */ public class QueryStringQueryParser extends XQueryParser { - // Mapping types the "all-ish" query can be executed against - private static final Set ALLOWED_QUERY_MAPPER_TYPES; private static final String EXISTS_FIELD = "_exists_"; - static { - ALLOWED_QUERY_MAPPER_TYPES = new HashSet<>(); - ALLOWED_QUERY_MAPPER_TYPES.add(DateFieldMapper.CONTENT_TYPE); - ALLOWED_QUERY_MAPPER_TYPES.add(IpFieldMapper.CONTENT_TYPE); - ALLOWED_QUERY_MAPPER_TYPES.add(KeywordFieldMapper.CONTENT_TYPE); - for (NumberFieldMapper.NumberType nt : NumberFieldMapper.NumberType.values()) { - ALLOWED_QUERY_MAPPER_TYPES.add(nt.typeName()); - } - ALLOWED_QUERY_MAPPER_TYPES.add(ScaledFloatFieldMapper.CONTENT_TYPE); - ALLOWED_QUERY_MAPPER_TYPES.add(TextFieldMapper.CONTENT_TYPE); - } - private final QueryShardContext context; private final Map fieldsAndWeights; private final boolean lenient; @@ -121,6 +93,7 @@ public class QueryStringQueryParser extends XQueryParser { private int fuzzyMaxExpansions = FuzzyQuery.defaultMaxExpansions; private MappedFieldType currentFieldType; private MultiTermQuery.RewriteMethod fuzzyRewriteMethod; + private boolean fuzzyTranspositions = FuzzyQuery.defaultTranspositions; /** * @param context The query shard context. @@ -162,8 +135,9 @@ public QueryStringQueryParser(QueryShardContext context, Map fiel * @param lenient If set to `true` will cause format based failures (like providing text to a numeric field) to be ignored. */ public QueryStringQueryParser(QueryShardContext context, boolean lenient) { - this(context, "*", resolveMappingField(context, "*", 1.0f, false, false), - lenient, context.getMapperService().searchAnalyzer()); + this(context, "*", + resolveMappingField(context, "*", 1.0f, false, false), + lenient, context.getMapperService().searchAnalyzer()); } private QueryStringQueryParser(QueryShardContext context, String defaultField, @@ -177,69 +151,6 @@ private QueryStringQueryParser(QueryShardContext context, String defaultField, this.lenient = lenient; } - - private static FieldMapper getFieldMapper(MapperService mapperService, String field) { - for (DocumentMapper mapper : mapperService.docMappers(true)) { - FieldMapper fieldMapper = mapper.mappers().smartNameFieldMapper(field); - if (fieldMapper != null) { - return fieldMapper; - } - } - return null; - } - - public static Map resolveMappingFields(QueryShardContext context, Map fieldsAndWeights) { - Map resolvedFields = new HashMap<>(); - for (Map.Entry fieldEntry : fieldsAndWeights.entrySet()) { - boolean allField = Regex.isMatchAllPattern(fieldEntry.getKey()); - boolean multiField = Regex.isSimpleMatchPattern(fieldEntry.getKey()); - float weight = fieldEntry.getValue() == null ? 
1.0f : fieldEntry.getValue(); - Map fieldMap = resolveMappingField(context, fieldEntry.getKey(), weight, !multiField, !allField); - resolvedFields.putAll(fieldMap); - } - return resolvedFields; - } - - public static Map resolveMappingField(QueryShardContext context, String field, float weight, - boolean acceptMetadataField, boolean acceptAllTypes) { - return resolveMappingField(context, field, weight, acceptMetadataField, acceptAllTypes, false, null); - } - - /** - * Given a shard context, return a map of all fields in the mappings that - * can be queried. The map will be field name to a float of 1.0f. - */ - private static Map resolveMappingField(QueryShardContext context, String field, float weight, - boolean acceptAllTypes, boolean acceptMetadataField, - boolean quoted, String quoteFieldSuffix) { - Collection allFields = context.simpleMatchToIndexNames(field); - Map fields = new HashMap<>(); - for (String fieldName : allFields) { - if (quoted && quoteFieldSuffix != null && context.fieldMapper(fieldName + quoteFieldSuffix) != null) { - fieldName = fieldName + quoteFieldSuffix; - } - FieldMapper mapper = getFieldMapper(context.getMapperService(), fieldName); - if (mapper == null) { - // Unmapped fields are not ignored - fields.put(field, weight); - continue; - } - if (acceptMetadataField == false && mapper instanceof MetadataFieldMapper) { - // Ignore metadata fields - continue; - } - // Ignore fields that are not in the allowed mapper types. Some - // types do not support term queries, and thus we cannot generate - // a special query for them. - String mappingType = mapper.fieldType().typeName(); - if (acceptAllTypes == false && ALLOWED_QUERY_MAPPER_TYPES.contains(mappingType) == false) { - continue; - } - fields.put(fieldName, weight); - } - return fields; - } - @Override public void setDefaultOperator(Operator op) { super.setDefaultOperator(op); @@ -326,6 +237,14 @@ public void setAutoGenerateMultiTermSynonymsPhraseQuery(boolean enable) { queryBuilder.setAutoGenerateSynonymsPhraseQuery(enable); } + /** + * @param fuzzyTranspositions Sets whether transpositions are supported in fuzzy queries. + * Defaults to {@link FuzzyQuery#defaultTranspositions}. + */ + public void setFuzzyTranspositions(boolean fuzzyTranspositions) { + this.fuzzyTranspositions = fuzzyTranspositions; + } + private Query applyBoost(Query q, Float boost) { if (boost != null && boost != 1f) { return new BoostQuery(q, boost); @@ -343,7 +262,7 @@ private Map extractMultiFields(String field, boolean quoted) { boolean multiFields = Regex.isSimpleMatchPattern(field); // Filters unsupported fields if a pattern is requested // Filters metadata fields if all fields are requested - return resolveMappingField(context, field, 1.0f, !allFields, !multiFields, quoted, quoteFieldSuffix); + return resolveMappingField(context, field, 1.0f, !allFields, !multiFields, quoted ? quoteFieldSuffix : null); } else { return fieldsAndWeights; } @@ -483,14 +402,8 @@ private Query getRangeQuerySingle(String field, String part1, String part2, Analyzer normalizer = forceAnalyzer == null ? queryBuilder.context.getSearchAnalyzer(currentFieldType) : forceAnalyzer; BytesRef part1Binary = part1 == null ? null : normalizer.normalize(field, part1); BytesRef part2Binary = part2 == null ? 
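extractMultiFields now passes the quote-field suffix straight through to the shared resolver: for quoted terms the parser prefers `<field><suffix>` when such a mapping exists and otherwise keeps the original field name. A tiny sketch of that decision follows; the Predicate stands in for a `context.fieldMapper(name) != null` lookup and the names are invented.
-------------------------------------------------
import java.util.function.Predicate;

// Hypothetical sketch of the quote_field_suffix resolution: quoted terms prefer "<field><suffix>"
// when such a mapping exists, otherwise the original field name is kept.
final class QuoteFieldSuffixSketch {

    static String resolveQuotedField(String field, String quoteFieldSuffix, Predicate<String> hasMapping) {
        if (quoteFieldSuffix != null && hasMapping.test(field + quoteFieldSuffix)) {
            return field + quoteFieldSuffix;
        }
        return field;
    }
}
-------------------------------------------------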
null : normalizer.normalize(field, part2); - Query rangeQuery; - if (currentFieldType instanceof DateFieldMapper.DateFieldType && timeZone != null) { - DateFieldMapper.DateFieldType dateFieldType = (DateFieldMapper.DateFieldType) this.currentFieldType; - rangeQuery = dateFieldType.rangeQuery(part1Binary, part2Binary, - startInclusive, endInclusive, timeZone, null, context); - } else { - rangeQuery = currentFieldType.rangeQuery(part1Binary, part2Binary, startInclusive, endInclusive, context); - } + Query rangeQuery = currentFieldType.rangeQuery(part1Binary, part2Binary, + startInclusive, endInclusive, null, timeZone, null, context); return rangeQuery; } catch (RuntimeException e) { if (lenient) { @@ -538,7 +451,7 @@ private Query getFuzzyQuerySingle(String field, String termStr, float minSimilar Analyzer normalizer = forceAnalyzer == null ? queryBuilder.context.getSearchAnalyzer(currentFieldType) : forceAnalyzer; BytesRef term = termStr == null ? null : normalizer.normalize(field, termStr); return currentFieldType.fuzzyQuery(term, Fuzziness.fromEdits((int) minSimilarity), - getFuzzyPrefixLength(), fuzzyMaxExpansions, FuzzyQuery.defaultTranspositions); + getFuzzyPrefixLength(), fuzzyMaxExpansions, fuzzyTranspositions); } catch (RuntimeException e) { if (lenient) { return newLenientFieldQuery(field, e); @@ -551,7 +464,7 @@ private Query getFuzzyQuerySingle(String field, String termStr, float minSimilar protected Query newFuzzyQuery(Term term, float minimumSimilarity, int prefixLength) { int numEdits = Fuzziness.build(minimumSimilarity).asDistance(term.text()); FuzzyQuery query = new FuzzyQuery(term, numEdits, prefixLength, - fuzzyMaxExpansions, FuzzyQuery.defaultTranspositions); + fuzzyMaxExpansions, fuzzyTranspositions); QueryParsers.setRewriteMethod(query, fuzzyRewriteMethod); return query; } @@ -577,22 +490,20 @@ protected Query getPrefixQuery(String field, String termStr) throws ParseExcepti } private Query getPrefixQuerySingle(String field, String termStr) throws ParseException { - currentFieldType = null; Analyzer oldAnalyzer = getAnalyzer(); try { currentFieldType = context.fieldMapper(field); - if (currentFieldType != null) { - setAnalyzer(forceAnalyzer == null ? queryBuilder.context.getSearchAnalyzer(currentFieldType) : forceAnalyzer); - Query query = null; - if (currentFieldType instanceof StringFieldType == false) { - query = currentFieldType.prefixQuery(termStr, getMultiTermRewriteMethod(), context); - } - if (query == null) { - query = getPossiblyAnalyzedPrefixQuery(currentFieldType.name(), termStr); - } - return query; + if (currentFieldType == null) { + return newUnmappedFieldQuery(field); } - return getPossiblyAnalyzedPrefixQuery(field, termStr); + setAnalyzer(forceAnalyzer == null ? queryBuilder.context.getSearchAnalyzer(currentFieldType) : forceAnalyzer); + Query query = null; + if (currentFieldType instanceof StringFieldType == false) { + query = currentFieldType.prefixQuery(termStr, getMultiTermRewriteMethod(), context); + } else { + query = getPossiblyAnalyzedPrefixQuery(currentFieldType.name(), termStr); + } + return query; } catch (RuntimeException e) { if (lenient) { return newLenientFieldQuery(field, e); @@ -700,18 +611,11 @@ private Query existsQuery(String fieldName) { @Override protected Query getWildcardQuery(String field, String termStr) throws ParseException { - if (termStr.equals("*") && field != null) { - /** - * We rewrite _all:* to a match all query. - * TODO: We can remove this special case when _all is completely removed. 
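The fuzzy-query paths above now honour a configurable fuzzyTranspositions flag instead of always using FuzzyQuery.defaultTranspositions. The snippet below only illustrates what that final constructor argument controls in Lucene; the field and term values are made up.
-------------------------------------------------
import org.apache.lucene.index.Term;
import org.apache.lucene.search.FuzzyQuery;

// Illustration only: the last constructor argument decides whether a transposition of two adjacent
// characters ("ab" -> "ba") counts as a single edit (true) or as two separate edits (false).
public class FuzzyTranspositionsExample {

    public static void main(String[] args) {
        Term term = new Term("title", "quikc");   // made-up field and term
        int maxEdits = 2;
        int prefixLength = 0;
        int maxExpansions = 50;
        System.out.println(new FuzzyQuery(term, maxEdits, prefixLength, maxExpansions, true));
        System.out.println(new FuzzyQuery(term, maxEdits, prefixLength, maxExpansions, false));
    }
}
-------------------------------------------------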
- */ - if (Regex.isMatchAllPattern(field) || AllFieldMapper.NAME.equals(field)) { + String actualField = field != null ? field : this.field; + if (termStr.equals("*") && actualField != null) { + if (Regex.isMatchAllPattern(actualField)) { return newMatchAllDocsQuery(); } - String actualField = field; - if (actualField == null) { - actualField = this.field; - } // effectively, we check if a field exists or not return existsQuery(actualField); } @@ -784,12 +688,12 @@ private Query getRegexpQuerySingle(String field, String termStr) throws ParseExc Analyzer oldAnalyzer = getAnalyzer(); try { currentFieldType = queryBuilder.context.fieldMapper(field); - if (currentFieldType != null) { - setAnalyzer(forceAnalyzer == null ? queryBuilder.context.getSearchAnalyzer(currentFieldType) : forceAnalyzer); - Query query = super.getRegexpQuery(field, termStr); - return query; + if (currentFieldType == null) { + return newUnmappedFieldQuery(field); } - return super.getRegexpQuery(field, termStr); + setAnalyzer(forceAnalyzer == null ? queryBuilder.context.getSearchAnalyzer(currentFieldType) : forceAnalyzer); + Query query = super.getRegexpQuery(field, termStr); + return query; } catch (RuntimeException e) { if (lenient) { return newLenientFieldQuery(field, e); @@ -863,30 +767,4 @@ public Query parse(String query) throws ParseException { } return super.parse(query); } - - /** - * Checks if graph analysis should be enabled for the field depending - * on the provided {@link Analyzer} - */ - protected Query createFieldQuery(Analyzer analyzer, BooleanClause.Occur operator, String field, - String queryText, boolean quoted, int phraseSlop) { - assert operator == BooleanClause.Occur.SHOULD || operator == BooleanClause.Occur.MUST; - - // Use the analyzer to get all the tokens, and then build an appropriate - // query based on the analysis chain. - try (TokenStream source = analyzer.tokenStream(field, queryText)) { - if (source.hasAttribute(DisableGraphAttribute.class)) { - /** - * A {@link TokenFilter} in this {@link TokenStream} disabled the graph analysis to avoid - * paths explosion. See {@link ShingleTokenFilterFactory} for details. - */ - setEnableGraphQueries(false); - } - Query query = super.createFieldQuery(source, operator, field, quoted, phraseSlop); - setEnableGraphQueries(true); - return query; - } catch (IOException e) { - throw new RuntimeException("Error analyzing query text", e); - } - } } diff --git a/core/src/main/java/org/elasticsearch/index/query/SimpleQueryParser.java b/core/src/main/java/org/elasticsearch/index/search/SimpleQueryStringQueryParser.java similarity index 65% rename from core/src/main/java/org/elasticsearch/index/query/SimpleQueryParser.java rename to core/src/main/java/org/elasticsearch/index/search/SimpleQueryStringQueryParser.java index 141b935eb1375..9f91b16359287 100644 --- a/core/src/main/java/org/elasticsearch/index/query/SimpleQueryParser.java +++ b/core/src/main/java/org/elasticsearch/index/search/SimpleQueryStringQueryParser.java @@ -16,48 +16,73 @@ * specific language governing permissions and limitations * under the License. 
*/ -package org.elasticsearch.index.query; +package org.elasticsearch.index.search; import org.apache.lucene.analysis.Analyzer; -import org.apache.lucene.analysis.miscellaneous.DisableGraphAttribute; import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.analysis.tokenattributes.CharTermAttribute; import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute; import org.apache.lucene.index.Term; +import org.apache.lucene.queryparser.simple.SimpleQueryParser; import org.apache.lucene.search.BooleanClause; import org.apache.lucene.search.BooleanQuery; import org.apache.lucene.search.BoostQuery; import org.apache.lucene.search.DisjunctionMaxQuery; -import org.apache.lucene.search.FuzzyQuery; +import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.PrefixQuery; import org.apache.lucene.search.Query; import org.apache.lucene.search.SynonymQuery; import org.apache.lucene.util.BytesRef; -import org.elasticsearch.index.analysis.ShingleTokenFilterFactory; +import org.elasticsearch.common.unit.Fuzziness; import org.elasticsearch.index.mapper.MappedFieldType; +import org.elasticsearch.index.query.AbstractQueryBuilder; +import org.elasticsearch.index.query.MultiMatchQueryBuilder; +import org.elasticsearch.index.query.QueryShardContext; +import org.elasticsearch.index.query.SimpleQueryStringBuilder; import java.io.IOException; -import java.util.Iterator; import java.util.Map; import java.util.Objects; import java.util.List; import java.util.ArrayList; +import static org.elasticsearch.common.lucene.search.Queries.newUnmappedFieldQuery; + /** - * Wrapper class for Lucene's SimpleQueryParser that allows us to redefine + * Wrapper class for Lucene's SimpleQueryStringQueryParser that allows us to redefine * different types of queries. */ -public class SimpleQueryParser extends org.apache.lucene.queryparser.simple.SimpleQueryParser { +public class SimpleQueryStringQueryParser extends SimpleQueryParser { private final Settings settings; private QueryShardContext context; + private final MultiMatchQuery queryBuilder; + + /** Creates a new parser with custom flags used to enable/disable certain features. */ + public SimpleQueryStringQueryParser(Map weights, int flags, + Settings settings, QueryShardContext context) { + this(null, weights, flags, settings, context); + } /** Creates a new parser with custom flags used to enable/disable certain features. 
*/ - public SimpleQueryParser(Analyzer analyzer, Map weights, int flags, - Settings settings, QueryShardContext context) { + public SimpleQueryStringQueryParser(Analyzer analyzer, Map weights, int flags, + Settings settings, QueryShardContext context) { super(analyzer, weights, flags); this.settings = settings; this.context = context; + this.queryBuilder = new MultiMatchQuery(context); + this.queryBuilder.setAutoGenerateSynonymsPhraseQuery(settings.autoGenerateSynonymsPhraseQuery()); + this.queryBuilder.setLenient(settings.lenient()); + if (analyzer != null) { + this.queryBuilder.setAnalyzer(analyzer); + } + } + + private Analyzer getAnalyzer(MappedFieldType ft) { + if (getAnalyzer() != null) { + return analyzer; + } + return ft.searchAnalyzer(); } /** @@ -70,46 +95,44 @@ private Query rethrowUnlessLenient(RuntimeException e) { throw e; } + @Override + public void setDefaultOperator(BooleanClause.Occur operator) { + super.setDefaultOperator(operator); + queryBuilder.setOccur(operator); + } + @Override protected Query newTermQuery(Term term) { - MappedFieldType currentFieldType = context.fieldMapper(term.field()); - if (currentFieldType == null || currentFieldType.tokenized()) { - return super.newTermQuery(term); + MappedFieldType ft = context.fieldMapper(term.field()); + if (ft == null) { + return newUnmappedFieldQuery(term.field()); } - return currentFieldType.termQuery(term.bytes(), context); + return ft.termQuery(term.bytes(), context); } @Override public Query newDefaultQuery(String text) { - List disjuncts = new ArrayList<>(); - for (Map.Entry entry : weights.entrySet()) { - try { - Query q = createBooleanQuery(entry.getKey(), text, super.getDefaultOperator()); - if (q != null) { - disjuncts.add(wrapWithBoost(q, entry.getValue())); - } - } catch (RuntimeException e) { - rethrowUnlessLenient(e); - } - } - if (disjuncts.size() == 1) { - return disjuncts.get(0); + try { + return queryBuilder.parse(MultiMatchQueryBuilder.Type.MOST_FIELDS, weights, text, null); + } catch (IOException e) { + return rethrowUnlessLenient(new IllegalArgumentException(e.getMessage())); } - return new DisjunctionMaxQuery(disjuncts, 1.0f); } - /** - * Dispatches to Lucene's SimpleQueryParser's newFuzzyQuery, optionally - * lowercasing the term first - */ @Override public Query newFuzzyQuery(String text, int fuzziness) { List disjuncts = new ArrayList<>(); for (Map.Entry entry : weights.entrySet()) { final String fieldName = entry.getKey(); + final MappedFieldType ft = context.fieldMapper(fieldName); + if (ft == null) { + disjuncts.add(newUnmappedFieldQuery(fieldName)); + continue; + } try { - final BytesRef term = getAnalyzer().normalize(fieldName, text); - Query query = new FuzzyQuery(new Term(fieldName, term), fuzziness); + final BytesRef term = getAnalyzer(ft).normalize(fieldName, text); + Query query = ft.fuzzyQuery(term, Fuzziness.fromEdits(fuzziness), settings.fuzzyPrefixLength, + settings.fuzzyMaxExpansions, settings.fuzzyTranspositions); disjuncts.add(wrapWithBoost(query, entry.getValue())); } catch (RuntimeException e) { rethrowUnlessLenient(e); @@ -123,50 +146,41 @@ public Query newFuzzyQuery(String text, int fuzziness) { @Override public Query newPhraseQuery(String text, int slop) { - List disjuncts = new ArrayList<>(); - for (Map.Entry entry : weights.entrySet()) { - try { - String field = entry.getKey(); - if (settings.quoteFieldSuffix() != null) { - String quoteField = field + settings.quoteFieldSuffix(); - MappedFieldType quotedFieldType = context.fieldMapper(quoteField); - if (quotedFieldType != 
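newTermQuery and newFuzzyQuery above return an unmapped-field query when `context.fieldMapper(field)` is null, so a missing mapping contributes a clause that matches nothing rather than raising an error, and the remaining fields in the disjunction can still match. A hedged sketch of that fallback follows; the Set stands in for the mapping lookup and the class name is invented.
-------------------------------------------------
import org.apache.lucene.index.Term;
import org.apache.lucene.search.MatchNoDocsQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

import java.util.Set;

// Hedged sketch of the unmapped-field fallback: a field with no mapping contributes a clause that
// matches nothing instead of raising an error. The Set stands in for the MapperService lookup.
final class UnmappedFieldFallbackSketch {

    static Query termOrUnmapped(String field, String text, Set<String> mappedFields) {
        if (mappedFields.contains(field) == false) {
            return new MatchNoDocsQuery("unmapped field [" + field + "]");
        }
        return new TermQuery(new Term(field, text));
    }
}
-------------------------------------------------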
null) { - field = quoteField; - } - } - Float boost = entry.getValue(); - Query q = createPhraseQuery(field, text, slop); - if (q != null) { - disjuncts.add(wrapWithBoost(q, boost)); - } - } catch (RuntimeException e) { - rethrowUnlessLenient(e); + try { + queryBuilder.setPhraseSlop(slop); + Map phraseWeights; + if (settings.quoteFieldSuffix() != null) { + phraseWeights = QueryParserHelper.resolveMappingFields(context, weights, settings.quoteFieldSuffix()); + } else { + phraseWeights = weights; } + return queryBuilder.parse(MultiMatchQueryBuilder.Type.PHRASE, phraseWeights, text, null); + } catch (IOException e) { + return rethrowUnlessLenient(new IllegalArgumentException(e.getMessage())); + } finally { + queryBuilder.setPhraseSlop(0); } - if (disjuncts.size() == 1) { - return disjuncts.get(0); - } - return new DisjunctionMaxQuery(disjuncts, 1.0f); } - /** - * Dispatches to Lucene's SimpleQueryParser's newPrefixQuery, optionally - * lowercasing the term first or trying to analyze terms - */ @Override public Query newPrefixQuery(String text) { List disjuncts = new ArrayList<>(); for (Map.Entry entry : weights.entrySet()) { final String fieldName = entry.getKey(); + final MappedFieldType ft = context.fieldMapper(fieldName); + if (ft == null) { + disjuncts.add(newUnmappedFieldQuery(fieldName)); + continue; + } try { if (settings.analyzeWildcard()) { - Query analyzedQuery = newPossiblyAnalyzedQuery(fieldName, text); + Query analyzedQuery = newPossiblyAnalyzedQuery(fieldName, text, getAnalyzer(ft)); if (analyzedQuery != null) { disjuncts.add(wrapWithBoost(analyzedQuery, entry.getValue())); } } else { - Term term = new Term(fieldName, getAnalyzer().normalize(fieldName, text)); - Query query = new PrefixQuery(term); + BytesRef term = getAnalyzer(ft).normalize(fieldName, text); + Query query = ft.prefixQuery(term.utf8ToString(), null, context); disjuncts.add(wrapWithBoost(query, entry.getValue())); } } catch (RuntimeException e) { @@ -179,33 +193,10 @@ public Query newPrefixQuery(String text) { return new DisjunctionMaxQuery(disjuncts, 1.0f); } - /** - * Checks if graph analysis should be enabled for the field depending - * on the provided {@link Analyzer} - */ - protected Query createFieldQuery(Analyzer analyzer, BooleanClause.Occur operator, String field, - String queryText, boolean quoted, int phraseSlop) { - assert operator == BooleanClause.Occur.SHOULD || operator == BooleanClause.Occur.MUST; - - // Use the analyzer to get all the tokens, and then build an appropriate - // query based on the analysis chain. - try (TokenStream source = analyzer.tokenStream(field, queryText)) { - if (source.hasAttribute(DisableGraphAttribute.class)) { - /** - * A {@link TokenFilter} in this {@link TokenStream} disabled the graph analysis to avoid - * paths explosion. See {@link ShingleTokenFilterFactory} for details. 
- */ - setEnableGraphQueries(false); - } - Query query = super.createFieldQuery(source, operator, field, quoted, phraseSlop); - setEnableGraphQueries(true); + private static Query wrapWithBoost(Query query, float boost) { + if (query instanceof MatchNoDocsQuery) { return query; - } catch (IOException e) { - throw new RuntimeException("Error analyzing query text", e); } - } - - private static Query wrapWithBoost(Query query, float boost) { if (boost != AbstractQueryBuilder.DEFAULT_BOOST) { return new BoostQuery(query, boost); } @@ -217,10 +208,9 @@ private static Query wrapWithBoost(Query query, float boost) { * {@code PrefixQuery} or a {@code BooleanQuery} made up * of {@code TermQuery}s and {@code PrefixQuery}s */ - private Query newPossiblyAnalyzedQuery(String field, String termStr) { + private Query newPossiblyAnalyzedQuery(String field, String termStr, Analyzer analyzer) { List> tlist = new ArrayList<> (); - // get Analyzer from superclass and tokenize the term - try (TokenStream source = getAnalyzer().tokenStream(field, termStr)) { + try (TokenStream source = analyzer.tokenStream(field, termStr)) { source.reset(); List currentPos = new ArrayList<>(); CharTermAttribute termAtt = source.addAttribute(CharTermAttribute.class); @@ -233,7 +223,7 @@ private Query newPossiblyAnalyzedQuery(String field, String termStr) { tlist.add(currentPos); currentPos = new ArrayList<>(); } - final BytesRef term = getAnalyzer().normalize(field, termAtt.toString()); + final BytesRef term = analyzer.normalize(field, termAtt.toString()); currentPos.add(term); hasMoreTokens = source.incrementToken(); } @@ -293,7 +283,7 @@ private Query newPossiblyAnalyzedQuery(String field, String termStr) { * Class encapsulating the settings for the SimpleQueryString query, with * their default values */ - static class Settings { + public static class Settings { /** Specifies whether lenient query parsing should be used. */ private boolean lenient = SimpleQueryStringBuilder.DEFAULT_LENIENT; /** Specifies whether wildcards should be analyzed. */ @@ -302,19 +292,28 @@ static class Settings { private String quoteFieldSuffix = null; /** Whether phrase queries should be automatically generated for multi terms synonyms. */ private boolean autoGenerateSynonymsPhraseQuery = true; + /** Prefix length in fuzzy queries.*/ + private int fuzzyPrefixLength = SimpleQueryStringBuilder.DEFAULT_FUZZY_PREFIX_LENGTH; + /** The number of terms fuzzy queries will expand to.*/ + private int fuzzyMaxExpansions = SimpleQueryStringBuilder.DEFAULT_FUZZY_MAX_EXPANSIONS; + /** Whether transpositions are supported in fuzzy queries.*/ + private boolean fuzzyTranspositions = SimpleQueryStringBuilder.DEFAULT_FUZZY_TRANSPOSITIONS; /** * Generates default {@link Settings} object (uses ROOT locale, does * lowercase terms, no lenient parsing, no wildcard analysis). * */ - Settings() { + public Settings() { } - Settings(Settings other) { + public Settings(Settings other) { this.lenient = other.lenient; this.analyzeWildcard = other.analyzeWildcard; this.quoteFieldSuffix = other.quoteFieldSuffix; this.autoGenerateSynonymsPhraseQuery = other.autoGenerateSynonymsPhraseQuery; + this.fuzzyPrefixLength = other.fuzzyPrefixLength; + this.fuzzyMaxExpansions = other.fuzzyMaxExpansions; + this.fuzzyTranspositions = other.fuzzyTranspositions; } /** Specifies whether to use lenient parsing, defaults to false. 
*/ @@ -364,9 +363,34 @@ public boolean autoGenerateSynonymsPhraseQuery() { return autoGenerateSynonymsPhraseQuery; } + public int fuzzyPrefixLength() { + return fuzzyPrefixLength; + } + + public void fuzzyPrefixLength(int fuzzyPrefixLength) { + this.fuzzyPrefixLength = fuzzyPrefixLength; + } + + public int fuzzyMaxExpansions() { + return fuzzyMaxExpansions; + } + + public void fuzzyMaxExpansions(int fuzzyMaxExpansions) { + this.fuzzyMaxExpansions = fuzzyMaxExpansions; + } + + public boolean fuzzyTranspositions() { + return fuzzyTranspositions; + } + + public void fuzzyTranspositions(boolean fuzzyTranspositions) { + this.fuzzyTranspositions = fuzzyTranspositions; + } + @Override public int hashCode() { - return Objects.hash(lenient, analyzeWildcard, quoteFieldSuffix, autoGenerateSynonymsPhraseQuery); + return Objects.hash(lenient, analyzeWildcard, quoteFieldSuffix, autoGenerateSynonymsPhraseQuery, + fuzzyPrefixLength, fuzzyMaxExpansions, fuzzyTranspositions); } @Override @@ -381,7 +405,10 @@ public boolean equals(Object obj) { return Objects.equals(lenient, other.lenient) && Objects.equals(analyzeWildcard, other.analyzeWildcard) && Objects.equals(quoteFieldSuffix, other.quoteFieldSuffix) && - Objects.equals(autoGenerateSynonymsPhraseQuery, other.autoGenerateSynonymsPhraseQuery); + Objects.equals(autoGenerateSynonymsPhraseQuery, other.autoGenerateSynonymsPhraseQuery) && + Objects.equals(fuzzyPrefixLength, fuzzyPrefixLength) && + Objects.equals(fuzzyMaxExpansions, fuzzyMaxExpansions) && + Objects.equals(fuzzyTranspositions, fuzzyTranspositions); } } } diff --git a/core/src/main/java/org/elasticsearch/index/search/stats/SearchStats.java b/core/src/main/java/org/elasticsearch/index/search/stats/SearchStats.java index 824ca598ae2bb..de0a659d5f1a6 100644 --- a/core/src/main/java/org/elasticsearch/index/search/stats/SearchStats.java +++ b/core/src/main/java/org/elasticsearch/index/search/stats/SearchStats.java @@ -19,23 +19,23 @@ package org.elasticsearch.index.search.stats; -import org.elasticsearch.action.support.ToXContentToBytes; import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentFactory; import java.io.IOException; import java.util.HashMap; import java.util.Map; -public class SearchStats extends ToXContentToBytes implements Streamable { +public class SearchStats implements Streamable, ToXContentFragment { - public static class Stats implements Streamable, ToXContent { + public static class Stats implements Streamable, ToXContentFragment { private long queryCount; private long queryTimeInMillis; @@ -317,6 +317,11 @@ public XContentBuilder toXContent(XContentBuilder builder, ToXContent.Params par return builder; } + @Override + public String toString() { + return Strings.toString(this, true, true); + } + static final class Fields { static final String SEARCH = "search"; static final String OPEN_CONTEXTS = "open_contexts"; diff --git a/core/src/main/java/org/elasticsearch/index/search/stats/ShardSearchStats.java b/core/src/main/java/org/elasticsearch/index/search/stats/ShardSearchStats.java index 37c7b13ec79c9..636977029107e 100644 --- 
a/core/src/main/java/org/elasticsearch/index/search/stats/ShardSearchStats.java +++ b/core/src/main/java/org/elasticsearch/index/search/stats/ShardSearchStats.java @@ -180,12 +180,19 @@ public void onNewScrollContext(SearchContext context) { public void onFreeScrollContext(SearchContext context) { totalStats.scrollCurrent.dec(); assert totalStats.scrollCurrent.count() >= 0; - totalStats.scrollMetric.inc(System.nanoTime() - context.getOriginNanoTime()); + totalStats.scrollMetric.inc(TimeUnit.NANOSECONDS.toMicros(System.nanoTime() - context.getOriginNanoTime())); } static final class StatsHolder { public final MeanMetric queryMetric = new MeanMetric(); public final MeanMetric fetchMetric = new MeanMetric(); + /* We store scroll statistics in microseconds because with nanoseconds we run the risk of overflowing the total stats if there are + * many scrolls. For example, on a system with 2^24 scrolls that have been executed, each executing for 2^10 seconds, then using + * nanoseconds would require a numeric representation that can represent at least 2^24 * 2^10 * 10^9 > 2^24 * 2^10 * 2^29 = 2^63 + * which exceeds the largest value that can be represented by a long. By using microseconds, we enable capturing one-thousand + * times as many scrolls (i.e., billions of scrolls which at one per second would take 32 years to occur), or scrolls that execute + * for one-thousand times as long (i.e., scrolls that execute for almost twelve days on average). + */ public final MeanMetric scrollMetric = new MeanMetric(); public final MeanMetric suggestMetric = new MeanMetric(); public final CounterMetric queryCurrent = new CounterMetric(); @@ -197,7 +204,7 @@ public SearchStats.Stats stats() { return new SearchStats.Stats( queryMetric.count(), TimeUnit.NANOSECONDS.toMillis(queryMetric.sum()), queryCurrent.count(), fetchMetric.count(), TimeUnit.NANOSECONDS.toMillis(fetchMetric.sum()), fetchCurrent.count(), - scrollMetric.count(), TimeUnit.NANOSECONDS.toMillis(scrollMetric.sum()), scrollCurrent.count(), + scrollMetric.count(), TimeUnit.MICROSECONDS.toMillis(scrollMetric.sum()), scrollCurrent.count(), suggestMetric.count(), TimeUnit.NANOSECONDS.toMillis(suggestMetric.sum()), suggestCurrent.count() ); } diff --git a/core/src/main/java/org/elasticsearch/index/seqno/GlobalCheckpointSyncAction.java b/core/src/main/java/org/elasticsearch/index/seqno/GlobalCheckpointSyncAction.java index c7059d05919b1..2c60ebfac6b6c 100644 --- a/core/src/main/java/org/elasticsearch/index/seqno/GlobalCheckpointSyncAction.java +++ b/core/src/main/java/org/elasticsearch/index/seqno/GlobalCheckpointSyncAction.java @@ -19,6 +19,9 @@ package org.elasticsearch.index.seqno; +import org.apache.logging.log4j.message.ParameterizedMessage; +import org.apache.lucene.store.AlreadyClosedException; +import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.Version; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.support.ActionFilters; @@ -32,8 +35,9 @@ import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.index.shard.IndexEventListener; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.index.shard.IndexShard; +import org.elasticsearch.index.shard.IndexShardClosedException; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.indices.IndicesService; import org.elasticsearch.threadpool.ThreadPool; @@ -47,7 +51,7 @@ public class 
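The StatsHolder comment above argues that accumulating scroll time in nanoseconds could overflow a long, whereas microseconds leave ample headroom. The following stand-alone check reproduces that arithmetic under the same assumptions (2^24 scrolls of roughly 2^10 seconds each); the class name is invented.
-------------------------------------------------
import java.math.BigInteger;

// Verifies the overflow reasoning behind switching the scroll metric to microseconds.
public class ScrollStatsOverflowCheck {

    public static void main(String[] args) {
        BigInteger scrolls = BigInteger.valueOf(2).pow(24);           // number of scrolls
        BigInteger secondsPerScroll = BigInteger.valueOf(2).pow(10);  // duration of each scroll in seconds
        BigInteger totalSeconds = scrolls.multiply(secondsPerScroll);

        BigInteger totalNanos = totalSeconds.multiply(BigInteger.valueOf(1_000_000_000L));
        BigInteger totalMicros = totalSeconds.multiply(BigInteger.valueOf(1_000_000L));
        BigInteger maxLong = BigInteger.valueOf(Long.MAX_VALUE);      // 2^63 - 1

        System.out.println("nanoseconds fit in a long?  " + (totalNanos.compareTo(maxLong) <= 0));  // false
        System.out.println("microseconds fit in a long? " + (totalMicros.compareTo(maxLong) <= 0)); // true
    }
}
-------------------------------------------------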
GlobalCheckpointSyncAction extends TransportReplicationAction< GlobalCheckpointSyncAction.Request, GlobalCheckpointSyncAction.Request, - ReplicationResponse> implements IndexEventListener { + ReplicationResponse> { public static String ACTION_NAME = "indices:admin/seq_no/global_checkpoint_sync"; @@ -73,7 +77,22 @@ public GlobalCheckpointSyncAction( indexNameExpressionResolver, Request::new, Request::new, - ThreadPool.Names.SAME); + ThreadPool.Names.MANAGEMENT); + } + + public void updateGlobalCheckpointForShard(final ShardId shardId) { + final ThreadContext threadContext = threadPool.getThreadContext(); + try (ThreadContext.StoredContext ignore = threadContext.stashContext()) { + threadContext.markAsSystemContext(); + execute( + new Request(shardId), + ActionListener.wrap(r -> { + }, e -> { + if (ExceptionsHelper.unwrap(e, AlreadyClosedException.class, IndexShardClosedException.class) == null) { + logger.info(new ParameterizedMessage("{} global checkpoint sync failed", shardId), e); + } + })); + } } @Override @@ -89,15 +108,11 @@ protected void sendReplicaRequest( if (node.getVersion().onOrAfter(Version.V_6_0_0_alpha1)) { super.sendReplicaRequest(replicaRequest, node, listener); } else { - listener.onResponse(new ReplicaResponse(SequenceNumbersService.PRE_60_NODE_LOCAL_CHECKPOINT)); + final long pre60NodeCheckpoint = SequenceNumbers.PRE_60_NODE_CHECKPOINT; + listener.onResponse(new ReplicaResponse(pre60NodeCheckpoint, pre60NodeCheckpoint)); } } - @Override - public void onShardInactive(final IndexShard indexShard) { - execute(new Request(indexShard.shardId())); - } - @Override protected PrimaryResult shardOperationOnPrimary( final Request request, final IndexShard indexShard) throws Exception { diff --git a/core/src/main/java/org/elasticsearch/index/seqno/GlobalCheckpointTracker.java b/core/src/main/java/org/elasticsearch/index/seqno/GlobalCheckpointTracker.java index 447815cf9afa0..d2b53aac1a045 100644 --- a/core/src/main/java/org/elasticsearch/index/seqno/GlobalCheckpointTracker.java +++ b/core/src/main/java/org/elasticsearch/index/seqno/GlobalCheckpointTracker.java @@ -19,6 +19,8 @@ package org.elasticsearch.index.seqno; +import com.carrotsearch.hppc.ObjectLongHashMap; +import com.carrotsearch.hppc.ObjectLongMap; import org.elasticsearch.cluster.routing.AllocationId; import org.elasticsearch.cluster.routing.IndexShardRoutingTable; import org.elasticsearch.cluster.routing.ShardRouting; @@ -36,8 +38,13 @@ import java.util.HashMap; import java.util.HashSet; import java.util.Map; +import java.util.OptionalLong; import java.util.Set; +import java.util.function.Function; +import java.util.function.LongConsumer; +import java.util.function.ToLongFunction; import java.util.stream.Collectors; +import java.util.stream.LongStream; /** * This class is responsible of tracking the global checkpoint. The global checkpoint is the highest sequence number for which all lower (or @@ -50,6 +57,11 @@ */ public class GlobalCheckpointTracker extends AbstractIndexShardComponent { + /** + * The allocation ID for the shard to which this tracker is a component of. + */ + final String shardAllocationId; + /** * The global checkpoint tracker can operate in two modes: * - primary: this shard is in charge of collecting local checkpoint information from all shard copies and computing the global @@ -101,9 +113,9 @@ public class GlobalCheckpointTracker extends AbstractIndexShardComponent { /** * Local checkpoint information for all shard copies that are tracked. 
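updateGlobalCheckpointForShard above fires the sync without waiting for a response and only logs failures that are not caused by the shard having been closed in the meantime. Below is a small generic sketch of that benign-failure filtering; the names are invented and the cause-walking only loosely mirrors ExceptionsHelper.unwrap.
-------------------------------------------------
import java.util.List;
import java.util.function.Consumer;

// Invented names; only the shape of the failure handling mirrors the change above.
final class BenignFailureFilterSketch {

    static Consumer<Exception> logUnlessBenign(List<Class<? extends Throwable>> benignTypes, Consumer<Exception> logger) {
        return e -> {
            for (Throwable cause = e; cause != null; cause = cause.getCause()) {
                for (Class<? extends Throwable> type : benignTypes) {
                    if (type.isInstance(cause)) {
                        return;   // e.g. the shard was closed concurrently; nothing worth reporting
                    }
                }
            }
            logger.accept(e);     // anything else is logged, as the global checkpoint sync does above
        };
    }
}
-------------------------------------------------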
Has an entry for all shard copies that are either initializing * and / or in-sync, possibly also containing information about unassigned in-sync shard copies. The information that is tracked for - * each shard copy is explained in the docs for the {@link LocalCheckpointState} class. + * each shard copy is explained in the docs for the {@link CheckpointState} class. */ - final Map localCheckpoints; + final Map checkpoints; /** * This set contains allocation IDs for which there is a thread actively waiting for the local checkpoint to advance to at least the @@ -111,60 +123,67 @@ public class GlobalCheckpointTracker extends AbstractIndexShardComponent { */ final Set pendingInSync; - /** - * The global checkpoint: - * - computed based on local checkpoints, if the tracker is in primary mode - * - received from the primary, if the tracker is in replica mode - */ - volatile long globalCheckpoint; - /** * Cached value for the last replication group that was computed */ volatile ReplicationGroup replicationGroup; - public static class LocalCheckpointState implements Writeable { + public static class CheckpointState implements Writeable { /** * the last local checkpoint information that we have for this shard */ long localCheckpoint; + + /** + * the last global checkpoint information that we have for this shard. This information is computed for the primary if + * the tracker is in primary mode and received from the primary if in replica mode. + */ + long globalCheckpoint; /** * whether this shard is treated as in-sync and thus contributes to the global checkpoint calculation */ boolean inSync; - public LocalCheckpointState(long localCheckpoint, boolean inSync) { + public CheckpointState(long localCheckpoint, long globalCheckpoint, boolean inSync) { this.localCheckpoint = localCheckpoint; + this.globalCheckpoint = globalCheckpoint; this.inSync = inSync; } - public LocalCheckpointState(StreamInput in) throws IOException { + public CheckpointState(StreamInput in) throws IOException { this.localCheckpoint = in.readZLong(); + this.globalCheckpoint = in.readZLong(); this.inSync = in.readBoolean(); } @Override public void writeTo(StreamOutput out) throws IOException { out.writeZLong(localCheckpoint); + out.writeZLong(globalCheckpoint); out.writeBoolean(inSync); } /** * Returns a full copy of this object */ - public LocalCheckpointState copy() { - return new LocalCheckpointState(localCheckpoint, inSync); + public CheckpointState copy() { + return new CheckpointState(localCheckpoint, globalCheckpoint, inSync); } public long getLocalCheckpoint() { return localCheckpoint; } + public long getGlobalCheckpoint() { + return globalCheckpoint; + } + @Override public String toString() { return "LocalCheckpointState{" + "localCheckpoint=" + localCheckpoint + + ", globalCheckpoint=" + globalCheckpoint + ", inSync=" + inSync + '}'; } @@ -174,40 +193,78 @@ public boolean equals(Object o) { if (this == o) return true; if (o == null || getClass() != o.getClass()) return false; - LocalCheckpointState that = (LocalCheckpointState) o; + CheckpointState that = (CheckpointState) o; if (localCheckpoint != that.localCheckpoint) return false; + if (globalCheckpoint != that.globalCheckpoint) return false; return inSync == that.inSync; } @Override public int hashCode() { - int result = (int) (localCheckpoint ^ (localCheckpoint >>> 32)); - result = 31 * result + (inSync ? 
1 : 0); + int result = Long.hashCode(localCheckpoint); + result = 31 * result + Long.hashCode(globalCheckpoint); + result = 31 * result + Boolean.hashCode(inSync); return result; } } + /** + * Get the local knowledge of the global checkpoints for all in-sync allocation IDs. + * + * @return a map from allocation ID to the local knowledge of the global checkpoint for that allocation ID + */ + synchronized ObjectLongMap getInSyncGlobalCheckpoints() { + assert primaryMode; + assert handoffInProgress == false; + final ObjectLongMap globalCheckpoints = new ObjectLongHashMap<>(checkpoints.size()); // upper bound on the size + checkpoints + .entrySet() + .stream() + .filter(e -> e.getValue().inSync) + .forEach(e -> globalCheckpoints.put(e.getKey(), e.getValue().globalCheckpoint)); + return globalCheckpoints; + } + /** * Class invariant that should hold before and after every invocation of public methods on this class. As Java lacks implication * as a logical operator, many of the invariants are written under the form (!A || B), they should be read as (A implies B) however. */ private boolean invariant() { + assert checkpoints.get(shardAllocationId) != null : + "checkpoints map should always have an entry for the current shard"; + // local checkpoints only set during primary mode - assert primaryMode || localCheckpoints.values().stream() - .allMatch(lcps -> lcps.localCheckpoint == SequenceNumbersService.UNASSIGNED_SEQ_NO || - lcps.localCheckpoint == SequenceNumbersService.PRE_60_NODE_LOCAL_CHECKPOINT); + assert primaryMode || checkpoints.values().stream() + .allMatch(lcps -> lcps.localCheckpoint == SequenceNumbers.UNASSIGNED_SEQ_NO || + lcps.localCheckpoint == SequenceNumbers.PRE_60_NODE_CHECKPOINT); + + // global checkpoints for other shards only set during primary mode + assert primaryMode + || checkpoints + .entrySet() + .stream() + .filter(e -> e.getKey().equals(shardAllocationId) == false) + .map(Map.Entry::getValue) + .allMatch(cps -> + (cps.globalCheckpoint == SequenceNumbers.UNASSIGNED_SEQ_NO + || cps.globalCheckpoint == SequenceNumbers.PRE_60_NODE_CHECKPOINT)); // relocation handoff can only occur in primary mode assert !handoffInProgress || primaryMode; - // there is at least one in-sync shard copy when the global checkpoint tracker operates in primary mode (i.e. 
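getInSyncGlobalCheckpoints above reports, per allocation ID, the primary's knowledge of each in-sync copy's global checkpoint. The plain-JDK sketch below shows that projection; CopyState is a stand-in for the tracker's CheckpointState and an ordinary Map replaces the hppc ObjectLongMap for brevity.
-------------------------------------------------
import java.util.HashMap;
import java.util.Map;

// CopyState stands in for the tracker's CheckpointState; a plain Map replaces the hppc ObjectLongMap.
final class InSyncGlobalCheckpointsSketch {

    static final class CopyState {
        final long globalCheckpoint;
        final boolean inSync;

        CopyState(long globalCheckpoint, boolean inSync) {
            this.globalCheckpoint = globalCheckpoint;
            this.inSync = inSync;
        }
    }

    // Returns, per allocation ID, the tracked global checkpoint of every in-sync copy.
    static Map<String, Long> inSyncGlobalCheckpoints(Map<String, CopyState> checkpoints) {
        Map<String, Long> result = new HashMap<>(checkpoints.size());
        checkpoints.forEach((allocationId, state) -> {
            if (state.inSync) {
                result.put(allocationId, state.globalCheckpoint);
            }
        });
        return result;
    }
}
-------------------------------------------------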
the shard itself) - assert !primaryMode || localCheckpoints.values().stream().anyMatch(lcps -> lcps.inSync); + // the current shard is marked as in-sync when the global checkpoint tracker operates in primary mode + assert !primaryMode || checkpoints.get(shardAllocationId).inSync; // the routing table and replication group is set when the global checkpoint tracker operates in primary mode assert !primaryMode || (routingTable != null && replicationGroup != null) : "primary mode but routing table is " + routingTable + " and replication group is " + replicationGroup; + // when in primary mode, the current allocation ID is the allocation ID of the primary or the relocation allocation ID + assert !primaryMode + || (routingTable.primaryShard().allocationId().getId().equals(shardAllocationId) + || routingTable.primaryShard().allocationId().getRelocationId().equals(shardAllocationId)); + // during relocation handoff there are no entries blocking global checkpoint advancement assert !handoffInProgress || pendingInSync.isEmpty() : "entries blocking global checkpoint advancement during relocation handoff: " + pendingInSync; @@ -216,9 +273,17 @@ private boolean invariant() { assert pendingInSync.isEmpty() || (primaryMode && !handoffInProgress); // the computed global checkpoint is always up-to-date - assert !primaryMode || globalCheckpoint == computeGlobalCheckpoint(pendingInSync, localCheckpoints.values(), globalCheckpoint) : - "global checkpoint is not up-to-date, expected: " + - computeGlobalCheckpoint(pendingInSync, localCheckpoints.values(), globalCheckpoint) + " but was: " + globalCheckpoint; + assert !primaryMode + || getGlobalCheckpoint() == computeGlobalCheckpoint(pendingInSync, checkpoints.values(), getGlobalCheckpoint()) + : "global checkpoint is not up-to-date, expected: " + + computeGlobalCheckpoint(pendingInSync, checkpoints.values(), getGlobalCheckpoint()) + " but was: " + getGlobalCheckpoint(); + + // when in primary mode, the global checkpoint is at most the minimum local checkpoint on all in-sync shard copies + assert !primaryMode + || getGlobalCheckpoint() <= inSyncCheckpointStates(checkpoints, CheckpointState::getLocalCheckpoint, LongStream::min) + : "global checkpoint [" + getGlobalCheckpoint() + "] " + + "for primary mode allocation ID [" + shardAllocationId + "] " + + "more than in-sync local checkpoints [" + checkpoints + "]"; // we have a routing table iff we have a replication group assert (routingTable == null) == (replicationGroup == null) : @@ -228,10 +293,10 @@ private boolean invariant() { "cached replication group out of sync: expected: " + calculateReplicationGroup() + " but was: " + replicationGroup; // all assigned shards from the routing table are tracked - assert routingTable == null || localCheckpoints.keySet().containsAll(routingTable.getAllAllocationIds()) : - "local checkpoints " + localCheckpoints + " not in-sync with routing table " + routingTable; + assert routingTable == null || checkpoints.keySet().containsAll(routingTable.getAllAllocationIds()) : + "local checkpoints " + checkpoints + " not in-sync with routing table " + routingTable; - for (Map.Entry entry : localCheckpoints.entrySet()) { + for (Map.Entry entry : checkpoints.entrySet()) { // blocking global checkpoint advancement only happens for shards that are not in-sync assert !pendingInSync.contains(entry.getKey()) || !entry.getValue().inSync : "shard copy " + entry.getKey() + " blocks global checkpoint advancement but is in-sync"; @@ -240,22 +305,43 @@ private boolean invariant() { return true; } + 
private static long inSyncCheckpointStates( + final Map checkpoints, + ToLongFunction function, + Function reducer) { + final OptionalLong value = + reducer.apply( + checkpoints + .values() + .stream() + .filter(cps -> cps.inSync) + .mapToLong(function) + .filter(v -> v != SequenceNumbers.PRE_60_NODE_CHECKPOINT && v != SequenceNumbers.UNASSIGNED_SEQ_NO)); + return value.isPresent() ? value.getAsLong() : SequenceNumbers.UNASSIGNED_SEQ_NO; + } + /** * Initialize the global checkpoint service. The specified global checkpoint should be set to the last known global checkpoint, or - * {@link SequenceNumbersService#UNASSIGNED_SEQ_NO}. + * {@link SequenceNumbers#UNASSIGNED_SEQ_NO}. * * @param shardId the shard ID + * @param allocationId the allocation ID * @param indexSettings the index settings - * @param globalCheckpoint the last known global checkpoint for this shard, or {@link SequenceNumbersService#UNASSIGNED_SEQ_NO} + * @param globalCheckpoint the last known global checkpoint for this shard, or {@link SequenceNumbers#UNASSIGNED_SEQ_NO} */ - GlobalCheckpointTracker(final ShardId shardId, final IndexSettings indexSettings, final long globalCheckpoint) { + GlobalCheckpointTracker( + final ShardId shardId, + final String allocationId, + final IndexSettings indexSettings, + final long globalCheckpoint) { super(shardId, indexSettings); - assert globalCheckpoint >= SequenceNumbersService.UNASSIGNED_SEQ_NO : "illegal initial global checkpoint: " + globalCheckpoint; + assert globalCheckpoint >= SequenceNumbers.UNASSIGNED_SEQ_NO : "illegal initial global checkpoint: " + globalCheckpoint; + this.shardAllocationId = allocationId; this.primaryMode = false; this.handoffInProgress = false; this.appliedClusterStateVersion = -1L; - this.globalCheckpoint = globalCheckpoint; - this.localCheckpoints = new HashMap<>(1 + indexSettings.getNumberOfReplicas()); + this.checkpoints = new HashMap<>(1 + indexSettings.getNumberOfReplicas()); + checkpoints.put(allocationId, new CheckpointState(SequenceNumbers.UNASSIGNED_SEQ_NO, globalCheckpoint, false)); this.pendingInSync = new HashSet<>(); this.routingTable = null; this.replicationGroup = null; @@ -274,7 +360,7 @@ public ReplicationGroup getReplicationGroup() { private ReplicationGroup calculateReplicationGroup() { return new ReplicationGroup(routingTable, - localCheckpoints.entrySet().stream().filter(e -> e.getValue().inSync).map(Map.Entry::getKey).collect(Collectors.toSet())); + checkpoints.entrySet().stream().filter(e -> e.getValue().inSync).map(Map.Entry::getKey).collect(Collectors.toSet())); } /** @@ -282,8 +368,10 @@ private ReplicationGroup calculateReplicationGroup() { * * @return the global checkpoint */ - public long getGlobalCheckpoint() { - return globalCheckpoint; + public synchronized long getGlobalCheckpoint() { + final CheckpointState cps = checkpoints.get(shardAllocationId); + assert cps != null; + return cps.globalCheckpoint; } /** @@ -298,27 +386,58 @@ public synchronized void updateGlobalCheckpointOnReplica(final long globalCheckp /* * The global checkpoint here is a local knowledge which is updated under the mandate of the primary. It can happen that the primary * information is lagging compared to a replica (e.g., if a replica is promoted to primary but has stale info relative to other - * replica shards). In these cases, the local knowledge of the global checkpoint could be higher than sync from the lagging primary. + * replica shards). 
In these cases, the local knowledge of the global checkpoint could be higher than the sync from the lagging + * primary. */ - if (this.globalCheckpoint <= globalCheckpoint) { - logger.trace("updating global checkpoint from [{}] to [{}] due to [{}]", this.globalCheckpoint, globalCheckpoint, reason); - this.globalCheckpoint = globalCheckpoint; - } + updateGlobalCheckpoint( + shardAllocationId, + globalCheckpoint, + current -> logger.trace("updating global checkpoint from [{}] to [{}] due to [{}]", current, globalCheckpoint, reason)); + assert invariant(); + } + + /** + * Update the local knowledge of the global checkpoint for the specified allocation ID. + * + * @param allocationId the allocation ID to update the global checkpoint for + * @param globalCheckpoint the global checkpoint + */ + public synchronized void updateGlobalCheckpointForShard(final String allocationId, final long globalCheckpoint) { + assert primaryMode; + assert handoffInProgress == false; + assert invariant(); + updateGlobalCheckpoint( + allocationId, + globalCheckpoint, + current -> logger.trace( + "updating local knowledge for [{}] on the primary of the global checkpoint from [{}] to [{}]", + allocationId, + current, + globalCheckpoint)); assert invariant(); } + private void updateGlobalCheckpoint(final String allocationId, final long globalCheckpoint, LongConsumer ifUpdated) { + final CheckpointState cps = checkpoints.get(allocationId); + assert !this.shardAllocationId.equals(allocationId) || cps != null; + if (cps != null && globalCheckpoint > cps.globalCheckpoint) { + ifUpdated.accept(cps.globalCheckpoint); + cps.globalCheckpoint = globalCheckpoint; + } + } + /** * Initializes the global checkpoint tracker in primary mode (see {@link #primaryMode}. Called on primary activation or promotion. 
*/ - public synchronized void activatePrimaryMode(final String allocationId, final long localCheckpoint) { + public synchronized void activatePrimaryMode(final long localCheckpoint) { assert invariant(); assert primaryMode == false; - assert localCheckpoints.get(allocationId) != null && localCheckpoints.get(allocationId).inSync && - localCheckpoints.get(allocationId).localCheckpoint == SequenceNumbersService.UNASSIGNED_SEQ_NO : - "expected " + allocationId + " to have initialized entry in " + localCheckpoints + " when activating primary"; - assert localCheckpoint >= SequenceNumbersService.NO_OPS_PERFORMED; + assert checkpoints.get(shardAllocationId) != null && checkpoints.get(shardAllocationId).inSync && + checkpoints.get(shardAllocationId).localCheckpoint == SequenceNumbers.UNASSIGNED_SEQ_NO : + "expected " + shardAllocationId + " to have initialized entry in " + checkpoints + " when activating primary"; + assert localCheckpoint >= SequenceNumbers.NO_OPS_PERFORMED; primaryMode = true; - updateLocalCheckpoint(allocationId, localCheckpoints.get(allocationId), localCheckpoint); + updateLocalCheckpoint(shardAllocationId, checkpoints.get(shardAllocationId), localCheckpoint); updateGlobalCheckpointOnPrimary(); assert invariant(); } @@ -337,37 +456,47 @@ public synchronized void updateFromMaster(final long applyingClusterStateVersion if (applyingClusterStateVersion > appliedClusterStateVersion) { // check that the master does not fabricate new in-sync entries out of thin air once we are in primary mode assert !primaryMode || inSyncAllocationIds.stream().allMatch( - inSyncId -> localCheckpoints.containsKey(inSyncId) && localCheckpoints.get(inSyncId).inSync) : + inSyncId -> checkpoints.containsKey(inSyncId) && checkpoints.get(inSyncId).inSync) : "update from master in primary mode contains in-sync ids " + inSyncAllocationIds + - " that have no matching entries in " + localCheckpoints; + " that have no matching entries in " + checkpoints; // remove entries which don't exist on master Set initializingAllocationIds = routingTable.getAllInitializingShards().stream() .map(ShardRouting::allocationId).map(AllocationId::getId).collect(Collectors.toSet()); - boolean removedEntries = localCheckpoints.keySet().removeIf( + boolean removedEntries = checkpoints.keySet().removeIf( aid -> !inSyncAllocationIds.contains(aid) && !initializingAllocationIds.contains(aid)); if (primaryMode) { // add new initializingIds that are missing locally. These are fresh shard copies - and not in-sync for (String initializingId : initializingAllocationIds) { - if (localCheckpoints.containsKey(initializingId) == false) { + if (checkpoints.containsKey(initializingId) == false) { final boolean inSync = inSyncAllocationIds.contains(initializingId); assert inSync == false : "update from master in primary mode has " + initializingId + " as in-sync but it does not exist locally"; final long localCheckpoint = pre60AllocationIds.contains(initializingId) ? - SequenceNumbersService.PRE_60_NODE_LOCAL_CHECKPOINT : SequenceNumbersService.UNASSIGNED_SEQ_NO; - localCheckpoints.put(initializingId, new LocalCheckpointState(localCheckpoint, inSync)); + SequenceNumbers.PRE_60_NODE_CHECKPOINT : SequenceNumbers.UNASSIGNED_SEQ_NO; + final long globalCheckpoint = localCheckpoint; + checkpoints.put(initializingId, new CheckpointState(localCheckpoint, globalCheckpoint, inSync)); } } } else { for (String initializingId : initializingAllocationIds) { - final long localCheckpoint = pre60AllocationIds.contains(initializingId) ? 
- SequenceNumbersService.PRE_60_NODE_LOCAL_CHECKPOINT : SequenceNumbersService.UNASSIGNED_SEQ_NO; - localCheckpoints.put(initializingId, new LocalCheckpointState(localCheckpoint, false)); + if (shardAllocationId.equals(initializingId) == false) { + final long localCheckpoint = pre60AllocationIds.contains(initializingId) ? + SequenceNumbers.PRE_60_NODE_CHECKPOINT : SequenceNumbers.UNASSIGNED_SEQ_NO; + final long globalCheckpoint = localCheckpoint; + checkpoints.put(initializingId, new CheckpointState(localCheckpoint, globalCheckpoint, false)); + } } for (String inSyncId : inSyncAllocationIds) { - final long localCheckpoint = pre60AllocationIds.contains(inSyncId) ? - SequenceNumbersService.PRE_60_NODE_LOCAL_CHECKPOINT : SequenceNumbersService.UNASSIGNED_SEQ_NO; - localCheckpoints.put(inSyncId, new LocalCheckpointState(localCheckpoint, true)); + if (shardAllocationId.equals(inSyncId)) { + // current shard is initially marked as not in-sync because we don't know better at that point + checkpoints.get(shardAllocationId).inSync = true; + } else { + final long localCheckpoint = pre60AllocationIds.contains(inSyncId) ? + SequenceNumbers.PRE_60_NODE_CHECKPOINT : SequenceNumbers.UNASSIGNED_SEQ_NO; + final long globalCheckpoint = localCheckpoint; + checkpoints.put(inSyncId, new CheckpointState(localCheckpoint, globalCheckpoint, true)); + } } } appliedClusterStateVersion = applyingClusterStateVersion; @@ -389,8 +518,8 @@ public synchronized void updateFromMaster(final long applyingClusterStateVersion public synchronized void initiateTracking(final String allocationId) { assert invariant(); assert primaryMode; - LocalCheckpointState lcps = localCheckpoints.get(allocationId); - if (lcps == null) { + CheckpointState cps = checkpoints.get(allocationId); + if (cps == null) { // can happen if replica was removed from cluster but recovery process is unaware of it yet throw new IllegalStateException("no local checkpoint tracking information available"); } @@ -408,21 +537,21 @@ public synchronized void markAllocationIdAsInSync(final String allocationId, fin assert invariant(); assert primaryMode; assert handoffInProgress == false; - LocalCheckpointState lcps = localCheckpoints.get(allocationId); - if (lcps == null) { + CheckpointState cps = checkpoints.get(allocationId); + if (cps == null) { // can happen if replica was removed from cluster but recovery process is unaware of it yet throw new IllegalStateException("no local checkpoint tracking information available for " + allocationId); } - assert localCheckpoint >= SequenceNumbersService.NO_OPS_PERFORMED : + assert localCheckpoint >= SequenceNumbers.NO_OPS_PERFORMED : "expected known local checkpoint for " + allocationId + " but was " + localCheckpoint; assert pendingInSync.contains(allocationId) == false : "shard copy " + allocationId + " is already marked as pending in-sync"; - updateLocalCheckpoint(allocationId, lcps, localCheckpoint); + updateLocalCheckpoint(allocationId, cps, localCheckpoint); // if it was already in-sync (because of a previously failed recovery attempt), global checkpoint must have been // stuck from advancing - assert !lcps.inSync || (lcps.localCheckpoint >= globalCheckpoint) : - "shard copy " + allocationId + " that's already in-sync should have a local checkpoint " + lcps.localCheckpoint + - " that's above the global checkpoint " + globalCheckpoint; - if (lcps.localCheckpoint < globalCheckpoint) { + assert !cps.inSync || (cps.localCheckpoint >= getGlobalCheckpoint()) : + "shard copy " + allocationId + " that's already in-sync should 
have a local checkpoint " + cps.localCheckpoint + + " that's above the global checkpoint " + getGlobalCheckpoint(); + if (cps.localCheckpoint < getGlobalCheckpoint()) { pendingInSync.add(allocationId); try { while (true) { @@ -436,7 +565,7 @@ public synchronized void markAllocationIdAsInSync(final String allocationId, fin pendingInSync.remove(allocationId); } } else { - lcps.inSync = true; + cps.inSync = true; replicationGroup = calculateReplicationGroup(); logger.trace("marked [{}] as in-sync", allocationId); updateGlobalCheckpointOnPrimary(); @@ -445,21 +574,21 @@ public synchronized void markAllocationIdAsInSync(final String allocationId, fin assert invariant(); } - private boolean updateLocalCheckpoint(String allocationId, LocalCheckpointState lcps, long localCheckpoint) { - // a local checkpoint of PRE_60_NODE_LOCAL_CHECKPOINT cannot be overridden - assert lcps.localCheckpoint != SequenceNumbersService.PRE_60_NODE_LOCAL_CHECKPOINT || - localCheckpoint == SequenceNumbersService.PRE_60_NODE_LOCAL_CHECKPOINT : + private boolean updateLocalCheckpoint(String allocationId, CheckpointState cps, long localCheckpoint) { + // a local checkpoint of PRE_60_NODE_CHECKPOINT cannot be overridden + assert cps.localCheckpoint != SequenceNumbers.PRE_60_NODE_CHECKPOINT || + localCheckpoint == SequenceNumbers.PRE_60_NODE_CHECKPOINT : "pre-6.0 shard copy " + allocationId + " unexpected to send valid local checkpoint " + localCheckpoint; // a local checkpoint for a shard copy should be a valid sequence number or the pre-6.0 sequence number indicator - assert localCheckpoint != SequenceNumbersService.UNASSIGNED_SEQ_NO : + assert localCheckpoint != SequenceNumbers.UNASSIGNED_SEQ_NO : "invalid local checkpoint for shard copy [" + allocationId + "]"; - if (localCheckpoint > lcps.localCheckpoint) { - logger.trace("updated local checkpoint of [{}] from [{}] to [{}]", allocationId, lcps.localCheckpoint, localCheckpoint); - lcps.localCheckpoint = localCheckpoint; + if (localCheckpoint > cps.localCheckpoint) { + logger.trace("updated local checkpoint of [{}] from [{}] to [{}]", allocationId, cps.localCheckpoint, localCheckpoint); + cps.localCheckpoint = localCheckpoint; return true; } else { logger.trace("skipped updating local checkpoint of [{}] from [{}] to [{}], current checkpoint is higher", allocationId, - lcps.localCheckpoint, localCheckpoint); + cps.localCheckpoint, localCheckpoint); return false; } } @@ -475,17 +604,17 @@ public synchronized void updateLocalCheckpoint(final String allocationId, final assert invariant(); assert primaryMode; assert handoffInProgress == false; - LocalCheckpointState lcps = localCheckpoints.get(allocationId); - if (lcps == null) { + CheckpointState cps = checkpoints.get(allocationId); + if (cps == null) { // can happen if replica was removed from cluster but replication process is unaware of it yet return; } - boolean increasedLocalCheckpoint = updateLocalCheckpoint(allocationId, lcps, localCheckpoint); + boolean increasedLocalCheckpoint = updateLocalCheckpoint(allocationId, cps, localCheckpoint); boolean pending = pendingInSync.contains(allocationId); - if (pending && lcps.localCheckpoint >= globalCheckpoint) { + if (pending && cps.localCheckpoint >= getGlobalCheckpoint()) { pendingInSync.remove(allocationId); pending = false; - lcps.inSync = true; + cps.inSync = true; replicationGroup = calculateReplicationGroup(); logger.trace("marked [{}] as in-sync", allocationId); notifyAllWaiters(); @@ -500,21 +629,21 @@ public synchronized void updateLocalCheckpoint(final String 
allocationId, final * Computes the global checkpoint based on the given local checkpoints. In case where there are entries preventing the * computation to happen (for example due to blocking), it returns the fallback value. */ - private static long computeGlobalCheckpoint(final Set pendingInSync, final Collection localCheckpoints, + private static long computeGlobalCheckpoint(final Set pendingInSync, final Collection localCheckpoints, final long fallback) { long minLocalCheckpoint = Long.MAX_VALUE; if (pendingInSync.isEmpty() == false) { return fallback; } - for (final LocalCheckpointState lcps : localCheckpoints) { - if (lcps.inSync) { - if (lcps.localCheckpoint == SequenceNumbersService.UNASSIGNED_SEQ_NO) { + for (final CheckpointState cps : localCheckpoints) { + if (cps.inSync) { + if (cps.localCheckpoint == SequenceNumbers.UNASSIGNED_SEQ_NO) { // unassigned in-sync replica return fallback; - } else if (lcps.localCheckpoint == SequenceNumbersService.PRE_60_NODE_LOCAL_CHECKPOINT) { + } else if (cps.localCheckpoint == SequenceNumbers.PRE_60_NODE_CHECKPOINT) { // 5.x replica, ignore for global checkpoint calculation } else { - minLocalCheckpoint = Math.min(lcps.localCheckpoint, minLocalCheckpoint); + minLocalCheckpoint = Math.min(cps.localCheckpoint, minLocalCheckpoint); } } } @@ -527,12 +656,14 @@ private static long computeGlobalCheckpoint(final Set pendingInSync, fin */ private synchronized void updateGlobalCheckpointOnPrimary() { assert primaryMode; - final long computedGlobalCheckpoint = computeGlobalCheckpoint(pendingInSync, localCheckpoints.values(), globalCheckpoint); + final CheckpointState cps = checkpoints.get(shardAllocationId); + final long globalCheckpoint = cps.globalCheckpoint; + final long computedGlobalCheckpoint = computeGlobalCheckpoint(pendingInSync, checkpoints.values(), getGlobalCheckpoint()); assert computedGlobalCheckpoint >= globalCheckpoint : "new global checkpoint [" + computedGlobalCheckpoint + "] is lower than previous one [" + globalCheckpoint + "]"; if (globalCheckpoint != computedGlobalCheckpoint) { logger.trace("global checkpoint updated to [{}]", computedGlobalCheckpoint); - globalCheckpoint = computedGlobalCheckpoint; + cps.globalCheckpoint = computedGlobalCheckpoint; } } @@ -545,13 +676,13 @@ public synchronized PrimaryContext startRelocationHandoff() { assert handoffInProgress == false; assert pendingInSync.isEmpty() : "relocation handoff started while there are still shard copies pending in-sync: " + pendingInSync; handoffInProgress = true; - // copy clusterStateVersion and localCheckpoints and return - // all the entries from localCheckpoints that are inSync: the reason we don't need to care about initializing non-insync entries + // copy clusterStateVersion and checkpoints and return + // all the entries from checkpoints that are inSync: the reason we don't need to care about initializing non-insync entries // is that they will have to undergo a recovery attempt on the relocation target, and will hence be supplied by the cluster state // update on the relocation target once relocation completes). We could alternatively also copy the map as-is (it’s safe), and it // would be cleaned up on the target by cluster state updates. 
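
For context on the copy performed in the code that follows (not part of the patch): the hand-off loop deep-copies every tracked entry so the primary context given to the relocation target is an independent snapshot rather than a view of the live map. A minimal standalone sketch of that pattern, using a hypothetical `CheckpointSnapshot` stand-in for `CheckpointState`:

-------------------------------------------------
import java.util.HashMap;
import java.util.Map;

final class HandoffSnapshotSketch {

    // hypothetical stand-in for the tracker's per-allocation CheckpointState
    static final class CheckpointSnapshot {
        long localCheckpoint;
        long globalCheckpoint;
        boolean inSync;

        CheckpointSnapshot(final long localCheckpoint, final long globalCheckpoint, final boolean inSync) {
            this.localCheckpoint = localCheckpoint;
            this.globalCheckpoint = globalCheckpoint;
            this.inSync = inSync;
        }

        CheckpointSnapshot copy() {
            return new CheckpointSnapshot(localCheckpoint, globalCheckpoint, inSync);
        }
    }

    // deep-copy the tracked entries so later mutations on the relocation source do not
    // leak into the snapshot handed to the relocation target
    static Map<String, CheckpointSnapshot> snapshot(final Map<String, CheckpointSnapshot> tracked) {
        final Map<String, CheckpointSnapshot> copy = new HashMap<>();
        for (final Map.Entry<String, CheckpointSnapshot> entry : tracked.entrySet()) {
            copy.put(entry.getKey(), entry.getValue().copy());
        }
        return copy;
    }

    public static void main(final String[] args) {
        final Map<String, CheckpointSnapshot> tracked = new HashMap<>();
        tracked.put("primary", new CheckpointSnapshot(7, 5, true));
        final Map<String, CheckpointSnapshot> handedOff = snapshot(tracked);
        tracked.get("primary").localCheckpoint = 8; // the source keeps advancing
        System.out.println(handedOff.get("primary").localCheckpoint); // still 7
    }
}
-------------------------------------------------
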
- Map localCheckpointsCopy = new HashMap<>(); - for (Map.Entry entry : localCheckpoints.entrySet()) { + Map localCheckpointsCopy = new HashMap<>(); + for (Map.Entry entry : checkpoints.entrySet()) { localCheckpointsCopy.put(entry.getKey(), entry.getValue().copy()); } assert invariant(); @@ -578,11 +709,19 @@ public synchronized void completeRelocationHandoff() { assert handoffInProgress; primaryMode = false; handoffInProgress = false; - // forget all checkpoint information - localCheckpoints.values().stream().forEach(lcps -> { - if (lcps.localCheckpoint != SequenceNumbersService.UNASSIGNED_SEQ_NO && - lcps.localCheckpoint != SequenceNumbersService.PRE_60_NODE_LOCAL_CHECKPOINT) { - lcps.localCheckpoint = SequenceNumbersService.UNASSIGNED_SEQ_NO; + // forget all checkpoint information except for global checkpoint of current shard + checkpoints.entrySet().stream().forEach(e -> { + final CheckpointState cps = e.getValue(); + if (cps.localCheckpoint != SequenceNumbers.UNASSIGNED_SEQ_NO && + cps.localCheckpoint != SequenceNumbers.PRE_60_NODE_CHECKPOINT) { + cps.localCheckpoint = SequenceNumbers.UNASSIGNED_SEQ_NO; + } + if (e.getKey().equals(shardAllocationId) == false) { + // don't throw global checkpoint information of current shard away + if (cps.globalCheckpoint != SequenceNumbers.UNASSIGNED_SEQ_NO && + cps.globalCheckpoint != SequenceNumbers.PRE_60_NODE_CHECKPOINT) { + cps.globalCheckpoint = SequenceNumbers.UNASSIGNED_SEQ_NO; + } } }); assert invariant(); @@ -601,9 +740,9 @@ public synchronized void activateWithPrimaryContext(PrimaryContext primaryContex primaryMode = true; // capture current state to possibly replay missed cluster state update appliedClusterStateVersion = primaryContext.clusterStateVersion(); - localCheckpoints.clear(); - for (Map.Entry entry : primaryContext.localCheckpoints.entrySet()) { - localCheckpoints.put(entry.getKey(), entry.getValue().copy()); + checkpoints.clear(); + for (Map.Entry entry : primaryContext.checkpoints.entrySet()) { + checkpoints.put(entry.getKey(), entry.getValue().copy()); } routingTable = primaryContext.getRoutingTable(); replicationGroup = calculateReplicationGroup(); @@ -620,11 +759,11 @@ private Runnable getMasterUpdateOperationFromCurrentState() { final long lastAppliedClusterStateVersion = appliedClusterStateVersion; final Set inSyncAllocationIds = new HashSet<>(); final Set pre60AllocationIds = new HashSet<>(); - localCheckpoints.entrySet().forEach(entry -> { + checkpoints.entrySet().forEach(entry -> { if (entry.getValue().inSync) { inSyncAllocationIds.add(entry.getKey()); } - if (entry.getValue().getLocalCheckpoint() == SequenceNumbersService.PRE_60_NODE_LOCAL_CHECKPOINT) { + if (entry.getValue().getLocalCheckpoint() == SequenceNumbers.PRE_60_NODE_CHECKPOINT) { pre60AllocationIds.add(entry.getKey()); } }); @@ -643,9 +782,9 @@ public synchronized boolean pendingInSync() { /** * Returns the local checkpoint information tracked for a specific shard. Used by tests. 
*/ - public synchronized LocalCheckpointState getTrackedLocalCheckpointForShard(String allocationId) { + public synchronized CheckpointState getTrackedLocalCheckpointForShard(String allocationId) { assert primaryMode; - return localCheckpoints.get(allocationId); + return checkpoints.get(allocationId); } /** @@ -674,19 +813,19 @@ private synchronized void waitForLocalCheckpointToAdvance() throws InterruptedEx public static class PrimaryContext implements Writeable { private final long clusterStateVersion; - private final Map localCheckpoints; + private final Map checkpoints; private final IndexShardRoutingTable routingTable; - public PrimaryContext(long clusterStateVersion, Map localCheckpoints, + public PrimaryContext(long clusterStateVersion, Map checkpoints, IndexShardRoutingTable routingTable) { this.clusterStateVersion = clusterStateVersion; - this.localCheckpoints = localCheckpoints; + this.checkpoints = checkpoints; this.routingTable = routingTable; } public PrimaryContext(StreamInput in) throws IOException { clusterStateVersion = in.readVLong(); - localCheckpoints = in.readMap(StreamInput::readString, LocalCheckpointState::new); + checkpoints = in.readMap(StreamInput::readString, CheckpointState::new); routingTable = IndexShardRoutingTable.Builder.readFrom(in); } @@ -694,8 +833,8 @@ public long clusterStateVersion() { return clusterStateVersion; } - public Map getLocalCheckpoints() { - return localCheckpoints; + public Map getCheckpointStates() { + return checkpoints; } public IndexShardRoutingTable getRoutingTable() { @@ -705,7 +844,7 @@ public IndexShardRoutingTable getRoutingTable() { @Override public void writeTo(StreamOutput out) throws IOException { out.writeVLong(clusterStateVersion); - out.writeMap(localCheckpoints, (streamOutput, s) -> out.writeString(s), (streamOutput, lcps) -> lcps.writeTo(out)); + out.writeMap(checkpoints, (streamOutput, s) -> out.writeString(s), (streamOutput, cps) -> cps.writeTo(out)); IndexShardRoutingTable.Builder.writeTo(routingTable, out); } @@ -713,7 +852,7 @@ public void writeTo(StreamOutput out) throws IOException { public String toString() { return "PrimaryContext{" + "clusterStateVersion=" + clusterStateVersion + - ", localCheckpoints=" + localCheckpoints + + ", checkpoints=" + checkpoints + ", routingTable=" + routingTable + '}'; } @@ -732,8 +871,8 @@ public boolean equals(Object o) { @Override public int hashCode() { - int result = (int) (clusterStateVersion ^ (clusterStateVersion >>> 32)); - result = 31 * result + localCheckpoints.hashCode(); + int result = Long.hashCode(clusterStateVersion); + result = 31 * result + checkpoints.hashCode(); result = 31 * result + routingTable.hashCode(); return result; } diff --git a/core/src/main/java/org/elasticsearch/index/seqno/LocalCheckpointTracker.java b/core/src/main/java/org/elasticsearch/index/seqno/LocalCheckpointTracker.java index 9af9f00b1d120..54751e8958a0c 100644 --- a/core/src/main/java/org/elasticsearch/index/seqno/LocalCheckpointTracker.java +++ b/core/src/main/java/org/elasticsearch/index/seqno/LocalCheckpointTracker.java @@ -19,12 +19,9 @@ package org.elasticsearch.index.seqno; +import com.carrotsearch.hppc.LongObjectHashMap; import org.apache.lucene.util.FixedBitSet; import org.elasticsearch.common.SuppressForbidden; -import org.elasticsearch.common.settings.Setting; -import org.elasticsearch.index.IndexSettings; - -import java.util.LinkedList; /** * This class generates sequences numbers and keeps track of the so-called "local checkpoint" which is the highest number for which all @@ 
-33,27 +30,16 @@ public class LocalCheckpointTracker { /** - * We keep a bit for each sequence number that is still pending. To optimize allocation, we do so in multiple arrays allocating them on - * demand and cleaning up while completed. This setting controls the size of the arrays. - */ - public static Setting SETTINGS_BIT_ARRAYS_SIZE = - Setting.intSetting("index.seq_no.checkpoint.bit_arrays_size", 1024, 4, Setting.Property.IndexScope); - - /** - * An ordered list of bit arrays representing pending sequence numbers. The list is "anchored" in {@link #firstProcessedSeqNo} which - * marks the sequence number the fist bit in the first array corresponds to. - */ - final LinkedList processedSeqNo = new LinkedList<>(); - - /** - * The size of each bit set representing processed sequence numbers. + * We keep a bit for each sequence number that is still pending. To optimize allocation, we do so in multiple sets allocating them on + * demand and cleaning up while completed. This constant controls the size of the sets. */ - private final int bitArraysSize; + static final int BIT_SET_SIZE = 1024; /** - * The sequence number that the first bit in the first array corresponds to. + * A collection of bit sets representing pending sequence numbers. Each sequence number is mapped to a bit set by dividing by the + * bit set size. */ - long firstProcessedSeqNo; + final LongObjectHashMap processedSeqNo = new LongObjectHashMap<>(); /** * The current local checkpoint, i.e., all sequence numbers no more than this number have been completed. @@ -67,26 +53,23 @@ public class LocalCheckpointTracker { /** * Initialize the local checkpoint service. The {@code maxSeqNo} should be set to the last sequence number assigned, or - * {@link SequenceNumbersService#NO_OPS_PERFORMED} and {@code localCheckpoint} should be set to the last known local checkpoint, - * or {@link SequenceNumbersService#NO_OPS_PERFORMED}. + * {@link SequenceNumbers#NO_OPS_PERFORMED} and {@code localCheckpoint} should be set to the last known local checkpoint, + * or {@link SequenceNumbers#NO_OPS_PERFORMED}. * - * @param indexSettings the index settings - * @param maxSeqNo the last sequence number assigned, or {@link SequenceNumbersService#NO_OPS_PERFORMED} - * @param localCheckpoint the last known local checkpoint, or {@link SequenceNumbersService#NO_OPS_PERFORMED} + * @param maxSeqNo the last sequence number assigned, or {@link SequenceNumbers#NO_OPS_PERFORMED} + * @param localCheckpoint the last known local checkpoint, or {@link SequenceNumbers#NO_OPS_PERFORMED} */ - public LocalCheckpointTracker(final IndexSettings indexSettings, final long maxSeqNo, final long localCheckpoint) { - if (localCheckpoint < 0 && localCheckpoint != SequenceNumbersService.NO_OPS_PERFORMED) { + public LocalCheckpointTracker(final long maxSeqNo, final long localCheckpoint) { + if (localCheckpoint < 0 && localCheckpoint != SequenceNumbers.NO_OPS_PERFORMED) { throw new IllegalArgumentException( - "local checkpoint must be non-negative or [" + SequenceNumbersService.NO_OPS_PERFORMED + "] " + "local checkpoint must be non-negative or [" + SequenceNumbers.NO_OPS_PERFORMED + "] " + "but was [" + localCheckpoint + "]"); } - if (maxSeqNo < 0 && maxSeqNo != SequenceNumbersService.NO_OPS_PERFORMED) { + if (maxSeqNo < 0 && maxSeqNo != SequenceNumbers.NO_OPS_PERFORMED) { throw new IllegalArgumentException( - "max seq. no. must be non-negative or [" + SequenceNumbersService.NO_OPS_PERFORMED + "] but was [" + maxSeqNo + "]"); + "max seq. no. 
must be non-negative or [" + SequenceNumbers.NO_OPS_PERFORMED + "] but was [" + maxSeqNo + "]"); } - bitArraysSize = SETTINGS_BIT_ARRAYS_SIZE.get(indexSettings.getSettings()); - firstProcessedSeqNo = localCheckpoint == SequenceNumbersService.NO_OPS_PERFORMED ? 0 : localCheckpoint + 1; - nextSeqNo = maxSeqNo == SequenceNumbersService.NO_OPS_PERFORMED ? 0 : maxSeqNo + 1; + nextSeqNo = maxSeqNo == SequenceNumbers.NO_OPS_PERFORMED ? 0 : maxSeqNo + 1; checkpoint = localCheckpoint; } @@ -127,10 +110,9 @@ public synchronized void markSeqNoAsCompleted(final long seqNo) { * @param checkpoint the local checkpoint to reset this tracker to */ synchronized void resetCheckpoint(final long checkpoint) { - assert checkpoint != SequenceNumbersService.UNASSIGNED_SEQ_NO; + assert checkpoint != SequenceNumbers.UNASSIGNED_SEQ_NO; assert checkpoint <= this.checkpoint; processedSeqNo.clear(); - firstProcessedSeqNo = checkpoint + 1; this.checkpoint = checkpoint; } @@ -183,24 +165,28 @@ synchronized void waitForOpsToComplete(final long seqNo) throws InterruptedExcep @SuppressForbidden(reason = "Object#notifyAll") private void updateCheckpoint() { assert Thread.holdsLock(this); - assert checkpoint < firstProcessedSeqNo + bitArraysSize - 1 : - "checkpoint should be below the end of the first bit set (o.w. current bit set is completed and shouldn't be there)"; - assert getBitSetForSeqNo(checkpoint + 1) == processedSeqNo.getFirst() : - "checkpoint + 1 doesn't point to the first bit set (o.w. current bit set is completed and shouldn't be there)"; assert getBitSetForSeqNo(checkpoint + 1).get(seqNoToBitSetOffset(checkpoint + 1)) : "updateCheckpoint is called but the bit following the checkpoint is not set"; try { // keep it simple for now, get the checkpoint one by one; in the future we can optimize and read words - FixedBitSet current = processedSeqNo.getFirst(); + long bitSetKey = getBitSetKey(checkpoint); + FixedBitSet current = processedSeqNo.get(bitSetKey); + if (current == null) { + // the bit set corresponding to the checkpoint has already been removed, set ourselves up for the next bit set + assert checkpoint % BIT_SET_SIZE == BIT_SET_SIZE - 1; + current = processedSeqNo.get(++bitSetKey); + } do { checkpoint++; - // the checkpoint always falls in the first bit set or just before. If it falls - // on the last bit of the current bit set, we can clean it. - if (checkpoint == firstProcessedSeqNo + bitArraysSize - 1) { - processedSeqNo.removeFirst(); - firstProcessedSeqNo += bitArraysSize; - assert checkpoint - firstProcessedSeqNo < bitArraysSize; - current = processedSeqNo.peekFirst(); + /* + * The checkpoint always falls in the current bit set or we have already cleaned it; if it falls on the last bit of the + * current bit set, we can clean it. + */ + if (checkpoint == lastSeqNoInBitSet(bitSetKey)) { + assert current != null; + final FixedBitSet removed = processedSeqNo.remove(bitSetKey); + assert removed == current; + current = processedSeqNo.get(++bitSetKey); } } while (current != null && current.get(seqNoToBitSetOffset(checkpoint + 1))); } finally { @@ -209,37 +195,45 @@ assert getBitSetForSeqNo(checkpoint + 1).get(seqNoToBitSetOffset(checkpoint + 1) } } + private long lastSeqNoInBitSet(final long bitSetKey) { + return (1 + bitSetKey) * BIT_SET_SIZE - 1; + } + /** - * Return the bit array for the provided sequence number, possibly allocating a new array if needed. + * Return the bit set for the provided sequence number, possibly allocating a new set if needed. 
* - * @param seqNo the sequence number to obtain the bit array for - * @return the bit array corresponding to the provided sequence number + * @param seqNo the sequence number to obtain the bit set for + * @return the bit set corresponding to the provided sequence number */ + private long getBitSetKey(final long seqNo) { + assert Thread.holdsLock(this); + return seqNo / BIT_SET_SIZE; + } + private FixedBitSet getBitSetForSeqNo(final long seqNo) { assert Thread.holdsLock(this); - assert seqNo >= firstProcessedSeqNo : "seqNo: " + seqNo + " firstProcessedSeqNo: " + firstProcessedSeqNo; - final long bitSetOffset = (seqNo - firstProcessedSeqNo) / bitArraysSize; - if (bitSetOffset > Integer.MAX_VALUE) { - throw new IndexOutOfBoundsException( - "sequence number too high; got [" + seqNo + "], firstProcessedSeqNo [" + firstProcessedSeqNo + "]"); - } - while (bitSetOffset >= processedSeqNo.size()) { - processedSeqNo.add(new FixedBitSet(bitArraysSize)); + final long bitSetKey = getBitSetKey(seqNo); + final int index = processedSeqNo.indexOf(bitSetKey); + final FixedBitSet bitSet; + if (processedSeqNo.indexExists(index)) { + bitSet = processedSeqNo.indexGet(index); + } else { + bitSet = new FixedBitSet(BIT_SET_SIZE); + processedSeqNo.indexInsert(index, bitSetKey, bitSet); } - return processedSeqNo.get((int) bitSetOffset); + return bitSet; } /** - * Obtain the position in the bit array corresponding to the provided sequence number. The bit array corresponding to the sequence - * number can be obtained via {@link #getBitSetForSeqNo(long)}. + * Obtain the position in the bit set corresponding to the provided sequence number. The bit set corresponding to the sequence number + * can be obtained via {@link #getBitSetForSeqNo(long)}. * * @param seqNo the sequence number to obtain the position for - * @return the position in the bit array corresponding to the provided sequence number + * @return the position in the bit set corresponding to the provided sequence number */ private int seqNoToBitSetOffset(final long seqNo) { assert Thread.holdsLock(this); - assert seqNo >= firstProcessedSeqNo; - return ((int) (seqNo - firstProcessedSeqNo)) % bitArraysSize; + return Math.toIntExact(seqNo % BIT_SET_SIZE); } } diff --git a/core/src/main/java/org/elasticsearch/index/seqno/SeqNoStats.java b/core/src/main/java/org/elasticsearch/index/seqno/SeqNoStats.java index 363e7f498d1bb..9c1795d654ccc 100644 --- a/core/src/main/java/org/elasticsearch/index/seqno/SeqNoStats.java +++ b/core/src/main/java/org/elasticsearch/index/seqno/SeqNoStats.java @@ -22,12 +22,12 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; -public class SeqNoStats implements ToXContent, Writeable { +public class SeqNoStats implements ToXContentFragment, Writeable { private static final String SEQ_NO = "seq_no"; private static final String MAX_SEQ_NO = "max_seq_no"; @@ -41,7 +41,7 @@ public class SeqNoStats implements ToXContent, Writeable { public SeqNoStats(long maxSeqNo, long localCheckpoint, long globalCheckpoint) { assert localCheckpoint <= maxSeqNo: "local checkpoint [" + localCheckpoint + "] is above maximum seq no [" + maxSeqNo + "]"; - // note that the the global checkpoint can be higher from both maxSeqNo and localCheckpoint + // note 
that the global checkpoint can be higher from both maxSeqNo and localCheckpoint // as we use this stats object to describe lucene commits as well as live statistic. this.maxSeqNo = maxSeqNo; this.localCheckpoint = localCheckpoint; diff --git a/core/src/main/java/org/elasticsearch/index/seqno/SequenceNumbers.java b/core/src/main/java/org/elasticsearch/index/seqno/SequenceNumbers.java index 885fdfc9e6553..21b4134f9837e 100644 --- a/core/src/main/java/org/elasticsearch/index/seqno/SequenceNumbers.java +++ b/core/src/main/java/org/elasticsearch/index/seqno/SequenceNumbers.java @@ -28,6 +28,18 @@ public class SequenceNumbers { public static final String LOCAL_CHECKPOINT_KEY = "local_checkpoint"; public static final String MAX_SEQ_NO = "max_seq_no"; + /** + * Represents a checkpoint coming from a pre-6.0 node + */ + public static final long PRE_60_NODE_CHECKPOINT = -3L; + /** + * Represents an unassigned sequence number (e.g., can be used on primary operations before they are executed). + */ + public static final long UNASSIGNED_SEQ_NO = -2L; + /** + * Represents no operations have been performed on the shard. + */ + public static final long NO_OPS_PERFORMED = -1L; /** * Reads the sequence number stats from the commit data (maximum sequence number and local checkpoint) and uses the specified global @@ -40,16 +52,16 @@ public class SequenceNumbers { public static SeqNoStats loadSeqNoStatsFromLuceneCommit( final long globalCheckpoint, final Iterable> commitData) { - long maxSeqNo = SequenceNumbersService.NO_OPS_PERFORMED; - long localCheckpoint = SequenceNumbersService.NO_OPS_PERFORMED; + long maxSeqNo = NO_OPS_PERFORMED; + long localCheckpoint = NO_OPS_PERFORMED; for (final Map.Entry entry : commitData) { final String key = entry.getKey(); if (key.equals(SequenceNumbers.LOCAL_CHECKPOINT_KEY)) { - assert localCheckpoint == SequenceNumbersService.NO_OPS_PERFORMED : localCheckpoint; + assert localCheckpoint == NO_OPS_PERFORMED : localCheckpoint; localCheckpoint = Long.parseLong(entry.getValue()); } else if (key.equals(SequenceNumbers.MAX_SEQ_NO)) { - assert maxSeqNo == SequenceNumbersService.NO_OPS_PERFORMED : maxSeqNo; + assert maxSeqNo == NO_OPS_PERFORMED : maxSeqNo; maxSeqNo = Long.parseLong(entry.getValue()); } } @@ -59,22 +71,22 @@ public static SeqNoStats loadSeqNoStatsFromLuceneCommit( /** * Compute the minimum of the given current minimum sequence number and the specified sequence number, accounting for the fact that the - * current minimum sequence number could be {@link SequenceNumbersService#NO_OPS_PERFORMED} or - * {@link SequenceNumbersService#UNASSIGNED_SEQ_NO}. When the current minimum sequence number is not - * {@link SequenceNumbersService#NO_OPS_PERFORMED} nor {@link SequenceNumbersService#UNASSIGNED_SEQ_NO}, the specified sequence number - * must not be {@link SequenceNumbersService#UNASSIGNED_SEQ_NO}. + * current minimum sequence number could be {@link SequenceNumbers#NO_OPS_PERFORMED} or + * {@link SequenceNumbers#UNASSIGNED_SEQ_NO}. When the current minimum sequence number is not + * {@link SequenceNumbers#NO_OPS_PERFORMED} nor {@link SequenceNumbers#UNASSIGNED_SEQ_NO}, the specified sequence number + * must not be {@link SequenceNumbers#UNASSIGNED_SEQ_NO}. 
* * @param minSeqNo the current minimum sequence number * @param seqNo the specified sequence number * @return the new minimum sequence number */ public static long min(final long minSeqNo, final long seqNo) { - if (minSeqNo == SequenceNumbersService.NO_OPS_PERFORMED) { + if (minSeqNo == NO_OPS_PERFORMED) { return seqNo; - } else if (minSeqNo == SequenceNumbersService.UNASSIGNED_SEQ_NO) { + } else if (minSeqNo == UNASSIGNED_SEQ_NO) { return seqNo; } else { - if (seqNo == SequenceNumbersService.UNASSIGNED_SEQ_NO) { + if (seqNo == UNASSIGNED_SEQ_NO) { throw new IllegalArgumentException("sequence number must be assigned"); } return Math.min(minSeqNo, seqNo); @@ -83,22 +95,22 @@ public static long min(final long minSeqNo, final long seqNo) { /** * Compute the maximum of the given current maximum sequence number and the specified sequence number, accounting for the fact that the - * current maximum sequence number could be {@link SequenceNumbersService#NO_OPS_PERFORMED} or - * {@link SequenceNumbersService#UNASSIGNED_SEQ_NO}. When the current maximum sequence number is not - * {@link SequenceNumbersService#NO_OPS_PERFORMED} nor {@link SequenceNumbersService#UNASSIGNED_SEQ_NO}, the specified sequence number - * must not be {@link SequenceNumbersService#UNASSIGNED_SEQ_NO}. + * current maximum sequence number could be {@link SequenceNumbers#NO_OPS_PERFORMED} or + * {@link SequenceNumbers#UNASSIGNED_SEQ_NO}. When the current maximum sequence number is not + * {@link SequenceNumbers#NO_OPS_PERFORMED} nor {@link SequenceNumbers#UNASSIGNED_SEQ_NO}, the specified sequence number + * must not be {@link SequenceNumbers#UNASSIGNED_SEQ_NO}. * * @param maxSeqNo the current maximum sequence number * @param seqNo the specified sequence number * @return the new maximum sequence number */ public static long max(final long maxSeqNo, final long seqNo) { - if (maxSeqNo == SequenceNumbersService.NO_OPS_PERFORMED) { + if (maxSeqNo == NO_OPS_PERFORMED) { return seqNo; - } else if (maxSeqNo == SequenceNumbersService.UNASSIGNED_SEQ_NO) { + } else if (maxSeqNo == UNASSIGNED_SEQ_NO) { return seqNo; } else { - if (seqNo == SequenceNumbersService.UNASSIGNED_SEQ_NO) { + if (seqNo == UNASSIGNED_SEQ_NO) { throw new IllegalArgumentException("sequence number must be assigned"); } return Math.max(maxSeqNo, seqNo); diff --git a/core/src/main/java/org/elasticsearch/index/seqno/SequenceNumbersService.java b/core/src/main/java/org/elasticsearch/index/seqno/SequenceNumbersService.java index ba5ce68287e7a..1b46eedacc457 100644 --- a/core/src/main/java/org/elasticsearch/index/seqno/SequenceNumbersService.java +++ b/core/src/main/java/org/elasticsearch/index/seqno/SequenceNumbersService.java @@ -19,6 +19,7 @@ package org.elasticsearch.index.seqno; +import com.carrotsearch.hppc.ObjectLongMap; import org.elasticsearch.cluster.routing.IndexShardRoutingTable; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.shard.AbstractIndexShardComponent; @@ -32,45 +33,31 @@ */ public class SequenceNumbersService extends AbstractIndexShardComponent { - /** - * Represents an unassigned sequence number (e.g., can be used on primary operations before they are executed). - */ - public static final long UNASSIGNED_SEQ_NO = -2L; - - /** - * Represents no operations have been performed on the shard. 
- */ - public static final long NO_OPS_PERFORMED = -1L; - - /** - * Represents a local checkpoint coming from a pre-6.0 node - */ - public static final long PRE_60_NODE_LOCAL_CHECKPOINT = -3L; - private final LocalCheckpointTracker localCheckpointTracker; private final GlobalCheckpointTracker globalCheckpointTracker; /** * Initialize the sequence number service. The {@code maxSeqNo} should be set to the last sequence number assigned by this shard, or - * {@link SequenceNumbersService#NO_OPS_PERFORMED}, {@code localCheckpoint} should be set to the last known local checkpoint for this - * shard, or {@link SequenceNumbersService#NO_OPS_PERFORMED}, and {@code globalCheckpoint} should be set to the last known global - * checkpoint for this shard, or {@link SequenceNumbersService#UNASSIGNED_SEQ_NO}. + * {@link SequenceNumbers#NO_OPS_PERFORMED}, {@code localCheckpoint} should be set to the last known local checkpoint for this + * shard, or {@link SequenceNumbers#NO_OPS_PERFORMED}, and {@code globalCheckpoint} should be set to the last known global + * checkpoint for this shard, or {@link SequenceNumbers#UNASSIGNED_SEQ_NO}. * * @param shardId the shard this service is providing tracking local checkpoints for * @param indexSettings the index settings - * @param maxSeqNo the last sequence number assigned by this shard, or {@link SequenceNumbersService#NO_OPS_PERFORMED} - * @param localCheckpoint the last known local checkpoint for this shard, or {@link SequenceNumbersService#NO_OPS_PERFORMED} - * @param globalCheckpoint the last known global checkpoint for this shard, or {@link SequenceNumbersService#UNASSIGNED_SEQ_NO} + * @param maxSeqNo the last sequence number assigned by this shard, or {@link SequenceNumbers#NO_OPS_PERFORMED} + * @param localCheckpoint the last known local checkpoint for this shard, or {@link SequenceNumbers#NO_OPS_PERFORMED} + * @param globalCheckpoint the last known global checkpoint for this shard, or {@link SequenceNumbers#UNASSIGNED_SEQ_NO} */ public SequenceNumbersService( final ShardId shardId, + final String allocationId, final IndexSettings indexSettings, final long maxSeqNo, final long localCheckpoint, final long globalCheckpoint) { super(shardId, indexSettings); - localCheckpointTracker = new LocalCheckpointTracker(indexSettings, maxSeqNo, localCheckpoint); - globalCheckpointTracker = new GlobalCheckpointTracker(shardId, indexSettings, globalCheckpoint); + localCheckpointTracker = new LocalCheckpointTracker(maxSeqNo, localCheckpoint); + globalCheckpointTracker = new GlobalCheckpointTracker(shardId, allocationId, indexSettings, globalCheckpoint); } /** @@ -79,7 +66,7 @@ public SequenceNumbersService( * * @return the next assigned sequence number */ - public long generateSeqNo() { + public final long generateSeqNo() { return localCheckpointTracker.generateSeqNo(); } @@ -141,6 +128,25 @@ public void updateLocalCheckpointForShard(final String allocationId, final long globalCheckpointTracker.updateLocalCheckpoint(allocationId, checkpoint); } + /** + * Update the local knowledge of the global checkpoint for the specified allocation ID. + * + * @param allocationId the allocation ID to update the global checkpoint for + * @param globalCheckpoint the global checkpoint + */ + public void updateGlobalCheckpointForShard(final String allocationId, final long globalCheckpoint) { + globalCheckpointTracker.updateGlobalCheckpointForShard(allocationId, globalCheckpoint); + } + + /** + * Get the local knowledge of the global checkpoints for all in-sync allocation IDs. 
+ * + * @return a map from allocation ID to the local knowledge of the global checkpoint for that allocation ID + */ + public ObjectLongMap getInSyncGlobalCheckpoints() { + return globalCheckpointTracker.getInSyncGlobalCheckpoints(); + } + /** * Called when the recovery process for a shard is ready to open the engine on the target shard. * See {@link GlobalCheckpointTracker#initiateTracking(String)} for details. @@ -210,8 +216,8 @@ public synchronized long getTrackedLocalCheckpointForShard(final String allocati * Activates the global checkpoint tracker in primary mode (see {@link GlobalCheckpointTracker#primaryMode}. * Called on primary activation or promotion. */ - public void activatePrimaryMode(final String allocationId, final long localCheckpoint) { - globalCheckpointTracker.activatePrimaryMode(allocationId, localCheckpoint); + public void activatePrimaryMode(final long localCheckpoint) { + globalCheckpointTracker.activatePrimaryMode(localCheckpoint); } /** diff --git a/core/src/main/java/org/elasticsearch/index/shard/DocsStats.java b/core/src/main/java/org/elasticsearch/index/shard/DocsStats.java index 6d4f5ee7815ec..ffad0f085f7d2 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/DocsStats.java +++ b/core/src/main/java/org/elasticsearch/index/shard/DocsStats.java @@ -19,11 +19,13 @@ package org.elasticsearch.index.shard; +import org.elasticsearch.Version; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.index.store.StoreStats; import java.io.IOException; @@ -31,22 +33,25 @@ public class DocsStats implements Streamable, ToXContentFragment { long count = 0; long deleted = 0; + long totalSizeInBytes = 0; public DocsStats() { } - public DocsStats(long count, long deleted) { + public DocsStats(long count, long deleted, long totalSizeInBytes) { this.count = count; this.deleted = deleted; + this.totalSizeInBytes = totalSizeInBytes; } - public void add(DocsStats docsStats) { - if (docsStats == null) { + public void add(DocsStats other) { + if (other == null) { return; } - count += docsStats.count; - deleted += docsStats.deleted; + this.totalSizeInBytes += other.totalSizeInBytes; + this.count += other.count; + this.deleted += other.deleted; } public long getCount() { @@ -57,16 +62,40 @@ public long getDeleted() { return this.deleted; } + /** + * Returns the total size in bytes of all documents in this stats. + * This value may be more reliable than {@link StoreStats#getSizeInBytes()} in estimating the index size. + */ + public long getTotalSizeInBytes() { + return totalSizeInBytes; + } + + /** + * Returns the average size in bytes of all documents in this stats. + */ + public long getAverageSizeInBytes() { + long totalDocs = count + deleted; + return totalDocs == 0 ? 
0 : totalSizeInBytes / totalDocs; + } + @Override public void readFrom(StreamInput in) throws IOException { count = in.readVLong(); deleted = in.readVLong(); + if (in.getVersion().onOrAfter(Version.V_6_1_0)) { + totalSizeInBytes = in.readVLong(); + } else { + totalSizeInBytes = -1; + } } @Override public void writeTo(StreamOutput out) throws IOException { out.writeVLong(count); out.writeVLong(deleted); + if (out.getVersion().onOrAfter(Version.V_6_1_0)) { + out.writeVLong(totalSizeInBytes); + } } @Override diff --git a/core/src/main/java/org/elasticsearch/index/shard/ElasticsearchQueryCachingPolicy.java b/core/src/main/java/org/elasticsearch/index/shard/ElasticsearchQueryCachingPolicy.java deleted file mode 100644 index 3ea3955a1f416..0000000000000 --- a/core/src/main/java/org/elasticsearch/index/shard/ElasticsearchQueryCachingPolicy.java +++ /dev/null @@ -1,56 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.index.shard; - -import org.apache.lucene.search.Query; -import org.apache.lucene.search.QueryCachingPolicy; -import org.apache.lucene.search.TermQuery; - -import java.io.IOException; - -/** - * A {@link QueryCachingPolicy} that does not cache {@link TermQuery}s. - */ -final class ElasticsearchQueryCachingPolicy implements QueryCachingPolicy { - - private final QueryCachingPolicy in; - - ElasticsearchQueryCachingPolicy(QueryCachingPolicy in) { - this.in = in; - } - - @Override - public void onUse(Query query) { - if (query.getClass() != TermQuery.class) { - // Do not waste space in the history for term queries. 
The assumption - // is that these queries are very fast so not worth caching - in.onUse(query); - } - } - - @Override - public boolean shouldCache(Query query) throws IOException { - if (query.getClass() == TermQuery.class) { - return false; - } - return in.shouldCache(query); - } - -} diff --git a/core/src/main/java/org/elasticsearch/index/shard/IndexShard.java b/core/src/main/java/org/elasticsearch/index/shard/IndexShard.java index af2c45e42cd13..304764656b73f 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/IndexShard.java +++ b/core/src/main/java/org/elasticsearch/index/shard/IndexShard.java @@ -19,6 +19,7 @@ package org.elasticsearch.index.shard; +import com.carrotsearch.hppc.ObjectLongMap; import org.apache.logging.log4j.Logger; import org.apache.lucene.index.CheckIndex; import org.apache.lucene.index.IndexCommit; @@ -104,6 +105,7 @@ import org.elasticsearch.index.search.stats.ShardSearchStats; import org.elasticsearch.index.seqno.GlobalCheckpointTracker; import org.elasticsearch.index.seqno.SeqNoStats; +import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.index.seqno.SequenceNumbersService; import org.elasticsearch.index.shard.PrimaryReplicaSyncer.ResyncTask; import org.elasticsearch.index.similarity.SimilarityService; @@ -137,6 +139,7 @@ import java.nio.charset.StandardCharsets; import java.util.ArrayList; import java.util.Arrays; +import java.util.Collections; import java.util.EnumSet; import java.util.List; import java.util.Locale; @@ -154,6 +157,7 @@ import java.util.function.Consumer; import java.util.function.Supplier; import java.util.stream.Collectors; +import java.util.stream.StreamSupport; import static org.elasticsearch.index.mapper.SourceToParse.source; @@ -180,12 +184,6 @@ public class IndexShard extends AbstractIndexShardComponent implements IndicesCl private final QueryCachingPolicy cachingPolicy; private final Supplier indexSortSupplier; - /** - * How many bytes we are currently moving to disk, via either IndexWriter.flush or refresh. IndexingMemoryController polls this - * across all shards to decide if throttling is necessary because moving bytes to disk is falling behind vs incoming documents - * being indexed/deleted. 
- */ - private final AtomicLong writingBytes = new AtomicLong(); private final SearchOperationListener searchOperationListener; protected volatile ShardRouting shardRouting; @@ -195,6 +193,11 @@ public class IndexShard extends AbstractIndexShardComponent implements IndicesCl protected final EngineFactory engineFactory; private final IndexingOperationListener indexingOperationListeners; + private final Runnable globalCheckpointSyncer; + + Runnable getGlobalCheckpointSyncer() { + return globalCheckpointSyncer; + } @Nullable private RecoveryState recoveryState; @@ -231,11 +234,24 @@ public class IndexShard extends AbstractIndexShardComponent implements IndicesCl */ private final RefreshListeners refreshListeners; - public IndexShard(ShardRouting shardRouting, IndexSettings indexSettings, ShardPath path, Store store, - Supplier indexSortSupplier, IndexCache indexCache, MapperService mapperService, SimilarityService similarityService, - @Nullable EngineFactory engineFactory, - IndexEventListener indexEventListener, IndexSearcherWrapper indexSearcherWrapper, ThreadPool threadPool, BigArrays bigArrays, - Engine.Warmer warmer, List searchOperationListener, List listeners) throws IOException { + public IndexShard( + ShardRouting shardRouting, + IndexSettings indexSettings, + ShardPath path, + Store store, + Supplier indexSortSupplier, + IndexCache indexCache, + MapperService mapperService, + SimilarityService similarityService, + @Nullable EngineFactory engineFactory, + IndexEventListener indexEventListener, + IndexSearcherWrapper indexSearcherWrapper, + ThreadPool threadPool, + BigArrays bigArrays, + Engine.Warmer warmer, + List searchOperationListener, + List listeners, + Runnable globalCheckpointSyncer) throws IOException { super(shardRouting.shardId(), indexSettings); assert shardRouting.initializing(); this.shardRouting = shardRouting; @@ -255,6 +271,7 @@ public IndexShard(ShardRouting shardRouting, IndexSettings indexSettings, ShardP final List listenersList = new ArrayList<>(listeners); listenersList.add(internalIndexingStats); this.indexingOperationListeners = new IndexingOperationListener.CompositeListener(listenersList, logger); + this.globalCheckpointSyncer = globalCheckpointSyncer; final List searchListenersList = new ArrayList<>(searchOperationListener); searchListenersList.add(searchStats); this.searchOperationListener = new SearchOperationListener.CompositeListener(searchListenersList, logger); @@ -275,11 +292,7 @@ public IndexShard(ShardRouting shardRouting, IndexSettings indexSettings, ShardP if (IndexModule.INDEX_QUERY_CACHE_EVERYTHING_SETTING.get(settings)) { cachingPolicy = QueryCachingPolicy.ALWAYS_CACHE; } else { - QueryCachingPolicy cachingPolicy = new UsageTrackingQueryCachingPolicy(); - if (IndexModule.INDEX_QUERY_CACHE_TERM_QUERIES_SETTING.get(settings) == false) { - cachingPolicy = new ElasticsearchQueryCachingPolicy(cachingPolicy); - } - this.cachingPolicy = cachingPolicy; + cachingPolicy = new UsageTrackingQueryCachingPolicy(); } indexShardOperationPermits = new IndexShardOperationPermits(shardId, logger, threadPool); searcherWrapper = indexSearcherWrapper; @@ -302,12 +315,6 @@ public Store store() { public Sort getIndexSort() { return indexSortSupplier.get(); } - /** - * returns true if this shard supports indexing (i.e., write) operations. 
- */ - public boolean canIndex() { - return true; - } public ShardGetService getService() { return this.getService; @@ -400,7 +407,7 @@ public void updateShardState(final ShardRouting newRouting, final DiscoveryNode recoverySourceNode = recoveryState.getSourceNode(); if (currentRouting.isRelocationTarget() == false || recoverySourceNode.getVersion().before(Version.V_6_0_0_alpha1)) { // there was no primary context hand-off in < 6.0.0, need to manually activate the shard - getEngine().seqNoService().activatePrimaryMode(currentRouting.allocationId().getId(), getEngine().seqNoService().getLocalCheckpoint()); + getEngine().seqNoService().activatePrimaryMode(getEngine().seqNoService().getLocalCheckpoint()); } } @@ -415,10 +422,10 @@ public void updateShardState(final ShardRouting newRouting, assert newRouting.active() == false || state == IndexShardState.STARTED || state == IndexShardState.RELOCATED || state == IndexShardState.CLOSED : "routing is active, but local shard state isn't. routing: " + newRouting + ", local state: " + state; - this.shardRouting = newRouting; persistMetadata(path, indexSettings, newRouting, currentRouting, logger); + final CountDownLatch shardStateUpdated = new CountDownLatch(1); - if (shardRouting.primary()) { + if (newRouting.primary()) { if (newPrimaryTerm != primaryTerm) { assert currentRouting.primary() == false : "term is only increased as part of primary promotion"; /* Note that due to cluster state batching an initializing primary shard term can failed and re-assigned @@ -434,9 +441,9 @@ public void updateShardState(final ShardRouting newRouting, * We could fail the shard in that case, but this will cause it to be removed from the insync allocations list * potentially preventing re-allocation. */ - assert shardRouting.initializing() == false : + assert newRouting.initializing() == false : "a started primary shard should never update its term; " - + "shard " + shardRouting + ", " + + "shard " + newRouting + ", " + "current term [" + primaryTerm + "], " + "new term [" + newPrimaryTerm + "]"; assert newPrimaryTerm > primaryTerm : @@ -446,7 +453,6 @@ public void updateShardState(final ShardRouting newRouting, * increment the primary term. The latch is needed to ensure that we do not unblock operations before the primary term is * incremented. */ - final CountDownLatch latch = new CountDownLatch(1); // to prevent primary relocation handoff while resync is not completed boolean resyncStarted = primaryReplicaResyncInProgress.compareAndSet(false, true); if (resyncStarted == false) { @@ -456,7 +462,7 @@ public void updateShardState(final ShardRouting newRouting, 30, TimeUnit.MINUTES, () -> { - latch.await(); + shardStateUpdated.await(); try { /* * If this shard was serving as a replica shard when another shard was promoted to primary then the state of @@ -466,8 +472,12 @@ public void updateShardState(final ShardRouting newRouting, * subsequently fails before the primary/replica re-sync completes successfully and we are now being * promoted, the local checkpoint tracker here could be left in a state where it would re-issue sequence * numbers. To ensure that this is not the case, we restore the state of the local checkpoint tracker by - * replaying the translog and marking any operations there are completed. + * replaying the translog and marking any operations there are completed. 
Rolling the translog generation is + * not strictly needed here (as we will never have collisions between sequence numbers in a translog + * generation in a new primary as it takes the last known sequence number as a starting point), but it + * simplifies reasoning about the relationship between primary terms and translog generations. */ + getEngine().rollTranslogGeneration(); getEngine().restoreLocalCheckpointFromTranslog(); getEngine().fillSeqNoGaps(newPrimaryTerm); getEngine().seqNoService().updateLocalCheckpointForShard(currentRouting.allocationId().getId(), @@ -497,11 +507,13 @@ public void onFailure(Exception e) { } }, e -> failShard("exception during primary term transition", e)); - getEngine().seqNoService().activatePrimaryMode(currentRouting.allocationId().getId(), getEngine().seqNoService().getLocalCheckpoint()); + getEngine().seqNoService().activatePrimaryMode(getEngine().seqNoService().getLocalCheckpoint()); primaryTerm = newPrimaryTerm; - latch.countDown(); } } + // set this last, once we finished updating all internal state. + this.shardRouting = newRouting; + shardStateUpdated.countDown(); } if (currentRouting != null && currentRouting.active() == false && newRouting.active()) { indexEventListener.afterIndexShardStarted(this); @@ -608,6 +620,7 @@ private void verifyRelocatingState() { } } + @Override public IndexShardState state() { return state; } @@ -631,7 +644,7 @@ private IndexShardState changeState(IndexShardState newState, String reason) { public Engine.IndexResult applyIndexOperationOnPrimary(long version, VersionType versionType, SourceToParse sourceToParse, long autoGeneratedTimestamp, boolean isRetry, Consumer onMappingUpdate) throws IOException { - return applyIndexOperation(SequenceNumbersService.UNASSIGNED_SEQ_NO, primaryTerm, version, versionType, autoGeneratedTimestamp, + return applyIndexOperation(SequenceNumbers.UNASSIGNED_SEQ_NO, primaryTerm, version, versionType, autoGeneratedTimestamp, isRetry, Engine.Operation.Origin.PRIMARY, sourceToParse, onMappingUpdate); } @@ -728,7 +741,7 @@ private Engine.NoOpResult noOp(Engine engine, Engine.NoOp noOp) { public Engine.DeleteResult applyDeleteOperationOnPrimary(long version, String type, String id, VersionType versionType, Consumer onMappingUpdate) throws IOException { - return applyDeleteOperation(SequenceNumbersService.UNASSIGNED_SEQ_NO, primaryTerm, version, type, id, versionType, + return applyDeleteOperation(SequenceNumbers.UNASSIGNED_SEQ_NO, primaryTerm, version, type, id, versionType, Engine.Operation.Origin.PRIMARY, onMappingUpdate); } @@ -816,34 +829,21 @@ public Engine.GetResult get(Engine.Get get) { */ public void refresh(String source) { verifyNotClosed(); - - if (canIndex()) { - long bytes = getEngine().getIndexBufferRAMBytesUsed(); - writingBytes.addAndGet(bytes); - try { - if (logger.isTraceEnabled()) { - logger.trace("refresh with source [{}] indexBufferRAMBytesUsed [{}]", source, new ByteSizeValue(bytes)); - } - getEngine().refresh(source); - } finally { - if (logger.isTraceEnabled()) { - logger.trace("remove [{}] writing bytes for shard [{}]", new ByteSizeValue(bytes), shardId()); - } - writingBytes.addAndGet(-bytes); - } - } else { - if (logger.isTraceEnabled()) { - logger.trace("refresh with source [{}]", source); - } - getEngine().refresh(source); + if (logger.isTraceEnabled()) { + logger.trace("refresh with source [{}]", source); } + getEngine().refresh(source); } /** * Returns how many bytes we are currently moving from heap to disk */ public long getWritingBytes() { - return 
writingBytes.get(); + Engine engine = getEngineOrNull(); + if (engine == null) { + return 0; + } + return engine.getWritingBytes(); } public RefreshStats refreshStats() { @@ -856,9 +856,18 @@ public FlushStats flushStats() { } public DocsStats docStats() { - try (Engine.Searcher searcher = acquireSearcher("doc_stats")) { - return new DocsStats(searcher.reader().numDocs(), searcher.reader().numDeletedDocs()); + long numDocs = 0; + long numDeletedDocs = 0; + long sizeInBytes = 0; + List segments = segments(false); + for (Segment segment : segments) { + if (segment.search) { + numDocs += segment.getNumDocs(); + numDeletedDocs += segment.getDeletedDocs(); + sizeInBytes += segment.getSizeInBytes(); + } } + return new DocsStats(numDocs, numDeletedDocs, sizeInBytes); } /** @@ -1008,7 +1017,8 @@ public void forceMerge(ForceMergeRequest forceMerge) throws IOException { if (logger.isTraceEnabled()) { logger.trace("force merge with {}", forceMerge); } - getEngine().forceMerge(forceMerge.flush(), forceMerge.maxNumSegments(), + Engine engine = getEngine(); + engine.forceMerge(forceMerge.flush(), forceMerge.maxNumSegments(), forceMerge.onlyExpungeDeletes(), false, false); } @@ -1022,7 +1032,8 @@ public org.apache.lucene.util.Version upgrade(UpgradeRequest upgrade) throws IOE } org.apache.lucene.util.Version previousVersion = minimumCompatibleVersion(); // we just want to upgrade the segments, not actually forge merge to a single segment - getEngine().forceMerge(true, // we need to flush at the end to make sure the upgrade is durable + final Engine engine = getEngine(); + engine.forceMerge(true, // we need to flush at the end to make sure the upgrade is durable Integer.MAX_VALUE, // we just want to upgrade the segments, not actually optimize to a single segment false, true, upgrade.upgradeOnlyAncientSegments()); org.apache.lucene.util.Version version = minimumCompatibleVersion(); @@ -1103,11 +1114,14 @@ public void failShard(String reason, @Nullable Exception e) { // fail the engine. This will cause this shard to also be removed from the node's index service. getEngine().failEngine(reason, e); } - public Engine.Searcher acquireSearcher(String source) { + return acquireSearcher(source, Engine.SearcherScope.EXTERNAL); + } + + private Engine.Searcher acquireSearcher(String source, Engine.SearcherScope scope) { readAllowed(); final Engine engine = getEngine(); - final Engine.Searcher searcher = engine.acquireSearcher(source); + final Engine.Searcher searcher = engine.acquireSearcher(source, scope); boolean success = false; try { final Engine.Searcher wrappedSearcher = searcherWrapper == null ? searcher : searcherWrapper.wrap(searcher); @@ -1345,6 +1359,7 @@ public RecoveryStats recoveryStats() { * Returns the current {@link RecoveryState} if this shard is recovering or has been recovering. * Returns null if the recovery has not yet started or shard was not recovered (created via an API). */ + @Override public RecoveryState recoveryState() { return this.recoveryState; } @@ -1582,6 +1597,10 @@ public Translog getTranslog() { return getEngine().getTranslog(); } + public String getHistoryUUID() { + return getEngine().getHistoryUUID(); + } + public IndexEventListener getIndexEventListener() { return indexEventListener; } @@ -1629,24 +1648,9 @@ private void handleRefreshException(Exception e) { * Called when our shard is using too much heap and should move buffered indexed/deleted documents to disk. 
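Hedged sketch only: the tiny controller below is not IndexingMemoryController, it merely shows where IndexShard#writeIndexingBuffer() fits once the engine, rather than the shard, tracks the bytes being moved to disk. The per-shard heap estimates and the budget are assumed inputs, not the real controller's bookkeeping.

import java.util.Map;
import org.elasticsearch.index.shard.IndexShard;

final class IndexingBufferSketch {
    // hypothetical orchestration: when the summed estimates exceed the budget, ask shards
    // to move their buffered index/delete operations to disk via writeIndexingBuffer()
    static void maybeWriteBuffers(Map<IndexShard, Long> heapUsedPerShard, long indexingBufferLimitBytes) {
        final long totalUsed = heapUsedPerShard.values().stream().mapToLong(Long::longValue).sum();
        if (totalUsed <= indexingBufferLimitBytes) {
            return; // within budget, nothing to move to disk
        }
        heapUsedPerShard.forEach((shard, used) -> shard.writeIndexingBuffer());
    }
}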
*/ public void writeIndexingBuffer() { - if (canIndex() == false) { - throw new UnsupportedOperationException(); - } try { Engine engine = getEngine(); - long bytes = engine.getIndexBufferRAMBytesUsed(); - - // NOTE: this can be an overestimate by up to 20%, if engine uses IW.flush not refresh, because version map - // memory is low enough, but this is fine because after the writes finish, IMC will poll again and see that - // there's still up to the 20% being used and continue writing if necessary: - logger.debug("add [{}] writing bytes for shard [{}]", new ByteSizeValue(bytes), shardId()); - writingBytes.addAndGet(bytes); - try { - engine.writeIndexingBuffer(); - } finally { - writingBytes.addAndGet(-bytes); - logger.debug("remove [{}] writing bytes for shard [{}]", new ByteSizeValue(bytes), shardId()); - } + engine.writeIndexingBuffer(); } catch (Exception e) { handleRefreshException(e); } @@ -1666,6 +1670,18 @@ public void updateLocalCheckpointForShard(final String allocationId, final long getEngine().seqNoService().updateLocalCheckpointForShard(allocationId, checkpoint); } + /** + * Update the local knowledge of the global checkpoint for the specified allocation ID. + * + * @param allocationId the allocation ID to update the global checkpoint for + * @param globalCheckpoint the global checkpoint + */ + public void updateGlobalCheckpointForShard(final String allocationId, final long globalCheckpoint) { + verifyPrimary(); + verifyNotClosed(); + getEngine().seqNoService().updateGlobalCheckpointForShard(allocationId, globalCheckpoint); + } + /** * Waits for all operations up to the provided sequence number to complete. * @@ -1703,11 +1719,6 @@ public void initiateTracking(final String allocationId) { public void markAllocationIdAsInSync(final String allocationId, final long localCheckpoint) throws InterruptedException { verifyPrimary(); getEngine().seqNoService().markAllocationIdAsInSync(allocationId, localCheckpoint); - /* - * We could have blocked so long waiting for the replica to catch up that we fell idle and there will not be a background sync to - * the replica; mark our self as active to force a future background sync. - */ - active.compareAndSet(false, true); } /** @@ -1728,6 +1739,46 @@ public long getGlobalCheckpoint() { return getEngine().seqNoService().getGlobalCheckpoint(); } + /** + * Get the local knowledge of the global checkpoints for all in-sync allocation IDs. + * + * @return a map from allocation ID to the local knowledge of the global checkpoint for that allocation ID + */ + public ObjectLongMap getInSyncGlobalCheckpoints() { + verifyPrimary(); + verifyNotClosed(); + return getEngine().seqNoService().getInSyncGlobalCheckpoints(); + } + + /** + * Syncs the global checkpoint to the replicas if the global checkpoint on at least one replica is behind the global checkpoint on the + * primary. 
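The decision described above reduces to comparing the primary's own global checkpoint with its local knowledge of each in-sync copy's global checkpoint. A minimal sketch of that comparison, assuming a plain java.util.Map in place of the ObjectLongMap returned by getInSyncGlobalCheckpoints(); the names are illustrative, not the shard's internals.

import java.util.Map;

final class GlobalCheckpointLagSketch {
    // returns true when at least one in-sync copy is known to be behind the primary,
    // i.e. a background global checkpoint sync would actually advance something
    static boolean syncNeeded(Map<String, Long> knownGlobalCheckpoints, String primaryAllocationId) {
        final long primaryGlobalCheckpoint = knownGlobalCheckpoints.get(primaryAllocationId);
        return knownGlobalCheckpoints.values().stream().anyMatch(gcp -> gcp < primaryGlobalCheckpoint);
    }
}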
+ */ + public void maybeSyncGlobalCheckpoint(final String reason) { + verifyPrimary(); + verifyNotClosed(); + if (state == IndexShardState.RELOCATED) { + return; + } + // only sync if there are not operations in flight + final SeqNoStats stats = getEngine().seqNoService().stats(); + if (stats.getMaxSeqNo() == stats.getGlobalCheckpoint()) { + final ObjectLongMap globalCheckpoints = getInSyncGlobalCheckpoints(); + final String allocationId = routingEntry().allocationId().getId(); + assert globalCheckpoints.containsKey(allocationId); + final long globalCheckpoint = globalCheckpoints.get(allocationId); + final boolean syncNeeded = + StreamSupport + .stream(globalCheckpoints.values().spliterator(), false) + .anyMatch(v -> v.value < globalCheckpoint); + // only sync if there is a shard lagging the primary + if (syncNeeded) { + logger.trace("syncing global checkpoint for [{}]", reason); + globalCheckpointSyncer.run(); + } + } + } + /** * Returns the current replication group for the shard. * @@ -1756,7 +1807,7 @@ public void updateGlobalCheckpointOnReplica(final long globalCheckpoint, final S * case that the global checkpoint update from the primary is ahead of the local checkpoint on this shard. In this case, we * ignore the global checkpoint update. This can happen if we are in the translog stage of recovery. Prior to this, the engine * is not opened and this shard will not receive global checkpoint updates, and after this the shard will be contributing to - * calculations of the the global checkpoint. However, we can not assert that we are in the translog stage of recovery here as + * calculations of the global checkpoint. However, we can not assert that we are in the translog stage of recovery here as * while the global checkpoint update may have emanated from the primary when we were in that state, we could subsequently move * to recovery finalization, or even finished recovery before the update arrives here. */ @@ -1776,9 +1827,9 @@ assert state() != IndexShardState.POST_RECOVERY && state() != IndexShardState.ST public void activateWithPrimaryContext(final GlobalCheckpointTracker.PrimaryContext primaryContext) { verifyPrimary(); assert shardRouting.isRelocationTarget() : "only relocation target can update allocation IDs from primary context: " + shardRouting; - assert primaryContext.getLocalCheckpoints().containsKey(routingEntry().allocationId().getId()) && + assert primaryContext.getCheckpointStates().containsKey(routingEntry().allocationId().getId()) && getEngine().seqNoService().getLocalCheckpoint() == - primaryContext.getLocalCheckpoints().get(routingEntry().allocationId().getId()).getLocalCheckpoint(); + primaryContext.getCheckpointStates().get(routingEntry().allocationId().getId()).getLocalCheckpoint(); getEngine().seqNoService().activateWithPrimaryContext(primaryContext); } @@ -1950,25 +2001,32 @@ public void startRecovery(RecoveryState recoveryState, PeerRecoveryTargetService break; case LOCAL_SHARDS: final IndexMetaData indexMetaData = indexSettings().getIndexMetaData(); - final Index mergeSourceIndex = indexMetaData.getMergeSourceIndex(); + final Index resizeSourceIndex = indexMetaData.getResizeSourceIndex(); final List startedShards = new ArrayList<>(); - final IndexService sourceIndexService = indicesService.indexService(mergeSourceIndex); - final int numShards = sourceIndexService != null ? 
sourceIndexService.getIndexSettings().getNumberOfShards() : -1; + final IndexService sourceIndexService = indicesService.indexService(resizeSourceIndex); + final Set requiredShards; + final int numShards; if (sourceIndexService != null) { + requiredShards = IndexMetaData.selectRecoverFromShards(shardId().id(), + sourceIndexService.getMetaData(), indexMetaData.getNumberOfShards()); for (IndexShard shard : sourceIndexService) { - if (shard.state() == IndexShardState.STARTED) { + if (shard.state() == IndexShardState.STARTED && requiredShards.contains(shard.shardId())) { startedShards.add(shard); } } + numShards = requiredShards.size(); + } else { + numShards = -1; + requiredShards = Collections.emptySet(); } + if (numShards == startedShards.size()) { + assert requiredShards.isEmpty() == false; markAsRecovering("from local shards", recoveryState); // mark the shard as recovering on the cluster state thread threadPool.generic().execute(() -> { try { - final Set shards = IndexMetaData.selectShrinkShards(shardId().id(), sourceIndexService.getMetaData(), - +indexMetaData.getNumberOfShards()); if (recoverFromLocalShards(mappingUpdateConsumer, startedShards.stream() - .filter((s) -> shards.contains(s.shardId())).collect(Collectors.toList()))) { + .filter((s) -> requiredShards.contains(s.shardId())).collect(Collectors.toList()))) { recoveryListener.onRecoveryDone(recoveryState); } } catch (Exception e) { @@ -1979,9 +2037,9 @@ public void startRecovery(RecoveryState recoveryState, PeerRecoveryTargetService } else { final RuntimeException e; if (numShards == -1) { - e = new IndexNotFoundException(mergeSourceIndex); + e = new IndexNotFoundException(resizeSourceIndex); } else { - e = new IllegalStateException("not all shards from index " + mergeSourceIndex + e = new IllegalStateException("not all required shards of index " + resizeSourceIndex + " are started yet, expected " + numShards + " found " + startedShards.size() + " can't recover shard " + shardId()); } @@ -2073,10 +2131,24 @@ private DocumentMapperForType docMapper(String type) { private EngineConfig newEngineConfig(EngineConfig.OpenMode openMode) { Sort indexSort = indexSortSupplier.get(); - return new EngineConfig(openMode, shardId, + final boolean forceNewHistoryUUID; + switch (shardRouting.recoverySource().getType()) { + case EXISTING_STORE: + case PEER: + forceNewHistoryUUID = false; + break; + case EMPTY_STORE: + case SNAPSHOT: + case LOCAL_SHARDS: + forceNewHistoryUUID = true; + break; + default: + throw new AssertionError("unknown recovery type: [" + shardRouting.recoverySource().getType() + "]"); + } + return new EngineConfig(openMode, shardId, shardRouting.allocationId().getId(), threadPool, indexSettings, warmer, store, indexSettings.getMergePolicy(), mapperService.indexAnalyzer(), similarityService.similarity(mapperService), codecService, shardEventListener, - indexCache.query(), cachingPolicy, translogConfig, + indexCache.query(), cachingPolicy, forceNewHistoryUUID, translogConfig, IndexingMemoryController.SHARD_INACTIVE_TIME_SETTING.get(indexSettings.getSettings()), Arrays.asList(refreshListeners, new RefreshMetricUpdater(refreshMetric)), indexSort, this::runTranslogRecovery); @@ -2134,8 +2206,8 @@ public void acquireReplicaOperationPermit(final long operationPrimaryTerm, final updateGlobalCheckpointOnReplica(globalCheckpoint, "primary term transition"); final long currentGlobalCheckpoint = getGlobalCheckpoint(); final long localCheckpoint; - if (currentGlobalCheckpoint == SequenceNumbersService.UNASSIGNED_SEQ_NO) { - 
localCheckpoint = SequenceNumbersService.NO_OPS_PERFORMED; + if (currentGlobalCheckpoint == SequenceNumbers.UNASSIGNED_SEQ_NO) { + localCheckpoint = SequenceNumbers.NO_OPS_PERFORMED; } else { localCheckpoint = currentGlobalCheckpoint; } diff --git a/core/src/main/java/org/elasticsearch/index/shard/IndexShardOperationPermits.java b/core/src/main/java/org/elasticsearch/index/shard/IndexShardOperationPermits.java index ac3459b78e9a3..3f6d443aa8009 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/IndexShardOperationPermits.java +++ b/core/src/main/java/org/elasticsearch/index/shard/IndexShardOperationPermits.java @@ -183,7 +183,7 @@ private void releaseDelayedOperations() { * - blockOperations can be called on a recovery thread which can be expected to be interrupted when recovery is cancelled; * interruptions are bad here as permit acquisition will throw an interrupted exception which will be swallowed by * the threaded action listener if the queue of the thread pool on which it submits is full - * - if a permit is acquired and the queue of the thread pool which the the threaded action listener uses is full, the + * - if a permit is acquired and the queue of the thread pool which the threaded action listener uses is full, the * onFailure handler is executed on the calling thread; this should not be the recovery thread as it would delay the * recovery */ diff --git a/core/src/main/java/org/elasticsearch/index/shard/IndexingStats.java b/core/src/main/java/org/elasticsearch/index/shard/IndexingStats.java index 8f900fc9d0b11..26eb8b469f52b 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/IndexingStats.java +++ b/core/src/main/java/org/elasticsearch/index/shard/IndexingStats.java @@ -34,7 +34,7 @@ public class IndexingStats implements Streamable, ToXContentFragment { - public static class Stats implements Streamable, ToXContent { + public static class Stats implements Streamable, ToXContentFragment { private long indexCount; private long indexTimeInMillis; diff --git a/core/src/main/java/org/elasticsearch/index/shard/PrimaryReplicaSyncer.java b/core/src/main/java/org/elasticsearch/index/shard/PrimaryReplicaSyncer.java index 3716dcaff0fc7..08d64cb82bc72 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/PrimaryReplicaSyncer.java +++ b/core/src/main/java/org/elasticsearch/index/shard/PrimaryReplicaSyncer.java @@ -35,7 +35,7 @@ import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.util.concurrent.AbstractRunnable; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.index.translog.Translog; import org.elasticsearch.tasks.Task; import org.elasticsearch.tasks.TaskId; @@ -231,7 +231,7 @@ protected void doRun() throws Exception { while ((operation = snapshot.next()) != null) { final long seqNo = operation.seqNo(); if (startingSeqNo >= 0 && - (seqNo == SequenceNumbersService.UNASSIGNED_SEQ_NO || seqNo < startingSeqNo)) { + (seqNo == SequenceNumbers.UNASSIGNED_SEQ_NO || seqNo < startingSeqNo)) { totalSkippedOps.incrementAndGet(); continue; } diff --git a/core/src/main/java/org/elasticsearch/index/shard/ShardPath.java b/core/src/main/java/org/elasticsearch/index/shard/ShardPath.java index eb2d530fd99e0..4a18bcdd5c79a 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/ShardPath.java +++ b/core/src/main/java/org/elasticsearch/index/shard/ShardPath.java @@ -31,7 +31,11 @@ import 
java.nio.file.FileStore; import java.nio.file.Files; import java.nio.file.Path; +import java.util.Arrays; +import java.util.HashMap; +import java.util.List; import java.util.Map; +import java.util.stream.Collectors; public final class ShardPath { public static final String INDEX_FOLDER_NAME = "index"; @@ -189,23 +193,49 @@ public static ShardPath selectNewPathForShard(NodeEnvironment env, ShardId shard // TODO - do we need something more extensible? Yet, this does the job for now... final NodeEnvironment.NodePath[] paths = env.nodePaths(); - NodeEnvironment.NodePath bestPath = null; - BigInteger maxUsableBytes = BigInteger.valueOf(Long.MIN_VALUE); - for (NodeEnvironment.NodePath nodePath : paths) { - FileStore fileStore = nodePath.fileStore; - - BigInteger usableBytes = BigInteger.valueOf(fileStore.getUsableSpace()); - assert usableBytes.compareTo(BigInteger.ZERO) >= 0; - - // Deduct estimated reserved bytes from usable space: - Integer count = dataPathToShardCount.get(nodePath.path); - if (count != null) { - usableBytes = usableBytes.subtract(estShardSizeInBytes.multiply(BigInteger.valueOf(count))); - } - if (bestPath == null || usableBytes.compareTo(maxUsableBytes) > 0) { - maxUsableBytes = usableBytes; - bestPath = nodePath; + + // If no better path is chosen, use the one with the most space by default + NodeEnvironment.NodePath bestPath = getPathWithMostFreeSpace(env); + + if (paths.length != 1) { + int shardCount = indexSettings.getNumberOfShards(); + // Maximum number of shards that a path should have for a particular index assuming + // all the shards were assigned to this node. For example, with a node with 4 data + // paths and an index with 9 primary shards, the maximum number of shards per path + // would be 3. + int maxShardsPerPath = Math.floorDiv(shardCount, paths.length) + ((shardCount % paths.length) == 0 ? 
0 : 1); + + Map pathToShardCount = env.shardCountPerPath(shardId.getIndex()); + + // Compute how much space there is on each path + final Map pathsToSpace = new HashMap<>(paths.length); + for (NodeEnvironment.NodePath nodePath : paths) { + FileStore fileStore = nodePath.fileStore; + BigInteger usableBytes = BigInteger.valueOf(fileStore.getUsableSpace()); + pathsToSpace.put(nodePath, usableBytes); } + + bestPath = Arrays.stream(paths) + // Filter out paths that have enough space + .filter((path) -> pathsToSpace.get(path).subtract(estShardSizeInBytes).compareTo(BigInteger.ZERO) > 0) + // Sort by the number of shards for this index + .sorted((p1, p2) -> { + int cmp = Long.compare(pathToShardCount.getOrDefault(p1, 0L), pathToShardCount.getOrDefault(p2, 0L)); + if (cmp == 0) { + // if the number of shards is equal, tie-break with the number of total shards + cmp = Integer.compare(dataPathToShardCount.getOrDefault(p1.path, 0), + dataPathToShardCount.getOrDefault(p2.path, 0)); + if (cmp == 0) { + // if the number of shards is equal, tie-break with the usable bytes + cmp = pathsToSpace.get(p2).compareTo(pathsToSpace.get(p1)); + } + } + return cmp; + }) + // Return the first result + .findFirst() + // Or the existing best path if there aren't any that fit the criteria + .orElse(bestPath); } statePath = bestPath.resolve(shardId); @@ -214,6 +244,24 @@ public static ShardPath selectNewPathForShard(NodeEnvironment env, ShardId shard return new ShardPath(indexSettings.hasCustomDataPath(), dataPath, statePath, shardId); } + static NodeEnvironment.NodePath getPathWithMostFreeSpace(NodeEnvironment env) throws IOException { + final NodeEnvironment.NodePath[] paths = env.nodePaths(); + NodeEnvironment.NodePath bestPath = null; + long maxUsableBytes = Long.MIN_VALUE; + for (NodeEnvironment.NodePath nodePath : paths) { + FileStore fileStore = nodePath.fileStore; + long usableBytes = fileStore.getUsableSpace(); + assert usableBytes >= 0 : "usable bytes must be >= 0, got: " + usableBytes; + + if (bestPath == null || usableBytes > maxUsableBytes) { + // This path has been determined to be "better" based on the usable bytes + maxUsableBytes = usableBytes; + bestPath = nodePath; + } + } + return bestPath; + } + @Override public boolean equals(Object o) { if (this == o) { diff --git a/core/src/main/java/org/elasticsearch/index/shard/ShardSplittingQuery.java b/core/src/main/java/org/elasticsearch/index/shard/ShardSplittingQuery.java new file mode 100644 index 0000000000000..94aee085175a0 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/shard/ShardSplittingQuery.java @@ -0,0 +1,245 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ +package org.elasticsearch.index.shard; + +import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.LeafReader; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.index.PostingsEnum; +import org.apache.lucene.index.StoredFieldVisitor; +import org.apache.lucene.index.Terms; +import org.apache.lucene.index.TermsEnum; +import org.apache.lucene.search.ConstantScoreScorer; +import org.apache.lucene.search.ConstantScoreWeight; +import org.apache.lucene.search.DocIdSetIterator; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.Scorer; +import org.apache.lucene.search.TwoPhaseIterator; +import org.apache.lucene.search.Weight; +import org.apache.lucene.util.BitSetIterator; +import org.apache.lucene.util.Bits; +import org.apache.lucene.util.BytesRef; +import org.apache.lucene.util.FixedBitSet; +import org.elasticsearch.Version; +import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.cluster.routing.OperationRouting; +import org.elasticsearch.index.mapper.IdFieldMapper; +import org.elasticsearch.index.mapper.RoutingFieldMapper; +import org.elasticsearch.index.mapper.Uid; + +import java.io.IOException; +import java.util.function.IntConsumer; +import java.util.function.Predicate; + +/** + * A query that selects all docs that do NOT belong in the current shards this query is executed on. + * It can be used to split a shard into N shards marking every document that doesn't belong into the shard + * as deleted. See {@link org.apache.lucene.index.IndexWriter#deleteDocuments(Query...)} + */ +final class ShardSplittingQuery extends Query { + private final IndexMetaData indexMetaData; + private final int shardId; + + ShardSplittingQuery(IndexMetaData indexMetaData, int shardId) { + if (indexMetaData.getCreationVersion().before(Version.V_6_0_0_rc2)) { + throw new IllegalArgumentException("Splitting query can only be executed on an index created with version " + + Version.V_6_0_0_rc2 + " or higher"); + } + this.indexMetaData = indexMetaData; + this.shardId = shardId; + } + + @Override + public Weight createWeight(IndexSearcher searcher, boolean needsScores, float boost) { + return new ConstantScoreWeight(this, boost) { + @Override + public String toString() { + return "weight(delete docs query)"; + } + + @Override + public Scorer scorer(LeafReaderContext context) throws IOException { + LeafReader leafReader = context.reader(); + FixedBitSet bitSet = new FixedBitSet(leafReader.maxDoc()); + Terms terms = leafReader.terms(RoutingFieldMapper.NAME); + Predicate includeInShard = ref -> { + int targetShardId = OperationRouting.generateShardId(indexMetaData, + Uid.decodeId(ref.bytes, ref.offset, ref.length), null); + return shardId == targetShardId; + }; + if (terms == null) { // this is the common case - no partitioning and no _routing values + assert indexMetaData.isRoutingPartitionedIndex() == false; + findSplitDocs(IdFieldMapper.NAME, includeInShard, leafReader, bitSet::set); + } else { + if (indexMetaData.isRoutingPartitionedIndex()) { + // this is the heaviest invariant. Here we have to visit all docs stored fields do extract _id and _routing + // this this index is routing partitioned. 
+ Visitor visitor = new Visitor(); + return new ConstantScoreScorer(this, score(), + new RoutingPartitionedDocIdSetIterator(leafReader, visitor)); + } else { + // in the _routing case we first go and find all docs that have a routing value and mark the ones we have to delete + findSplitDocs(RoutingFieldMapper.NAME, ref -> { + int targetShardId = OperationRouting.generateShardId(indexMetaData, null, ref.utf8ToString()); + return shardId == targetShardId; + }, leafReader, bitSet::set); + // now if we have a mixed index where some docs have a _routing value and some don't we have to exclude the ones + // with a routing value from the next iteration an delete / select based on the ID. + if (terms.getDocCount() != leafReader.maxDoc()) { + // this is a special case where some of the docs have no routing values this sucks but it's possible today + FixedBitSet hasRoutingValue = new FixedBitSet(leafReader.maxDoc()); + findSplitDocs(RoutingFieldMapper.NAME, ref -> false, leafReader, + hasRoutingValue::set); + findSplitDocs(IdFieldMapper.NAME, includeInShard, leafReader, docId -> { + if (hasRoutingValue.get(docId) == false) { + bitSet.set(docId); + } + }); + } + } + } + return new ConstantScoreScorer(this, score(), new BitSetIterator(bitSet, bitSet.length())); + } + + + }; + } + + @Override + public String toString(String field) { + return "shard_splitting_query"; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + + ShardSplittingQuery that = (ShardSplittingQuery) o; + + if (shardId != that.shardId) return false; + return indexMetaData.equals(that.indexMetaData); + } + + @Override + public int hashCode() { + int result = indexMetaData.hashCode(); + result = 31 * result + shardId; + return classHash() ^ result; + } + + private static void findSplitDocs(String idField, Predicate includeInShard, + LeafReader leafReader, IntConsumer consumer) throws IOException { + Terms terms = leafReader.terms(idField); + TermsEnum iterator = terms.iterator(); + BytesRef idTerm; + PostingsEnum postingsEnum = null; + while ((idTerm = iterator.next()) != null) { + if (includeInShard.test(idTerm) == false) { + postingsEnum = iterator.postings(postingsEnum); + int doc; + while ((doc = postingsEnum.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) { + consumer.accept(doc); + } + } + } + } + + private static final class Visitor extends StoredFieldVisitor { + int leftToVisit = 2; + final BytesRef spare = new BytesRef(); + String routing; + String id; + + void reset() { + routing = id = null; + leftToVisit = 2; + } + + @Override + public void binaryField(FieldInfo fieldInfo, byte[] value) throws IOException { + switch (fieldInfo.name) { + case IdFieldMapper.NAME: + id = Uid.decodeId(value); + break; + default: + throw new IllegalStateException("Unexpected field: " + fieldInfo.name); + } + } + + @Override + public void stringField(FieldInfo fieldInfo, byte[] value) throws IOException { + spare.bytes = value; + spare.offset = 0; + spare.length = value.length; + switch (fieldInfo.name) { + case RoutingFieldMapper.NAME: + routing = spare.utf8ToString(); + break; + default: + throw new IllegalStateException("Unexpected field: " + fieldInfo.name); + } + } + + @Override + public Status needsField(FieldInfo fieldInfo) throws IOException { + // we don't support 5.x so no need for the uid field + switch (fieldInfo.name) { + case IdFieldMapper.NAME: + case RoutingFieldMapper.NAME: + leftToVisit--; + return Status.YES; + default: + return 
leftToVisit == 0 ? Status.STOP : Status.NO; + } + } + } + + /** + * This two phase iterator visits every live doc and selects all docs that don't belong into this + * shard based on their id and routing value. This is only used in a routing partitioned index. + */ + private final class RoutingPartitionedDocIdSetIterator extends TwoPhaseIterator { + private final LeafReader leafReader; + private final Visitor visitor; + + RoutingPartitionedDocIdSetIterator(LeafReader leafReader, Visitor visitor) { + super(DocIdSetIterator.all(leafReader.maxDoc())); // we iterate all live-docs + this.leafReader = leafReader; + this.visitor = visitor; + } + + @Override + public boolean matches() throws IOException { + int doc = approximation.docID(); + visitor.reset(); + leafReader.document(doc, visitor); + int targetShardId = OperationRouting.generateShardId(indexMetaData, visitor.id, visitor.routing); + return targetShardId != shardId; + } + + @Override + public float matchCost() { + return 42; // that's obvious, right? + } + } +} + + diff --git a/core/src/main/java/org/elasticsearch/index/shard/SnapshotStatus.java b/core/src/main/java/org/elasticsearch/index/shard/SnapshotStatus.java deleted file mode 100644 index 32ddcd1c2733b..0000000000000 --- a/core/src/main/java/org/elasticsearch/index/shard/SnapshotStatus.java +++ /dev/null @@ -1,142 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.index.shard; - -public class SnapshotStatus { - - public enum Stage { - NONE, - INDEX, - TRANSLOG, - FINALIZE, - DONE, - FAILURE - } - - private Stage stage = Stage.NONE; - - private long startTime; - - private long time; - - private Index index = new Index(); - - private Translog translog = new Translog(); - - public Stage stage() { - return this.stage; - } - - public SnapshotStatus updateStage(Stage stage) { - this.stage = stage; - return this; - } - - public long startTime() { - return this.startTime; - } - - public void startTime(long startTime) { - this.startTime = startTime; - } - - public long time() { - return this.time; - } - - public void time(long time) { - this.time = time; - } - - public Index index() { - return index; - } - - public Translog translog() { - return translog; - } - - public static class Index { - private long startTime; - private long time; - - private int numberOfFiles; - private long totalSize; - - public long startTime() { - return this.startTime; - } - - public void startTime(long startTime) { - this.startTime = startTime; - } - - public long time() { - return this.time; - } - - public void time(long time) { - this.time = time; - } - - public void files(int numberOfFiles, long totalSize) { - this.numberOfFiles = numberOfFiles; - this.totalSize = totalSize; - } - - public int numberOfFiles() { - return numberOfFiles; - } - - public long totalSize() { - return totalSize; - } - } - - public static class Translog { - private long startTime; - private long time; - private int expectedNumberOfOperations; - - public long startTime() { - return this.startTime; - } - - public void startTime(long startTime) { - this.startTime = startTime; - } - - public long time() { - return this.time; - } - - public void time(long time) { - this.time = time; - } - - public int expectedNumberOfOperations() { - return expectedNumberOfOperations; - } - - public void expectedNumberOfOperations(int expectedNumberOfOperations) { - this.expectedNumberOfOperations = expectedNumberOfOperations; - } - } -} diff --git a/core/src/main/java/org/elasticsearch/index/shard/StoreRecovery.java b/core/src/main/java/org/elasticsearch/index/shard/StoreRecovery.java index 078e8b06d6e20..b59ab14961769 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/StoreRecovery.java +++ b/core/src/main/java/org/elasticsearch/index/shard/StoreRecovery.java @@ -31,6 +31,7 @@ import org.apache.lucene.store.IOContext; import org.apache.lucene.store.IndexInput; import org.elasticsearch.ExceptionsHelper; +import org.elasticsearch.Version; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.MappingMetaData; import org.elasticsearch.cluster.routing.RecoverySource; @@ -107,13 +108,16 @@ boolean recoverFromLocalShards(BiConsumer mappingUpdate if (indices.size() > 1) { throw new IllegalArgumentException("can't add shards from more than one index"); } - IndexMetaData indexMetaData = shards.get(0).getIndexMetaData(); - for (ObjectObjectCursor mapping : indexMetaData.getMappings()) { + IndexMetaData sourceMetaData = shards.get(0).getIndexMetaData(); + for (ObjectObjectCursor mapping : sourceMetaData.getMappings()) { mappingUpdateConsumer.accept(mapping.key, mapping.value); } - indexShard.mapperService().merge(indexMetaData, MapperService.MergeReason.MAPPING_RECOVERY, true); + indexShard.mapperService().merge(sourceMetaData, MapperService.MergeReason.MAPPING_RECOVERY, true); // now that the mapping is merged we can validate the index sort 
configuration. Sort indexSort = indexShard.getIndexSort(); + final boolean isSplit = sourceMetaData.getNumberOfShards() < indexShard.indexSettings().getNumberOfShards(); + assert isSplit == false || sourceMetaData.getCreationVersion().onOrAfter(Version.V_6_0_0_alpha1) : "for split we require a " + + "single type but the index is created before 6.0.0"; return executeRecovery(indexShard, () -> { logger.debug("starting recovery from local shards {}", shards); try { @@ -122,7 +126,8 @@ boolean recoverFromLocalShards(BiConsumer mappingUpdate final long maxSeqNo = shards.stream().mapToLong(LocalShardSnapshot::maxSeqNo).max().getAsLong(); final long maxUnsafeAutoIdTimestamp = shards.stream().mapToLong(LocalShardSnapshot::maxUnsafeAutoIdTimestamp).max().getAsLong(); - addIndices(indexShard.recoveryState().getIndex(), directory, indexSort, sources, maxSeqNo, maxUnsafeAutoIdTimestamp); + addIndices(indexShard.recoveryState().getIndex(), directory, indexSort, sources, maxSeqNo, maxUnsafeAutoIdTimestamp, + indexShard.indexSettings().getIndexMetaData(), indexShard.shardId().id(), isSplit); internalRecoverFromStore(indexShard); // just trigger a merge to do housekeeping on the // copied segments - we will also see them in stats etc. @@ -136,13 +141,9 @@ boolean recoverFromLocalShards(BiConsumer mappingUpdate return false; } - void addIndices( - final RecoveryState.Index indexRecoveryStats, - final Directory target, - final Sort indexSort, - final Directory[] sources, - final long maxSeqNo, - final long maxUnsafeAutoIdTimestamp) throws IOException { + void addIndices(final RecoveryState.Index indexRecoveryStats, final Directory target, final Sort indexSort, final Directory[] sources, + final long maxSeqNo, final long maxUnsafeAutoIdTimestamp, IndexMetaData indexMetaData, int shardId, boolean split) + throws IOException { final Directory hardLinkOrCopyTarget = new org.apache.lucene.store.HardlinkCopyDirectoryWrapper(target); IndexWriterConfig iwc = new IndexWriterConfig(null) .setCommitOnClose(false) @@ -154,15 +155,20 @@ void addIndices( if (indexSort != null) { iwc.setIndexSort(indexSort); } + try (IndexWriter writer = new IndexWriter(new StatsDirectoryWrapper(hardLinkOrCopyTarget, indexRecoveryStats), iwc)) { writer.addIndexes(sources); + + if (split) { + writer.deleteDocuments(new ShardSplittingQuery(indexMetaData, shardId)); + } /* * We set the maximum sequence number and the local checkpoint on the target to the maximum of the maximum sequence numbers on * the source shards. This ensures that history after this maximum sequence number can advance and we have correct * document-level semantics. 
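A hedged sketch of that commit-metadata step: both sequence-number keys are set to the maximum of the source shards' max sequence numbers. The auto-generated-ID timestamp key and the error handling that addIndices also performs are omitted here.

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.lucene.index.IndexWriter;
import org.elasticsearch.index.seqno.SequenceNumbers;

final class RecoveryCommitDataSketch {
    // stamp the recovered target so that history after this sequence number can advance safely
    static void stampSeqNoMetadata(IndexWriter writer, long maxSeqNoAcrossSources) throws IOException {
        final Map<String, String> commitData = new HashMap<>();
        commitData.put(SequenceNumbers.MAX_SEQ_NO, Long.toString(maxSeqNoAcrossSources));
        commitData.put(SequenceNumbers.LOCAL_CHECKPOINT_KEY, Long.toString(maxSeqNoAcrossSources));
        writer.setLiveCommitData(commitData.entrySet()); // picked up by the next commit
        writer.commit();
    }
}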
*/ writer.setLiveCommitData(() -> { - final HashMap liveCommitData = new HashMap<>(2); + final HashMap liveCommitData = new HashMap<>(3); liveCommitData.put(SequenceNumbers.MAX_SEQ_NO, Long.toString(maxSeqNo)); liveCommitData.put(SequenceNumbers.LOCAL_CHECKPOINT_KEY, Long.toString(maxSeqNo)); liveCommitData.put(InternalEngine.MAX_UNSAFE_AUTO_ID_TIMESTAMP_COMMIT_ID, Long.toString(maxUnsafeAutoIdTimestamp)); @@ -272,7 +278,7 @@ private boolean canRecover(IndexShard indexShard) { // got closed on us, just ignore this recovery return false; } - if (!indexShard.routingEntry().primary()) { + if (indexShard.routingEntry().primary() == false) { throw new IndexShardRecoveryException(shardId, "Trying to recover when the shard is in backup state", null); } return true; diff --git a/core/src/main/java/org/elasticsearch/index/similarity/BM25SimilarityProvider.java b/core/src/main/java/org/elasticsearch/index/similarity/BM25SimilarityProvider.java index a1323a95984fc..ad49e7e9cc901 100644 --- a/core/src/main/java/org/elasticsearch/index/similarity/BM25SimilarityProvider.java +++ b/core/src/main/java/org/elasticsearch/index/similarity/BM25SimilarityProvider.java @@ -21,9 +21,6 @@ import org.apache.lucene.search.similarities.BM25Similarity; import org.apache.lucene.search.similarities.Similarity; -import org.elasticsearch.Version; -import org.elasticsearch.common.logging.DeprecationLogger; -import org.elasticsearch.common.logging.ESLoggerFactory; import org.elasticsearch.common.settings.Settings; /** @@ -45,9 +42,7 @@ public BM25SimilarityProvider(String name, Settings settings, Settings indexSett super(name); float k1 = settings.getAsFloat("k1", 1.2f); float b = settings.getAsFloat("b", 0.75f); - final DeprecationLogger deprecationLogger = new DeprecationLogger(ESLoggerFactory.getLogger(getClass())); - boolean discountOverlaps = - settings.getAsBooleanLenientForPreEs6Indices(Version.indexCreated(indexSettings), "discount_overlaps", true, deprecationLogger); + boolean discountOverlaps = settings.getAsBoolean("discount_overlaps", true); this.similarity = new BM25Similarity(k1, b); this.similarity.setDiscountOverlaps(discountOverlaps); diff --git a/core/src/main/java/org/elasticsearch/index/similarity/ClassicSimilarityProvider.java b/core/src/main/java/org/elasticsearch/index/similarity/ClassicSimilarityProvider.java index e031c5b3dac1c..419321996a301 100644 --- a/core/src/main/java/org/elasticsearch/index/similarity/ClassicSimilarityProvider.java +++ b/core/src/main/java/org/elasticsearch/index/similarity/ClassicSimilarityProvider.java @@ -20,9 +20,6 @@ package org.elasticsearch.index.similarity; import org.apache.lucene.search.similarities.ClassicSimilarity; -import org.elasticsearch.Version; -import org.elasticsearch.common.logging.DeprecationLogger; -import org.elasticsearch.common.logging.ESLoggerFactory; import org.elasticsearch.common.settings.Settings; /** @@ -40,8 +37,7 @@ public class ClassicSimilarityProvider extends AbstractSimilarityProvider { public ClassicSimilarityProvider(String name, Settings settings, Settings indexSettings) { super(name); - boolean discountOverlaps = settings.getAsBooleanLenientForPreEs6Indices( - Version.indexCreated(indexSettings), "discount_overlaps", true, new DeprecationLogger(ESLoggerFactory.getLogger(getClass()))); + boolean discountOverlaps = settings.getAsBoolean("discount_overlaps", true); this.similarity.setDiscountOverlaps(discountOverlaps); } diff --git a/core/src/main/java/org/elasticsearch/index/similarity/DFISimilarityProvider.java 
b/core/src/main/java/org/elasticsearch/index/similarity/DFISimilarityProvider.java index b1148fbb2d52d..324314b2669b2 100644 --- a/core/src/main/java/org/elasticsearch/index/similarity/DFISimilarityProvider.java +++ b/core/src/main/java/org/elasticsearch/index/similarity/DFISimilarityProvider.java @@ -25,9 +25,6 @@ import org.apache.lucene.search.similarities.IndependenceSaturated; import org.apache.lucene.search.similarities.IndependenceStandardized; import org.apache.lucene.search.similarities.Similarity; -import org.elasticsearch.Version; -import org.elasticsearch.common.logging.DeprecationLogger; -import org.elasticsearch.common.logging.ESLoggerFactory; import org.elasticsearch.common.settings.Settings; import java.util.HashMap; @@ -60,8 +57,7 @@ public class DFISimilarityProvider extends AbstractSimilarityProvider { public DFISimilarityProvider(String name, Settings settings, Settings indexSettings) { super(name); - boolean discountOverlaps = settings.getAsBooleanLenientForPreEs6Indices( - Version.indexCreated(indexSettings), "discount_overlaps", true, new DeprecationLogger(ESLoggerFactory.getLogger(getClass()))); + boolean discountOverlaps = settings.getAsBoolean("discount_overlaps", true); Independence measure = parseIndependence(settings); this.similarity = new DFISimilarity(measure); this.similarity.setDiscountOverlaps(discountOverlaps); diff --git a/core/src/main/java/org/elasticsearch/index/similarity/SimilarityService.java b/core/src/main/java/org/elasticsearch/index/similarity/SimilarityService.java index 4d8c9359a8172..e1080f2c2ccae 100644 --- a/core/src/main/java/org/elasticsearch/index/similarity/SimilarityService.java +++ b/core/src/main/java/org/elasticsearch/index/similarity/SimilarityService.java @@ -46,30 +46,27 @@ public final class SimilarityService extends AbstractIndexComponent { public static final Map BUILT_IN; static { Map defaults = new HashMap<>(); - Map buildIn = new HashMap<>(); defaults.put("classic", (name, settings, indexSettings, scriptService) -> new ClassicSimilarityProvider(name, settings, indexSettings)); defaults.put("BM25", (name, settings, indexSettings, scriptService) -> new BM25SimilarityProvider(name, settings, indexSettings)); defaults.put("boolean", (name, settings, indexSettings, scriptService) -> new BooleanSimilarityProvider(name, settings, indexSettings)); - buildIn.put("classic", - (name, settings, indexSettings, scriptService) -> new ClassicSimilarityProvider(name, settings, indexSettings)); - buildIn.put("BM25", - (name, settings, indexSettings, scriptService) -> new BM25SimilarityProvider(name, settings, indexSettings)); - buildIn.put("DFR", + + Map builtIn = new HashMap<>(defaults); + builtIn.put("DFR", (name, settings, indexSettings, scriptService) -> new DFRSimilarityProvider(name, settings, indexSettings)); - buildIn.put("IB", + builtIn.put("IB", (name, settings, indexSettings, scriptService) -> new IBSimilarityProvider(name, settings, indexSettings)); - buildIn.put("LMDirichlet", + builtIn.put("LMDirichlet", (name, settings, indexSettings, scriptService) -> new LMDirichletSimilarityProvider(name, settings, indexSettings)); - buildIn.put("LMJelinekMercer", + builtIn.put("LMJelinekMercer", (name, settings, indexSettings, scriptService) -> new LMJelinekMercerSimilarityProvider(name, settings, indexSettings)); - buildIn.put("DFI", + builtIn.put("DFI", (name, settings, indexSettings, scriptService) -> new DFISimilarityProvider(name, settings, indexSettings)); - buildIn.put("scripted", ScriptedSimilarityProvider::new); + 
builtIn.put("scripted", ScriptedSimilarityProvider::new); DEFAULTS = Collections.unmodifiableMap(defaults); - BUILT_IN = Collections.unmodifiableMap(buildIn); + BUILT_IN = Collections.unmodifiableMap(builtIn); } public SimilarityService(IndexSettings indexSettings, ScriptService scriptService, @@ -79,8 +76,7 @@ public SimilarityService(IndexSettings indexSettings, ScriptService scriptServic Map similaritySettings = this.indexSettings.getSettings().getGroups(IndexModule.SIMILARITY_SETTINGS_PREFIX); for (Map.Entry entry : similaritySettings.entrySet()) { String name = entry.getKey(); - // Starting with v5.0 indices, it should no longer be possible to redefine built-in similarities - if(BUILT_IN.containsKey(name) && indexSettings.getIndexVersionCreated().onOrAfter(Version.V_5_0_0_alpha1)) { + if (BUILT_IN.containsKey(name)) { throw new IllegalArgumentException("Cannot redefine built-in Similarity [" + name + "]"); } Settings providerSettings = entry.getValue(); @@ -97,10 +93,6 @@ public SimilarityService(IndexSettings indexSettings, ScriptService scriptServic Map providerMapping = addSimilarities(similaritySettings, indexSettings.getSettings(), scriptService, DEFAULTS); for (Map.Entry entry : providerMapping.entrySet()) { - // Avoid overwriting custom providers for indices older that v5.0 - if (providers.containsKey(entry.getKey()) && indexSettings.getIndexVersionCreated().before(Version.V_5_0_0_alpha1)) { - continue; - } providers.put(entry.getKey(), entry.getValue()); } this.similarities = providers; diff --git a/core/src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardSnapshot.java b/core/src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardSnapshot.java index 0abc0724c3a4e..b0767a7c512ec 100644 --- a/core/src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardSnapshot.java +++ b/core/src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardSnapshot.java @@ -66,7 +66,7 @@ public FileInfo(String name, StoreFileMetaData metaData, ByteSizeValue partSize) this.metadata = metaData; long partBytes = Long.MAX_VALUE; - if (partSize != null) { + if (partSize != null && partSize.getBytes() > 0) { partBytes = partSize.getBytes(); } diff --git a/core/src/main/java/org/elasticsearch/index/store/Store.java b/core/src/main/java/org/elasticsearch/index/store/Store.java index 6700a005c9c96..fa992e12ef220 100644 --- a/core/src/main/java/org/elasticsearch/index/store/Store.java +++ b/core/src/main/java/org/elasticsearch/index/store/Store.java @@ -79,6 +79,7 @@ import org.elasticsearch.index.shard.AbstractIndexShardComponent; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.shard.ShardId; +import org.elasticsearch.index.translog.Translog; import java.io.Closeable; import java.io.EOFException; @@ -1027,6 +1028,20 @@ public Map getCommitUserData() { return commitUserData; } + /** + * returns the history uuid the store points at, or null if not existant. + */ + public String getHistoryUUID() { + return commitUserData.get(Engine.HISTORY_UUID_KEY); + } + + /** + * returns the translog uuid the store points at + */ + public String getTranslogUUID() { + return commitUserData.get(Translog.TRANSLOG_UUID_KEY); + } + /** * Returns true iff this metadata contains the given file. 
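Both accessors above are simple lookups into the commit user data of the store's latest Lucene commit. A minimal sketch of the equivalent lookup done directly against a Directory; the string literal is an assumption standing in for Engine.HISTORY_UUID_KEY.

import java.io.IOException;
import java.util.Map;
import org.apache.lucene.index.SegmentInfos;
import org.apache.lucene.store.Directory;

final class CommitUserDataSketch {
    // returns null when the latest commit predates history UUIDs
    static String readHistoryUUID(Directory directory) throws IOException {
        final Map<String, String> userData = SegmentInfos.readLatestCommit(directory).getUserData();
        return userData.get("history_uuid"); // assumed to equal Engine.HISTORY_UUID_KEY
    }
}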
*/ diff --git a/core/src/main/java/org/elasticsearch/index/store/StoreStats.java b/core/src/main/java/org/elasticsearch/index/store/StoreStats.java index 1ed7d6d01e823..b3f9f32905bbe 100644 --- a/core/src/main/java/org/elasticsearch/index/store/StoreStats.java +++ b/core/src/main/java/org/elasticsearch/index/store/StoreStats.java @@ -24,12 +24,13 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; import org.elasticsearch.common.unit.ByteSizeValue; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; -public class StoreStats implements Streamable, ToXContent { +public class StoreStats implements Streamable, ToXContentFragment { private long sizeInBytes; diff --git a/core/src/main/java/org/elasticsearch/index/translog/Checkpoint.java b/core/src/main/java/org/elasticsearch/index/translog/Checkpoint.java index 547d5aa499fb3..32aef840b6f37 100644 --- a/core/src/main/java/org/elasticsearch/index/translog/Checkpoint.java +++ b/core/src/main/java/org/elasticsearch/index/translog/Checkpoint.java @@ -28,7 +28,7 @@ import org.apache.lucene.store.OutputStreamIndexOutput; import org.apache.lucene.store.SimpleFSDirectory; import org.elasticsearch.common.io.Channels; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.seqno.SequenceNumbers; import java.io.ByteArrayOutputStream; import java.io.IOException; @@ -105,8 +105,8 @@ private void write(DataOutput out) throws IOException { static Checkpoint emptyTranslogCheckpoint(final long offset, final long generation, final long globalCheckpoint, long minTranslogGeneration) { - final long minSeqNo = SequenceNumbersService.NO_OPS_PERFORMED; - final long maxSeqNo = SequenceNumbersService.NO_OPS_PERFORMED; + final long minSeqNo = SequenceNumbers.NO_OPS_PERFORMED; + final long maxSeqNo = SequenceNumbers.NO_OPS_PERFORMED; return new Checkpoint(offset, 0, generation, minSeqNo, maxSeqNo, globalCheckpoint, minTranslogGeneration); } @@ -116,9 +116,9 @@ static Checkpoint readCheckpointV6_0_0(final DataInput in) throws IOException { // reads a checksummed checkpoint introduced in ES 5.0.0 static Checkpoint readCheckpointV5_0_0(final DataInput in) throws IOException { - final long minSeqNo = SequenceNumbersService.NO_OPS_PERFORMED; - final long maxSeqNo = SequenceNumbersService.NO_OPS_PERFORMED; - final long globalCheckpoint = SequenceNumbersService.UNASSIGNED_SEQ_NO; + final long minSeqNo = SequenceNumbers.NO_OPS_PERFORMED; + final long maxSeqNo = SequenceNumbers.NO_OPS_PERFORMED; + final long globalCheckpoint = SequenceNumbers.UNASSIGNED_SEQ_NO; final long minTranslogGeneration = -1L; return new Checkpoint(in.readLong(), in.readInt(), in.readLong(), minSeqNo, maxSeqNo, globalCheckpoint, minTranslogGeneration); } diff --git a/core/src/main/java/org/elasticsearch/index/translog/Translog.java b/core/src/main/java/org/elasticsearch/index/translog/Translog.java index 3664b76807818..20c428960f747 100644 --- a/core/src/main/java/org/elasticsearch/index/translog/Translog.java +++ b/core/src/main/java/org/elasticsearch/index/translog/Translog.java @@ -43,7 +43,7 @@ import org.elasticsearch.index.VersionType; import org.elasticsearch.index.engine.Engine; import org.elasticsearch.index.mapper.Uid; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import 
org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.index.shard.AbstractIndexShardComponent; import org.elasticsearch.index.shard.IndexShardComponent; @@ -634,7 +634,7 @@ private Stream readersAboveMinSeqNo(long minSeqNo) return Stream.concat(readers.stream(), Stream.of(current)) .filter(reader -> { final long maxSeqNo = reader.getCheckpoint().maxSeqNo; - return maxSeqNo == SequenceNumbersService.UNASSIGNED_SEQ_NO || maxSeqNo >= minSeqNo; + return maxSeqNo == SequenceNumbers.UNASSIGNED_SEQ_NO || maxSeqNo >= minSeqNo; }); } @@ -978,7 +978,7 @@ public Index(StreamInput in) throws IOException { seqNo = in.readLong(); primaryTerm = in.readLong(); } else { - seqNo = SequenceNumbersService.UNASSIGNED_SEQ_NO; + seqNo = SequenceNumbers.UNASSIGNED_SEQ_NO; primaryTerm = 0; } } @@ -1182,7 +1182,7 @@ public Delete(StreamInput in) throws IOException { seqNo = in.readLong(); primaryTerm = in.readLong(); } else { - seqNo = SequenceNumbersService.UNASSIGNED_SEQ_NO; + seqNo = SequenceNumbers.UNASSIGNED_SEQ_NO; primaryTerm = 0; } } @@ -1329,7 +1329,7 @@ public String reason() { } public NoOp(final long seqNo, final long primaryTerm, final String reason) { - assert seqNo > SequenceNumbersService.NO_OPS_PERFORMED; + assert seqNo > SequenceNumbers.NO_OPS_PERFORMED; assert primaryTerm >= 0; assert reason != null; this.seqNo = seqNo; diff --git a/core/src/main/java/org/elasticsearch/index/translog/TranslogReader.java b/core/src/main/java/org/elasticsearch/index/translog/TranslogReader.java index 46439afead10a..b88037c32fd59 100644 --- a/core/src/main/java/org/elasticsearch/index/translog/TranslogReader.java +++ b/core/src/main/java/org/elasticsearch/index/translog/TranslogReader.java @@ -79,7 +79,8 @@ public static TranslogReader open( final FileChannel channel, final Path path, final Checkpoint checkpoint, final String translogUUID) throws IOException { try { - InputStreamStreamInput headerStream = new InputStreamStreamInput(java.nio.channels.Channels.newInputStream(channel)); // don't close + InputStreamStreamInput headerStream = new InputStreamStreamInput(java.nio.channels.Channels.newInputStream(channel), + channel.size()); // don't close // Lucene's CodecUtil writes a magic number of 0x3FD76C17 with the // header, in binary this looks like: // diff --git a/core/src/main/java/org/elasticsearch/index/translog/TranslogStats.java b/core/src/main/java/org/elasticsearch/index/translog/TranslogStats.java index 7dc7234c2eb26..c49de67f56aea 100644 --- a/core/src/main/java/org/elasticsearch/index/translog/TranslogStats.java +++ b/core/src/main/java/org/elasticsearch/index/translog/TranslogStats.java @@ -19,15 +19,16 @@ package org.elasticsearch.index.translog; import org.elasticsearch.Version; -import org.elasticsearch.action.support.ToXContentToBytes; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; -public class TranslogStats extends ToXContentToBytes implements Streamable { +public class TranslogStats implements Streamable, ToXContentFragment { private long translogSizeInBytes; private int numberOfOperations; @@ -96,6 +97,11 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws return builder; } + @Override + public String toString() { + return 
Strings.toString(this, true, true); + } + @Override public void readFrom(StreamInput in) throws IOException { numberOfOperations = in.readVInt(); diff --git a/core/src/main/java/org/elasticsearch/index/translog/TranslogWriter.java b/core/src/main/java/org/elasticsearch/index/translog/TranslogWriter.java index 9c95471e60e82..c12299feaa596 100644 --- a/core/src/main/java/org/elasticsearch/index/translog/TranslogWriter.java +++ b/core/src/main/java/org/elasticsearch/index/translog/TranslogWriter.java @@ -31,7 +31,6 @@ import org.elasticsearch.common.io.Channels; import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.index.seqno.SequenceNumbers; -import org.elasticsearch.index.seqno.SequenceNumbersService; import org.elasticsearch.index.shard.ShardId; import java.io.BufferedOutputStream; @@ -97,9 +96,9 @@ private TranslogWriter( this.outputStream = new BufferedChannelOutputStream(java.nio.channels.Channels.newOutputStream(channel), bufferSize.bytesAsInt()); this.lastSyncedCheckpoint = initialCheckpoint; this.totalOffset = initialCheckpoint.offset; - assert initialCheckpoint.minSeqNo == SequenceNumbersService.NO_OPS_PERFORMED : initialCheckpoint.minSeqNo; + assert initialCheckpoint.minSeqNo == SequenceNumbers.NO_OPS_PERFORMED : initialCheckpoint.minSeqNo; this.minSeqNo = initialCheckpoint.minSeqNo; - assert initialCheckpoint.maxSeqNo == SequenceNumbersService.NO_OPS_PERFORMED : initialCheckpoint.maxSeqNo; + assert initialCheckpoint.maxSeqNo == SequenceNumbers.NO_OPS_PERFORMED : initialCheckpoint.maxSeqNo; this.maxSeqNo = initialCheckpoint.maxSeqNo; this.globalCheckpointSupplier = globalCheckpointSupplier; this.seenSequenceNumbers = Assertions.ENABLED ? new HashMap<>() : null; @@ -193,10 +192,10 @@ public synchronized Translog.Location add(final BytesReference data, final long } totalOffset += data.length(); - if (minSeqNo == SequenceNumbersService.NO_OPS_PERFORMED) { + if (minSeqNo == SequenceNumbers.NO_OPS_PERFORMED) { assert operationCounter == 0; } - if (maxSeqNo == SequenceNumbersService.NO_OPS_PERFORMED) { + if (maxSeqNo == SequenceNumbers.NO_OPS_PERFORMED) { assert operationCounter == 0; } @@ -211,7 +210,7 @@ public synchronized Translog.Location add(final BytesReference data, final long } private synchronized boolean assertNoSeqNumberConflict(long seqNo, BytesReference data) throws IOException { - if (seqNo == SequenceNumbersService.UNASSIGNED_SEQ_NO) { + if (seqNo == SequenceNumbers.UNASSIGNED_SEQ_NO) { // nothing to do } else if (seenSequenceNumbers.containsKey(seqNo)) { final Tuple previous = seenSequenceNumbers.get(seqNo); diff --git a/core/src/main/java/org/elasticsearch/index/translog/TruncateTranslogCommand.java b/core/src/main/java/org/elasticsearch/index/translog/TruncateTranslogCommand.java index 408691692cacf..d9b77f841ed09 100644 --- a/core/src/main/java/org/elasticsearch/index/translog/TruncateTranslogCommand.java +++ b/core/src/main/java/org/elasticsearch/index/translog/TruncateTranslogCommand.java @@ -25,6 +25,8 @@ import org.apache.lucene.index.DirectoryReader; import org.apache.lucene.index.IndexCommit; import org.apache.lucene.index.IndexWriter; +import org.apache.lucene.index.IndexWriterConfig; +import org.apache.lucene.index.NoMergePolicy; import org.apache.lucene.store.Directory; import org.apache.lucene.store.FSDirectory; import org.apache.lucene.store.Lock; @@ -37,10 +39,12 @@ import org.elasticsearch.cli.EnvironmentAwareCommand; import org.elasticsearch.cli.Terminal; import org.elasticsearch.common.SuppressForbidden; +import 
org.elasticsearch.common.UUIDs; import org.elasticsearch.common.io.PathUtils; import org.elasticsearch.env.Environment; import org.elasticsearch.index.IndexNotFoundException; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.engine.Engine; +import org.elasticsearch.index.seqno.SequenceNumbers; import java.io.IOException; import java.nio.channels.Channels; @@ -51,6 +55,7 @@ import java.nio.file.StandardCopyOption; import java.nio.file.StandardOpenOption; import java.util.Arrays; +import java.util.HashMap; import java.util.HashSet; import java.util.List; import java.util.Map; @@ -101,64 +106,82 @@ protected void execute(Terminal terminal, OptionSet options, Environment env) th if (Files.exists(idxLocation) == false || Files.isDirectory(idxLocation) == false) { throw new ElasticsearchException("unable to find a shard at [" + idxLocation + "], which must exist and be a directory"); } - - // Hold the lock open for the duration of the tool running - try (Directory dir = FSDirectory.open(idxLocation, NativeFSLockFactory.INSTANCE); - Lock writeLock = dir.obtainLock(IndexWriter.WRITE_LOCK_NAME)) { - Set translogFiles; - try { - terminal.println("Checking existing translog files"); - translogFiles = filesInDirectory(translogPath); - } catch (IOException e) { - terminal.println("encountered IOException while listing directory, aborting..."); - throw new ElasticsearchException("failed to find existing translog files", e); - } - - // Warn about ES being stopped and files being deleted - warnAboutDeletingFiles(terminal, translogFiles, batch); - - List commits; - try { - terminal.println("Reading translog UUID information from Lucene commit from shard at [" + idxLocation + "]"); - commits = DirectoryReader.listCommits(dir); - } catch (IndexNotFoundException infe) { - throw new ElasticsearchException("unable to find a valid shard at [" + idxLocation + "]", infe); - } - - // Retrieve the generation and UUID from the existing data - Map commitData = commits.get(commits.size() - 1).getUserData(); - String translogGeneration = commitData.get(Translog.TRANSLOG_GENERATION_KEY); - String translogUUID = commitData.get(Translog.TRANSLOG_UUID_KEY); - if (translogGeneration == null || translogUUID == null) { - throw new ElasticsearchException("shard must have a valid translog generation and UUID but got: [{}] and: [{}]", + try (Directory dir = FSDirectory.open(idxLocation, NativeFSLockFactory.INSTANCE)) { + final String historyUUID = UUIDs.randomBase64UUID(); + final Map commitData; + // Hold the lock open for the duration of the tool running + try (Lock writeLock = dir.obtainLock(IndexWriter.WRITE_LOCK_NAME)) { + Set translogFiles; + try { + terminal.println("Checking existing translog files"); + translogFiles = filesInDirectory(translogPath); + } catch (IOException e) { + terminal.println("encountered IOException while listing directory, aborting..."); + throw new ElasticsearchException("failed to find existing translog files", e); + } + + // Warn about ES being stopped and files being deleted + warnAboutDeletingFiles(terminal, translogFiles, batch); + + List commits; + try { + terminal.println("Reading translog UUID information from Lucene commit from shard at [" + idxLocation + "]"); + commits = DirectoryReader.listCommits(dir); + } catch (IndexNotFoundException infe) { + throw new ElasticsearchException("unable to find a valid shard at [" + idxLocation + "]", infe); + } + + // Retrieve the generation and UUID from the existing data + commitData = 
commits.get(commits.size() - 1).getUserData(); + String translogGeneration = commitData.get(Translog.TRANSLOG_GENERATION_KEY); + String translogUUID = commitData.get(Translog.TRANSLOG_UUID_KEY); + if (translogGeneration == null || translogUUID == null) { + throw new ElasticsearchException("shard must have a valid translog generation and UUID but got: [{}] and: [{}]", translogGeneration, translogUUID); + } + terminal.println("Translog Generation: " + translogGeneration); + terminal.println("Translog UUID : " + translogUUID); + terminal.println("History UUID : " + historyUUID); + + Path tempEmptyCheckpoint = translogPath.resolve("temp-" + Translog.CHECKPOINT_FILE_NAME); + Path realEmptyCheckpoint = translogPath.resolve(Translog.CHECKPOINT_FILE_NAME); + Path tempEmptyTranslog = translogPath.resolve("temp-" + Translog.TRANSLOG_FILE_PREFIX + + translogGeneration + Translog.TRANSLOG_FILE_SUFFIX); + Path realEmptyTranslog = translogPath.resolve(Translog.TRANSLOG_FILE_PREFIX + + translogGeneration + Translog.TRANSLOG_FILE_SUFFIX); + + // Write empty checkpoint and translog to empty files + long gen = Long.parseLong(translogGeneration); + int translogLen = writeEmptyTranslog(tempEmptyTranslog, translogUUID); + writeEmptyCheckpoint(tempEmptyCheckpoint, translogLen, gen); + + terminal.println("Removing existing translog files"); + IOUtils.rm(translogFiles.toArray(new Path[]{})); + + terminal.println("Creating new empty checkpoint at [" + realEmptyCheckpoint + "]"); + Files.move(tempEmptyCheckpoint, realEmptyCheckpoint, StandardCopyOption.ATOMIC_MOVE); + terminal.println("Creating new empty translog at [" + realEmptyTranslog + "]"); + Files.move(tempEmptyTranslog, realEmptyTranslog, StandardCopyOption.ATOMIC_MOVE); + + // Fsync the translog directory after rename + IOUtils.fsync(translogPath, true); } - terminal.println("Translog Generation: " + translogGeneration); - terminal.println("Translog UUID : " + translogUUID); - - Path tempEmptyCheckpoint = translogPath.resolve("temp-" + Translog.CHECKPOINT_FILE_NAME); - Path realEmptyCheckpoint = translogPath.resolve(Translog.CHECKPOINT_FILE_NAME); - Path tempEmptyTranslog = translogPath.resolve("temp-" + Translog.TRANSLOG_FILE_PREFIX + - translogGeneration + Translog.TRANSLOG_FILE_SUFFIX); - Path realEmptyTranslog = translogPath.resolve(Translog.TRANSLOG_FILE_PREFIX + - translogGeneration + Translog.TRANSLOG_FILE_SUFFIX); - - // Write empty checkpoint and translog to empty files - long gen = Long.parseLong(translogGeneration); - int translogLen = writeEmptyTranslog(tempEmptyTranslog, translogUUID); - writeEmptyCheckpoint(tempEmptyCheckpoint, translogLen, gen); - - terminal.println("Removing existing translog files"); - IOUtils.rm(translogFiles.toArray(new Path[]{})); - - terminal.println("Creating new empty checkpoint at [" + realEmptyCheckpoint + "]"); - Files.move(tempEmptyCheckpoint, realEmptyCheckpoint, StandardCopyOption.ATOMIC_MOVE); - terminal.println("Creating new empty translog at [" + realEmptyTranslog + "]"); - Files.move(tempEmptyTranslog, realEmptyTranslog, StandardCopyOption.ATOMIC_MOVE); - - // Fsync the translog directory after rename - IOUtils.fsync(translogPath, true); + terminal.println("Marking index with the new history uuid"); + // commit the new histroy id + IndexWriterConfig iwc = new IndexWriterConfig(null) + .setCommitOnClose(false) + // we don't want merges to happen here - we call maybe merge on the engine + // later once we stared it up otherwise we would need to wait for it here + // we also don't specify a codec here and 
merges should use the engines for this index + .setMergePolicy(NoMergePolicy.INSTANCE) + .setOpenMode(IndexWriterConfig.OpenMode.APPEND); + try (IndexWriter writer = new IndexWriter(dir, iwc)) { + Map newCommitData = new HashMap<>(commitData); + newCommitData.put(Engine.HISTORY_UUID_KEY, historyUUID); + writer.setLiveCommitData(newCommitData.entrySet()); + writer.commit(); + } } catch (LockObtainFailedException lofe) { throw new ElasticsearchException("Failed to lock shard's directory at [" + idxLocation + "], is Elasticsearch still running?"); } @@ -169,7 +192,7 @@ protected void execute(Terminal terminal, OptionSet options, Environment env) th /** Write a checkpoint file to the given location with the given generation */ public static void writeEmptyCheckpoint(Path filename, int translogLength, long translogGeneration) throws IOException { Checkpoint emptyCheckpoint = Checkpoint.emptyTranslogCheckpoint(translogLength, translogGeneration, - SequenceNumbersService.UNASSIGNED_SEQ_NO, translogGeneration); + SequenceNumbers.UNASSIGNED_SEQ_NO, translogGeneration); Checkpoint.write(FileChannel::open, filename, emptyCheckpoint, StandardOpenOption.WRITE, StandardOpenOption.READ, StandardOpenOption.CREATE_NEW); // fsync with metadata here to make sure. diff --git a/core/src/main/java/org/elasticsearch/index/warmer/WarmerStats.java b/core/src/main/java/org/elasticsearch/index/warmer/WarmerStats.java index 21dec0f62a043..8149b091a3148 100644 --- a/core/src/main/java/org/elasticsearch/index/warmer/WarmerStats.java +++ b/core/src/main/java/org/elasticsearch/index/warmer/WarmerStats.java @@ -23,12 +23,13 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; -public class WarmerStats implements Streamable, ToXContent { +public class WarmerStats implements Streamable, ToXContentFragment { private long current; diff --git a/core/src/main/java/org/elasticsearch/indices/IndexingMemoryController.java b/core/src/main/java/org/elasticsearch/indices/IndexingMemoryController.java index 1b960bb15992b..73ba9342175d4 100644 --- a/core/src/main/java/org/elasticsearch/indices/IndexingMemoryController.java +++ b/core/src/main/java/org/elasticsearch/indices/IndexingMemoryController.java @@ -36,7 +36,7 @@ import org.elasticsearch.index.shard.IndexingOperationListener; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.threadpool.ThreadPool; -import org.elasticsearch.threadpool.ThreadPool.Cancellable; +import org.elasticsearch.threadpool.Scheduler.Cancellable; import org.elasticsearch.threadpool.ThreadPool.Names; import java.io.Closeable; @@ -152,8 +152,7 @@ ByteSizeValue indexingBufferSize() { protected List availableShards() { List availableShards = new ArrayList<>(); for (IndexShard shard : indexShards) { - // shadow replica doesn't have an indexing buffer - if (shard.canIndex() && CAN_WRITE_INDEX_BUFFER_STATES.contains(shard.state())) { + if (CAN_WRITE_INDEX_BUFFER_STATES.contains(shard.state())) { availableShards.add(shard); } } diff --git a/core/src/main/java/org/elasticsearch/indices/IndicesModule.java b/core/src/main/java/org/elasticsearch/indices/IndicesModule.java index bbcc508dbd589..e446ec7e6d3ea 100644 --- 
a/core/src/main/java/org/elasticsearch/indices/IndicesModule.java +++ b/core/src/main/java/org/elasticsearch/indices/IndicesModule.java @@ -22,12 +22,12 @@ import org.elasticsearch.action.admin.indices.rollover.Condition; import org.elasticsearch.action.admin.indices.rollover.MaxAgeCondition; import org.elasticsearch.action.admin.indices.rollover.MaxDocsCondition; +import org.elasticsearch.action.admin.indices.rollover.MaxSizeCondition; import org.elasticsearch.action.resync.TransportResyncReplicationAction; import org.elasticsearch.index.shard.PrimaryReplicaSyncer; import org.elasticsearch.common.geo.ShapesAvailability; import org.elasticsearch.common.inject.AbstractModule; import org.elasticsearch.common.io.stream.NamedWriteableRegistry.Entry; -import org.elasticsearch.index.mapper.AllFieldMapper; import org.elasticsearch.index.mapper.BinaryFieldMapper; import org.elasticsearch.index.mapper.BooleanFieldMapper; import org.elasticsearch.index.mapper.CompletionFieldMapper; @@ -44,13 +44,10 @@ import org.elasticsearch.index.mapper.NumberFieldMapper; import org.elasticsearch.index.mapper.ObjectMapper; import org.elasticsearch.index.mapper.ParentFieldMapper; -import org.elasticsearch.index.mapper.RangeFieldMapper; import org.elasticsearch.index.mapper.RoutingFieldMapper; -import org.elasticsearch.index.mapper.ScaledFloatFieldMapper; import org.elasticsearch.index.mapper.SeqNoFieldMapper; import org.elasticsearch.index.mapper.SourceFieldMapper; import org.elasticsearch.index.mapper.TextFieldMapper; -import org.elasticsearch.index.mapper.TokenCountFieldMapper; import org.elasticsearch.index.mapper.TypeFieldMapper; import org.elasticsearch.index.mapper.UidFieldMapper; import org.elasticsearch.index.mapper.VersionFieldMapper; @@ -83,6 +80,7 @@ public IndicesModule(List mapperPlugins) { private void registerBuiltinWritables() { namedWritables.add(new Entry(Condition.class, MaxAgeCondition.NAME, MaxAgeCondition::new)); namedWritables.add(new Entry(Condition.class, MaxDocsCondition.NAME, MaxDocsCondition::new)); + namedWritables.add(new Entry(Condition.class, MaxSizeCondition.NAME, MaxSizeCondition::new)); } public List getNamedWriteables() { @@ -96,17 +94,12 @@ private Map getMappers(List mapperPlugi for (NumberFieldMapper.NumberType type : NumberFieldMapper.NumberType.values()) { mappers.put(type.typeName(), new NumberFieldMapper.TypeParser(type)); } - for (RangeFieldMapper.RangeType type : RangeFieldMapper.RangeType.values()) { - mappers.put(type.typeName(), new RangeFieldMapper.TypeParser(type)); - } mappers.put(BooleanFieldMapper.CONTENT_TYPE, new BooleanFieldMapper.TypeParser()); mappers.put(BinaryFieldMapper.CONTENT_TYPE, new BinaryFieldMapper.TypeParser()); mappers.put(DateFieldMapper.CONTENT_TYPE, new DateFieldMapper.TypeParser()); mappers.put(IpFieldMapper.CONTENT_TYPE, new IpFieldMapper.TypeParser()); - mappers.put(ScaledFloatFieldMapper.CONTENT_TYPE, new ScaledFloatFieldMapper.TypeParser()); mappers.put(TextFieldMapper.CONTENT_TYPE, new TextFieldMapper.TypeParser()); mappers.put(KeywordFieldMapper.CONTENT_TYPE, new KeywordFieldMapper.TypeParser()); - mappers.put(TokenCountFieldMapper.CONTENT_TYPE, new TokenCountFieldMapper.TypeParser()); mappers.put(ObjectMapper.CONTENT_TYPE, new ObjectMapper.TypeParser()); mappers.put(ObjectMapper.NESTED_CONTENT_TYPE, new ObjectMapper.TypeParser()); mappers.put(CompletionFieldMapper.CONTENT_TYPE, new CompletionFieldMapper.TypeParser()); @@ -138,7 +131,6 @@ private Map getMetadataMappers(List INDICES_CACHE_QUERY_SIZE_SETTING = 
Setting.memorySizeSetting("indices.queries.cache.size", "10%", Property.NodeScope); public static final Setting INDICES_CACHE_QUERY_COUNT_SETTING = - Setting.intSetting("indices.queries.cache.count", 10000, 1, Property.NodeScope); + Setting.intSetting("indices.queries.cache.count", 1000, 1, Property.NodeScope); // enables caching on all segments instead of only the larger ones, for testing only public static final Setting INDICES_QUERIES_CACHE_ALL_SEGMENTS_SETTING = Setting.boolSetting("indices.queries.cache.all_segments", false, Property.NodeScope); diff --git a/core/src/main/java/org/elasticsearch/indices/IndicesService.java b/core/src/main/java/org/elasticsearch/indices/IndicesService.java index 3039305c42a3d..caffa1b7befda 100644 --- a/core/src/main/java/org/elasticsearch/indices/IndicesService.java +++ b/core/src/main/java/org/elasticsearch/indices/IndicesService.java @@ -43,7 +43,6 @@ import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.common.CheckedFunction; import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.Strings; import org.elasticsearch.common.breaker.CircuitBreaker; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.component.AbstractLifecycleComponent; @@ -375,12 +374,15 @@ public IndexService indexServiceSafe(Index index) { /** * Creates a new {@link IndexService} for the given metadata. - * @param indexMetaData the index metadata to create the index for - * @param builtInListeners a list of built-in lifecycle {@link IndexEventListener} that should should be used along side with the per-index listeners + * + * @param indexMetaData the index metadata to create the index for + * @param builtInListeners a list of built-in lifecycle {@link IndexEventListener} that should should be used along side with the + * per-index listeners * @throws ResourceAlreadyExistsException if the index already exists. */ @Override - public synchronized IndexService createIndex(IndexMetaData indexMetaData, List builtInListeners) throws IOException { + public synchronized IndexService createIndex( + final IndexMetaData indexMetaData, final List builtInListeners) throws IOException { ensureChangesAllowed(); if (indexMetaData.getIndexUUID().equals(IndexMetaData.INDEX_UUID_NA_VALUE)) { throw new IllegalArgumentException("index must have a real UUID found value: [" + indexMetaData.getIndexUUID() + "]"); @@ -399,13 +401,13 @@ public void onStoreClosed(ShardId shardId) { finalListeners.add(onStoreClose); finalListeners.add(oldShardsStats); final IndexService indexService = - createIndexService( - "create index", - indexMetaData, - indicesQueryCache, - indicesFieldDataCache, - finalListeners, - indexingMemoryController); + createIndexService( + "create index", + indexMetaData, + indicesQueryCache, + indicesFieldDataCache, + finalListeners, + indexingMemoryController); boolean success = false; try { indexService.getIndexEventListener().afterIndexCreated(indexService); @@ -423,7 +425,8 @@ public void onStoreClosed(ShardId shardId) { * This creates a new IndexService without registering it */ private synchronized IndexService createIndexService(final String reason, - IndexMetaData indexMetaData, IndicesQueryCache indicesQueryCache, + IndexMetaData indexMetaData, + IndicesQueryCache indicesQueryCache, IndicesFieldDataCache indicesFieldDataCache, List builtInListeners, IndexingOperationListener... 
indexingOperationListeners) throws IOException { @@ -454,7 +457,8 @@ private synchronized IndexService createIndexService(final String reason, indicesQueryCache, mapperRegistry, indicesFieldDataCache, - namedWriteableRegistry); + namedWriteableRegistry + ); } /** @@ -499,10 +503,11 @@ public synchronized void verifyIndexMetadata(IndexMetaData metaData, IndexMetaDa @Override public IndexShard createShard(ShardRouting shardRouting, RecoveryState recoveryState, PeerRecoveryTargetService recoveryTargetService, PeerRecoveryTargetService.RecoveryListener recoveryListener, RepositoriesService repositoriesService, - Consumer onShardFailure) throws IOException { + Consumer onShardFailure, + Consumer globalCheckpointSyncer) throws IOException { ensureChangesAllowed(); IndexService indexService = indexService(shardRouting.index()); - IndexShard indexShard = indexService.createShard(shardRouting); + IndexShard indexShard = indexService.createShard(shardRouting, globalCheckpointSyncer); indexShard.addShardFailureCallback(onShardFailure); indexShard.startRecovery(recoveryState, recoveryTargetService, recoveryListener, repositoriesService, (type, mapping) -> { diff --git a/core/src/main/java/org/elasticsearch/indices/NodeIndicesStats.java b/core/src/main/java/org/elasticsearch/indices/NodeIndicesStats.java index 9133ca81e2837..a08e97ab133d9 100644 --- a/core/src/main/java/org/elasticsearch/indices/NodeIndicesStats.java +++ b/core/src/main/java/org/elasticsearch/indices/NodeIndicesStats.java @@ -26,7 +26,8 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.index.Index; import org.elasticsearch.index.cache.query.QueryCacheStats; @@ -53,7 +54,7 @@ /** * Global information on indices stats running on a specific node. */ -public class NodeIndicesStats implements Streamable, ToXContent { +public class NodeIndicesStats implements Streamable, ToXContentFragment { private CommonStats stats; private Map> statsByShard; diff --git a/core/src/main/java/org/elasticsearch/indices/TermsLookup.java b/core/src/main/java/org/elasticsearch/indices/TermsLookup.java index fea00812d04c6..c1acce072b166 100644 --- a/core/src/main/java/org/elasticsearch/indices/TermsLookup.java +++ b/core/src/main/java/org/elasticsearch/indices/TermsLookup.java @@ -24,7 +24,8 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.query.TermsQueryBuilder; @@ -35,7 +36,7 @@ /** * Encapsulates the parameters needed to fetch terms. 
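Several stats and lookup classes in this change move from `ToXContent` to `ToXContentFragment` (or `ToXContentObject`): a fragment only emits fields and leaves opening and closing the enclosing object to the caller. A minimal sketch of the fragment pattern, with a made-up stats field; the `toString` mirrors the `Strings.toString(this, true, true)` call this diff adds to `TranslogStats`:

----
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.xcontent.ToXContentFragment;
import org.elasticsearch.common.xcontent.XContentBuilder;

import java.io.IOException;

public class ExampleStats implements ToXContentFragment {

    private long exampleCount; // hypothetical field, for illustration only

    @Override
    public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
        // a fragment writes its own fields; the caller owns the surrounding object
        builder.startObject("example");
        builder.field("count", exampleCount);
        builder.endObject();
        return builder;
    }

    @Override
    public String toString() {
        // same pretty, human-readable rendering pattern the change adds to TranslogStats
        return Strings.toString(this, true, true);
    }
}
----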
*/ -public class TermsLookup implements Writeable, ToXContent { +public class TermsLookup implements Writeable, ToXContentFragment { private final String index; private final String type; private final String id; diff --git a/core/src/main/java/org/elasticsearch/indices/analysis/AnalysisModule.java b/core/src/main/java/org/elasticsearch/indices/analysis/AnalysisModule.java index ee87aa5e630bd..2d9e8e78b7768 100644 --- a/core/src/main/java/org/elasticsearch/indices/analysis/AnalysisModule.java +++ b/core/src/main/java/org/elasticsearch/indices/analysis/AnalysisModule.java @@ -32,6 +32,7 @@ import org.elasticsearch.index.analysis.ArabicAnalyzerProvider; import org.elasticsearch.index.analysis.ArmenianAnalyzerProvider; import org.elasticsearch.index.analysis.BasqueAnalyzerProvider; +import org.elasticsearch.index.analysis.BengaliAnalyzerProvider; import org.elasticsearch.index.analysis.BrazilianAnalyzerProvider; import org.elasticsearch.index.analysis.BulgarianAnalyzerProvider; import org.elasticsearch.index.analysis.CatalanAnalyzerProvider; @@ -270,6 +271,7 @@ private NamedRegistry>> setupAnalyzers(List analyzers.register("arabic", ArabicAnalyzerProvider::new); analyzers.register("armenian", ArmenianAnalyzerProvider::new); analyzers.register("basque", BasqueAnalyzerProvider::new); + analyzers.register("bengali", BengaliAnalyzerProvider::new); analyzers.register("brazilian", BrazilianAnalyzerProvider::new); analyzers.register("bulgarian", BulgarianAnalyzerProvider::new); analyzers.register("catalan", CatalanAnalyzerProvider::new); diff --git a/core/src/main/java/org/elasticsearch/indices/analysis/PreBuiltAnalyzers.java b/core/src/main/java/org/elasticsearch/indices/analysis/PreBuiltAnalyzers.java index 6c4a3cc2578db..3c286f7dd5ec5 100644 --- a/core/src/main/java/org/elasticsearch/indices/analysis/PreBuiltAnalyzers.java +++ b/core/src/main/java/org/elasticsearch/indices/analysis/PreBuiltAnalyzers.java @@ -22,6 +22,7 @@ import org.apache.lucene.analysis.CharArraySet; import org.apache.lucene.analysis.ar.ArabicAnalyzer; import org.apache.lucene.analysis.bg.BulgarianAnalyzer; +import org.apache.lucene.analysis.bn.BengaliAnalyzer; import org.apache.lucene.analysis.br.BrazilianAnalyzer; import org.apache.lucene.analysis.ca.CatalanAnalyzer; import org.apache.lucene.analysis.cjk.CJKAnalyzer; @@ -183,6 +184,15 @@ protected Analyzer create(Version version) { } }, + BENGALI { + @Override + protected Analyzer create(Version version) { + Analyzer a = new BengaliAnalyzer(); + a.setVersion(version.luceneVersion); + return a; + } + }, + BRAZILIAN { @Override protected Analyzer create(Version version) { diff --git a/core/src/main/java/org/elasticsearch/indices/cluster/IndicesClusterStateService.java b/core/src/main/java/org/elasticsearch/indices/cluster/IndicesClusterStateService.java index 3c1ee5b841293..5aa8b5f3ee1b3 100644 --- a/core/src/main/java/org/elasticsearch/indices/cluster/IndicesClusterStateService.java +++ b/core/src/main/java/org/elasticsearch/indices/cluster/IndicesClusterStateService.java @@ -118,35 +118,44 @@ public class IndicesClusterStateService extends AbstractLifecycleComponent imple private final boolean sendRefreshMapping; private final List buildInIndexListener; private final PrimaryReplicaSyncer primaryReplicaSyncer; + private final Consumer globalCheckpointSyncer; @Inject - public IndicesClusterStateService(Settings settings, IndicesService indicesService, ClusterService clusterService, - ThreadPool threadPool, PeerRecoveryTargetService recoveryTargetService, + public 
IndicesClusterStateService(Settings settings, + IndicesService indicesService, + ClusterService clusterService, + ThreadPool threadPool, + PeerRecoveryTargetService recoveryTargetService, ShardStateAction shardStateAction, NodeMappingRefreshAction nodeMappingRefreshAction, RepositoriesService repositoriesService, - SearchService searchService, SyncedFlushService syncedFlushService, - PeerRecoverySourceService peerRecoverySourceService, SnapshotShardsService snapshotShardsService, - GlobalCheckpointSyncAction globalCheckpointSyncAction, - PrimaryReplicaSyncer primaryReplicaSyncer) { + SearchService searchService, + SyncedFlushService syncedFlushService, + PeerRecoverySourceService peerRecoverySourceService, + SnapshotShardsService snapshotShardsService, + PrimaryReplicaSyncer primaryReplicaSyncer, + GlobalCheckpointSyncAction globalCheckpointSyncAction) { this(settings, (AllocatedIndices>) indicesService, clusterService, threadPool, recoveryTargetService, shardStateAction, nodeMappingRefreshAction, repositoriesService, searchService, syncedFlushService, peerRecoverySourceService, - snapshotShardsService, globalCheckpointSyncAction, primaryReplicaSyncer); + snapshotShardsService, primaryReplicaSyncer, globalCheckpointSyncAction::updateGlobalCheckpointForShard); } // for tests IndicesClusterStateService(Settings settings, AllocatedIndices> indicesService, ClusterService clusterService, - ThreadPool threadPool, PeerRecoveryTargetService recoveryTargetService, + ThreadPool threadPool, + PeerRecoveryTargetService recoveryTargetService, ShardStateAction shardStateAction, NodeMappingRefreshAction nodeMappingRefreshAction, RepositoriesService repositoriesService, - SearchService searchService, SyncedFlushService syncedFlushService, - PeerRecoverySourceService peerRecoverySourceService, SnapshotShardsService snapshotShardsService, - GlobalCheckpointSyncAction globalCheckpointSyncAction, - PrimaryReplicaSyncer primaryReplicaSyncer) { + SearchService searchService, + SyncedFlushService syncedFlushService, + PeerRecoverySourceService peerRecoverySourceService, + SnapshotShardsService snapshotShardsService, + PrimaryReplicaSyncer primaryReplicaSyncer, + Consumer globalCheckpointSyncer) { super(settings); this.buildInIndexListener = Arrays.asList( @@ -154,8 +163,7 @@ public IndicesClusterStateService(Settings settings, IndicesService indicesServi recoveryTargetService, searchService, syncedFlushService, - snapshotShardsService, - globalCheckpointSyncAction); + snapshotShardsService); this.indicesService = indicesService; this.clusterService = clusterService; this.threadPool = threadPool; @@ -164,6 +172,7 @@ public IndicesClusterStateService(Settings settings, IndicesService indicesServi this.nodeMappingRefreshAction = nodeMappingRefreshAction; this.repositoriesService = repositoriesService; this.primaryReplicaSyncer = primaryReplicaSyncer; + this.globalCheckpointSyncer = globalCheckpointSyncer; this.sendRefreshMapping = this.settings.getAsBoolean("indices.cluster.send_refresh_mapping", true); } @@ -541,7 +550,7 @@ private void createShard(DiscoveryNodes nodes, RoutingTable routingTable, ShardR logger.debug("{} creating shard", shardRouting.shardId()); RecoveryState recoveryState = new RecoveryState(shardRouting, nodes.getLocalNode(), sourceNode); indicesService.createShard(shardRouting, recoveryState, recoveryTargetService, new RecoveryListener(shardRouting), - repositoriesService, failedShardHandler); + repositoriesService, failedShardHandler, globalCheckpointSyncer); } catch (Exception e) { 
failAndRemoveShard(shardRouting, true, "failed to create shard", e, state); } @@ -830,7 +839,8 @@ U createIndex(IndexMetaData indexMetaData, */ T createShard(ShardRouting shardRouting, RecoveryState recoveryState, PeerRecoveryTargetService recoveryTargetService, PeerRecoveryTargetService.RecoveryListener recoveryListener, RepositoriesService repositoriesService, - Consumer onShardFailure) throws IOException; + Consumer onShardFailure, + Consumer globalCheckpointSyncer) throws IOException; /** * Returns shard for the specified id if it exists otherwise returns null. diff --git a/core/src/main/java/org/elasticsearch/indices/recovery/PeerRecoveryTargetService.java b/core/src/main/java/org/elasticsearch/indices/recovery/PeerRecoveryTargetService.java index dcb6f5759d120..65a8a0d0f6e0b 100644 --- a/core/src/main/java/org/elasticsearch/indices/recovery/PeerRecoveryTargetService.java +++ b/core/src/main/java/org/elasticsearch/indices/recovery/PeerRecoveryTargetService.java @@ -42,7 +42,7 @@ import org.elasticsearch.index.engine.RecoveryEngineException; import org.elasticsearch.index.mapper.MapperException; import org.elasticsearch.index.seqno.SeqNoStats; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.index.shard.IllegalIndexShardStateException; import org.elasticsearch.index.shard.IndexEventListener; import org.elasticsearch.index.shard.IndexShard; @@ -319,10 +319,10 @@ private StartRecoveryRequest getStartRecoveryRequest(final RecoveryTarget recove if (metadataSnapshot.size() > 0) { startingSeqNo = getStartingSeqNo(recoveryTarget); } else { - startingSeqNo = SequenceNumbersService.UNASSIGNED_SEQ_NO; + startingSeqNo = SequenceNumbers.UNASSIGNED_SEQ_NO; } - if (startingSeqNo == SequenceNumbersService.UNASSIGNED_SEQ_NO) { + if (startingSeqNo == SequenceNumbers.UNASSIGNED_SEQ_NO) { logger.trace("{} preparing for file-based recovery from [{}]", recoveryTarget.shardId(), recoveryTarget.sourceNode()); } else { logger.trace( @@ -348,7 +348,7 @@ private StartRecoveryRequest getStartRecoveryRequest(final RecoveryTarget recove * Get the starting sequence number for a sequence-number-based request. * * @param recoveryTarget the target of the recovery - * @return the starting sequence number or {@link SequenceNumbersService#UNASSIGNED_SEQ_NO} if obtaining the starting sequence number + * @return the starting sequence number or {@link SequenceNumbers#UNASSIGNED_SEQ_NO} if obtaining the starting sequence number * failed */ public static long getStartingSeqNo(final RecoveryTarget recoveryTarget) { @@ -364,7 +364,7 @@ public static long getStartingSeqNo(final RecoveryTarget recoveryTarget) { */ return seqNoStats.getLocalCheckpoint() + 1; } else { - return SequenceNumbersService.UNASSIGNED_SEQ_NO; + return SequenceNumbers.UNASSIGNED_SEQ_NO; } } catch (final IOException e) { /* @@ -372,7 +372,7 @@ public static long getStartingSeqNo(final RecoveryTarget recoveryTarget) { * translog on the recovery target is opened, the recovery enters a retry loop seeing now that the index files are on disk and * proceeds to attempt a sequence-number-based recovery. 
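`getStartingSeqNo` boils down to a simple rule: if usable sequence-number stats can be read from the target shard, replay can resume at the local checkpoint plus one; if they cannot be read, the unassigned marker is returned and the source falls back to a file-based recovery. A simplified, standalone sketch of that rule, with the known constant value of `SequenceNumbers.UNASSIGNED_SEQ_NO` inlined for illustration:

----
class StartingSeqNoSketch {
    // value of SequenceNumbers.UNASSIGNED_SEQ_NO, inlined here for the sketch
    static final long UNASSIGNED_SEQ_NO = -2L;

    /**
     * @param localCheckpoint the target's local checkpoint, or null if it could not be read
     * @return the first sequence number the source needs to replay, or UNASSIGNED_SEQ_NO
     *         to request a file-based recovery instead
     */
    static long startingSeqNo(Long localCheckpoint) {
        if (localCheckpoint == null) {
            return UNASSIGNED_SEQ_NO;
        }
        // everything up to and including the local checkpoint is already on the target,
        // so replay starts at the next sequence number
        return localCheckpoint + 1;
    }
}
----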
*/ - return SequenceNumbersService.UNASSIGNED_SEQ_NO; + return SequenceNumbers.UNASSIGNED_SEQ_NO; } } diff --git a/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryFinalizeRecoveryRequest.java b/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryFinalizeRecoveryRequest.java index 2bdf45fede28c..6337467e78330 100644 --- a/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryFinalizeRecoveryRequest.java +++ b/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryFinalizeRecoveryRequest.java @@ -22,7 +22,7 @@ import org.elasticsearch.Version; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.transport.TransportRequest; @@ -63,7 +63,7 @@ public void readFrom(StreamInput in) throws IOException { if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha1)) { globalCheckpoint = in.readZLong(); } else { - globalCheckpoint = SequenceNumbersService.UNASSIGNED_SEQ_NO; + globalCheckpoint = SequenceNumbers.UNASSIGNED_SEQ_NO; } } diff --git a/core/src/main/java/org/elasticsearch/indices/recovery/RecoverySourceHandler.java b/core/src/main/java/org/elasticsearch/indices/recovery/RecoverySourceHandler.java index a5c1d9cf371e2..5f692d8e8f5fa 100644 --- a/core/src/main/java/org/elasticsearch/indices/recovery/RecoverySourceHandler.java +++ b/core/src/main/java/org/elasticsearch/indices/recovery/RecoverySourceHandler.java @@ -31,6 +31,7 @@ import org.apache.lucene.util.ArrayUtil; import org.apache.lucene.util.IOUtils; import org.elasticsearch.ExceptionsHelper; +import org.elasticsearch.Version; import org.elasticsearch.action.support.PlainActionFuture; import org.elasticsearch.cluster.routing.IndexShardRoutingTable; import org.elasticsearch.cluster.routing.ShardRouting; @@ -47,7 +48,7 @@ import org.elasticsearch.index.engine.Engine; import org.elasticsearch.index.engine.RecoveryEngineException; import org.elasticsearch.index.seqno.LocalCheckpointTracker; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.shard.IndexShardClosedException; import org.elasticsearch.index.shard.IndexShardRelocatedException; @@ -147,8 +148,8 @@ public RecoveryResponse recoverToTarget() throws IOException { final Translog translog = shard.getTranslog(); final long startingSeqNo; - boolean isSequenceNumberBasedRecoveryPossible = request.startingSeqNo() != SequenceNumbersService.UNASSIGNED_SEQ_NO && - isTranslogReadyForSequenceNumberBasedRecovery(); + final boolean isSequenceNumberBasedRecoveryPossible = request.startingSeqNo() != SequenceNumbers.UNASSIGNED_SEQ_NO && + isTargetSameHistory() && isTranslogReadyForSequenceNumberBasedRecovery(); if (isSequenceNumberBasedRecoveryPossible) { logger.trace("performing sequence numbers based recovery. 
starting at [{}]", request.startingSeqNo()); @@ -162,7 +163,7 @@ public RecoveryResponse recoverToTarget() throws IOException { } // we set this to unassigned to create a translog roughly according to the retention policy // on the target - startingSeqNo = SequenceNumbersService.UNASSIGNED_SEQ_NO; + startingSeqNo = SequenceNumbers.UNASSIGNED_SEQ_NO; try { phase1(phase1Snapshot.getIndexCommit(), translog::totalOperations); @@ -198,6 +199,13 @@ public RecoveryResponse recoverToTarget() throws IOException { return response; } + private boolean isTargetSameHistory() { + final String targetHistoryUUID = request.metadataSnapshot().getHistoryUUID(); + assert targetHistoryUUID != null || shard.indexSettings().getIndexVersionCreated().before(Version.V_6_0_0_rc1) : + "incoming target history N/A but index was created after or on 6.0.0-rc1"; + return targetHistoryUUID != null && targetHistoryUUID.equals(shard.getHistoryUUID()); + } + private void runUnderPrimaryPermit(CancellableThreads.Interruptable runnable) { cancellableThreads.execute(() -> { final PlainActionFuture onAcquired = new PlainActionFuture<>(); @@ -235,20 +243,17 @@ boolean isTranslogReadyForSequenceNumberBasedRecovery() throws IOException { logger.trace("all operations up to [{}] completed, checking translog content", endingSeqNo); - final LocalCheckpointTracker tracker = new LocalCheckpointTracker(shard.indexSettings(), startingSeqNo, startingSeqNo - 1); + final LocalCheckpointTracker tracker = new LocalCheckpointTracker(startingSeqNo, startingSeqNo - 1); try (Translog.Snapshot snapshot = shard.getTranslog().newSnapshotFromMinSeqNo(startingSeqNo)) { Translog.Operation operation; while ((operation = snapshot.next()) != null) { - if (operation.seqNo() != SequenceNumbersService.UNASSIGNED_SEQ_NO) { + if (operation.seqNo() != SequenceNumbers.UNASSIGNED_SEQ_NO) { tracker.markSeqNoAsCompleted(operation.seqNo()); } } } return tracker.getCheckpoint() >= endingSeqNo; } else { - // norelease this can currently happen if a snapshot restore rolls the primary back to a previous commit point; in this - // situation the local checkpoint on the replica can be far in advance of the maximum sequence number on the primary violating - // all assumptions regarding local and global checkpoints return false; } } @@ -427,7 +432,7 @@ void prepareTargetForTranslog(final int totalTranslogOps) throws IOException { * point-in-time view of the translog). It then sends each translog operation to the target node so it can be replayed into the new * shard. * - * @param startingSeqNo the sequence number to start recovery from, or {@link SequenceNumbersService#UNASSIGNED_SEQ_NO} if all + * @param startingSeqNo the sequence number to start recovery from, or {@link SequenceNumbers#UNASSIGNED_SEQ_NO} if all * ops should be sent * @param snapshot a snapshot of the translog * @@ -470,7 +475,9 @@ public void finalizeRecovery(final long targetLocalCheckpoint) { * the permit then the state of the shard will be relocated and this recovery will fail. 
*/ runUnderPrimaryPermit(() -> shard.markAllocationIdAsInSync(request.targetAllocationId(), targetLocalCheckpoint)); - cancellableThreads.execute(() -> recoveryTarget.finalizeRecovery(shard.getGlobalCheckpoint())); + final long globalCheckpoint = shard.getGlobalCheckpoint(); + cancellableThreads.execute(() -> recoveryTarget.finalizeRecovery(globalCheckpoint)); + runUnderPrimaryPermit(() -> shard.updateGlobalCheckpointForShard(request.targetAllocationId(), globalCheckpoint)); if (request.isPrimaryRelocation()) { logger.trace("performing relocation hand-off"); @@ -513,7 +520,7 @@ protected SendSnapshotResult sendSnapshot(final long startingSeqNo, final Transl long size = 0; int skippedOps = 0; int totalSentOps = 0; - final AtomicLong targetLocalCheckpoint = new AtomicLong(SequenceNumbersService.UNASSIGNED_SEQ_NO); + final AtomicLong targetLocalCheckpoint = new AtomicLong(SequenceNumbers.UNASSIGNED_SEQ_NO); final List operations = new ArrayList<>(); final int expectedTotalOps = snapshot.totalOperations(); @@ -536,7 +543,7 @@ protected SendSnapshotResult sendSnapshot(final long startingSeqNo, final Transl * any ops before the starting sequence number. */ final long seqNo = operation.seqNo(); - if (startingSeqNo >= 0 && (seqNo == SequenceNumbersService.UNASSIGNED_SEQ_NO || seqNo < startingSeqNo)) { + if (startingSeqNo >= 0 && (seqNo == SequenceNumbers.UNASSIGNED_SEQ_NO || seqNo < startingSeqNo)) { skippedOps++; continue; } diff --git a/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryState.java b/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryState.java index f2139f8e251b5..3faf5e3ec8a73 100644 --- a/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryState.java +++ b/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryState.java @@ -27,7 +27,6 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -45,7 +44,7 @@ /** * Keeps track of state related to shard recovery. 
*/ -public class RecoveryState implements ToXContent, Streamable { +public class RecoveryState implements ToXContentFragment, Streamable { public enum Stage { INIT((byte) 0), @@ -413,7 +412,7 @@ public synchronized void writeTo(StreamOutput out) throws IOException { } - public static class VerifyIndex extends Timer implements ToXContent, Streamable { + public static class VerifyIndex extends Timer implements ToXContentFragment, Streamable { private volatile long checkIndexTime; @@ -450,7 +449,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws } } - public static class Translog extends Timer implements ToXContent, Streamable { + public static class Translog extends Timer implements ToXContentFragment, Streamable { public static final int UNKNOWN = -1; private int recovered; diff --git a/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryTranslogOperationsResponse.java b/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryTranslogOperationsResponse.java index 731eb28ed92c7..530b8b67415d3 100644 --- a/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryTranslogOperationsResponse.java +++ b/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryTranslogOperationsResponse.java @@ -22,7 +22,7 @@ import org.elasticsearch.Version; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.transport.FutureTransportResponseHandler; import org.elasticsearch.transport.TransportResponse; import org.elasticsearch.transport.TransportResponseHandler; @@ -56,7 +56,7 @@ public void readFrom(final StreamInput in) throws IOException { localCheckpoint = in.readZLong(); } else { - localCheckpoint = SequenceNumbersService.UNASSIGNED_SEQ_NO; + localCheckpoint = SequenceNumbers.UNASSIGNED_SEQ_NO; } } diff --git a/core/src/main/java/org/elasticsearch/indices/recovery/StartRecoveryRequest.java b/core/src/main/java/org/elasticsearch/indices/recovery/StartRecoveryRequest.java index 825fa8306bada..cfdaddabdf15b 100644 --- a/core/src/main/java/org/elasticsearch/indices/recovery/StartRecoveryRequest.java +++ b/core/src/main/java/org/elasticsearch/indices/recovery/StartRecoveryRequest.java @@ -23,7 +23,7 @@ import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.index.store.Store; import org.elasticsearch.transport.TransportRequest; @@ -75,6 +75,8 @@ public StartRecoveryRequest(final ShardId shardId, this.metadataSnapshot = metadataSnapshot; this.primaryRelocation = primaryRelocation; this.startingSeqNo = startingSeqNo; + assert startingSeqNo == SequenceNumbers.UNASSIGNED_SEQ_NO || metadataSnapshot.getHistoryUUID() != null : + "starting seq no is set but not history uuid"; } public long recoveryId() { @@ -122,7 +124,7 @@ public void readFrom(StreamInput in) throws IOException { if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha1)) { startingSeqNo = in.readLong(); } else { - startingSeqNo = SequenceNumbersService.UNASSIGNED_SEQ_NO; + startingSeqNo = SequenceNumbers.UNASSIGNED_SEQ_NO; } } diff --git a/core/src/main/java/org/elasticsearch/ingest/ConfigurationUtils.java 
b/core/src/main/java/org/elasticsearch/ingest/ConfigurationUtils.java index e2f67bf79c98b..78dc0ec6bfef1 100644 --- a/core/src/main/java/org/elasticsearch/ingest/ConfigurationUtils.java +++ b/core/src/main/java/org/elasticsearch/ingest/ConfigurationUtils.java @@ -30,6 +30,7 @@ import java.util.ArrayList; import java.util.Arrays; import java.util.Collections; +import java.util.HashMap; import java.util.List; import java.util.Map; @@ -294,13 +295,13 @@ public static ElasticsearchException newConfigurationException(String processorT return exception; } - public static List readProcessorConfigs(List>> processorConfigs, + public static List readProcessorConfigs(List> processorConfigs, Map processorFactories) throws Exception { Exception exception = null; List processors = new ArrayList<>(); if (processorConfigs != null) { - for (Map> processorConfigWithKey : processorConfigs) { - for (Map.Entry> entry : processorConfigWithKey.entrySet()) { + for (Map processorConfigWithKey : processorConfigs) { + for (Map.Entry entry : processorConfigWithKey.entrySet()) { try { processors.add(readProcessor(processorFactories, entry.getKey(), entry.getValue())); } catch (Exception e) { @@ -353,13 +354,28 @@ private static void addHeadersToException(ElasticsearchException exception, Stri } } + @SuppressWarnings("unchecked") + public static Processor readProcessor(Map processorFactories, + String type, Object config) throws Exception { + if (config instanceof Map) { + return readProcessor(processorFactories, type, (Map) config); + } else if (config instanceof String && "script".equals(type)) { + Map normalizedScript = new HashMap<>(1); + normalizedScript.put(ScriptType.INLINE.getName(), config); + return readProcessor(processorFactories, type, normalizedScript); + } else { + throw newConfigurationException(type, null, null, + "property isn't a map, but of type [" + config.getClass().getName() + "]"); + } + } + public static Processor readProcessor(Map processorFactories, String type, Map config) throws Exception { String tag = ConfigurationUtils.readOptionalStringProperty(null, null, config, TAG_KEY); Processor.Factory factory = processorFactories.get(type); if (factory != null) { boolean ignoreFailure = ConfigurationUtils.readBooleanProperty(null, null, config, "ignore_failure", false); - List>> onFailureProcessorConfigs = + List> onFailureProcessorConfigs = ConfigurationUtils.readOptionalList(null, null, config, Pipeline.ON_FAILURE_KEY); List onFailureProcessors = readProcessorConfigs(onFailureProcessorConfigs, processorFactories); diff --git a/core/src/main/java/org/elasticsearch/ingest/IngestStats.java b/core/src/main/java/org/elasticsearch/ingest/IngestStats.java index 91e1014e96fbe..fd0d7e826c070 100644 --- a/core/src/main/java/org/elasticsearch/ingest/IngestStats.java +++ b/core/src/main/java/org/elasticsearch/ingest/IngestStats.java @@ -22,7 +22,6 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -94,7 +93,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws return builder; } - public static class Stats implements Writeable, ToXContent { + public static class Stats implements Writeable, ToXContentFragment { private final long ingestCount; private final long ingestTimeInMillis; diff 
--git a/core/src/main/java/org/elasticsearch/ingest/Pipeline.java b/core/src/main/java/org/elasticsearch/ingest/Pipeline.java index 4a705c43bac8d..473b555c05d22 100644 --- a/core/src/main/java/org/elasticsearch/ingest/Pipeline.java +++ b/core/src/main/java/org/elasticsearch/ingest/Pipeline.java @@ -118,9 +118,9 @@ public static final class Factory { public Pipeline create(String id, Map config, Map processorFactories) throws Exception { String description = ConfigurationUtils.readOptionalStringProperty(null, null, config, DESCRIPTION_KEY); Integer version = ConfigurationUtils.readIntProperty(null, null, config, VERSION_KEY, null); - List>> processorConfigs = ConfigurationUtils.readList(null, null, config, PROCESSORS_KEY); + List> processorConfigs = ConfigurationUtils.readList(null, null, config, PROCESSORS_KEY); List processors = ConfigurationUtils.readProcessorConfigs(processorConfigs, processorFactories); - List>> onFailureProcessorConfigs = + List> onFailureProcessorConfigs = ConfigurationUtils.readOptionalList(null, null, config, ON_FAILURE_KEY); List onFailureProcessors = ConfigurationUtils.readProcessorConfigs(onFailureProcessorConfigs, processorFactories); if (config.isEmpty() == false) { diff --git a/core/src/main/java/org/elasticsearch/ingest/PipelineConfiguration.java b/core/src/main/java/org/elasticsearch/ingest/PipelineConfiguration.java index 1d7ba958f1498..62e0fae04b30d 100644 --- a/core/src/main/java/org/elasticsearch/ingest/PipelineConfiguration.java +++ b/core/src/main/java/org/elasticsearch/ingest/PipelineConfiguration.java @@ -28,7 +28,8 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.ContextParser; import org.elasticsearch.common.xcontent.ObjectParser; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentHelper; @@ -41,7 +42,7 @@ /** * Encapsulates a pipeline's id and configuration as a blob */ -public final class PipelineConfiguration extends AbstractDiffable implements ToXContent { +public final class PipelineConfiguration extends AbstractDiffable implements ToXContentObject { private static final ObjectParser PARSER = new ObjectParser<>("pipeline_config", Builder::new); static { diff --git a/core/src/main/java/org/elasticsearch/ingest/PipelineExecutionService.java b/core/src/main/java/org/elasticsearch/ingest/PipelineExecutionService.java index c1b46e495678f..cec622f4a2587 100644 --- a/core/src/main/java/org/elasticsearch/ingest/PipelineExecutionService.java +++ b/core/src/main/java/org/elasticsearch/ingest/PipelineExecutionService.java @@ -21,6 +21,7 @@ import org.elasticsearch.action.DocWriteRequest; import org.elasticsearch.action.index.IndexRequest; +import org.elasticsearch.action.update.UpdateRequest; import org.elasticsearch.cluster.ClusterChangedEvent; import org.elasticsearch.cluster.ClusterStateApplier; import org.elasticsearch.common.Strings; @@ -81,17 +82,21 @@ public void onFailure(Exception e) { @Override protected void doRun() throws Exception { for (DocWriteRequest actionRequest : actionRequests) { - if ((actionRequest instanceof IndexRequest)) { - IndexRequest indexRequest = (IndexRequest) actionRequest; - if (Strings.hasText(indexRequest.getPipeline())) { - try { - innerExecute(indexRequest, 
getPipeline(indexRequest.getPipeline())); - //this shouldn't be needed here but we do it for consistency with index api - // which requires it to prevent double execution - indexRequest.setPipeline(null); - } catch (Exception e) { - itemFailureHandler.accept(indexRequest, e); - } + IndexRequest indexRequest = null; + if (actionRequest instanceof IndexRequest) { + indexRequest = (IndexRequest) actionRequest; + } else if (actionRequest instanceof UpdateRequest) { + UpdateRequest updateRequest = (UpdateRequest) actionRequest; + indexRequest = updateRequest.docAsUpsert() ? updateRequest.doc() : updateRequest.upsertRequest(); + } + if (indexRequest != null && Strings.hasText(indexRequest.getPipeline())) { + try { + innerExecute(indexRequest, getPipeline(indexRequest.getPipeline())); + //this shouldn't be needed here but we do it for consistency with index api + // which requires it to prevent double execution + indexRequest.setPipeline(null); + } catch (Exception e) { + itemFailureHandler.accept(indexRequest, e); } } } diff --git a/core/src/main/java/org/elasticsearch/ingest/ProcessorInfo.java b/core/src/main/java/org/elasticsearch/ingest/ProcessorInfo.java index a7a2d122ed894..31a799e7d06d1 100644 --- a/core/src/main/java/org/elasticsearch/ingest/ProcessorInfo.java +++ b/core/src/main/java/org/elasticsearch/ingest/ProcessorInfo.java @@ -22,12 +22,13 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; -public class ProcessorInfo implements Writeable, ToXContent, Comparable { +public class ProcessorInfo implements Writeable, ToXContentObject, Comparable { private final String type; diff --git a/core/src/main/java/org/elasticsearch/monitor/fs/FsInfo.java b/core/src/main/java/org/elasticsearch/monitor/fs/FsInfo.java index 4f25c4f4dabbc..c8bdaad3f1f6c 100644 --- a/core/src/main/java/org/elasticsearch/monitor/fs/FsInfo.java +++ b/core/src/main/java/org/elasticsearch/monitor/fs/FsInfo.java @@ -26,8 +26,8 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.unit.ByteSizeValue; -import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.ToXContentFragment; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; @@ -38,7 +38,7 @@ public class FsInfo implements Iterable, Writeable, ToXContentFragment { - public static class Path implements Writeable, ToXContent { + public static class Path implements Writeable, ToXContentObject { String path; @Nullable diff --git a/core/src/main/java/org/elasticsearch/monitor/jvm/JvmGcMonitorService.java b/core/src/main/java/org/elasticsearch/monitor/jvm/JvmGcMonitorService.java index f260d7430e2e5..2c7235ca89954 100644 --- a/core/src/main/java/org/elasticsearch/monitor/jvm/JvmGcMonitorService.java +++ b/core/src/main/java/org/elasticsearch/monitor/jvm/JvmGcMonitorService.java @@ -28,7 +28,7 @@ import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.monitor.jvm.JvmStats.GarbageCollector; import org.elasticsearch.threadpool.ThreadPool; -import 
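In the PipelineExecutionService change above, bulk items that are update requests now also run ingest pipelines: the index request to execute is taken from the update's doc when doc_as_upsert is set, otherwise from its upsert document. A rough sketch of that resolution using invented placeholder classes rather than the Elasticsearch request types:

---------------------------------------------------------------------------
// Invented stand-in types; only the selection logic mirrors the patch above.
public class IngestTargetSketch {

    static class IndexReq {
        final String pipeline;
        IndexReq(String pipeline) { this.pipeline = pipeline; }
    }

    static class UpdateReq {
        final boolean docAsUpsert;
        final IndexReq doc;
        final IndexReq upsert;
        UpdateReq(boolean docAsUpsert, IndexReq doc, IndexReq upsert) {
            this.docAsUpsert = docAsUpsert;
            this.doc = doc;
            this.upsert = upsert;
        }
    }

    /** Picks the index request whose pipeline (if any) should be executed. */
    static IndexReq resolve(Object actionRequest) {
        if (actionRequest instanceof IndexReq) {
            return (IndexReq) actionRequest;
        } else if (actionRequest instanceof UpdateReq) {
            UpdateReq update = (UpdateReq) actionRequest;
            return update.docAsUpsert ? update.doc : update.upsert;
        }
        return null; // deletes and other request types carry no pipeline
    }

    public static void main(String[] args) {
        IndexReq doc = new IndexReq("my-pipeline");
        IndexReq upsert = new IndexReq(null);
        System.out.println(resolve(new UpdateReq(true, doc, upsert)).pipeline);  // my-pipeline
        System.out.println(resolve(new UpdateReq(false, doc, upsert)).pipeline); // null
    }
}
---------------------------------------------------------------------------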
org.elasticsearch.threadpool.ThreadPool.Cancellable; +import org.elasticsearch.threadpool.Scheduler.Cancellable; import org.elasticsearch.threadpool.ThreadPool.Names; import java.util.HashMap; diff --git a/core/src/main/java/org/elasticsearch/monitor/os/OsInfo.java b/core/src/main/java/org/elasticsearch/monitor/os/OsInfo.java index 7a0175c31d1da..7046b35839098 100644 --- a/core/src/main/java/org/elasticsearch/monitor/os/OsInfo.java +++ b/core/src/main/java/org/elasticsearch/monitor/os/OsInfo.java @@ -22,12 +22,13 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; -public class OsInfo implements Writeable, ToXContent { +public class OsInfo implements Writeable, ToXContentFragment { private final long refreshInterval; private final int availableProcessors; diff --git a/core/src/main/java/org/elasticsearch/monitor/os/OsProbe.java b/core/src/main/java/org/elasticsearch/monitor/os/OsProbe.java index 43ef51658b726..f9423f1b13cd1 100644 --- a/core/src/main/java/org/elasticsearch/monitor/os/OsProbe.java +++ b/core/src/main/java/org/elasticsearch/monitor/os/OsProbe.java @@ -36,8 +36,6 @@ import java.util.HashMap; import java.util.List; import java.util.Map; -import java.util.regex.Matcher; -import java.util.regex.Pattern; public class OsProbe { @@ -382,12 +380,70 @@ List readSysFsCgroupCpuAcctCpuStat(final String controlGroup) throws IOE } /** - * Checks if cgroup stats are available by checking for the existence of {@code /proc/self/cgroup}, {@code /sys/fs/cgroup/cpu}, and - * {@code /sys/fs/cgroup/cpuacct}. + * The maximum amount of user memory (including file cache). + * If there is no limit then some Linux versions return the maximum value that can be stored in an + * unsigned 64 bit number, and this will overflow a long, hence the result type is String. + * (The alternative would have been BigInteger but then it would not be possible to index + * the OS stats document into Elasticsearch without losing information, as BigInteger is + * not a supported Elasticsearch type.) + * + * @param controlGroup the control group for the Elasticsearch process for the {@code memory} subsystem + * @return the maximum amount of user memory (including file cache) + * @throws IOException if an I/O exception occurs reading {@code memory.limit_in_bytes} for the control group + */ + private String getCgroupMemoryLimitInBytes(final String controlGroup) throws IOException { + return readSysFsCgroupMemoryLimitInBytes(controlGroup); + } + + /** + * Returns the line from {@code memory.limit_in_bytes} for the control group to which the Elasticsearch process belongs for the + * {@code memory} subsystem. This line represents the maximum amount of user memory (including file cache). 
+ * + * @param controlGroup the control group to which the Elasticsearch process belongs for the {@code memory} subsystem + * @return the line from {@code memory.limit_in_bytes} + * @throws IOException if an I/O exception occurs reading {@code memory.limit_in_bytes} for the control group + */ + @SuppressForbidden(reason = "access /sys/fs/cgroup/memory") + String readSysFsCgroupMemoryLimitInBytes(final String controlGroup) throws IOException { + return readSingleLine(PathUtils.get("/sys/fs/cgroup/memory", controlGroup, "memory.limit_in_bytes")); + } + + /** + * The total current memory usage by processes in the cgroup (in bytes). + * If there is no limit then some Linux versions return the maximum value that can be stored in an + * unsigned 64 bit number, and this will overflow a long, hence the result type is String. + * (The alternative would have been BigInteger but then it would not be possible to index + * the OS stats document into Elasticsearch without losing information, as BigInteger is + * not a supported Elasticsearch type.) + * + * @param controlGroup the control group for the Elasticsearch process for the {@code memory} subsystem + * @return the total current memory usage by processes in the cgroup (in bytes) + * @throws IOException if an I/O exception occurs reading {@code memory.limit_in_bytes} for the control group + */ + private String getCgroupMemoryUsageInBytes(final String controlGroup) throws IOException { + return readSysFsCgroupMemoryUsageInBytes(controlGroup); + } + + /** + * Returns the line from {@code memory.usage_in_bytes} for the control group to which the Elasticsearch process belongs for the + * {@code memory} subsystem. This line represents the total current memory usage by processes in the cgroup (in bytes). + * + * @param controlGroup the control group to which the Elasticsearch process belongs for the {@code memory} subsystem + * @return the line from {@code memory.usage_in_bytes} + * @throws IOException if an I/O exception occurs reading {@code memory.usage_in_bytes} for the control group + */ + @SuppressForbidden(reason = "access /sys/fs/cgroup/memory") + String readSysFsCgroupMemoryUsageInBytes(final String controlGroup) throws IOException { + return readSingleLine(PathUtils.get("/sys/fs/cgroup/memory", controlGroup, "memory.usage_in_bytes")); + } + + /** + * Checks if cgroup stats are available by checking for the existence of {@code /proc/self/cgroup}, {@code /sys/fs/cgroup/cpu}, + * {@code /sys/fs/cgroup/cpuacct} and {@code /sys/fs/cgroup/memory}. 
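As the Javadoc above notes, an unconstrained cgroup can report the largest unsigned 64-bit value, which does not fit in a signed Java long; that is why the memory figures are carried as strings. A short standalone illustration of the overflow:

---------------------------------------------------------------------------
import java.math.BigInteger;

// Why memory.limit_in_bytes / memory.usage_in_bytes are kept as Strings:
// an unlimited cgroup may report 2^64 - 1, which overflows a signed long.
public class CgroupLimitSketch {
    public static void main(String[] args) {
        String unlimited = "18446744073709551615"; // 2^64 - 1
        System.out.println(new BigInteger(unlimited)); // parses, but BigInteger is not an Elasticsearch field type
        try {
            Long.parseLong(unlimited);
        } catch (NumberFormatException e) {
            System.out.println("does not fit in a signed 64-bit long: " + e.getMessage());
        }
    }
}
---------------------------------------------------------------------------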
* * @return {@code true} if the stats are available, otherwise {@code false} */ - @SuppressForbidden(reason = "access /proc/self/cgroup, /sys/fs/cgroup/cpu, and /sys/fs/cgroup/cpuacct") + @SuppressForbidden(reason = "access /proc/self/cgroup, /sys/fs/cgroup/cpu, /sys/fs/cgroup/cpuacct and /sys/fs/cgroup/memory") boolean areCgroupStatsAvailable() { if (!Files.exists(PathUtils.get("/proc/self/cgroup"))) { return false; @@ -398,6 +454,9 @@ boolean areCgroupStatsAvailable() { if (!Files.exists(PathUtils.get("/sys/fs/cgroup/cpuacct"))) { return false; } + if (!Files.exists(PathUtils.get("/sys/fs/cgroup/memory"))) { + return false; + } return true; } @@ -424,13 +483,21 @@ private OsStats.Cgroup getCgroup() { final long cgroupCpuAcctCpuCfsQuotaMicros = getCgroupCpuAcctCpuCfsQuotaMicros(cpuControlGroup); final OsStats.Cgroup.CpuStat cpuStat = getCgroupCpuAcctCpuStat(cpuControlGroup); + final String memoryControlGroup = controllerMap.get("memory"); + assert memoryControlGroup != null; + final String cgroupMemoryLimitInBytes = getCgroupMemoryLimitInBytes(memoryControlGroup); + final String cgroupMemoryUsageInBytes = getCgroupMemoryUsageInBytes(memoryControlGroup); + return new OsStats.Cgroup( cpuAcctControlGroup, cgroupCpuAcctUsageNanos, cpuControlGroup, cgroupCpuAcctCpuCfsPeriodMicros, cgroupCpuAcctCpuCfsQuotaMicros, - cpuStat); + cpuStat, + memoryControlGroup, + cgroupMemoryLimitInBytes, + cgroupMemoryUsageInBytes); } } catch (final IOException e) { logger.debug("error reading control group stats", e); diff --git a/core/src/main/java/org/elasticsearch/monitor/os/OsStats.java b/core/src/main/java/org/elasticsearch/monitor/os/OsStats.java index 5c62008cbe856..60502679c2131 100644 --- a/core/src/main/java/org/elasticsearch/monitor/os/OsStats.java +++ b/core/src/main/java/org/elasticsearch/monitor/os/OsStats.java @@ -24,16 +24,14 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.unit.ByteSizeValue; -import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.ToXContentFragment; -import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; import java.util.Arrays; import java.util.Objects; -public class OsStats implements Writeable, ToXContent { +public class OsStats implements Writeable, ToXContentFragment { private final long timestamp; private final Cpu cpu; @@ -187,7 +185,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws } } - public static class Swap implements Writeable, ToXContent { + public static class Swap implements Writeable, ToXContentFragment { private final long total; private final long free; @@ -288,7 +286,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws /** * Encapsulates basic cgroup statistics. 
*/ - public static class Cgroup implements Writeable, ToXContentObject { + public static class Cgroup implements Writeable, ToXContentFragment { private final String cpuAcctControlGroup; private final long cpuAcctUsageNanos; @@ -296,6 +294,10 @@ public static class Cgroup implements Writeable, ToXContentObject { private final long cpuCfsPeriodMicros; private final long cpuCfsQuotaMicros; private final CpuStat cpuStat; + // These will be null for nodes running versions prior to 6.1.0 + private final String memoryControlGroup; + private final String memoryLimitInBytes; + private final String memoryUsageInBytes; /** * The control group for the {@code cpuacct} subsystem. @@ -357,19 +359,57 @@ public CpuStat getCpuStat() { return cpuStat; } + /** + * The control group for the {@code memory} subsystem. + * + * @return the control group + */ + public String getMemoryControlGroup() { + return memoryControlGroup; + } + + /** + * The maximum amount of user memory (including file cache). + * This is stored as a String because the value can be too big to fit in a + * long. (The alternative would have been BigInteger but then + * it would not be possible to index the OS stats document into Elasticsearch without + * losing information, as BigInteger is not a supported Elasticsearch type.) + * + * @return the maximum amount of user memory (including file cache). + */ + public String getMemoryLimitInBytes() { + return memoryLimitInBytes; + } + + /** + * The total current memory usage by processes in the cgroup (in bytes). + * This is stored as a String for consistency with memoryLimitInBytes. + * + * @return the total current memory usage by processes in the cgroup (in bytes). + */ + public String getMemoryUsageInBytes() { + return memoryUsageInBytes; + } + public Cgroup( final String cpuAcctControlGroup, final long cpuAcctUsageNanos, final String cpuControlGroup, final long cpuCfsPeriodMicros, final long cpuCfsQuotaMicros, - final CpuStat cpuStat) { + final CpuStat cpuStat, + final String memoryControlGroup, + final String memoryLimitInBytes, + final String memoryUsageInBytes) { this.cpuAcctControlGroup = Objects.requireNonNull(cpuAcctControlGroup); this.cpuAcctUsageNanos = cpuAcctUsageNanos; this.cpuControlGroup = Objects.requireNonNull(cpuControlGroup); this.cpuCfsPeriodMicros = cpuCfsPeriodMicros; this.cpuCfsQuotaMicros = cpuCfsQuotaMicros; this.cpuStat = Objects.requireNonNull(cpuStat); + this.memoryControlGroup = memoryControlGroup; + this.memoryLimitInBytes = memoryLimitInBytes; + this.memoryUsageInBytes = memoryUsageInBytes; } Cgroup(final StreamInput in) throws IOException { @@ -379,6 +419,15 @@ public Cgroup( cpuCfsPeriodMicros = in.readLong(); cpuCfsQuotaMicros = in.readLong(); cpuStat = new CpuStat(in); + if (in.getVersion().onOrAfter(Version.V_6_1_0)) { + memoryControlGroup = in.readOptionalString(); + memoryLimitInBytes = in.readOptionalString(); + memoryUsageInBytes = in.readOptionalString(); + } else { + memoryControlGroup = null; + memoryLimitInBytes = null; + memoryUsageInBytes = null; + } } @Override @@ -389,6 +438,11 @@ public void writeTo(final StreamOutput out) throws IOException { out.writeLong(cpuCfsPeriodMicros); out.writeLong(cpuCfsQuotaMicros); cpuStat.writeTo(out); + if (out.getVersion().onOrAfter(Version.V_6_1_0)) { + out.writeOptionalString(memoryControlGroup); + out.writeOptionalString(memoryLimitInBytes); + out.writeOptionalString(memoryUsageInBytes); + } } @Override @@ -409,6 +463,19 @@ public XContentBuilder toXContent(final XContentBuilder builder, final Params pa 
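The stream handling above gates the new memory fields on the remote node's version, so pre-6.1.0 nodes keep reading and writing the old layout. A plain-java sketch of that pattern, using DataOutputStream/DataInputStream stand-ins rather than Elasticsearch's StreamOutput/StreamInput (whose optional-string encoding and version constants are internal and only approximated here):

---------------------------------------------------------------------------
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Version-gated wire fields: new optional values are only written to (and read
// from) streams whose remote version is at least the version that introduced
// them, so older nodes see exactly the old layout.
public class VersionGatedSketch {

    static final int V_6_1_0 = 60100; // invented encoding for illustration

    static void writeCgroupMemory(DataOutputStream out, int remoteVersion, String limit) throws IOException {
        if (remoteVersion >= V_6_1_0) {
            out.writeBoolean(limit != null); // "optional string" = presence flag + value
            if (limit != null) {
                out.writeUTF(limit);
            }
        }
    }

    static String readCgroupMemory(DataInputStream in, int remoteVersion) throws IOException {
        if (remoteVersion >= V_6_1_0) {
            return in.readBoolean() ? in.readUTF() : null;
        }
        return null; // older senders never wrote the field
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        writeCgroupMemory(new DataOutputStream(bytes), V_6_1_0, "18446744073709551615");
        String roundTripped = readCgroupMemory(
                new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())), V_6_1_0);
        System.out.println(roundTripped);
    }
}
---------------------------------------------------------------------------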
cpuStat.toXContent(builder, params); } builder.endObject(); + if (memoryControlGroup != null) { + builder.startObject("memory"); + { + builder.field("control_group", memoryControlGroup); + if (memoryLimitInBytes != null) { + builder.field("limit_in_bytes", memoryLimitInBytes); + } + if (memoryUsageInBytes != null) { + builder.field("usage_in_bytes", memoryUsageInBytes); + } + } + builder.endObject(); + } } builder.endObject(); return builder; diff --git a/core/src/main/java/org/elasticsearch/monitor/process/ProcessInfo.java b/core/src/main/java/org/elasticsearch/monitor/process/ProcessInfo.java index a0e3e7a70f23b..5d74f576181d5 100644 --- a/core/src/main/java/org/elasticsearch/monitor/process/ProcessInfo.java +++ b/core/src/main/java/org/elasticsearch/monitor/process/ProcessInfo.java @@ -22,12 +22,13 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; -public class ProcessInfo implements Writeable, ToXContent { +public class ProcessInfo implements Writeable, ToXContentFragment { private final long refreshInterval; private final long id; diff --git a/core/src/main/java/org/elasticsearch/monitor/process/ProcessStats.java b/core/src/main/java/org/elasticsearch/monitor/process/ProcessStats.java index 2a9b8952779d4..f85e013bc426b 100644 --- a/core/src/main/java/org/elasticsearch/monitor/process/ProcessStats.java +++ b/core/src/main/java/org/elasticsearch/monitor/process/ProcessStats.java @@ -24,12 +24,13 @@ import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; -public class ProcessStats implements Writeable, ToXContent { +public class ProcessStats implements Writeable, ToXContentFragment { private final long timestamp; private final long openFileDescriptors; diff --git a/core/src/main/java/org/elasticsearch/node/AdaptiveSelectionStats.java b/core/src/main/java/org/elasticsearch/node/AdaptiveSelectionStats.java new file mode 100644 index 0000000000000..3deb161cc8e97 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/node/AdaptiveSelectionStats.java @@ -0,0 +1,108 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.node; + +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.util.set.Sets; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; +import org.elasticsearch.common.xcontent.XContentBuilder; + +import java.io.IOException; +import java.util.HashMap; +import java.util.Locale; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.TimeUnit; +import java.util.stream.Collectors; + +/** + * Class representing statistics about adaptive replica selection. This includes + * EWMA of queue size, service time, and response time, as well as outgoing + * searches to each node and the "rank" based on the ARS formula. + */ +public class AdaptiveSelectionStats implements Writeable, ToXContentFragment { + + private final Map clientOutgoingConnections; + private final Map nodeComputedStats; + + public AdaptiveSelectionStats(Map clientConnections, + Map nodeComputedStats) { + this.clientOutgoingConnections = clientConnections; + this.nodeComputedStats = nodeComputedStats; + } + + public AdaptiveSelectionStats(StreamInput in) throws IOException { + this.clientOutgoingConnections = in.readMap(StreamInput::readString, StreamInput::readLong); + this.nodeComputedStats = in.readMap(StreamInput::readString, ResponseCollectorService.ComputedNodeStats::new); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeMap(this.clientOutgoingConnections, StreamOutput::writeString, StreamOutput::writeLong); + out.writeMap(this.nodeComputedStats, StreamOutput::writeString, (stream, stats) -> stats.writeTo(stream)); + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject("adaptive_selection"); + Set allNodeIds = Sets.union(clientOutgoingConnections.keySet(), nodeComputedStats.keySet()); + for (String nodeId : allNodeIds) { + builder.startObject(nodeId); + ResponseCollectorService.ComputedNodeStats stats = nodeComputedStats.get(nodeId); + if (stats != null) { + long outgoingSearches = clientOutgoingConnections.getOrDefault(nodeId, 0L); + builder.field("outgoing_searches", outgoingSearches); + builder.field("avg_queue_size", stats.queueSize); + builder.timeValueField("avg_service_time_ns", "avg_service_time", (long) stats.serviceTime, TimeUnit.NANOSECONDS); + builder.timeValueField("avg_response_time_ns", "avg_response_time", (long) stats.responseTime, TimeUnit.NANOSECONDS); + builder.field("rank", String.format(Locale.ROOT, "%.1f", stats.rank(outgoingSearches))); + } + builder.endObject(); + } + builder.endObject(); + return builder; + } + + /** + * Returns a map of node id to the outgoing search requests to that node + */ + public Map getOutgoingConnections() { + return clientOutgoingConnections; + } + + /** + * Returns a map of node id to the computed stats + */ + public Map getComputedStats() { + return nodeComputedStats; + } + + /** + * Returns a map of node id to the ranking of the nodes based on the adaptive replica formula + */ + public Map getRanks() { + return nodeComputedStats.entrySet().stream() + .collect(Collectors.toMap(Map.Entry::getKey, + e -> e.getValue().rank(clientOutgoingConnections.getOrDefault(e.getKey(), 0L)))); + } +} diff --git a/core/src/main/java/org/elasticsearch/node/InternalSettingsPreparer.java 
b/core/src/main/java/org/elasticsearch/node/InternalSettingsPreparer.java index 93c5a18222c8e..a2c7663ec9e15 100644 --- a/core/src/main/java/org/elasticsearch/node/InternalSettingsPreparer.java +++ b/core/src/main/java/org/elasticsearch/node/InternalSettingsPreparer.java @@ -134,7 +134,7 @@ static void initializeSettings(final Settings.Builder output, final Settings inp private static void finalizeSettings(Settings.Builder output, Terminal terminal) { // allow to force set properties based on configuration of the settings provided List forcedSettings = new ArrayList<>(); - for (String setting : output.internalMap().keySet()) { + for (String setting : output.keys()) { if (setting.startsWith("force.")) { forcedSettings.add(setting); } @@ -156,13 +156,13 @@ private static void finalizeSettings(Settings.Builder output, Terminal terminal) private static void replacePromptPlaceholders(Settings.Builder settings, Terminal terminal) { List secretToPrompt = new ArrayList<>(); List textToPrompt = new ArrayList<>(); - for (Map.Entry entry : settings.internalMap().entrySet()) { - switch (entry.getValue()) { + for (String key : settings.keys()) { + switch (settings.get(key)) { case SECRET_PROMPT_VALUE: - secretToPrompt.add(entry.getKey()); + secretToPrompt.add(key); break; case TEXT_PROMPT_VALUE: - textToPrompt.add(entry.getKey()); + textToPrompt.add(key); break; } } diff --git a/core/src/main/java/org/elasticsearch/node/Node.java b/core/src/main/java/org/elasticsearch/node/Node.java index 10d8ddcf2105d..0ddc03de8c049 100644 --- a/core/src/main/java/org/elasticsearch/node/Node.java +++ b/core/src/main/java/org/elasticsearch/node/Node.java @@ -35,6 +35,7 @@ import org.elasticsearch.action.support.TransportAction; import org.elasticsearch.action.update.UpdateHelper; import org.elasticsearch.bootstrap.BootstrapCheck; +import org.elasticsearch.bootstrap.BootstrapContext; import org.elasticsearch.client.Client; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.cluster.ClusterInfo; @@ -86,6 +87,7 @@ import org.elasticsearch.env.Environment; import org.elasticsearch.env.NodeEnvironment; import org.elasticsearch.gateway.GatewayAllocator; +import org.elasticsearch.gateway.GatewayMetaState; import org.elasticsearch.gateway.GatewayModule; import org.elasticsearch.gateway.GatewayService; import org.elasticsearch.gateway.MetaStateService; @@ -98,6 +100,7 @@ import org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService; import org.elasticsearch.indices.breaker.NoneCircuitBreakerService; import org.elasticsearch.indices.cluster.IndicesClusterStateService; +import org.elasticsearch.indices.mapper.MapperRegistry; import org.elasticsearch.indices.recovery.PeerRecoverySourceService; import org.elasticsearch.indices.recovery.PeerRecoveryTargetService; import org.elasticsearch.indices.recovery.RecoverySettings; @@ -139,6 +142,7 @@ import java.io.BufferedWriter; import java.io.Closeable; import java.io.IOException; +import java.io.UncheckedIOException; import java.net.Inet6Address; import java.net.InetAddress; import java.net.InetSocketAddress; @@ -186,16 +190,15 @@ public class Node implements Closeable { */ public static final Setting NODE_LOCAL_STORAGE_SETTING = Setting.boolSetting("node.local_storage", true, Property.NodeScope); public static final Setting NODE_NAME_SETTING = Setting.simpleString("node.name", Property.NodeScope); - public static final Setting NODE_ATTRIBUTES = Setting.groupSetting("node.attr.", (settings) -> { - Map settingsMap = settings.getAsMap(); - for (Map.Entry 
entry : settingsMap.entrySet()) { - String value = entry.getValue(); - if (Character.isWhitespace(value.charAt(0)) || Character.isWhitespace(value.charAt(value.length() - 1))) { - throw new IllegalArgumentException("node.attr." + entry.getKey() + " cannot have leading or trailing whitespace " + - "[" + value + "]"); + public static final Setting.AffixSetting NODE_ATTRIBUTES = Setting.prefixKeySetting("node.attr.", (key) -> + new Setting<>(key, "", (value) -> { + if (value.length() > 0 + && (Character.isWhitespace(value.charAt(0)) || Character.isWhitespace(value.charAt(value.length() - 1)))) { + throw new IllegalArgumentException(key + " cannot have leading or trailing whitespace " + + "[" + value + "]"); } - } - }, Property.NodeScope); + return value; + }, Property.NodeScope)); public static final Setting BREAKER_TYPE_KEY = new Setting<>("indices.breaker.type", "hierarchy", (s) -> { switch (s) { case "hierarchy": @@ -358,10 +361,6 @@ protected Node(final Environment environment, Collection CircuitBreakerService circuitBreakerService = createCircuitBreakerService(settingsModule.getSettings(), settingsModule.getClusterSettings()); resourcesToClose.add(circuitBreakerService); - ActionModule actionModule = new ActionModule(false, settings, clusterModule.getIndexNameExpressionResolver(), - settingsModule.getIndexScopedSettings(), settingsModule.getClusterSettings(), settingsModule.getSettingsFilter(), - threadPool, pluginsService.filterPlugins(ActionPlugin.class), client, circuitBreakerService, usageService); - modules.add(actionModule); modules.add(new GatewayModule()); @@ -397,6 +396,12 @@ protected Node(final Environment environment, Collection scriptModule.getScriptService(), xContentRegistry, environment, nodeEnvironment, namedWriteableRegistry).stream()) .collect(Collectors.toList()); + + ActionModule actionModule = new ActionModule(false, settings, clusterModule.getIndexNameExpressionResolver(), + settingsModule.getIndexScopedSettings(), settingsModule.getClusterSettings(), settingsModule.getSettingsFilter(), + threadPool, pluginsService.filterPlugins(ActionPlugin.class), client, circuitBreakerService, usageService); + modules.add(actionModule); + final RestController restController = actionModule.getRestController(); final NetworkModule networkModule = new NetworkModule(settings, false, pluginsService.filterPlugins(NetworkPlugin.class), threadPool, bigArrays, circuitBreakerService, namedWriteableRegistry, xContentRegistry, networkService, restController); @@ -411,6 +416,10 @@ protected Node(final Environment environment, Collection Collection> indexMetaDataUpgraders = pluginsService.filterPlugins(Plugin.class).stream() .map(Plugin::getIndexMetaDataUpgrader).collect(Collectors.toList()); final MetaDataUpgrader metaDataUpgrader = new MetaDataUpgrader(customMetaDataUpgraders, indexTemplateMetaDataUpgraders); + final MetaDataIndexUpgradeService metaDataIndexUpgradeService = new MetaDataIndexUpgradeService(settings, xContentRegistry, + indicesModule.getMapperRegistry(), settingsModule.getIndexScopedSettings(), indexMetaDataUpgraders); + final GatewayMetaState gatewayMetaState = new GatewayMetaState(settings, nodeEnvironment, metaStateService, + metaDataIndexUpgradeService, metaDataUpgrader); new TemplateUpgradeService(settings, client, clusterService, threadPool, indexTemplateMetaDataUpgraders); final Transport transport = networkModule.getTransportSupplier().get(); final TransportService transportService = newTransportService(settings, transport, threadPool, @@ -438,7 +447,8 @@ protected 
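The node.attr.* settings are now declared per key with a validator, shown above, that rejects values with leading or trailing whitespace. A standalone sketch of just that check, outside the Setting infrastructure:

---------------------------------------------------------------------------
// Standalone sketch of the per-key validation for "node.attr.*" values: any
// non-empty value with leading or trailing whitespace is rejected.
public class NodeAttrValidationSketch {

    static String validate(String key, String value) {
        if (value.length() > 0
                && (Character.isWhitespace(value.charAt(0))
                    || Character.isWhitespace(value.charAt(value.length() - 1)))) {
            throw new IllegalArgumentException(
                    key + " cannot have leading or trailing whitespace [" + value + "]");
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(validate("node.attr.rack", "rack-1")); // accepted
        try {
            validate("node.attr.rack", " rack-1");                // rejected
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
---------------------------------------------------------------------------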
Node(final Environment environment, Collection clusterModule.getAllocationService()); this.nodeService = new NodeService(settings, threadPool, monitorService, discoveryModule.getDiscovery(), transportService, indicesService, pluginsService, circuitBreakerService, scriptModule.getScriptService(), - httpServerTransport, ingestService, clusterService, settingsModule.getSettingsFilter()); + httpServerTransport, ingestService, clusterService, settingsModule.getSettingsFilter(), responseCollectorService, + searchTransportService); modules.add(b -> { b.bind(Node.class).toInstance(this); b.bind(NodeService.class).toInstance(nodeService); @@ -470,9 +480,9 @@ protected Node(final Environment environment, Collection b.bind(TransportService.class).toInstance(transportService); b.bind(NetworkService.class).toInstance(networkService); b.bind(UpdateHelper.class).toInstance(new UpdateHelper(settings, scriptModule.getScriptService())); - b.bind(MetaDataIndexUpgradeService.class).toInstance(new MetaDataIndexUpgradeService(settings, xContentRegistry, - indicesModule.getMapperRegistry(), settingsModule.getIndexScopedSettings(), indexMetaDataUpgraders)); + b.bind(MetaDataIndexUpgradeService.class).toInstance(metaDataIndexUpgradeService); b.bind(ClusterInfoService.class).toInstance(clusterInfoService); + b.bind(GatewayMetaState.class).toInstance(gatewayMetaState); b.bind(Discovery.class).toInstance(discoveryModule.getDiscovery()); { RecoverySettings recoverySettings = new RecoverySettings(settings, settingsModule.getClusterSettings()); @@ -604,7 +614,23 @@ public Node start() throws NodeValidationException { assert localNodeFactory.getNode() != null; assert transportService.getLocalNode().equals(localNodeFactory.getNode()) : "transportService has a different local node than the factory provided"; - validateNodeBeforeAcceptingRequests(settings, transportService.boundAddress(), pluginsService.filterPlugins(Plugin.class).stream() + final MetaData onDiskMetadata; + try { + // we load the global state here (the persistent part of the cluster state stored on disk) to + // pass it to the bootstrap checks to allow plugins to enforce certain preconditions based on the recovered state. + if (DiscoveryNode.isMasterNode(settings) || DiscoveryNode.isDataNode(settings)) { + onDiskMetadata = injector.getInstance(GatewayMetaState.class).loadMetaState(); + } else { + onDiskMetadata = MetaData.EMPTY_META_DATA; + } + assert onDiskMetadata != null : "metadata is null but shouldn't"; // this is never null + } catch (IOException e) { + throw new UncheckedIOException(e); + } + validateNodeBeforeAcceptingRequests(new BootstrapContext(settings, onDiskMetadata), transportService.boundAddress(), pluginsService + .filterPlugins(Plugin + .class) + .stream() .flatMap(p -> p.getBootstrapChecks().stream()).collect(Collectors.toList())); clusterService.addStateApplier(transportService.getTaskManager()); @@ -811,13 +837,13 @@ public Injector injector() { * and before the network service starts accepting incoming network * requests. 
* - * @param settings the fully-resolved settings + * @param context the bootstrap context for this node * @param boundTransportAddress the network addresses the node is * bound and publishing to */ @SuppressWarnings("unused") protected void validateNodeBeforeAcceptingRequests( - final Settings settings, + final BootstrapContext context, final BoundTransportAddress boundTransportAddress, List bootstrapChecks) throws NodeValidationException { } diff --git a/core/src/main/java/org/elasticsearch/node/NodeService.java b/core/src/main/java/org/elasticsearch/node/NodeService.java index 319df478f950f..5fd4599df941f 100644 --- a/core/src/main/java/org/elasticsearch/node/NodeService.java +++ b/core/src/main/java/org/elasticsearch/node/NodeService.java @@ -25,6 +25,7 @@ import org.elasticsearch.action.admin.cluster.node.info.NodeInfo; import org.elasticsearch.action.admin.cluster.node.stats.NodeStats; import org.elasticsearch.action.admin.indices.stats.CommonStatsFlags; +import org.elasticsearch.action.search.SearchTransportService; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.component.AbstractComponent; @@ -36,6 +37,7 @@ import org.elasticsearch.indices.breaker.CircuitBreakerService; import org.elasticsearch.ingest.IngestService; import org.elasticsearch.monitor.MonitorService; +import org.elasticsearch.node.ResponseCollectorService; import org.elasticsearch.plugins.PluginsService; import org.elasticsearch.script.ScriptService; import org.elasticsearch.threadpool.ThreadPool; @@ -54,17 +56,19 @@ public class NodeService extends AbstractComponent implements Closeable { private final CircuitBreakerService circuitBreakerService; private final IngestService ingestService; private final SettingsFilter settingsFilter; - private ScriptService scriptService; + private final ScriptService scriptService; private final HttpServerTransport httpServerTransport; - + private final ResponseCollectorService responseCollectorService; + private final SearchTransportService searchTransportService; private final Discovery discovery; NodeService(Settings settings, ThreadPool threadPool, MonitorService monitorService, Discovery discovery, - TransportService transportService, IndicesService indicesService, PluginsService pluginService, - CircuitBreakerService circuitBreakerService, ScriptService scriptService, - @Nullable HttpServerTransport httpServerTransport, IngestService ingestService, ClusterService clusterService, - SettingsFilter settingsFilter) { + TransportService transportService, IndicesService indicesService, PluginsService pluginService, + CircuitBreakerService circuitBreakerService, ScriptService scriptService, + @Nullable HttpServerTransport httpServerTransport, IngestService ingestService, ClusterService clusterService, + SettingsFilter settingsFilter, ResponseCollectorService responseCollectorService, + SearchTransportService searchTransportService) { super(settings); this.threadPool = threadPool; this.monitorService = monitorService; @@ -77,6 +81,8 @@ public class NodeService extends AbstractComponent implements Closeable { this.ingestService = ingestService; this.settingsFilter = settingsFilter; this.scriptService = scriptService; + this.responseCollectorService = responseCollectorService; + this.searchTransportService = searchTransportService; clusterService.addStateApplier(ingestService.getPipelineStore()); clusterService.addStateApplier(ingestService.getPipelineExecutionService()); } @@ -99,7 +105,7 @@ public 
NodeInfo info(boolean settings, boolean os, boolean process, boolean jvm, public NodeStats stats(CommonStatsFlags indices, boolean os, boolean process, boolean jvm, boolean threadPool, boolean fs, boolean transport, boolean http, boolean circuitBreaker, - boolean script, boolean discoveryStats, boolean ingest) { + boolean script, boolean discoveryStats, boolean ingest, boolean adaptiveSelection) { // for indices stats we want to include previous allocated shards stats as well (it will // only be applied to the sensible ones to use, like refresh/merge/flush/indexing stats) return new NodeStats(transportService.getLocalNode(), System.currentTimeMillis(), @@ -114,7 +120,8 @@ public NodeStats stats(CommonStatsFlags indices, boolean os, boolean process, bo circuitBreaker ? circuitBreakerService.stats() : null, script ? scriptService.stats() : null, discoveryStats ? discovery.stats() : null, - ingest ? ingestService.getPipelineExecutionService().stats() : null + ingest ? ingestService.getPipelineExecutionService().stats() : null, + adaptiveSelection ? responseCollectorService.getAdaptiveStats(searchTransportService.getClientConnections()) : null ); } diff --git a/core/src/main/java/org/elasticsearch/node/NodeValidationException.java b/core/src/main/java/org/elasticsearch/node/NodeValidationException.java index 01840b2556bcf..58e2c4ef951f6 100644 --- a/core/src/main/java/org/elasticsearch/node/NodeValidationException.java +++ b/core/src/main/java/org/elasticsearch/node/NodeValidationException.java @@ -27,8 +27,8 @@ /** * An exception thrown during node validation. Node validation runs immediately before a node * begins accepting network requests in - * {@link Node#validateNodeBeforeAcceptingRequests(Settings, BoundTransportAddress, List)}. This exception is a checked exception that - * is declared as thrown from this method for the purpose of bubbling up to the user. + * {@link Node#validateNodeBeforeAcceptingRequests(org.elasticsearch.bootstrap.BootstrapContext, BoundTransportAddress, List)}. + * This exception is a checked exception that is declared as thrown from this method for the purpose of bubbling up to the user. 
*/ public class NodeValidationException extends Exception { diff --git a/core/src/main/java/org/elasticsearch/node/ResponseCollectorService.java b/core/src/main/java/org/elasticsearch/node/ResponseCollectorService.java index 1afbd3b299755..9881d4404af87 100644 --- a/core/src/main/java/org/elasticsearch/node/ResponseCollectorService.java +++ b/core/src/main/java/org/elasticsearch/node/ResponseCollectorService.java @@ -26,12 +26,18 @@ import org.elasticsearch.common.ExponentiallyWeightedMovingAverage; import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.util.concurrent.ConcurrentCollections; +import java.io.IOException; import java.util.Collections; import java.util.HashMap; +import java.util.Locale; import java.util.Map; +import java.util.Optional; import java.util.concurrent.ConcurrentMap; import java.util.stream.Collectors; @@ -65,13 +71,11 @@ void removeNode(String nodeId) { } public void addNodeStatistics(String nodeId, int queueSize, long responseTimeNanos, long avgServiceTimeNanos) { - NodeStatistics nodeStats = nodeIdToStats.get(nodeId); nodeIdToStats.compute(nodeId, (id, ns) -> { if (ns == null) { ExponentiallyWeightedMovingAverage queueEWMA = new ExponentiallyWeightedMovingAverage(ALPHA, queueSize); ExponentiallyWeightedMovingAverage responseEWMA = new ExponentiallyWeightedMovingAverage(ALPHA, responseTimeNanos); - NodeStatistics newStats = new NodeStatistics(nodeId, queueEWMA, responseEWMA, avgServiceTimeNanos); - return newStats; + return new NodeStatistics(nodeId, queueEWMA, responseEWMA, avgServiceTimeNanos); } else { ns.queueSize.addValue((double) queueSize); ns.responseTime.addValue((double) responseTimeNanos); @@ -82,39 +86,122 @@ public void addNodeStatistics(String nodeId, int queueSize, long responseTimeNan } public Map getAllNodeStatistics() { + final int clientNum = nodeIdToStats.size(); // Transform the mutable object internally used for accounting into the computed version Map nodeStats = new HashMap<>(nodeIdToStats.size()); nodeIdToStats.forEach((k, v) -> { - nodeStats.put(k, new ComputedNodeStats(v)); + nodeStats.put(k, new ComputedNodeStats(clientNum, v)); }); return nodeStats; } + public AdaptiveSelectionStats getAdaptiveStats(Map clientSearchConnections) { + return new AdaptiveSelectionStats(clientSearchConnections, getAllNodeStatistics()); + } + + /** + * Optionally return a {@code NodeStatistics} for the given nodeid, if + * response information exists for the given node. Returns an empty + * {@code Optional} if the node was not found. + */ + public Optional getNodeStatistics(final String nodeId) { + final int clientNum = nodeIdToStats.size(); + return Optional.ofNullable(nodeIdToStats.get(nodeId)).map(ns -> new ComputedNodeStats(clientNum, ns)); + } + /** * Struct-like class encapsulating a point-in-time snapshot of a particular * node's statistics. This includes the EWMA of queue size, response time, * and service time. 
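The response collector above smooths queue size and response time with an exponentially weighted moving average. A minimal sketch of that smoothing, assuming the standard update rule avg = alpha * value + (1 - alpha) * avg (the real ExponentiallyWeightedMovingAverage class and its alpha constant live elsewhere in the codebase):

---------------------------------------------------------------------------
// Minimal EWMA sketch; recent observations are weighted by alpha, older history
// by (1 - alpha).
public class EwmaSketch {
    private final double alpha;
    private double average;

    EwmaSketch(double alpha, double initialAverage) {
        this.alpha = alpha;
        this.average = initialAverage;
    }

    void addValue(double value) {
        average = alpha * value + (1 - alpha) * average;
    }

    double getAverage() {
        return average;
    }

    public static void main(String[] args) {
        EwmaSketch queue = new EwmaSketch(0.3, 2); // 0.3 is an illustrative alpha, not the real constant
        for (int observed : new int[] {2, 10, 3}) {
            queue.addValue(observed);
        }
        System.out.printf("smoothed queue size: %.2f%n", queue.getAverage());
    }
}
---------------------------------------------------------------------------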
*/ - public static class ComputedNodeStats { + public static class ComputedNodeStats implements Writeable { + // We store timestamps with nanosecond precision, however, the + // formula specifies milliseconds, therefore we need to convert + // the values so the times don't unduely weight the formula + private final double FACTOR = 1000000.0; + private final int clientNum; + + private double cachedRank = 0; + public final String nodeId; - public final double queueSize; + public final int queueSize; public final double responseTime; public final double serviceTime; - ComputedNodeStats(NodeStatistics nodeStats) { - this.nodeId = nodeStats.nodeId; - this.queueSize = nodeStats.queueSize.getAverage(); - this.responseTime = nodeStats.responseTime.getAverage(); - this.serviceTime = nodeStats.serviceTime; + public ComputedNodeStats(String nodeId, int clientNum, int queueSize, double responseTime, double serviceTime) { + this.nodeId = nodeId; + this.clientNum = clientNum; + this.queueSize = queueSize; + this.responseTime = responseTime; + this.serviceTime = serviceTime; + } + + ComputedNodeStats(int clientNum, NodeStatistics nodeStats) { + this(nodeStats.nodeId, clientNum, + (int) nodeStats.queueSize.getAverage(), nodeStats.responseTime.getAverage(), nodeStats.serviceTime); + } + + ComputedNodeStats(StreamInput in) throws IOException { + this.nodeId = in.readString(); + this.clientNum = in.readInt(); + this.queueSize = in.readInt(); + this.responseTime = in.readDouble(); + this.serviceTime = in.readDouble(); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeString(this.nodeId); + out.writeInt(this.clientNum); + out.writeInt(this.queueSize); + out.writeDouble(this.responseTime); + out.writeDouble(this.serviceTime); + } + + /** + * Rank this copy of the data, according to the adaptive replica selection formula from the C3 paper + * https://www.usenix.org/system/files/conference/nsdi15/nsdi15-paper-suresh.pdf + */ + private double innerRank(long outstandingRequests) { + // the concurrency compensation is defined as the number of + // outstanding requests from the client to the node times the number + // of clients in the system + double concurrencyCompensation = outstandingRequests * clientNum; + + // Cubic queue adjustment factor. The paper chose 3 though we could + // potentially make this configurable if desired. 
+ int queueAdjustmentFactor = 3; + + // EWMA of queue size + double qBar = queueSize; + double qHatS = 1 + concurrencyCompensation + qBar; + + // EWMA of response time + double rS = responseTime / FACTOR; + // EWMA of service time + double muBarS = serviceTime / FACTOR; + + // The final formula + double rank = rS - (1.0 / muBarS) + (Math.pow(qHatS, queueAdjustmentFactor) / muBarS); + return rank; + } + + public double rank(long outstandingRequests) { + if (cachedRank == 0) { + cachedRank = innerRank(outstandingRequests); + } + return cachedRank; } @Override public String toString() { StringBuilder sb = new StringBuilder("ComputedNodeStats["); sb.append(nodeId).append("]("); - sb.append("queue: ").append(queueSize); - sb.append(", response time: ").append(responseTime); - sb.append(", service time: ").append(serviceTime); + sb.append("nodes: ").append(clientNum); + sb.append(", queue: ").append(queueSize); + sb.append(", response time: ").append(String.format(Locale.ROOT, "%.1f", responseTime)); + sb.append(", service time: ").append(String.format(Locale.ROOT, "%.1f", serviceTime)); + sb.append(", rank: ").append(String.format(Locale.ROOT, "%.1f", rank(1))); sb.append(")"); return sb.toString(); } diff --git a/core/src/main/java/org/elasticsearch/plugins/ActionPlugin.java b/core/src/main/java/org/elasticsearch/plugins/ActionPlugin.java index 346bf491d619b..377da56f6018b 100644 --- a/core/src/main/java/org/elasticsearch/plugins/ActionPlugin.java +++ b/core/src/main/java/org/elasticsearch/plugins/ActionPlugin.java @@ -33,7 +33,6 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.settings.SettingsFilter; import org.elasticsearch.common.util.concurrent.ThreadContext; -import org.elasticsearch.plugins.Plugin; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestHandler; @@ -66,7 +65,7 @@ public interface ActionPlugin { /** * Action filters added by this plugin. */ - default List> getActionFilters() { + default List getActionFilters() { return Collections.emptyList(); } /** diff --git a/core/src/main/java/org/elasticsearch/plugins/DiscoveryPlugin.java b/core/src/main/java/org/elasticsearch/plugins/DiscoveryPlugin.java index c3af5593cd7c4..912bcdc9d852a 100644 --- a/core/src/main/java/org/elasticsearch/plugins/DiscoveryPlugin.java +++ b/core/src/main/java/org/elasticsearch/plugins/DiscoveryPlugin.java @@ -19,10 +19,14 @@ package org.elasticsearch.plugins; +import java.util.Collection; import java.util.Collections; import java.util.Map; +import java.util.function.BiConsumer; import java.util.function.Supplier; +import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.routing.allocation.AllocationService; import org.elasticsearch.cluster.service.ClusterApplier; import org.elasticsearch.cluster.service.MasterService; @@ -106,4 +110,11 @@ default Map> getZenHostsProviders(Transpo NetworkService networkService) { return Collections.emptyMap(); } + + /** + * Returns a consumer that validate the initial join cluster state. The validator, unless null is called exactly once per + * join attempt but might be called multiple times during the lifetime of a node. Validators are expected to throw a + * {@link IllegalStateException} if the node and the cluster-state are incompatible. 
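The innerRank method above implements the C3 rank from the paper linked in the comment: EWMA response time minus the inverse of service time plus a cubic queue penalty, with times converted from nanoseconds to milliseconds. A self-contained sketch of that arithmetic with invented sample values:

---------------------------------------------------------------------------
// Self-contained sketch of the C3 rank formula used above; the inputs are
// made-up EWMA values in nanoseconds.
public class RankSketch {

    static double rank(int clientNum, long outstandingRequests,
                       int queueSize, double responseTimeNanos, double serviceTimeNanos) {
        final double FACTOR = 1_000_000.0;          // nanoseconds -> milliseconds
        final int queueAdjustmentFactor = 3;        // cubic penalty on queue length
        double concurrency = outstandingRequests * clientNum;
        double qHatS = 1 + concurrency + queueSize; // estimated queue as seen by this client
        double rS = responseTimeNanos / FACTOR;     // EWMA response time (ms)
        double muBarS = serviceTimeNanos / FACTOR;  // EWMA service time (ms)
        return rS - (1.0 / muBarS) + (Math.pow(qHatS, queueAdjustmentFactor) / muBarS);
    }

    public static void main(String[] args) {
        // Two hypothetical nodes seen by 3 coordinating clients with 1 outstanding search each:
        // the fast, lightly queued node ranks lower (better) than the slow, backed-up one.
        System.out.printf("fast node rank: %.1f%n", rank(3, 1, 2, 4_000_000, 1_000_000));
        System.out.printf("slow node rank: %.1f%n", rank(3, 1, 10, 20_000_000, 5_000_000));
    }
}
---------------------------------------------------------------------------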
+ */ + default BiConsumer getJoinValidator() { return null; } } diff --git a/core/src/main/java/org/elasticsearch/plugins/DummyPluginInfo.java b/core/src/main/java/org/elasticsearch/plugins/DummyPluginInfo.java index 73a3811f729f7..3e7442509b6e9 100644 --- a/core/src/main/java/org/elasticsearch/plugins/DummyPluginInfo.java +++ b/core/src/main/java/org/elasticsearch/plugins/DummyPluginInfo.java @@ -21,7 +21,7 @@ public class DummyPluginInfo extends PluginInfo { private DummyPluginInfo(String name, String description, String version, String classname) { - super(name, description, version, classname, false); + super(name, description, version, classname, false, false); } public static final DummyPluginInfo INSTANCE = diff --git a/core/src/main/java/org/elasticsearch/plugins/PluginInfo.java b/core/src/main/java/org/elasticsearch/plugins/PluginInfo.java index 666cc22b92655..c301b65738b43 100644 --- a/core/src/main/java/org/elasticsearch/plugins/PluginInfo.java +++ b/core/src/main/java/org/elasticsearch/plugins/PluginInfo.java @@ -21,10 +21,12 @@ import org.elasticsearch.Version; import org.elasticsearch.bootstrap.JarHell; +import org.elasticsearch.common.Booleans; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; @@ -37,7 +39,7 @@ /** * An in-memory representation of the plugin descriptor. */ -public class PluginInfo implements Writeable, ToXContent { +public class PluginInfo implements Writeable, ToXContentObject { public static final String ES_PLUGIN_PROPERTIES = "plugin-descriptor.properties"; public static final String ES_PLUGIN_POLICY = "plugin-security.policy"; @@ -47,6 +49,7 @@ public class PluginInfo implements Writeable, ToXContent { private final String version; private final String classname; private final boolean hasNativeController; + private final boolean requiresKeystore; /** * Construct plugin info. 
@@ -56,18 +59,16 @@ public class PluginInfo implements Writeable, ToXContent { * @param version the version of Elasticsearch the plugin is built for * @param classname the entry point to the plugin * @param hasNativeController whether or not the plugin has a native controller + * @param requiresKeystore whether or not the plugin requires the elasticsearch keystore to be created */ - public PluginInfo( - final String name, - final String description, - final String version, - final String classname, - final boolean hasNativeController) { + public PluginInfo(String name, String description, String version, String classname, + boolean hasNativeController, boolean requiresKeystore) { this.name = name; this.description = description; this.version = version; this.classname = classname; this.hasNativeController = hasNativeController; + this.requiresKeystore = requiresKeystore; } /** @@ -86,6 +87,11 @@ public PluginInfo(final StreamInput in) throws IOException { } else { hasNativeController = false; } + if (in.getVersion().onOrAfter(Version.V_6_0_0_beta2)) { + requiresKeystore = in.readBoolean(); + } else { + requiresKeystore = false; + } } @Override @@ -97,6 +103,9 @@ public void writeTo(final StreamOutput out) throws IOException { if (out.getVersion().onOrAfter(Version.V_5_4_0)) { out.writeBoolean(hasNativeController); } + if (out.getVersion().onOrAfter(Version.V_6_0_0_beta2)) { + out.writeBoolean(requiresKeystore); + } } /** reads (and validates) plugin metadata descriptor file */ @@ -172,17 +181,26 @@ public static PluginInfo readFromProperties(final Path path) throws IOException break; default: final String message = String.format( - Locale.ROOT, - "property [%s] must be [%s], [%s], or unspecified but was [%s]", - "has_native_controller", - "true", - "false", - hasNativeControllerValue); + Locale.ROOT, + "property [%s] must be [%s], [%s], or unspecified but was [%s]", + "has_native_controller", + "true", + "false", + hasNativeControllerValue); throw new IllegalArgumentException(message); } } - return new PluginInfo(name, description, version, classname, hasNativeController); + final String requiresKeystoreValue = props.getProperty("requires.keystore", "false"); + final boolean requiresKeystore; + try { + requiresKeystore = Booleans.parseBoolean(requiresKeystoreValue); + } catch (IllegalArgumentException e) { + throw new IllegalArgumentException("property [requires.keystore] must be [true] or [false]," + + " but was [" + requiresKeystoreValue + "]", e); + } + + return new PluginInfo(name, description, version, classname, hasNativeController, requiresKeystore); } /** @@ -230,6 +248,15 @@ public boolean hasNativeController() { return hasNativeController; } + /** + * Whether or not the plugin requires the elasticsearch keystore to exist. 
+ * + * @return {@code true} if the plugin requires a keystore, {@code false} otherwise + */ + public boolean requiresKeystore() { + return requiresKeystore; + } + @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(); @@ -239,6 +266,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.field("description", description); builder.field("classname", classname); builder.field("has_native_controller", hasNativeController); + builder.field("requires_keystore", requiresKeystore); } builder.endObject(); @@ -271,6 +299,7 @@ public String toString() { .append("Description: ").append(description).append("\n") .append("Version: ").append(version).append("\n") .append("Native Controller: ").append(hasNativeController).append("\n") + .append("Requires Keystore: ").append(requiresKeystore).append("\n") .append(" * Classname: ").append(classname); return information.toString(); } diff --git a/core/src/main/java/org/elasticsearch/plugins/PluginsService.java b/core/src/main/java/org/elasticsearch/plugins/PluginsService.java index 2e0ec0f242ead..1a50bcc7213ed 100644 --- a/core/src/main/java/org/elasticsearch/plugins/PluginsService.java +++ b/core/src/main/java/org/elasticsearch/plugins/PluginsService.java @@ -34,6 +34,7 @@ import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.component.LifecycleComponent; import org.elasticsearch.common.inject.Module; +import org.elasticsearch.common.io.FileSystemUtils; import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; @@ -102,7 +103,7 @@ public PluginsService(Settings settings, Path configPath, Path modulesDirectory, // first we load plugins that are on the classpath. 
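readFromProperties above reads the new requires.keystore descriptor entry with strict boolean parsing, so a malformed value fails plugin loading instead of silently becoming false. A standalone approximation of that behaviour (the real code goes through org.elasticsearch.common.Booleans.parseBoolean):

---------------------------------------------------------------------------
import java.util.Properties;

// Strict boolean parsing for the "requires.keystore" descriptor property:
// anything other than "true" or "false" is rejected.
public class DescriptorSketch {

    static boolean parseRequiresKeystore(Properties props) {
        String value = props.getProperty("requires.keystore", "false");
        if ("true".equals(value)) {
            return true;
        } else if ("false".equals(value)) {
            return false;
        }
        throw new IllegalArgumentException(
                "property [requires.keystore] must be [true] or [false], but was [" + value + "]");
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("requires.keystore", "true");
        System.out.println(parseRequiresKeystore(props)); // true
        props.setProperty("requires.keystore", "1");      // rejected, unlike Boolean.parseBoolean
        try {
            parseRequiresKeystore(props);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
---------------------------------------------------------------------------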
this is for tests and transport clients for (Class pluginClass : classpathPlugins) { Plugin plugin = loadPlugin(pluginClass, settings, configPath); - PluginInfo pluginInfo = new PluginInfo(pluginClass.getName(), "classpath plugin", "NA", pluginClass.getName(), false); + PluginInfo pluginInfo = new PluginInfo(pluginClass.getName(), "classpath plugin", "NA", pluginClass.getName(), false, false); if (logger.isTraceEnabled()) { logger.trace("plugin loaded from classpath [{}]", pluginInfo); } @@ -186,7 +187,7 @@ public Settings updatedSettings() { final Settings.Builder builder = Settings.builder(); for (Tuple plugin : plugins) { Settings settings = plugin.v2().additionalSettings(); - for (String setting : settings.getAsMap().keySet()) { + for (String setting : settings.keySet()) { String oldPlugin = foundSettings.put(setting, plugin.v1().getName()); if (oldPlugin != null) { throw new IllegalArgumentException("Cannot have additional setting [" + setting + "] " + @@ -326,6 +327,9 @@ static Set getPluginBundles(Path pluginsDirectory) throws IOException { try (DirectoryStream stream = Files.newDirectoryStream(pluginsDirectory)) { for (Path plugin : stream) { + if (FileSystemUtils.isDesktopServicesStore(plugin)) { + continue; + } logger.trace("--- adding plugin [{}]", plugin.toAbsolutePath()); final PluginInfo info; try { diff --git a/core/src/main/java/org/elasticsearch/plugins/SearchPlugin.java b/core/src/main/java/org/elasticsearch/plugins/SearchPlugin.java index 01685535a4e0e..1453a9e3b675a 100644 --- a/core/src/main/java/org/elasticsearch/plugins/SearchPlugin.java +++ b/core/src/main/java/org/elasticsearch/plugins/SearchPlugin.java @@ -48,6 +48,8 @@ import org.elasticsearch.search.aggregations.pipeline.movavg.models.MovAvgModel; import org.elasticsearch.search.fetch.FetchSubPhase; import org.elasticsearch.search.fetch.subphase.highlight.Highlighter; +import org.elasticsearch.search.rescore.RescorerBuilder; +import org.elasticsearch.search.rescore.Rescorer; import org.elasticsearch.search.suggest.Suggester; import org.elasticsearch.search.suggest.SuggestionBuilder; @@ -126,6 +128,12 @@ default List getAggregations() { default List getPipelineAggregations() { return emptyList(); } + /** + * The next {@link Rescorer}s added by this plugin. + */ + default List> getRescorers() { + return emptyList(); + } /** * The new search response listeners in the form of {@link BiConsumer}s added by this plugin. * The listeners are invoked on the coordinating node, at the very end of the search request. @@ -360,6 +368,17 @@ public SearchExtSpec(String name, Writeable.Reader reader, CheckedF } } + class RescorerSpec> extends SearchExtensionSpec> { + public RescorerSpec(ParseField name, Writeable.Reader reader, + CheckedFunction parser) { + super(name, reader, parser); + } + + public RescorerSpec(String name, Writeable.Reader reader, CheckedFunction parser) { + super(name, reader, parser); + } + } + /** * Specification of search time behavior extension like a custom {@link MovAvgModel} or {@link ScoreFunction}. 
* diff --git a/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java b/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java index dccf12c8ed3b9..939c33d00a8d4 100644 --- a/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java +++ b/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java @@ -241,6 +241,10 @@ protected BlobStoreRepository(RepositoryMetaData metadata, Settings globalSettin BlobStoreIndexShardSnapshot::fromXContent, namedXContentRegistry, isCompress()); indexShardSnapshotsFormat = new ChecksumBlobStoreFormat<>(SNAPSHOT_INDEX_CODEC, SNAPSHOT_INDEX_NAME_FORMAT, BlobStoreIndexShardSnapshots::fromXContent, namedXContentRegistry, isCompress()); + ByteSizeValue chunkSize = chunkSize(); + if (chunkSize != null && chunkSize.getBytes() <= 0) { + throw new IllegalArgumentException("the chunk size cannot be negative: [" + chunkSize + "]"); + } } @Override diff --git a/core/src/main/java/org/elasticsearch/repositories/fs/FsRepository.java b/core/src/main/java/org/elasticsearch/repositories/fs/FsRepository.java index b490a2e784dc1..4d4ab60feef0f 100644 --- a/core/src/main/java/org/elasticsearch/repositories/fs/FsRepository.java +++ b/core/src/main/java/org/elasticsearch/repositories/fs/FsRepository.java @@ -54,10 +54,10 @@ public class FsRepository extends BlobStoreRepository { new Setting<>("location", "", Function.identity(), Property.NodeScope); public static final Setting REPOSITORIES_LOCATION_SETTING = new Setting<>("repositories.fs.location", LOCATION_SETTING, Function.identity(), Property.NodeScope); - public static final Setting CHUNK_SIZE_SETTING = - Setting.byteSizeSetting("chunk_size", new ByteSizeValue(-1), Property.NodeScope); - public static final Setting REPOSITORIES_CHUNK_SIZE_SETTING = - Setting.byteSizeSetting("repositories.fs.chunk_size", new ByteSizeValue(-1), Property.NodeScope); + public static final Setting CHUNK_SIZE_SETTING = Setting.byteSizeSetting("chunk_size", + new ByteSizeValue(Long.MAX_VALUE), new ByteSizeValue(5), new ByteSizeValue(Long.MAX_VALUE), Property.NodeScope); + public static final Setting REPOSITORIES_CHUNK_SIZE_SETTING = Setting.byteSizeSetting("repositories.fs.chunk_size", + new ByteSizeValue(Long.MAX_VALUE), new ByteSizeValue(5), new ByteSizeValue(Long.MAX_VALUE), Property.NodeScope); public static final Setting COMPRESS_SETTING = Setting.boolSetting("compress", false, Property.NodeScope); public static final Setting REPOSITORIES_COMPRESS_SETTING = Setting.boolSetting("repositories.fs.compress", false, Property.NodeScope); @@ -95,10 +95,8 @@ public FsRepository(RepositoryMetaData metadata, Environment environment, blobStore = new FsBlobStore(settings, locationFile); if (CHUNK_SIZE_SETTING.exists(metadata.settings())) { this.chunkSize = CHUNK_SIZE_SETTING.get(metadata.settings()); - } else if (REPOSITORIES_CHUNK_SIZE_SETTING.exists(settings)) { - this.chunkSize = REPOSITORIES_CHUNK_SIZE_SETTING.get(settings); } else { - this.chunkSize = null; + this.chunkSize = REPOSITORIES_CHUNK_SIZE_SETTING.get(settings); } this.compress = COMPRESS_SETTING.exists(metadata.settings()) ? 
COMPRESS_SETTING.get(metadata.settings()) : REPOSITORIES_COMPRESS_SETTING.get(settings); this.basePath = BlobPath.cleanPath(); diff --git a/core/src/main/java/org/elasticsearch/rest/action/AcknowledgedRestListener.java b/core/src/main/java/org/elasticsearch/rest/action/AcknowledgedRestListener.java index e12329f93a395..9f08c43fa0f3f 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/AcknowledgedRestListener.java +++ b/core/src/main/java/org/elasticsearch/rest/action/AcknowledgedRestListener.java @@ -36,6 +36,7 @@ public AcknowledgedRestListener(RestChannel channel) { @Override public RestResponse buildResponse(T response, XContentBuilder builder) throws Exception { + // TODO - Once AcknowledgedResponse implements ToXContent, this method should be updated to call response.toXContent. builder.startObject() .field(Fields.ACKNOWLEDGED, response.isAcknowledged()); addCustomFields(builder, response); diff --git a/core/src/main/java/org/elasticsearch/rest/action/RestActions.java b/core/src/main/java/org/elasticsearch/rest/action/RestActions.java index 61e3ded6456b6..759cd4a773dd9 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/RestActions.java +++ b/core/src/main/java/org/elasticsearch/rest/action/RestActions.java @@ -243,6 +243,15 @@ public RestResponse buildResponse(NodesResponse response, XContentBuilder builde private static QueryBuilder parseTopLevelQueryBuilder(XContentParser parser) { try { QueryBuilder queryBuilder = null; + XContentParser.Token first = parser.nextToken(); + if (first == null) { + return null; + } else if (first != XContentParser.Token.START_OBJECT) { + throw new ParsingException( + parser.getTokenLocation(), "Expected [" + XContentParser.Token.START_OBJECT + + "] but found [" + first + "]", parser.getTokenLocation() + ); + } for (XContentParser.Token token = parser.nextToken(); token != XContentParser.Token.END_OBJECT; token = parser.nextToken()) { if (token == XContentParser.Token.FIELD_NAME) { String fieldName = parser.currentName(); diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestPutStoredScriptAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestPutStoredScriptAction.java index 2ad2db2964d56..27083503195e0 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestPutStoredScriptAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestPutStoredScriptAction.java @@ -41,6 +41,7 @@ public RestPutStoredScriptAction(Settings settings, RestController controller) { controller.registerHandler(POST, "/_scripts/{id}", this); controller.registerHandler(PUT, "/_scripts/{id}", this); + controller.registerHandler(POST, "/_scripts/{id}/{context}", this); controller.registerHandler(PUT, "/_scripts/{id}/{context}", this); } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestOpenIndexAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestOpenIndexAction.java index 0e6ca47dd1ab5..73817c1565d4a 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestOpenIndexAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestOpenIndexAction.java @@ -21,10 +21,12 @@ import org.elasticsearch.action.admin.indices.open.OpenIndexRequest; import org.elasticsearch.action.admin.indices.open.OpenIndexResponse; +import org.elasticsearch.action.support.ActiveShardCount; import org.elasticsearch.action.support.IndicesOptions; import 
org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Strings; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.BaseRestHandler; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; @@ -50,6 +52,15 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC openIndexRequest.timeout(request.paramAsTime("timeout", openIndexRequest.timeout())); openIndexRequest.masterNodeTimeout(request.paramAsTime("master_timeout", openIndexRequest.masterNodeTimeout())); openIndexRequest.indicesOptions(IndicesOptions.fromRequest(request, openIndexRequest.indicesOptions())); - return channel -> client.admin().indices().open(openIndexRequest, new AcknowledgedRestListener(channel)); + String waitForActiveShards = request.param("wait_for_active_shards"); + if (waitForActiveShards != null) { + openIndexRequest.waitForActiveShards(ActiveShardCount.parseString(waitForActiveShards)); + } + return channel -> client.admin().indices().open(openIndexRequest, new AcknowledgedRestListener(channel) { + @Override + protected void addCustomFields(XContentBuilder builder, OpenIndexResponse response) throws IOException { + builder.field("shards_acknowledged", response.isShardsAcknowledged()); + } + }); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestShrinkIndexAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestShrinkIndexAction.java index 10b46be6760bb..a0071d70758af 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestShrinkIndexAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestShrinkIndexAction.java @@ -19,8 +19,9 @@ package org.elasticsearch.rest.action.admin.indices; -import org.elasticsearch.action.admin.indices.shrink.ShrinkRequest; -import org.elasticsearch.action.admin.indices.shrink.ShrinkResponse; +import org.elasticsearch.action.admin.indices.shrink.ResizeRequest; +import org.elasticsearch.action.admin.indices.shrink.ResizeResponse; +import org.elasticsearch.action.admin.indices.shrink.ResizeType; import org.elasticsearch.action.support.ActiveShardCount; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.settings.Settings; @@ -52,14 +53,15 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC if (request.param("index") == null) { throw new IllegalArgumentException("no source index"); } - ShrinkRequest shrinkIndexRequest = new ShrinkRequest(request.param("target"), request.param("index")); - request.applyContentParser(parser -> ShrinkRequest.PARSER.parse(parser, shrinkIndexRequest, null)); + ResizeRequest shrinkIndexRequest = new ResizeRequest(request.param("target"), request.param("index")); + shrinkIndexRequest.setResizeType(ResizeType.SHRINK); + request.applyContentParser(parser -> ResizeRequest.PARSER.parse(parser, shrinkIndexRequest, null)); shrinkIndexRequest.timeout(request.paramAsTime("timeout", shrinkIndexRequest.timeout())); shrinkIndexRequest.masterNodeTimeout(request.paramAsTime("master_timeout", shrinkIndexRequest.masterNodeTimeout())); shrinkIndexRequest.setWaitForActiveShards(ActiveShardCount.parseString(request.param("wait_for_active_shards"))); - return channel -> client.admin().indices().shrinkIndex(shrinkIndexRequest, new AcknowledgedRestListener(channel) { + return channel -> client.admin().indices().resizeIndex(shrinkIndexRequest, new 
AcknowledgedRestListener(channel) { @Override - public void addCustomFields(XContentBuilder builder, ShrinkResponse response) throws IOException { + public void addCustomFields(XContentBuilder builder, ResizeResponse response) throws IOException { response.addCustomFields(builder); } }); diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestSplitIndexAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestSplitIndexAction.java new file mode 100644 index 0000000000000..dcc811bd0177b --- /dev/null +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/indices/RestSplitIndexAction.java @@ -0,0 +1,69 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.rest.action.admin.indices; + +import org.elasticsearch.action.admin.indices.shrink.ResizeRequest; +import org.elasticsearch.action.admin.indices.shrink.ResizeResponse; +import org.elasticsearch.action.admin.indices.shrink.ResizeType; +import org.elasticsearch.action.support.ActiveShardCount; +import org.elasticsearch.client.node.NodeClient; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.rest.BaseRestHandler; +import org.elasticsearch.rest.RestController; +import org.elasticsearch.rest.RestRequest; +import org.elasticsearch.rest.action.AcknowledgedRestListener; + +import java.io.IOException; + +public class RestSplitIndexAction extends BaseRestHandler { + public RestSplitIndexAction(Settings settings, RestController controller) { + super(settings); + controller.registerHandler(RestRequest.Method.PUT, "/{index}/_split/{target}", this); + controller.registerHandler(RestRequest.Method.POST, "/{index}/_split/{target}", this); + } + + @Override + public String getName() { + return "split_index_action"; + } + + @Override + public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { + if (request.param("target") == null) { + throw new IllegalArgumentException("no target index"); + } + if (request.param("index") == null) { + throw new IllegalArgumentException("no source index"); + } + ResizeRequest shrinkIndexRequest = new ResizeRequest(request.param("target"), request.param("index")); + shrinkIndexRequest.setResizeType(ResizeType.SPLIT); + request.applyContentParser(parser -> ResizeRequest.PARSER.parse(parser, shrinkIndexRequest, null)); + shrinkIndexRequest.timeout(request.paramAsTime("timeout", shrinkIndexRequest.timeout())); + shrinkIndexRequest.masterNodeTimeout(request.paramAsTime("master_timeout", shrinkIndexRequest.masterNodeTimeout())); + shrinkIndexRequest.setWaitForActiveShards(ActiveShardCount.parseString(request.param("wait_for_active_shards"))); + return channel -> 
client.admin().indices().resizeIndex(shrinkIndexRequest, new AcknowledgedRestListener(channel) { + @Override + public void addCustomFields(XContentBuilder builder, ResizeResponse response) throws IOException { + response.addCustomFields(builder); + } + }); + } +} diff --git a/core/src/main/java/org/elasticsearch/rest/action/cat/RestIndicesAction.java b/core/src/main/java/org/elasticsearch/rest/action/cat/RestIndicesAction.java index 6bcb073d1106e..52da10a378576 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/cat/RestIndicesAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/cat/RestIndicesAction.java @@ -23,6 +23,7 @@ import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse; import org.elasticsearch.action.admin.cluster.state.ClusterStateRequest; import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse; +import org.elasticsearch.action.admin.indices.stats.CommonStats; import org.elasticsearch.action.admin.indices.stats.IndexStats; import org.elasticsearch.action.admin.indices.stats.IndicesStatsRequest; import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse; @@ -363,6 +364,9 @@ Table buildTable(RestRequest request, Index[] indices, ClusterHealthResponse res } } + final CommonStats primaryStats = indexStats == null ? new CommonStats() : indexStats.getPrimaries(); + final CommonStats totalStats = indexStats == null ? new CommonStats() : indexStats.getTotal(); + table.startRow(); table.addCell(state == IndexMetaData.State.OPEN ? (indexHealth == null ? "red*" : indexHealth.getStatus().toString().toLowerCase(Locale.ROOT)) : null); table.addCell(state.toString().toLowerCase(Locale.ROOT)); @@ -370,182 +374,183 @@ Table buildTable(RestRequest request, Index[] indices, ClusterHealthResponse res table.addCell(index.getUUID()); table.addCell(indexHealth == null ? null : indexHealth.getNumberOfShards()); table.addCell(indexHealth == null ? null : indexHealth.getNumberOfReplicas()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getDocs().getCount()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getDocs().getDeleted()); + + table.addCell(primaryStats.getDocs() == null ? null : primaryStats.getDocs().getCount()); + table.addCell(primaryStats.getDocs() == null ? null : primaryStats.getDocs().getDeleted()); table.addCell(indexMetaData.getCreationDate()); table.addCell(new DateTime(indexMetaData.getCreationDate(), DateTimeZone.UTC)); - table.addCell(indexStats == null ? null : indexStats.getTotal().getStore().size()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getStore().size()); + table.addCell(totalStats.getStore() == null ? null : totalStats.getStore().size()); + table.addCell(primaryStats.getStore() == null ? null : primaryStats.getStore().size()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getCompletion().getSize()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getCompletion().getSize()); + table.addCell(totalStats.getCompletion() == null ? null : totalStats.getCompletion().getSize()); + table.addCell(primaryStats.getCompletion() == null ? null : primaryStats.getCompletion().getSize()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getFieldData().getMemorySize()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getFieldData().getMemorySize()); + table.addCell(totalStats.getFieldData() == null ? 
null : totalStats.getFieldData().getMemorySize()); + table.addCell(primaryStats.getFieldData() == null ? null : primaryStats.getFieldData().getMemorySize()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getFieldData().getEvictions()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getFieldData().getEvictions()); + table.addCell(totalStats.getFieldData() == null ? null : totalStats.getFieldData().getEvictions()); + table.addCell(primaryStats.getFieldData() == null ? null : primaryStats.getFieldData().getEvictions()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getQueryCache().getMemorySize()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getQueryCache().getMemorySize()); + table.addCell(totalStats.getQueryCache() == null ? null : totalStats.getQueryCache().getMemorySize()); + table.addCell(primaryStats.getQueryCache() == null ? null : primaryStats.getQueryCache().getMemorySize()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getQueryCache().getEvictions()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getQueryCache().getEvictions()); + table.addCell(totalStats.getQueryCache() == null ? null : totalStats.getQueryCache().getEvictions()); + table.addCell(primaryStats.getQueryCache() == null ? null : primaryStats.getQueryCache().getEvictions()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getRequestCache().getMemorySize()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getRequestCache().getMemorySize()); + table.addCell(totalStats.getRequestCache() == null ? null : totalStats.getRequestCache().getMemorySize()); + table.addCell(primaryStats.getRequestCache() == null ? null : primaryStats.getRequestCache().getMemorySize()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getRequestCache().getEvictions()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getRequestCache().getEvictions()); + table.addCell(totalStats.getRequestCache() == null ? null : totalStats.getRequestCache().getEvictions()); + table.addCell(primaryStats.getRequestCache() == null ? null : primaryStats.getRequestCache().getEvictions()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getRequestCache().getHitCount()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getRequestCache().getHitCount()); + table.addCell(totalStats.getRequestCache() == null ? null : totalStats.getRequestCache().getHitCount()); + table.addCell(primaryStats.getRequestCache() == null ? null : primaryStats.getRequestCache().getHitCount()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getRequestCache().getMissCount()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getRequestCache().getMissCount()); + table.addCell(totalStats.getRequestCache() == null ? null : totalStats.getRequestCache().getMissCount()); + table.addCell(primaryStats.getRequestCache() == null ? null : primaryStats.getRequestCache().getMissCount()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getFlush().getTotal()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getFlush().getTotal()); + table.addCell(totalStats.getFlush() == null ? null : totalStats.getFlush().getTotal()); + table.addCell(primaryStats.getFlush() == null ? null : primaryStats.getFlush().getTotal()); - table.addCell(indexStats == null ? 
null : indexStats.getTotal().getFlush().getTotalTime()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getFlush().getTotalTime()); + table.addCell(totalStats.getFlush() == null ? null : totalStats.getFlush().getTotalTime()); + table.addCell(primaryStats.getFlush() == null ? null : primaryStats.getFlush().getTotalTime()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getGet().current()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getGet().current()); + table.addCell(totalStats.getGet() == null ? null : totalStats.getGet().current()); + table.addCell(primaryStats.getGet() == null ? null : primaryStats.getGet().current()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getGet().getTime()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getGet().getTime()); + table.addCell(totalStats.getGet() == null ? null : totalStats.getGet().getTime()); + table.addCell(primaryStats.getGet() == null ? null : primaryStats.getGet().getTime()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getGet().getCount()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getGet().getCount()); + table.addCell(totalStats.getGet() == null ? null : totalStats.getGet().getCount()); + table.addCell(primaryStats.getGet() == null ? null : primaryStats.getGet().getCount()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getGet().getExistsTime()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getGet().getExistsTime()); + table.addCell(totalStats.getGet() == null ? null : totalStats.getGet().getExistsTime()); + table.addCell(primaryStats.getGet() == null ? null : primaryStats.getGet().getExistsTime()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getGet().getExistsCount()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getGet().getExistsCount()); + table.addCell(totalStats.getGet() == null ? null : totalStats.getGet().getExistsCount()); + table.addCell(primaryStats.getGet() == null ? null : primaryStats.getGet().getExistsCount()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getGet().getMissingTime()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getGet().getMissingTime()); + table.addCell(totalStats.getGet() == null ? null : totalStats.getGet().getMissingTime()); + table.addCell(primaryStats.getGet() == null ? null : primaryStats.getGet().getMissingTime()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getGet().getMissingCount()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getGet().getMissingCount()); + table.addCell(totalStats.getGet() == null ? null : totalStats.getGet().getMissingCount()); + table.addCell(primaryStats.getGet() == null ? null : primaryStats.getGet().getMissingCount()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getIndexing().getTotal().getDeleteCurrent()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getIndexing().getTotal().getDeleteCurrent()); + table.addCell(totalStats.getIndexing() == null ? null : totalStats.getIndexing().getTotal().getDeleteCurrent()); + table.addCell(primaryStats.getIndexing() == null ? null : primaryStats.getIndexing().getTotal().getDeleteCurrent()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getIndexing().getTotal().getDeleteTime()); - table.addCell(indexStats == null ? 
null : indexStats.getPrimaries().getIndexing().getTotal().getDeleteTime()); + table.addCell(totalStats.getIndexing() == null ? null : totalStats.getIndexing().getTotal().getDeleteTime()); + table.addCell(primaryStats.getIndexing() == null ? null : primaryStats.getIndexing().getTotal().getDeleteTime()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getIndexing().getTotal().getDeleteCount()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getIndexing().getTotal().getDeleteCount()); + table.addCell(totalStats.getIndexing() == null ? null : totalStats.getIndexing().getTotal().getDeleteCount()); + table.addCell(primaryStats.getIndexing() == null ? null : primaryStats.getIndexing().getTotal().getDeleteCount()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getIndexing().getTotal().getIndexCurrent()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getIndexing().getTotal().getIndexCurrent()); + table.addCell(totalStats.getIndexing() == null ? null : totalStats.getIndexing().getTotal().getIndexCurrent()); + table.addCell(primaryStats.getIndexing() == null ? null : primaryStats.getIndexing().getTotal().getIndexCurrent()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getIndexing().getTotal().getIndexTime()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getIndexing().getTotal().getIndexTime()); + table.addCell(totalStats.getIndexing() == null ? null : totalStats.getIndexing().getTotal().getIndexTime()); + table.addCell(primaryStats.getIndexing() == null ? null : primaryStats.getIndexing().getTotal().getIndexTime()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getIndexing().getTotal().getIndexCount()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getIndexing().getTotal().getIndexCount()); + table.addCell(totalStats.getIndexing() == null ? null : totalStats.getIndexing().getTotal().getIndexCount()); + table.addCell(primaryStats.getIndexing() == null ? null : primaryStats.getIndexing().getTotal().getIndexCount()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getIndexing().getTotal().getIndexFailedCount()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getIndexing().getTotal().getIndexFailedCount()); + table.addCell(totalStats.getIndexing() == null ? null : totalStats.getIndexing().getTotal().getIndexFailedCount()); + table.addCell(primaryStats.getIndexing() == null ? null : primaryStats.getIndexing().getTotal().getIndexFailedCount()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getMerge().getCurrent()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getMerge().getCurrent()); + table.addCell(totalStats.getMerge() == null ? null : totalStats.getMerge().getCurrent()); + table.addCell(primaryStats.getMerge() == null ? null : primaryStats.getMerge().getCurrent()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getMerge().getCurrentNumDocs()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getMerge().getCurrentNumDocs()); + table.addCell(totalStats.getMerge() == null ? null : totalStats.getMerge().getCurrentNumDocs()); + table.addCell(primaryStats.getMerge() == null ? null : primaryStats.getMerge().getCurrentNumDocs()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getMerge().getCurrentSize()); - table.addCell(indexStats == null ? 
null : indexStats.getPrimaries().getMerge().getCurrentSize()); + table.addCell(totalStats.getMerge() == null ? null : totalStats.getMerge().getCurrentSize()); + table.addCell(primaryStats.getMerge() == null ? null : primaryStats.getMerge().getCurrentSize()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getMerge().getTotal()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getMerge().getTotal()); + table.addCell(totalStats.getMerge() == null ? null : totalStats.getMerge().getTotal()); + table.addCell(primaryStats.getMerge() == null ? null : primaryStats.getMerge().getTotal()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getMerge().getTotalNumDocs()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getMerge().getTotalNumDocs()); + table.addCell(totalStats.getMerge() == null ? null : totalStats.getMerge().getTotalNumDocs()); + table.addCell(primaryStats.getMerge() == null ? null : primaryStats.getMerge().getTotalNumDocs()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getMerge().getTotalSize()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getMerge().getTotalSize()); + table.addCell(totalStats.getMerge() == null ? null : totalStats.getMerge().getTotalSize()); + table.addCell(primaryStats.getMerge() == null ? null : primaryStats.getMerge().getTotalSize()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getMerge().getTotalTime()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getMerge().getTotalTime()); + table.addCell(totalStats.getMerge() == null ? null : totalStats.getMerge().getTotalTime()); + table.addCell(primaryStats.getMerge() == null ? null : primaryStats.getMerge().getTotalTime()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getRefresh().getTotal()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getRefresh().getTotal()); + table.addCell(totalStats.getRefresh() == null ? null : totalStats.getRefresh().getTotal()); + table.addCell(primaryStats.getRefresh() == null ? null : primaryStats.getRefresh().getTotal()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getRefresh().getTotalTime()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getRefresh().getTotalTime()); + table.addCell(totalStats.getRefresh() == null ? null : totalStats.getRefresh().getTotalTime()); + table.addCell(primaryStats.getRefresh() == null ? null : primaryStats.getRefresh().getTotalTime()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getRefresh().getListeners()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getRefresh().getListeners()); + table.addCell(totalStats.getRefresh() == null ? null : totalStats.getRefresh().getListeners()); + table.addCell(primaryStats.getRefresh() == null ? null : primaryStats.getRefresh().getListeners()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getFetchCurrent()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getFetchCurrent()); + table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getFetchCurrent()); + table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getFetchCurrent()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getFetchTime()); - table.addCell(indexStats == null ? 
null : indexStats.getPrimaries().getSearch().getTotal().getFetchTime()); + table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getFetchTime()); + table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getFetchTime()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getFetchCount()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getFetchCount()); + table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getFetchCount()); + table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getFetchCount()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getOpenContexts()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getOpenContexts()); + table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getOpenContexts()); + table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getOpenContexts()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getQueryCurrent()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getQueryCurrent()); + table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getQueryCurrent()); + table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getQueryCurrent()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getQueryTime()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getQueryTime()); + table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getQueryTime()); + table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getQueryTime()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getQueryCount()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getQueryCount()); + table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getQueryCount()); + table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getQueryCount()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getScrollCurrent()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getScrollCurrent()); + table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getScrollCurrent()); + table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getScrollCurrent()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getScrollTime()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getScrollTime()); + table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getScrollTime()); + table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getScrollTime()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getScrollCount()); - table.addCell(indexStats == null ? 
null : indexStats.getPrimaries().getSearch().getTotal().getScrollCount()); + table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getScrollCount()); + table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getScrollCount()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSegments().getCount()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSegments().getCount()); + table.addCell(totalStats.getSegments() == null ? null : totalStats.getSegments().getCount()); + table.addCell(primaryStats.getSegments() == null ? null : primaryStats.getSegments().getCount()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSegments().getMemory()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSegments().getMemory()); + table.addCell(totalStats.getSegments() == null ? null : totalStats.getSegments().getMemory()); + table.addCell(primaryStats.getSegments() == null ? null : primaryStats.getSegments().getMemory()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSegments().getIndexWriterMemory()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSegments().getIndexWriterMemory()); + table.addCell(totalStats.getSegments() == null ? null : totalStats.getSegments().getIndexWriterMemory()); + table.addCell(primaryStats.getSegments() == null ? null : primaryStats.getSegments().getIndexWriterMemory()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSegments().getVersionMapMemory()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSegments().getVersionMapMemory()); + table.addCell(totalStats.getSegments() == null ? null : totalStats.getSegments().getVersionMapMemory()); + table.addCell(primaryStats.getSegments() == null ? null : primaryStats.getSegments().getVersionMapMemory()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSegments().getBitsetMemory()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSegments().getBitsetMemory()); + table.addCell(totalStats.getSegments() == null ? null : totalStats.getSegments().getBitsetMemory()); + table.addCell(primaryStats.getSegments() == null ? null : primaryStats.getSegments().getBitsetMemory()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getWarmer().current()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getWarmer().current()); + table.addCell(totalStats.getWarmer() == null ? null : totalStats.getWarmer().current()); + table.addCell(primaryStats.getWarmer() == null ? null : primaryStats.getWarmer().current()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getWarmer().total()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getWarmer().total()); + table.addCell(totalStats.getWarmer() == null ? null : totalStats.getWarmer().total()); + table.addCell(primaryStats.getWarmer() == null ? null : primaryStats.getWarmer().total()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getWarmer().totalTime()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getWarmer().totalTime()); + table.addCell(totalStats.getWarmer() == null ? null : totalStats.getWarmer().totalTime()); + table.addCell(primaryStats.getWarmer() == null ? null : primaryStats.getWarmer().totalTime()); - table.addCell(indexStats == null ? 
null : indexStats.getTotal().getSearch().getTotal().getSuggestCurrent()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getSuggestCurrent()); + table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getSuggestCurrent()); + table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getSuggestCurrent()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getSuggestTime()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getSuggestTime()); + table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getSuggestTime()); + table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getSuggestTime()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getSuggestCount()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getSuggestCount()); + table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getSuggestCount()); + table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getSuggestCount()); table.addCell(indexStats == null ? null : indexStats.getTotal().getTotalMemory()); table.addCell(indexStats == null ? null : indexStats.getPrimaries().getTotalMemory()); diff --git a/core/src/main/java/org/elasticsearch/rest/action/cat/RestNodesAction.java b/core/src/main/java/org/elasticsearch/rest/action/cat/RestNodesAction.java index 07f39b54f613a..0a16193894466 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/cat/RestNodesAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/cat/RestNodesAction.java @@ -205,6 +205,8 @@ protected Table getTableWithHeader(final RestRequest request) { table.addCell("script.compilations", "alias:scrcc,scriptCompilations;default:false;text-align:right;desc:script compilations"); table.addCell("script.cache_evictions", "alias:scrce,scriptCacheEvictions;default:false;text-align:right;desc:script cache evictions"); + table.addCell("script.compilation_limit_triggered", "alias:scrclt,scriptCacheCompilationLimitTriggered;default:false;" + + "text-align:right;desc:script cache compilation limit triggered"); table.addCell("search.fetch_current", "alias:sfc,searchFetchCurrent;default:false;text-align:right;desc:current fetch phase ops"); table.addCell("search.fetch_time", "alias:sfti,searchFetchTime;default:false;text-align:right;desc:time spent in fetch phase"); @@ -367,6 +369,7 @@ private Table buildTable(boolean fullId, RestRequest req, ClusterStateResponse s ScriptStats scriptStats = stats == null ? null : stats.getScriptStats(); table.addCell(scriptStats == null ? null : scriptStats.getCompilations()); table.addCell(scriptStats == null ? null : scriptStats.getCacheEvictions()); + table.addCell(scriptStats == null ? null : scriptStats.getCompilationLimitTriggered()); SearchStats searchStats = indicesStats == null ? null : indicesStats.getSearch(); table.addCell(searchStats == null ? 
null : searchStats.getTotal().getFetchCurrent()); diff --git a/core/src/main/java/org/elasticsearch/rest/action/document/RestGetAction.java b/core/src/main/java/org/elasticsearch/rest/action/document/RestGetAction.java index 857af483325aa..3265a59692b61 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/document/RestGetAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/document/RestGetAction.java @@ -50,7 +50,7 @@ public RestGetAction(final Settings settings, final RestController controller) { @Override public String getName() { - return "docuemnt_get_action"; + return "document_get_action"; } @Override diff --git a/core/src/main/java/org/elasticsearch/rest/action/search/RestSearchAction.java b/core/src/main/java/org/elasticsearch/rest/action/search/RestSearchAction.java index 2e97560cf789f..f269cb2314932 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/search/RestSearchAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/search/RestSearchAction.java @@ -44,6 +44,7 @@ import java.util.Arrays; import java.util.Collections; import java.util.Set; +import java.util.function.IntConsumer; import static org.elasticsearch.common.unit.TimeValue.parseTimeValue; import static org.elasticsearch.rest.RestRequest.Method.GET; @@ -73,8 +74,21 @@ public String getName() { @Override public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { SearchRequest searchRequest = new SearchRequest(); + /* + * We have to pull out the call to `source().size(size)` because + * _update_by_query and _delete_by_query use this same parsing + * path but set a different variable when they see the `size` + * url parameter. + * + * Note that we can't use `searchRequest.source()::size` because + * `searchRequest.source()` is null right now. We don't have to + * guard against it being null in the IntConsumer because it can't + * be null later. If that is confusing to you then you are in good + * company. + */ + IntConsumer setSize = size -> searchRequest.source().size(size); request.withContentOrSourceParamParserOrNull(parser -> - parseSearchRequest(searchRequest, request, parser)); + parseSearchRequest(searchRequest, request, parser, setSize)); return channel -> client.search(searchRequest, new RestStatusToXContentListener<>(channel)); } @@ -84,9 +98,11 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC * * @param requestContentParser body of the request to read. This method does not attempt to read the body from the {@code request} * parameter + * @param setSize how the size url parameter is handled. {@code update_by_query} and regular search differ here.
*/ public static void parseSearchRequest(SearchRequest searchRequest, RestRequest request, - XContentParser requestContentParser) throws IOException { + XContentParser requestContentParser, + IntConsumer setSize) throws IOException { if (searchRequest.source() == null) { searchRequest.source(new SearchSourceBuilder()); @@ -118,7 +134,7 @@ public static void parseSearchRequest(SearchRequest searchRequest, RestRequest r } else { searchRequest.searchType(searchType); } - parseSearchSource(searchRequest.source(), request); + parseSearchSource(searchRequest.source(), request, setSize); searchRequest.requestCache(request.paramAsBoolean("request_cache", null)); String scroll = request.param("scroll"); @@ -136,7 +152,7 @@ public static void parseSearchRequest(SearchRequest searchRequest, RestRequest r * Parses the rest request on top of the SearchSourceBuilder, preserving * values that are not overridden by the rest request. */ - private static void parseSearchSource(final SearchSourceBuilder searchSourceBuilder, RestRequest request) { + private static void parseSearchSource(final SearchSourceBuilder searchSourceBuilder, RestRequest request, IntConsumer setSize) { QueryBuilder queryBuilder = RestActions.urlParamsToQueryBuilder(request); if (queryBuilder != null) { searchSourceBuilder.query(queryBuilder); @@ -148,7 +164,7 @@ private static void parseSearchSource(final SearchSourceBuilder searchSourceBuil } int size = request.paramAsInt("size", -1); if (size != -1) { - searchSourceBuilder.size(size); + setSize.accept(size); } if (request.hasParam("explain")) { diff --git a/core/src/main/java/org/elasticsearch/script/FilterScript.java b/core/src/main/java/org/elasticsearch/script/FilterScript.java new file mode 100644 index 0000000000000..f1d3c13786009 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/script/FilterScript.java @@ -0,0 +1,80 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.script; + +import java.io.IOException; +import java.util.Map; + +import org.apache.lucene.index.LeafReaderContext; +import org.elasticsearch.index.fielddata.ScriptDocValues; +import org.elasticsearch.search.lookup.LeafDocLookup; +import org.elasticsearch.search.lookup.LeafSearchLookup; +import org.elasticsearch.search.lookup.SearchLookup; + +/** + * A script implementation of a query filter. + * See {@link org.elasticsearch.index.query.ScriptQueryBuilder}. + */ +public abstract class FilterScript { + + // no parameters for execute, but constant still required... + public static final String[] PARAMETERS = {}; + + /** The generic runtime parameters for the script. */ + private final Map params; + + /** A leaf lookup for the bound segment this script will operate on. 
*/ + private final LeafSearchLookup leafLookup; + + public FilterScript(Map params, SearchLookup lookup, LeafReaderContext leafContext) { + this.params = params; + this.leafLookup = lookup.getLeafSearchLookup(leafContext); + } + + /** Return {@code true} if the current document matches the filter, or {@code false} otherwise. */ + public abstract boolean execute(); + + /** Return the parameters for this script. */ + public Map getParams() { + return params; + } + + /** The doc lookup for the Lucene segment this script was created for. */ + public final Map> getDoc() { + return leafLookup.doc(); + } + + /** Set the current document to run the script on next. */ + public void setDocument(int docid) { + leafLookup.setDocument(docid); + } + + /** A factory to construct {@link FilterScript} instances. */ + public interface LeafFactory { + FilterScript newInstance(LeafReaderContext ctx) throws IOException; + } + + /** A factory to construct stateful {@link FilterScript} factories for a specific index. */ + public interface Factory { + LeafFactory newFactory(Map params, SearchLookup lookup); + } + + /** The context used to compile {@link FilterScript} factories. */ + public static final ScriptContext CONTEXT = new ScriptContext<>("filter", Factory.class); +} diff --git a/core/src/main/java/org/elasticsearch/script/Script.java b/core/src/main/java/org/elasticsearch/script/Script.java index 7e931a894671b..6078c55f76640 100644 --- a/core/src/main/java/org/elasticsearch/script/Script.java +++ b/core/src/main/java/org/elasticsearch/script/Script.java @@ -29,6 +29,7 @@ import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.ObjectParser.ValueType; +import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; @@ -276,9 +277,10 @@ public static Script parse(XContentParser parser) throws IOException { * Parse the script configured in the given settings. 
*/ public static Script parse(Settings settings) { - try { - XContentBuilder builder = JsonXContent.contentBuilder(); - builder.map(settings.getAsStructuredMap()); + try (XContentBuilder builder = JsonXContent.contentBuilder()){ + builder.startObject(); + settings.toXContent(builder, ToXContent.EMPTY_PARAMS); + builder.endObject(); return parse(JsonXContent.jsonXContent.createParser(NamedXContentRegistry.EMPTY, builder.bytes())); } catch (IOException e) { // it should not happen since we are not actually reading from a stream but an in-memory byte[] diff --git a/core/src/main/java/org/elasticsearch/script/ScriptMetaData.java b/core/src/main/java/org/elasticsearch/script/ScriptMetaData.java index a6e55dbb898b8..dca17ce486607 100644 --- a/core/src/main/java/org/elasticsearch/script/ScriptMetaData.java +++ b/core/src/main/java/org/elasticsearch/script/ScriptMetaData.java @@ -18,7 +18,6 @@ */ package org.elasticsearch.script; -import org.apache.logging.log4j.Logger; import org.elasticsearch.ResourceNotFoundException; import org.elasticsearch.Version; import org.elasticsearch.cluster.ClusterState; @@ -30,9 +29,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.logging.DeprecationLogger; -import org.elasticsearch.common.logging.ESLoggerFactory; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.XContentParser.Token; @@ -47,7 +44,7 @@ * {@link ScriptMetaData} is used to store user-defined scripts * as part of the {@link ClusterState} using only an id as the key. 
*/ -public final class ScriptMetaData implements MetaData.Custom, Writeable, ToXContent { +public final class ScriptMetaData implements MetaData.Custom, Writeable, ToXContentFragment { /** * A builder used to modify the currently stored scripts data held within diff --git a/core/src/main/java/org/elasticsearch/script/ScriptMetrics.java b/core/src/main/java/org/elasticsearch/script/ScriptMetrics.java index c779c196290bd..18c1027074b16 100644 --- a/core/src/main/java/org/elasticsearch/script/ScriptMetrics.java +++ b/core/src/main/java/org/elasticsearch/script/ScriptMetrics.java @@ -24,9 +24,10 @@ public class ScriptMetrics { final CounterMetric compilationsMetric = new CounterMetric(); final CounterMetric cacheEvictionsMetric = new CounterMetric(); + final CounterMetric compilationLimitTriggered = new CounterMetric(); public ScriptStats stats() { - return new ScriptStats(compilationsMetric.count(), cacheEvictionsMetric.count()); + return new ScriptStats(compilationsMetric.count(), cacheEvictionsMetric.count(), compilationLimitTriggered.count()); } public void onCompilation() { @@ -36,4 +37,8 @@ public void onCompilation() { public void onCacheEviction() { cacheEvictionsMetric.inc(); } + + public void onCompilationLimit() { + compilationLimitTriggered.inc(); + } } diff --git a/core/src/main/java/org/elasticsearch/script/ScriptModule.java b/core/src/main/java/org/elasticsearch/script/ScriptModule.java index 9db07423f86dc..727651be6a565 100644 --- a/core/src/main/java/org/elasticsearch/script/ScriptModule.java +++ b/core/src/main/java/org/elasticsearch/script/ScriptModule.java @@ -45,6 +45,7 @@ public class ScriptModule { ExecutableScript.AGGS_CONTEXT, ExecutableScript.UPDATE_CONTEXT, ExecutableScript.INGEST_CONTEXT, + FilterScript.CONTEXT, SimilarityScript.CONTEXT, SimilarityWeightScript.CONTEXT, TemplateScript.CONTEXT diff --git a/core/src/main/java/org/elasticsearch/script/ScriptService.java b/core/src/main/java/org/elasticsearch/script/ScriptService.java index 17b9fbb57359f..652ec3dda3d29 100644 --- a/core/src/main/java/org/elasticsearch/script/ScriptService.java +++ b/core/src/main/java/org/elasticsearch/script/ScriptService.java @@ -39,6 +39,7 @@ import org.elasticsearch.common.cache.CacheBuilder; import org.elasticsearch.common.cache.RemovalListener; import org.elasticsearch.common.cache.RemovalNotification; +import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Setting; @@ -60,14 +61,47 @@ public class ScriptService extends AbstractComponent implements Closeable, Clust static final String DISABLE_DYNAMIC_SCRIPTING_SETTING = "script.disable_dynamic"; + // a parsing function that requires a non-negative int and a time value as arguments split by a slash + // this allows you to easily define rates + static final Function> MAX_COMPILATION_RATE_FUNCTION = + (String value) -> { + if (value.contains("/") == false || value.startsWith("/") || value.endsWith("/")) { + throw new IllegalArgumentException("parameter must contain a positive integer and a time value, e.g.
10/1m, but was [" + + value + "]"); + } + int idx = value.indexOf("/"); + String count = value.substring(0, idx); + String time = value.substring(idx + 1); + try { + + int rate = Integer.parseInt(count); + if (rate < 0) { + throw new IllegalArgumentException("rate [" + rate + "] must be positive"); + } + TimeValue timeValue = TimeValue.parseTimeValue(time, "script.max_compilations_rate"); + if (timeValue.nanos() <= 0) { + throw new IllegalArgumentException("time value [" + time + "] must be positive"); + } + // protect against a limit that is too hard to check, such as less than a minute + if (timeValue.seconds() < 60) { + throw new IllegalArgumentException("time value [" + time + "] must have a resolution of at least one minute"); + } + return Tuple.tuple(rate, timeValue); + } catch (NumberFormatException e) { + // the number format exception message is so confusing that it makes more sense to wrap it with a useful one + throw new IllegalArgumentException("could not parse [" + count + "] as integer in value [" + value + "]", e); + } + }; + public static final Setting SCRIPT_CACHE_SIZE_SETTING = Setting.intSetting("script.cache.max_size", 100, 0, Property.NodeScope); public static final Setting SCRIPT_CACHE_EXPIRE_SETTING = Setting.positiveTimeSetting("script.cache.expire", TimeValue.timeValueMillis(0), Property.NodeScope); public static final Setting SCRIPT_MAX_SIZE_IN_BYTES = Setting.intSetting("script.max_size_in_bytes", 65535, Property.NodeScope); - public static final Setting SCRIPT_MAX_COMPILATIONS_PER_MINUTE = - Setting.intSetting("script.max_compilations_per_minute", 15, 0, Property.Dynamic, Property.NodeScope); + // public Setting(String key, Function defaultValue, Function parser, Property... properties) { + public static final Setting> SCRIPT_MAX_COMPILATIONS_RATE = + new Setting<>("script.max_compilations_rate", "75/5m", MAX_COMPILATION_RATE_FUNCTION, Property.Dynamic, Property.NodeScope); public static final String ALLOW_NONE = "none"; @@ -88,9 +122,9 @@ public class ScriptService extends AbstractComponent implements Closeable, Clust private ClusterState clusterState; - private int totalCompilesPerMinute; + private Tuple rate; private long lastInlineCompileTime; - private double scriptsPerMinCounter; + private double scriptsPerTimeWindow; private double compilesAllowedPerNano; public ScriptService(Settings settings, Map engines, Map> contexts) { @@ -188,11 +222,11 @@ public ScriptService(Settings settings, Map engines, Map newRate) { + this.rate = newRate; // Reset the counter to allow new compilations - this.scriptsPerMinCounter = totalCompilesPerMinute; - this.compilesAllowedPerNano = ((double) totalCompilesPerMinute) / TimeValue.timeValueMinutes(1).nanos(); + this.scriptsPerTimeWindow = rate.v1(); + this.compilesAllowedPerNano = ((double) rate.v1()) / newRate.v2().nanos(); } /** @@ -233,7 +272,7 @@ public FactoryType compile(Script script, ScriptContext totalCompilesPerMinute) { - scriptsPerMinCounter = totalCompilesPerMinute; + if (scriptsPerTimeWindow > rate.v1()) { + scriptsPerTimeWindow = rate.v1(); } // If there is enough tokens in the bucket, allow the request and decrease the tokens by 1 - if (scriptsPerMinCounter >= 1) { - scriptsPerMinCounter -= 1.0; + if (scriptsPerTimeWindow >= 1) { + scriptsPerTimeWindow -= 1.0; } else { + scriptMetrics.onCompilationLimit(); // Otherwise reject the request - throw new CircuitBreakingException("[script] Too many dynamic script compilations within one minute, max: [" + - totalCompilesPerMinute + "/min]; please use on-disk, indexed, or scripts
with parameters instead; " + - "this limit can be changed by the [" + SCRIPT_MAX_COMPILATIONS_PER_MINUTE.getKey() + "] setting"); + throw new CircuitBreakingException("[script] Too many dynamic script compilations, max: [" + + rate.v1() + "/" + rate.v2() + "]; please use indexed scripts, or scripts with parameters instead; " + + "this limit can be changed by the [" + SCRIPT_MAX_COMPILATIONS_RATE.getKey() + "] setting"); } } diff --git a/core/src/main/java/org/elasticsearch/script/ScriptStats.java b/core/src/main/java/org/elasticsearch/script/ScriptStats.java index 33f5dc21874b4..abf54e7e3d0a2 100644 --- a/core/src/main/java/org/elasticsearch/script/ScriptStats.java +++ b/core/src/main/java/org/elasticsearch/script/ScriptStats.java @@ -19,32 +19,39 @@ package org.elasticsearch.script; +import org.elasticsearch.Version; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; -public class ScriptStats implements Writeable, ToXContent { +public class ScriptStats implements Writeable, ToXContentFragment { private final long compilations; private final long cacheEvictions; + private final long compilationLimitTriggered; - public ScriptStats(long compilations, long cacheEvictions) { + public ScriptStats(long compilations, long cacheEvictions, long compilationLimitTriggered) { this.compilations = compilations; this.cacheEvictions = cacheEvictions; + this.compilationLimitTriggered = compilationLimitTriggered; } public ScriptStats(StreamInput in) throws IOException { compilations = in.readVLong(); cacheEvictions = in.readVLong(); + compilationLimitTriggered = in.getVersion().onOrAfter(Version.V_7_0_0_alpha1) ?
in.readVLong() : 0; } @Override public void writeTo(StreamOutput out) throws IOException { out.writeVLong(compilations); out.writeVLong(cacheEvictions); + if (out.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) { + out.writeVLong(compilationLimitTriggered); + } } public long getCompilations() { @@ -55,11 +62,16 @@ public long getCacheEvictions() { return cacheEvictions; } + public long getCompilationLimitTriggered() { + return compilationLimitTriggered; + } + @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(Fields.SCRIPT_STATS); builder.field(Fields.COMPILATIONS, getCompilations()); builder.field(Fields.CACHE_EVICTIONS, getCacheEvictions()); + builder.field(Fields.COMPILATION_LIMIT_TRIGGERED, getCompilationLimitTriggered()); builder.endObject(); return builder; } @@ -68,5 +80,6 @@ static final class Fields { static final String SCRIPT_STATS = "script"; static final String COMPILATIONS = "compilations"; static final String CACHE_EVICTIONS = "cache_evictions"; + static final String COMPILATION_LIMIT_TRIGGERED = "compilation_limit_triggered"; } } diff --git a/core/src/main/java/org/elasticsearch/search/DefaultSearchContext.java b/core/src/main/java/org/elasticsearch/search/DefaultSearchContext.java index 07fa91c789555..34c3c03f758d7 100644 --- a/core/src/main/java/org/elasticsearch/search/DefaultSearchContext.java +++ b/core/src/main/java/org/elasticsearch/search/DefaultSearchContext.java @@ -24,7 +24,6 @@ import org.apache.lucene.search.Collector; import org.apache.lucene.search.FieldDoc; import org.apache.lucene.search.Query; -import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.Counter; import org.elasticsearch.action.search.SearchTask; import org.elasticsearch.action.search.SearchType; @@ -68,7 +67,7 @@ import org.elasticsearch.search.profile.Profilers; import org.elasticsearch.search.query.QueryPhaseExecutionException; import org.elasticsearch.search.query.QuerySearchResult; -import org.elasticsearch.search.rescore.RescoreSearchContext; +import org.elasticsearch.search.rescore.RescoreContext; import org.elasticsearch.search.slice.SliceBuilder; import org.elasticsearch.search.sort.SortAndFormats; import org.elasticsearch.search.suggest.SuggestionSearchContext; @@ -81,7 +80,6 @@ import java.util.HashMap; import java.util.List; import java.util.Map; -import java.util.concurrent.ExecutorService; final class DefaultSearchContext extends SearchContext { @@ -143,7 +141,7 @@ final class DefaultSearchContext extends SearchContext { private SearchContextAggregations aggregations; private SearchContextHighlight highlight; private SuggestionSearchContext suggest; - private List rescore; + private List rescore; private volatile long keepAlive; private final long originNanoTime = System.nanoTime(); private volatile long lastAccessTime = -1; @@ -200,26 +198,28 @@ public void preProcess(boolean rewrite) { if (resultWindow > maxResultWindow) { if (scrollContext == null) { - throw new QueryPhaseExecutionException(this, + throw new IllegalArgumentException( "Result window is too large, from + size must be less than or equal to: [" + maxResultWindow + "] but was [" + resultWindow + "]. See the scroll api for a more efficient way to request large data sets. 
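A minimal sketch of the wire-compatibility pattern used for the new `compilation_limit_triggered` stat above: the extra counter is only exchanged when the peer's version is recent enough, and defaults to 0 otherwise. The stream types and the `VERSION_WITH_LIMIT_STAT` constant below are stand-ins (plain `java.io` streams, not the Elasticsearch StreamInput/StreamOutput API).

----
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch only: an explicit version parameter stands in for the remote node's wire version.
final class StatsWireFormat {
    static final int VERSION_WITH_LIMIT_STAT = 7;

    static void write(DataOutputStream out, int remoteVersion,
                      long compilations, long cacheEvictions, long limitTriggered) throws IOException {
        out.writeLong(compilations);
        out.writeLong(cacheEvictions);
        if (remoteVersion >= VERSION_WITH_LIMIT_STAT) {
            out.writeLong(limitTriggered);   // only peers that know the field receive it
        }
    }

    static long[] read(DataInputStream in, int remoteVersion) throws IOException {
        long compilations = in.readLong();
        long cacheEvictions = in.readLong();
        long limitTriggered = remoteVersion >= VERSION_WITH_LIMIT_STAT ? in.readLong() : 0;
        return new long[] { compilations, cacheEvictions, limitTriggered };
    }
}
----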
" + "This limit can be set by changing the [" + IndexSettings.MAX_RESULT_WINDOW_SETTING.getKey() + "] index level setting."); } - throw new QueryPhaseExecutionException(this, + throw new IllegalArgumentException( "Batch size is too large, size must be less than or equal to: [" + maxResultWindow + "] but was [" + resultWindow + "]. Scroll batch sizes cost as much memory as result windows so they are controlled by the [" + IndexSettings.MAX_RESULT_WINDOW_SETTING.getKey() + "] index level setting."); } if (rescore != null) { + if (sort != null) { + throw new IllegalArgumentException("Cannot use [sort] option in conjunction with [rescore]."); + } int maxWindow = indexService.getIndexSettings().getMaxRescoreWindow(); - for (RescoreSearchContext rescoreContext: rescore) { - if (rescoreContext.window() > maxWindow) { - throw new QueryPhaseExecutionException(this, "Rescore window [" + rescoreContext.window() + "] is too large. It must " - + "be less than [" + maxWindow + "]. This prevents allocating massive heaps for storing the results to be " - + "rescored. This limit can be set by changing the [" + IndexSettings.MAX_RESCORE_WINDOW_SETTING.getKey() + for (RescoreContext rescoreContext: rescore) { + if (rescoreContext.getWindowSize() > maxWindow) { + throw new IllegalArgumentException("Rescore window [" + rescoreContext.getWindowSize() + "] is too large. " + + "It must be less than [" + maxWindow + "]. This prevents allocating massive heaps for storing the results " + + "to be rescored. This limit can be set by changing the [" + IndexSettings.MAX_RESCORE_WINDOW_SETTING.getKey() + "] index level setting."); - } } } @@ -228,7 +228,7 @@ public void preProcess(boolean rewrite) { int sliceLimit = indexService.getIndexSettings().getMaxSlicesPerScroll(); int numSlices = sliceBuilder.getMax(); if (numSlices > sliceLimit) { - throw new QueryPhaseExecutionException(this, "The number of slices [" + numSlices + "] is too large. It must " + throw new IllegalArgumentException("The number of slices [" + numSlices + "] is too large. It must " + "be less than [" + sliceLimit + "]. This limit can be set by changing the [" + IndexSettings.MAX_SLICES_PER_SCROLL.getKey() + "] index level setting."); } @@ -400,7 +400,7 @@ public void suggest(SuggestionSearchContext suggest) { } @Override - public List rescore() { + public List rescore() { if (rescore == null) { return Collections.emptyList(); } @@ -408,7 +408,7 @@ public List rescore() { } @Override - public void addRescore(RescoreSearchContext rescore) { + public void addRescore(RescoreContext rescore) { if (this.rescore == null) { this.rescore = new ArrayList<>(); } diff --git a/core/src/main/java/org/elasticsearch/search/MultiValueMode.java b/core/src/main/java/org/elasticsearch/search/MultiValueMode.java index 231bc8bf3c050..b2ee4b8ffbd5f 100644 --- a/core/src/main/java/org/elasticsearch/search/MultiValueMode.java +++ b/core/src/main/java/org/elasticsearch/search/MultiValueMode.java @@ -104,16 +104,6 @@ protected double pick(SortedNumericDoubleValues values, double missingValue, Doc } return totalCount > 0 ? 
totalValue : missingValue; } - - @Override - protected double pick(UnsortedNumericDoubleValues values) throws IOException { - final int count = values.docValueCount(); - double total = 0; - for (int index = 0; index < count; ++index) { - total += values.nextValue(); - } - return total; - } }, /** @@ -177,16 +167,6 @@ protected double pick(SortedNumericDoubleValues values, double missingValue, Doc } return totalValue/totalCount; } - - @Override - protected double pick(UnsortedNumericDoubleValues values) throws IOException { - final int count = values.docValueCount(); - double total = 0; - for (int index = 0; index < count; ++index) { - total += values.nextValue(); - } - return total/count; - } }, /** @@ -303,16 +283,6 @@ protected int pick(SortedDocValues values, DocIdSetIterator docItr, int startDoc } return hasValue ? ord : -1; } - - @Override - protected double pick(UnsortedNumericDoubleValues values) throws IOException { - int count = values.docValueCount(); - double min = Double.POSITIVE_INFINITY; - for (int index = 0; index < count; ++index) { - min = Math.min(values.nextValue(), min); - } - return min; - } }, /** @@ -419,16 +389,6 @@ protected int pick(SortedDocValues values, DocIdSetIterator docItr, int startDoc } return ord; } - - @Override - protected double pick(UnsortedNumericDoubleValues values) throws IOException { - int count = values.docValueCount(); - double max = Double.NEGATIVE_INFINITY; - for (int index = 0; index < count; ++index) { - max = Math.max(values.nextValue(), max); - } - return max; - } }; /** @@ -456,11 +416,11 @@ public NumericDocValues select(final SortedNumericDocValues values, final long m if (singleton != null) { return new AbstractNumericDocValues() { - private boolean hasValue; + private long value; @Override public boolean advanceExact(int target) throws IOException { - hasValue = singleton.advanceExact(target); + this.value = singleton.advanceExact(target) ? singleton.longValue() : missingValue; return true; } @@ -471,17 +431,17 @@ public int docID() { @Override public long longValue() throws IOException { - return hasValue ? singleton.longValue() : missingValue; + return this.value; } }; } else { return new AbstractNumericDocValues() { - private boolean hasValue; + private long value; @Override public boolean advanceExact(int target) throws IOException { - hasValue = values.advanceExact(target); + this.value = values.advanceExact(target) ? pick(values) : missingValue; return true; } @@ -492,7 +452,7 @@ public int docID() { @Override public long longValue() throws IOException { - return hasValue ? 
pick(values) : missingValue; + return value; } }; } @@ -514,42 +474,41 @@ protected long pick(SortedNumericDocValues values) throws IOException { * NOTE: Calling the returned instance on docs that are not root docs is illegal * The returned instance can only be evaluate the current and upcoming docs */ - public NumericDocValues select(final SortedNumericDocValues values, final long missingValue, final BitSet rootDocs, final DocIdSetIterator innerDocs, int maxDoc) throws IOException { - if (rootDocs == null || innerDocs == null) { + public NumericDocValues select(final SortedNumericDocValues values, final long missingValue, final BitSet parentDocs, final DocIdSetIterator childDocs, int maxDoc) throws IOException { + if (parentDocs == null || childDocs == null) { return select(DocValues.emptySortedNumeric(maxDoc), missingValue); } return new AbstractNumericDocValues() { - int lastSeenRootDoc = -1; + int lastSeenParentDoc = -1; long lastEmittedValue = missingValue; @Override - public boolean advanceExact(int rootDoc) throws IOException { - assert rootDocs.get(rootDoc) : "can only sort root documents"; - assert rootDoc >= lastSeenRootDoc : "can only evaluate current and upcoming root docs"; - if (rootDoc == lastSeenRootDoc) { + public boolean advanceExact(int parentDoc) throws IOException { + assert parentDoc >= lastSeenParentDoc : "can only evaluate current and upcoming parent docs"; + if (parentDoc == lastSeenParentDoc) { return true; - } else if (rootDoc == 0) { + } else if (parentDoc == 0) { lastEmittedValue = missingValue; return true; } - final int prevRootDoc = rootDocs.prevSetBit(rootDoc - 1); - final int firstNestedDoc; - if (innerDocs.docID() > prevRootDoc) { - firstNestedDoc = innerDocs.docID(); + final int prevParentDoc = parentDocs.prevSetBit(parentDoc - 1); + final int firstChildDoc; + if (childDocs.docID() > prevParentDoc) { + firstChildDoc = childDocs.docID(); } else { - firstNestedDoc = innerDocs.advance(prevRootDoc + 1); + firstChildDoc = childDocs.advance(prevParentDoc + 1); } - lastSeenRootDoc = rootDoc; - lastEmittedValue = pick(values, missingValue, innerDocs, firstNestedDoc, rootDoc); + lastSeenParentDoc = parentDoc; + lastEmittedValue = pick(values, missingValue, childDocs, firstChildDoc, parentDoc); return true; } @Override public int docID() { - return lastSeenRootDoc; + return lastSeenParentDoc; } @Override @@ -574,35 +533,33 @@ public NumericDoubleValues select(final SortedNumericDoubleValues values, final final NumericDoubleValues singleton = FieldData.unwrapSingleton(values); if (singleton != null) { return new NumericDoubleValues() { - - private boolean hasValue; + private double value; @Override public boolean advanceExact(int doc) throws IOException { - hasValue = singleton.advanceExact(doc); + this.value = singleton.advanceExact(doc) ? singleton.doubleValue() : missingValue; return true; } @Override public double doubleValue() throws IOException { - return hasValue ? singleton.doubleValue() : missingValue; + return this.value; } - }; } else { return new NumericDoubleValues() { - private boolean hasValue; + private double value; @Override public boolean advanceExact(int target) throws IOException { - hasValue = values.advanceExact(target); + value = values.advanceExact(target) ? pick(values) : missingValue; return true; } @Override public double doubleValue() throws IOException { - return hasValue ? 
pick(values) : missingValue; + return this.value; } }; } @@ -624,33 +581,32 @@ protected double pick(SortedNumericDoubleValues values) throws IOException { * NOTE: Calling the returned instance on docs that are not root docs is illegal * The returned instance can only be evaluate the current and upcoming docs */ - public NumericDoubleValues select(final SortedNumericDoubleValues values, final double missingValue, final BitSet rootDocs, final DocIdSetIterator innerDocs, int maxDoc) throws IOException { - if (rootDocs == null || innerDocs == null) { + public NumericDoubleValues select(final SortedNumericDoubleValues values, final double missingValue, final BitSet parentDocs, final DocIdSetIterator childDocs, int maxDoc) throws IOException { + if (parentDocs == null || childDocs == null) { return select(FieldData.emptySortedNumericDoubles(), missingValue); } return new NumericDoubleValues() { - int lastSeenRootDoc = 0; + int lastSeenParentDoc = 0; double lastEmittedValue = missingValue; @Override - public boolean advanceExact(int rootDoc) throws IOException { - assert rootDocs.get(rootDoc) : "can only sort root documents"; - assert rootDoc >= lastSeenRootDoc : "can only evaluate current and upcoming root docs"; - if (rootDoc == lastSeenRootDoc) { + public boolean advanceExact(int parentDoc) throws IOException { + assert parentDoc >= lastSeenParentDoc : "can only evaluate current and upcoming parent docs"; + if (parentDoc == lastSeenParentDoc) { return true; } - final int prevRootDoc = rootDocs.prevSetBit(rootDoc - 1); - final int firstNestedDoc; - if (innerDocs.docID() > prevRootDoc) { - firstNestedDoc = innerDocs.docID(); + final int prevParentDoc = parentDocs.prevSetBit(parentDoc - 1); + final int firstChildDoc; + if (childDocs.docID() > prevParentDoc) { + firstChildDoc = childDocs.docID(); } else { - firstNestedDoc = innerDocs.advance(prevRootDoc + 1); + firstChildDoc = childDocs.advance(prevParentDoc + 1); } - lastSeenRootDoc = rootDoc; - lastEmittedValue = pick(values, missingValue, innerDocs, firstNestedDoc, rootDoc); + lastSeenParentDoc = parentDoc; + lastEmittedValue = pick(values, missingValue, childDocs, firstChildDoc, parentDoc); return true; } @@ -680,17 +636,17 @@ public BinaryDocValues select(final SortedBinaryDocValues values, final BytesRef } return new AbstractBinaryDocValues() { - private boolean hasValue; + private BytesRef value; @Override public boolean advanceExact(int target) throws IOException { - hasValue = singleton.advanceExact(target); + this.value = singleton.advanceExact(target) ? singleton.binaryValue() : missingValue; return true; } @Override public BytesRef binaryValue() throws IOException { - return hasValue ? 
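The `hasValue` flag is replaced here by resolving the value (picked or missing) once inside `advanceExact` and caching it, so the getter no longer re-decides on every call. A plain-Java sketch of that design choice, with hypothetical interfaces standing in for the Lucene/Elasticsearch doc-values types:

----
// Sketch only: "DoubleValuesLike" is a stand-in, not a Lucene or Elasticsearch type.
interface DoubleValuesLike {
    boolean advanceExact(int doc);   // position on a document, false if it has no value
    double doubleValue();            // value for the current document
}

final class WithMissingValue implements DoubleValuesLike {
    private final DoubleValuesLike in;
    private final double missing;
    private double value;            // cached result for the current document

    WithMissingValue(DoubleValuesLike in, double missing) {
        this.in = in;
        this.missing = missing;
    }

    @Override
    public boolean advanceExact(int doc) {
        // Resolve once here instead of keeping a boolean flag around.
        value = in.advanceExact(doc) ? in.doubleValue() : missing;
        return true;                 // every document now reports a value
    }

    @Override
    public double doubleValue() {
        return value;
    }
}
----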
singleton.binaryValue() : missingValue; + return this.value; } }; } else { @@ -733,8 +689,8 @@ protected BytesRef pick(SortedBinaryDocValues values) throws IOException { * NOTE: Calling the returned instance on docs that are not root docs is illegal * The returned instance can only be evaluate the current and upcoming docs */ - public BinaryDocValues select(final SortedBinaryDocValues values, final BytesRef missingValue, final BitSet rootDocs, final DocIdSetIterator innerDocs, int maxDoc) throws IOException { - if (rootDocs == null || innerDocs == null) { + public BinaryDocValues select(final SortedBinaryDocValues values, final BytesRef missingValue, final BitSet parentDocs, final DocIdSetIterator childDocs, int maxDoc) throws IOException { + if (parentDocs == null || childDocs == null) { return select(FieldData.emptySortedBinary(), missingValue); } final BinaryDocValues selectedValues = select(values, null); @@ -743,27 +699,26 @@ public BinaryDocValues select(final SortedBinaryDocValues values, final BytesRef final BytesRefBuilder builder = new BytesRefBuilder(); - int lastSeenRootDoc = 0; + int lastSeenParentDoc = 0; BytesRef lastEmittedValue = missingValue; @Override - public boolean advanceExact(int rootDoc) throws IOException { - assert rootDocs.get(rootDoc) : "can only sort root documents"; - assert rootDoc >= lastSeenRootDoc : "can only evaluate current and upcoming root docs"; - if (rootDoc == lastSeenRootDoc) { + public boolean advanceExact(int parentDoc) throws IOException { + assert parentDoc >= lastSeenParentDoc : "can only evaluate current and upcoming root docs"; + if (parentDoc == lastSeenParentDoc) { return true; } - final int prevRootDoc = rootDocs.prevSetBit(rootDoc - 1); - final int firstNestedDoc; - if (innerDocs.docID() > prevRootDoc) { - firstNestedDoc = innerDocs.docID(); + final int prevParentDoc = parentDocs.prevSetBit(parentDoc - 1); + final int firstChildDoc; + if (childDocs.docID() > prevParentDoc) { + firstChildDoc = childDocs.docID(); } else { - firstNestedDoc = innerDocs.advance(prevRootDoc + 1); + firstChildDoc = childDocs.advance(prevParentDoc + 1); } - lastSeenRootDoc = rootDoc; - lastEmittedValue = pick(selectedValues, builder, innerDocs, firstNestedDoc, rootDoc); + lastSeenParentDoc = parentDoc; + lastEmittedValue = pick(selectedValues, builder, childDocs, firstChildDoc, parentDoc); if (lastEmittedValue == null) { lastEmittedValue = missingValue; } @@ -850,8 +805,8 @@ protected int pick(SortedSetDocValues values) throws IOException { * NOTE: Calling the returned instance on docs that are not root docs is illegal * The returned instance can only be evaluate the current and upcoming docs */ - public SortedDocValues select(final SortedSetDocValues values, final BitSet rootDocs, final DocIdSetIterator innerDocs) throws IOException { - if (rootDocs == null || innerDocs == null) { + public SortedDocValues select(final SortedSetDocValues values, final BitSet parentDocs, final DocIdSetIterator childDocs) throws IOException { + if (parentDocs == null || childDocs == null) { return select(DocValues.emptySortedSet()); } final SortedDocValues selectedValues = select(values); @@ -859,7 +814,7 @@ public SortedDocValues select(final SortedSetDocValues values, final BitSet root return new AbstractSortedDocValues() { int docID = -1; - int lastSeenRootDoc = 0; + int lastSeenParentDoc = 0; int lastEmittedOrd = -1; @Override @@ -873,23 +828,22 @@ public int getValueCount() { } @Override - public boolean advanceExact(int rootDoc) throws IOException { - assert 
rootDocs.get(rootDoc) : "can only sort root documents"; - assert rootDoc >= lastSeenRootDoc : "can only evaluate current and upcoming root docs"; - if (rootDoc == lastSeenRootDoc) { + public boolean advanceExact(int parentDoc) throws IOException { + assert parentDoc >= lastSeenParentDoc : "can only evaluate current and upcoming root docs"; + if (parentDoc == lastSeenParentDoc) { return lastEmittedOrd != -1; } - final int prevRootDoc = rootDocs.prevSetBit(rootDoc - 1); - final int firstNestedDoc; - if (innerDocs.docID() > prevRootDoc) { - firstNestedDoc = innerDocs.docID(); + final int prevParentDoc = parentDocs.prevSetBit(parentDoc - 1); + final int firstChildDoc; + if (childDocs.docID() > prevParentDoc) { + firstChildDoc = childDocs.docID(); } else { - firstNestedDoc = innerDocs.advance(prevRootDoc + 1); + firstChildDoc = childDocs.advance(prevParentDoc + 1); } - docID = lastSeenRootDoc = rootDoc; - lastEmittedOrd = pick(selectedValues, innerDocs, firstNestedDoc, rootDoc); + docID = lastSeenParentDoc = parentDoc; + lastEmittedOrd = pick(selectedValues, childDocs, firstChildDoc, parentDoc); return lastEmittedOrd != -1; } @@ -909,43 +863,6 @@ protected int pick(SortedDocValues values, DocIdSetIterator docItr, int startDoc throw new IllegalArgumentException("Unsupported sort mode: " + this); } - /** - * Return a {@link NumericDoubleValues} instance that can be used to sort documents - * with this mode and the provided values. When a document has no value, - * missingValue is returned. - * - * Allowed Modes: SUM, AVG, MIN, MAX - */ - public NumericDoubleValues select(final UnsortedNumericDoubleValues values, final double missingValue) { - return new NumericDoubleValues() { - private boolean hasValue; - - @Override - public boolean advanceExact(int doc) throws IOException { - hasValue = values.advanceExact(doc); - return true; - } - @Override - public double doubleValue() throws IOException { - return hasValue ? pick(values) : missingValue; - } - }; - } - - protected double pick(UnsortedNumericDoubleValues values) throws IOException { - throw new IllegalArgumentException("Unsupported sort mode: " + this); - } - - /** - * Interface allowing custom value generators to be used in MultiValueMode. - */ - // TODO: why do we need it??? 
- public interface UnsortedNumericDoubleValues { - boolean advanceExact(int doc) throws IOException; - int docValueCount() throws IOException; - double nextValue() throws IOException; - } - @Override public void writeTo(StreamOutput out) throws IOException { out.writeEnum(this); diff --git a/core/src/main/java/org/elasticsearch/search/SearchExtBuilder.java b/core/src/main/java/org/elasticsearch/search/SearchExtBuilder.java index bf696fcc917ad..c30bee027dcd1 100644 --- a/core/src/main/java/org/elasticsearch/search/SearchExtBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/SearchExtBuilder.java @@ -24,7 +24,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.plugins.SearchPlugin; import org.elasticsearch.plugins.SearchPlugin.SearchExtSpec; @@ -43,7 +43,7 @@ * * @see SearchExtSpec */ -public abstract class SearchExtBuilder implements NamedWriteable, ToXContent { +public abstract class SearchExtBuilder implements NamedWriteable, ToXContentFragment { public abstract int hashCode(); diff --git a/core/src/main/java/org/elasticsearch/search/SearchHit.java b/core/src/main/java/org/elasticsearch/search/SearchHit.java index 45510ea1af943..7566d5ad279f5 100644 --- a/core/src/main/java/org/elasticsearch/search/SearchHit.java +++ b/core/src/main/java/org/elasticsearch/search/SearchHit.java @@ -37,7 +37,7 @@ import org.elasticsearch.common.xcontent.ConstructingObjectParser; import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.ObjectParser.ValueType; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentHelper; @@ -834,7 +834,7 @@ public int hashCode() { /** * Encapsulates the nested identity of a hit. 
*/ - public static final class NestedIdentity implements Writeable, ToXContent { + public static final class NestedIdentity implements Writeable, ToXContentFragment { private static final String _NESTED = "_nested"; private static final String FIELD = "field"; diff --git a/core/src/main/java/org/elasticsearch/search/SearchHits.java b/core/src/main/java/org/elasticsearch/search/SearchHits.java index cc1aae7973673..edbcb021331f5 100644 --- a/core/src/main/java/org/elasticsearch/search/SearchHits.java +++ b/core/src/main/java/org/elasticsearch/search/SearchHits.java @@ -23,7 +23,8 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; @@ -36,7 +37,7 @@ import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; -public final class SearchHits implements Streamable, ToXContent, Iterable { +public final class SearchHits implements Streamable, ToXContentFragment, Iterable { public static SearchHits empty() { // We shouldn't use static final instance, since that could directly be returned by native transport clients diff --git a/core/src/main/java/org/elasticsearch/search/SearchModule.java b/core/src/main/java/org/elasticsearch/search/SearchModule.java index 117e32801ef27..53f8840f8bf02 100644 --- a/core/src/main/java/org/elasticsearch/search/SearchModule.java +++ b/core/src/main/java/org/elasticsearch/search/SearchModule.java @@ -69,6 +69,7 @@ import org.elasticsearch.index.query.SpanWithinQueryBuilder; import org.elasticsearch.index.query.TermQueryBuilder; import org.elasticsearch.index.query.TermsQueryBuilder; +import org.elasticsearch.index.query.TermsSetQueryBuilder; import org.elasticsearch.index.query.TypeQueryBuilder; import org.elasticsearch.index.query.WildcardQueryBuilder; import org.elasticsearch.index.query.WrapperQueryBuilder; @@ -86,6 +87,7 @@ import org.elasticsearch.plugins.SearchPlugin.FetchPhaseConstructionContext; import org.elasticsearch.plugins.SearchPlugin.PipelineAggregationSpec; import org.elasticsearch.plugins.SearchPlugin.QuerySpec; +import org.elasticsearch.plugins.SearchPlugin.RescorerSpec; import org.elasticsearch.plugins.SearchPlugin.ScoreFunctionSpec; import org.elasticsearch.plugins.SearchPlugin.SearchExtSpec; import org.elasticsearch.plugins.SearchPlugin.SearchExtensionSpec; @@ -230,7 +232,7 @@ import org.elasticsearch.search.fetch.subphase.highlight.PlainHighlighter; import org.elasticsearch.search.fetch.subphase.highlight.UnifiedHighlighter; import org.elasticsearch.search.rescore.QueryRescorerBuilder; -import org.elasticsearch.search.rescore.RescoreBuilder; +import org.elasticsearch.search.rescore.RescorerBuilder; import org.elasticsearch.search.sort.FieldSortBuilder; import org.elasticsearch.search.sort.GeoDistanceSortBuilder; import org.elasticsearch.search.sort.ScoreSortBuilder; @@ -281,7 +283,7 @@ public SearchModule(Settings settings, boolean transportClient, List plugins) { + registerRescorer(new RescorerSpec<>(QueryRescorerBuilder.NAME, QueryRescorerBuilder::new, QueryRescorerBuilder::fromXContent)); + registerFromPlugin(plugins, SearchPlugin::getRescorers, this::registerRescorer); + } + + private void 
registerRescorer(RescorerSpec spec) { + if (false == transportClient) { + namedXContents.add(new NamedXContentRegistry.Entry(RescorerBuilder.class, spec.getName(), (p, c) -> spec.getParser().apply(p))); + } + namedWriteables.add(new NamedWriteableRegistry.Entry(RescorerBuilder.class, spec.getName().getPreferredName(), spec.getReader())); } private void registerSorts() { @@ -739,6 +749,7 @@ private void registerQueryParsers(List plugins) { registerQuery(new QuerySpec<>(GeoPolygonQueryBuilder.NAME, GeoPolygonQueryBuilder::new, GeoPolygonQueryBuilder::fromXContent)); registerQuery(new QuerySpec<>(ExistsQueryBuilder.NAME, ExistsQueryBuilder::new, ExistsQueryBuilder::fromXContent)); registerQuery(new QuerySpec<>(MatchNoneQueryBuilder.NAME, MatchNoneQueryBuilder::new, MatchNoneQueryBuilder::fromXContent)); + registerQuery(new QuerySpec<>(TermsSetQueryBuilder.NAME, TermsSetQueryBuilder::new, TermsSetQueryBuilder::fromXContent)); if (ShapesAvailability.JTS_AVAILABLE && ShapesAvailability.SPATIAL4J_AVAILABLE) { registerQuery(new QuerySpec<>(GeoShapeQueryBuilder.NAME, GeoShapeQueryBuilder::new, GeoShapeQueryBuilder::fromXContent)); diff --git a/core/src/main/java/org/elasticsearch/search/SearchService.java b/core/src/main/java/org/elasticsearch/search/SearchService.java index 76b016cfadc29..49ab665295793 100644 --- a/core/src/main/java/org/elasticsearch/search/SearchService.java +++ b/core/src/main/java/org/elasticsearch/search/SearchService.java @@ -30,7 +30,6 @@ import org.elasticsearch.action.search.SearchType; import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.service.ClusterService; -import org.elasticsearch.common.Nullable; import org.elasticsearch.common.component.AbstractLifecycleComponent; import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.settings.Setting; @@ -85,14 +84,14 @@ import org.elasticsearch.search.query.QuerySearchRequest; import org.elasticsearch.search.query.QuerySearchResult; import org.elasticsearch.search.query.ScrollQuerySearchResult; -import org.elasticsearch.search.rescore.RescoreBuilder; +import org.elasticsearch.search.rescore.RescorerBuilder; import org.elasticsearch.search.searchafter.SearchAfterBuilder; import org.elasticsearch.search.sort.SortAndFormats; import org.elasticsearch.search.sort.SortBuilder; import org.elasticsearch.search.suggest.Suggest; import org.elasticsearch.search.suggest.completion.CompletionSuggestion; import org.elasticsearch.threadpool.ThreadPool; -import org.elasticsearch.threadpool.ThreadPool.Cancellable; +import org.elasticsearch.threadpool.Scheduler.Cancellable; import org.elasticsearch.threadpool.ThreadPool.Names; import org.elasticsearch.transport.TransportRequest; @@ -101,12 +100,12 @@ import java.util.HashMap; import java.util.List; import java.util.Map; -import java.util.Objects; import java.util.Optional; import java.util.concurrent.ExecutionException; import java.util.concurrent.atomic.AtomicLong; import java.util.function.LongSupplier; +import static org.elasticsearch.common.unit.TimeValue.timeValueHours; import static org.elasticsearch.common.unit.TimeValue.timeValueMillis; import static org.elasticsearch.common.unit.TimeValue.timeValueMinutes; @@ -114,7 +113,9 @@ public class SearchService extends AbstractLifecycleComponent implements IndexEv // we can have 5 minutes here, since we make sure to clean with search requests and when shard/index closes public static final Setting DEFAULT_KEEPALIVE_SETTING = - Setting.positiveTimeSetting("search.default_keep_alive", 
timeValueMinutes(5), Property.NodeScope); + Setting.positiveTimeSetting("search.default_keep_alive", timeValueMinutes(5), Property.NodeScope, Property.Dynamic); + public static final Setting MAX_KEEPALIVE_SETTING = + Setting.positiveTimeSetting("search.max_keep_alive", timeValueHours(24), Property.NodeScope, Property.Dynamic); public static final Setting KEEPALIVE_INTERVAL_SETTING = Setting.positiveTimeSetting("search.keep_alive_interval", timeValueMinutes(1), Property.NodeScope); /** @@ -148,7 +149,9 @@ public class SearchService extends AbstractLifecycleComponent implements IndexEv private final FetchPhase fetchPhase; - private final long defaultKeepAlive; + private volatile long defaultKeepAlive; + + private volatile long maxKeepAlive; private volatile TimeValue defaultSearchTimeout; @@ -174,7 +177,10 @@ public SearchService(ClusterService clusterService, IndicesService indicesServic this.fetchPhase = fetchPhase; TimeValue keepAliveInterval = KEEPALIVE_INTERVAL_SETTING.get(settings); - this.defaultKeepAlive = DEFAULT_KEEPALIVE_SETTING.get(settings).millis(); + setKeepAlives(DEFAULT_KEEPALIVE_SETTING.get(settings), MAX_KEEPALIVE_SETTING.get(settings)); + + clusterService.getClusterSettings().addSettingsUpdateConsumer(DEFAULT_KEEPALIVE_SETTING, MAX_KEEPALIVE_SETTING, + this::setKeepAlives, this::validateKeepAlives); this.keepAliveReaper = threadPool.scheduleWithFixedDelay(new Reaper(), keepAliveInterval, Names.SAME); @@ -185,6 +191,20 @@ public SearchService(ClusterService clusterService, IndicesService indicesServic clusterService.getClusterSettings().addSettingsUpdateConsumer(LOW_LEVEL_CANCELLATION_SETTING, this::setLowLevelCancellation); } + private void validateKeepAlives(TimeValue defaultKeepAlive, TimeValue maxKeepAlive) { + if (defaultKeepAlive.millis() > maxKeepAlive.millis()) { + throw new IllegalArgumentException("Default keep alive setting for scroll [" + DEFAULT_KEEPALIVE_SETTING.getKey() + "]" + + " should be smaller than max keep alive [" + MAX_KEEPALIVE_SETTING.getKey() + "], " + + "was (" + defaultKeepAlive.format() + " > " + maxKeepAlive.format() + ")"); + } + } + + private void setKeepAlives(TimeValue defaultKeepAlive, TimeValue maxKeepAlive) { + validateKeepAlives(defaultKeepAlive, maxKeepAlive); + this.defaultKeepAlive = defaultKeepAlive.millis(); + this.maxKeepAlive = maxKeepAlive.millis(); + } + private void setDefaultSearchTimeout(TimeValue defaultSearchTimeout) { this.defaultSearchTimeout = defaultSearchTimeout; } @@ -504,7 +524,7 @@ private SearchContext findContext(long id, TransportRequest request) throws Sear } final SearchContext createAndPutContext(ShardSearchRequest request) throws IOException { - SearchContext context = createContext(request, null); + SearchContext context = createContext(request); boolean success = false; try { putContext(context); @@ -521,8 +541,8 @@ final SearchContext createAndPutContext(ShardSearchRequest request) throws IOExc } } - final SearchContext createContext(ShardSearchRequest request, @Nullable Engine.Searcher searcher) throws IOException { - final DefaultSearchContext context = createSearchContext(request, defaultSearchTimeout, searcher); + final SearchContext createContext(ShardSearchRequest request) throws IOException { + final DefaultSearchContext context = createSearchContext(request, defaultSearchTimeout); try { if (request.scroll() != null) { context.scrollContext(new ScrollContext()); @@ -548,7 +568,7 @@ final SearchContext createContext(ShardSearchRequest request, @Nullable Engine.S if (request.scroll() != null && 
request.scroll().keepAlive() != null) { keepAlive = request.scroll().keepAlive().millis(); } - context.keepAlive(keepAlive); + contextScrollKeepAlive(context, keepAlive); context.lowLevelCancellation(lowLevelCancellation); } catch (Exception e) { context.close(); @@ -558,18 +578,18 @@ final SearchContext createContext(ShardSearchRequest request, @Nullable Engine.S return context; } - public DefaultSearchContext createSearchContext(ShardSearchRequest request, TimeValue timeout, @Nullable Engine.Searcher searcher) + public DefaultSearchContext createSearchContext(ShardSearchRequest request, TimeValue timeout) throws IOException { - return createSearchContext(request, timeout, searcher, true); + return createSearchContext(request, timeout, true); } - private DefaultSearchContext createSearchContext(ShardSearchRequest request, TimeValue timeout, @Nullable Engine.Searcher searcher, + private DefaultSearchContext createSearchContext(ShardSearchRequest request, TimeValue timeout, boolean assertAsyncActions) throws IOException { IndexService indexService = indicesService.indexServiceSafe(request.shardId().getIndex()); IndexShard indexShard = indexService.getShard(request.shardId().getId()); SearchShardTarget shardTarget = new SearchShardTarget(clusterService.localNode().getId(), indexShard.shardId(), request.getClusterAlias(), OriginalIndices.NONE); - Engine.Searcher engineSearcher = searcher == null ? indexShard.acquireSearcher("search") : searcher; + Engine.Searcher engineSearcher = indexShard.acquireSearcher("search"); final DefaultSearchContext searchContext = new DefaultSearchContext(idGenerator.incrementAndGet(), request, shardTarget, engineSearcher, indexService, indexShard, bigArrays, threadPool.estimatedTimeInMillisCounter(), timeout, fetchPhase, @@ -626,6 +646,16 @@ public void freeAllScrollContexts() { } } + private void contextScrollKeepAlive(SearchContext context, long keepAlive) throws IOException { + if (keepAlive > maxKeepAlive) { + throw new IllegalArgumentException( + "Keep alive for scroll (" + TimeValue.timeValueMillis(keepAlive).format() + ") is too large. " + + "It must be less than (" + TimeValue.timeValueMillis(maxKeepAlive).format() + "). 
" + + "This limit can be set by changing the [" + MAX_KEEPALIVE_SETTING.getKey() + "] cluster level setting."); + } + context.keepAlive(keepAlive); + } + private void contextProcessing(SearchContext context) { // disable timeout while executing a search context.accessed(-1); @@ -710,7 +740,6 @@ private void parseSource(DefaultSearchContext context, SearchSourceBuilder sourc if (source.aggregations() != null) { try { AggregatorFactories factories = source.aggregations().build(context, null); - factories.validate(); context.aggregations(new SearchContextAggregations(factories)); } catch (IOException e) { throw new AggregationInitializationException("Failed to create aggregators", e); @@ -725,8 +754,8 @@ private void parseSource(DefaultSearchContext context, SearchSourceBuilder sourc } if (source.rescores() != null) { try { - for (RescoreBuilder rescore : source.rescores()) { - context.addRescore(rescore.build(queryShardContext)); + for (RescorerBuilder rescore : source.rescores()) { + context.addRescore(rescore.buildContext(queryShardContext)); } } catch (IOException e) { throw new SearchContextException(context, "failed to create RescoreSearchContext", e); @@ -739,6 +768,13 @@ private void parseSource(DefaultSearchContext context, SearchSourceBuilder sourc context.fetchSourceContext(source.fetchSource()); } if (source.docValueFields() != null) { + int maxAllowedDocvalueFields = context.mapperService().getIndexSettings().getMaxDocvalueFields(); + if (source.docValueFields().size() > maxAllowedDocvalueFields) { + throw new IllegalArgumentException( + "Trying to retrieve too many docvalue_fields. Must be less than or equal to: [" + maxAllowedDocvalueFields + + "] but was [" + source.docValueFields().size() + "]. This limit can be set by changing the [" + + IndexSettings.MAX_DOCVALUE_FIELDS_SEARCH_SETTING.getKey() + "] index level setting."); + } context.docValueFieldsContext(new DocValueFieldsContext(source.docValueFields())); } if (source.highlighter() != null) { @@ -750,6 +786,13 @@ private void parseSource(DefaultSearchContext context, SearchSourceBuilder sourc } } if (source.scriptFields() != null) { + int maxAllowedScriptFields = context.mapperService().getIndexSettings().getMaxScriptFields(); + if (source.scriptFields().size() > maxAllowedScriptFields) { + throw new IllegalArgumentException( + "Trying to retrieve too many script_fields. Must be less than or equal to: [" + maxAllowedScriptFields + + "] but was [" + source.scriptFields().size() + "]. 
This limit can be set by changing the [" + + IndexSettings.MAX_SCRIPT_FIELDS_SETTING.getKey() + "] index level setting."); + } for (org.elasticsearch.search.builder.SearchSourceBuilder.ScriptField field : source.scriptFields()) { SearchScript.Factory factory = scriptService.compile(field.script(), SearchScript.CONTEXT); SearchScript.LeafFactory searchScript = factory.newFactory(field.script().getParams(), context.lookup()); @@ -849,13 +892,13 @@ private void shortcutDocIdsToLoad(SearchContext context) { context.docIdsToLoad(docIdsToLoad, 0, docIdsToLoad.length); } - private void processScroll(InternalScrollSearchRequest request, SearchContext context) { + private void processScroll(InternalScrollSearchRequest request, SearchContext context) throws IOException { // process scroll context.from(context.from() + context.size()); context.scrollContext().scroll = request.scroll(); // update the context keep alive based on the new scroll value if (request.scroll() != null && request.scroll().keepAlive() != null) { - context.keepAlive(request.scroll().keepAlive().millis()); + contextScrollKeepAlive(context, request.scroll().keepAlive().millis()); } } @@ -902,7 +945,7 @@ public AliasFilter buildAliasFilter(ClusterState state, String index, String... */ public boolean canMatch(ShardSearchRequest request) throws IOException { assert request.searchType() == SearchType.QUERY_THEN_FETCH : "unexpected search type: " + request.searchType(); - try (DefaultSearchContext context = createSearchContext(request, defaultSearchTimeout, null, false)) { + try (DefaultSearchContext context = createSearchContext(request, defaultSearchTimeout, false)) { SearchSourceBuilder source = context.request().source(); if (canRewriteToMatchNone(source)) { QueryBuilder queryBuilder = source.query(); diff --git a/core/src/main/java/org/elasticsearch/search/SearchSortValues.java b/core/src/main/java/org/elasticsearch/search/SearchSortValues.java index d3d55ff481afd..271b448d49670 100644 --- a/core/src/main/java/org/elasticsearch/search/SearchSortValues.java +++ b/core/src/main/java/org/elasticsearch/search/SearchSortValues.java @@ -23,7 +23,8 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.XContentParserUtils; @@ -33,7 +34,7 @@ import java.util.Arrays; import java.util.Objects; -public class SearchSortValues implements ToXContent, Writeable { +public class SearchSortValues implements ToXContentFragment, Writeable { static final SearchSortValues EMPTY = new SearchSortValues(new Object[0]); private final Object[] sortValues; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/Aggregation.java b/core/src/main/java/org/elasticsearch/search/aggregations/Aggregation.java index 2d8a81b080829..5d5ff680ef696 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/Aggregation.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/Aggregation.java @@ -20,13 +20,14 @@ import org.elasticsearch.common.ParseField; import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentFragment; import java.util.Map; /** * An 
aggregation. Extends {@link ToXContent} as it makes it easier to print out its content. */ -public interface Aggregation extends ToXContent { +public interface Aggregation extends ToXContentFragment { /** * Delimiter used when prefixing aggregation names with their type diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorBase.java b/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorBase.java index 18cc2ffc2f6be..65696e1239cfc 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorBase.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorBase.java @@ -18,12 +18,10 @@ */ package org.elasticsearch.search.aggregations; -import org.elasticsearch.common.breaker.CircuitBreaker; import org.apache.lucene.index.LeafReaderContext; +import org.elasticsearch.common.breaker.CircuitBreaker; import org.elasticsearch.common.breaker.CircuitBreakingException; import org.elasticsearch.indices.breaker.CircuitBreakerService; -import org.elasticsearch.search.aggregations.bucket.BestBucketsDeferringCollector; -import org.elasticsearch.search.aggregations.bucket.DeferringBucketCollector; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import org.elasticsearch.search.internal.SearchContext; import org.elasticsearch.search.internal.SearchContext.Lifetime; @@ -31,6 +29,7 @@ import java.io.IOException; import java.util.ArrayList; +import java.util.Arrays; import java.util.HashMap; import java.util.List; import java.util.Map; @@ -52,7 +51,6 @@ public abstract class AggregatorBase extends Aggregator { protected BucketCollector collectableSubAggregators; private Map subAggregatorbyName; - private DeferringBucketCollector recordingWrapper; private final List pipelineAggregators; private final CircuitBreakerService breakerService; private long requestBytesUsed; @@ -176,56 +174,12 @@ protected void doPreCollection() throws IOException { @Override public final void preCollection() throws IOException { - List collectors = new ArrayList<>(); - List deferredCollectors = new ArrayList<>(); - for (int i = 0; i < subAggregators.length; ++i) { - if (shouldDefer(subAggregators[i])) { - if (recordingWrapper == null) { - recordingWrapper = getDeferringCollector(); - } - deferredCollectors.add(subAggregators[i]); - subAggregators[i] = recordingWrapper.wrap(subAggregators[i]); - } else { - collectors.add(subAggregators[i]); - } - } - if (recordingWrapper != null) { - recordingWrapper.setDeferredCollector(deferredCollectors); - collectors.add(recordingWrapper); - } + List collectors = Arrays.asList(subAggregators); collectableSubAggregators = BucketCollector.wrap(collectors); doPreCollection(); collectableSubAggregators.preCollection(); } - public DeferringBucketCollector getDeferringCollector() { - // Default impl is a collector that selects the best buckets - // but an alternative defer policy may be based on best docs. - return new BestBucketsDeferringCollector(context()); - } - - /** - * This method should be overridden by subclasses that want to defer calculation - * of a child aggregation until a first pass is complete and a set of buckets has - * been pruned. - * Deferring collection will require the recording of all doc/bucketIds from the first - * pass and then the sub class should call {@link #runDeferredCollections(long...)} - * for the selected set of buckets that survive the pruning. 
- * @param aggregator the child aggregator - * @return true if the aggregator should be deferred - * until a first pass at collection has completed - */ - protected boolean shouldDefer(Aggregator aggregator) { - return false; - } - - protected final void runDeferredCollections(long... bucketOrds) throws IOException{ - // Being lenient here - ignore calls where there are no deferred collections to playback - if (recordingWrapper != null) { - recordingWrapper.replay(bucketOrds); - } - } - /** * @return The name of the aggregation. */ diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactories.java b/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactories.java index 579ff4bdad1da..de4f0aab67696 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactories.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactories.java @@ -18,17 +18,16 @@ */ package org.elasticsearch.search.aggregations; -import org.elasticsearch.action.support.ToXContentToBytes; import org.elasticsearch.common.ParsingException; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.query.QueryRewriteContext; import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregationBuilder; -import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder; -import org.elasticsearch.search.aggregations.bucket.histogram.HistogramAggregationBuilder; import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import org.elasticsearch.search.aggregations.support.AggregationPath; @@ -239,16 +238,7 @@ public int countPipelineAggregators() { return pipelineAggregatorFactories.size(); } - public void validate() { - for (AggregatorFactory factory : factories) { - factory.validate(); - } - for (PipelineAggregationBuilder factory : pipelineAggregatorFactories) { - factory.validate(parent, factories, pipelineAggregatorFactories); - } - } - - public static class Builder extends ToXContentToBytes implements Writeable { + public static class Builder implements Writeable, ToXContentObject { private final Set names = new HashSet<>(); private final List aggregationBuilders = new ArrayList<>(); private final List pipelineAggregatorBuilders = new ArrayList<>(); @@ -331,7 +321,8 @@ public AggregatorFactories build(SearchContext context, AggregatorFactory par if (skipResolveOrder) { orderedpipelineAggregators = new ArrayList<>(pipelineAggregatorBuilders); } else { - orderedpipelineAggregators = resolvePipelineAggregatorOrder(this.pipelineAggregatorBuilders, this.aggregationBuilders); + orderedpipelineAggregators = resolvePipelineAggregatorOrder(this.pipelineAggregatorBuilders, this.aggregationBuilders, + parent); } AggregatorFactory[] aggFactories = new AggregatorFactory[aggregationBuilders.size()]; for (int i = 0; i < aggregationBuilders.size(); i++) { @@ -341,7 +332,8 @@ public AggregatorFactories build(SearchContext context, AggregatorFactory par } private List resolvePipelineAggregatorOrder( - List pipelineAggregatorBuilders, List aggBuilders) { + List 
pipelineAggregatorBuilders, List aggBuilders, + AggregatorFactory parent) { Map pipelineAggregatorBuildersMap = new HashMap<>(); for (PipelineAggregationBuilder builder : pipelineAggregatorBuilders) { pipelineAggregatorBuildersMap.put(builder.getName(), builder); @@ -355,6 +347,7 @@ private List resolvePipelineAggregatorOrder( Set temporarilyMarked = new HashSet<>(); while (!unmarkedBuilders.isEmpty()) { PipelineAggregationBuilder builder = unmarkedBuilders.get(0); + builder.validate(parent, aggBuilders, pipelineAggregatorBuilders); resolvePipelineAggregatorOrder(aggBuildersMap, pipelineAggregatorBuildersMap, orderedPipelineAggregatorrs, unmarkedBuilders, temporarilyMarked, builder); } @@ -456,6 +449,11 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws return builder; } + @Override + public String toString() { + return Strings.toString(this, true, true); + } + @Override public int hashCode() { return Objects.hash(aggregationBuilders, pipelineAggregatorBuilders); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactory.java index 14b66c475ecd3..9a47635416aed 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactory.java @@ -130,7 +130,11 @@ public void collect(int doc, long bucket) throws IOException { aggregators.set(bucket, aggregator); } collector = aggregator.getLeafCollector(ctx); - collector.setScorer(scorer); + if (scorer != null) { + // Passing a null scorer can cause unexpected NPE at a later time, + // which can't not be directly linked to the fact that a null scorer has been supplied. + collector.setScorer(scorer); + } collectors.set(bucket, collector); } collector.collect(doc, 0); @@ -188,15 +192,6 @@ public String name() { return name; } - /** - * Validates the state of this factory (makes sure the factory is properly - * configured) - */ - public final void validate() { - doValidate(); - factories.validate(); - } - public void doValidate() { } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/ParsedAggregation.java b/core/src/main/java/org/elasticsearch/search/aggregations/ParsedAggregation.java index d79baac06b097..ba1d847f23b03 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/ParsedAggregation.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/ParsedAggregation.java @@ -21,6 +21,7 @@ import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.XContentParser.Token; @@ -33,7 +34,7 @@ * An implementation of {@link Aggregation} that is parsed from a REST response. * Serves as a base class for all aggregation implementations that are parsed from REST. 
*/ -public abstract class ParsedAggregation implements Aggregation, ToXContent { +public abstract class ParsedAggregation implements Aggregation, ToXContentFragment { protected static void declareAggregationFields(ObjectParser objectParser) { objectParser.declareObject((parsedAgg, metadata) -> parsedAgg.metadata = Collections.unmodifiableMap(metadata), diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/PipelineAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/PipelineAggregationBuilder.java index 8f965d1d87eb4..1b751d8c6847d 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/PipelineAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/PipelineAggregationBuilder.java @@ -18,8 +18,9 @@ */ package org.elasticsearch.search.aggregations; -import org.elasticsearch.action.support.ToXContentToBytes; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.NamedWriteable; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; @@ -31,8 +32,7 @@ * A factory that knows how to create an {@link PipelineAggregator} of a * specific type. */ -public abstract class PipelineAggregationBuilder extends ToXContentToBytes - implements NamedWriteable, BaseAggregationBuilder { +public abstract class PipelineAggregationBuilder implements NamedWriteable, BaseAggregationBuilder, ToXContentFragment { protected final String name; protected final String[] bucketsPaths; @@ -68,7 +68,7 @@ public final String[] getBucketsPaths() { * Internal: Validates the state of this factory (makes sure the factory is properly * configured) */ - protected abstract void validate(AggregatorFactory parent, AggregatorFactory[] factories, + protected abstract void validate(AggregatorFactory parent, List factories, List pipelineAggregatorFactories); /** @@ -86,4 +86,9 @@ protected abstract void validate(AggregatorFactory parent, AggregatorFactory< public PipelineAggregationBuilder subAggregations(Builder subFactories) { throw new IllegalArgumentException("Aggregation [" + name + "] cannot define sub-aggregations"); } + + @Override + public String toString() { + return Strings.toString(this, true, true); + } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/DeferableBucketAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/DeferableBucketAggregator.java new file mode 100644 index 0000000000000..bbfcef0af4000 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/DeferableBucketAggregator.java @@ -0,0 +1,95 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. 
See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.search.aggregations.bucket; + +import org.elasticsearch.search.aggregations.Aggregator; +import org.elasticsearch.search.aggregations.AggregatorFactories; +import org.elasticsearch.search.aggregations.BucketCollector; +import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; +import org.elasticsearch.search.internal.SearchContext; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; +import java.util.Map; + +public abstract class DeferableBucketAggregator extends BucketsAggregator { + + private DeferringBucketCollector recordingWrapper; + + protected DeferableBucketAggregator(String name, AggregatorFactories factories, SearchContext context, Aggregator parent, + List pipelineAggregators, Map metaData) throws IOException { + super(name, factories, context, parent, pipelineAggregators, metaData); + } + + @Override + protected void doPreCollection() throws IOException { + List collectors = new ArrayList<>(); + List deferredCollectors = new ArrayList<>(); + for (int i = 0; i < subAggregators.length; ++i) { + if (shouldDefer(subAggregators[i])) { + if (recordingWrapper == null) { + recordingWrapper = getDeferringCollector(); + } + deferredCollectors.add(subAggregators[i]); + subAggregators[i] = recordingWrapper.wrap(subAggregators[i]); + } else { + collectors.add(subAggregators[i]); + } + } + if (recordingWrapper != null) { + recordingWrapper.setDeferredCollector(deferredCollectors); + collectors.add(recordingWrapper); + } + collectableSubAggregators = BucketCollector.wrap(collectors); + } + + public DeferringBucketCollector getDeferringCollector() { + // Default impl is a collector that selects the best buckets + // but an alternative defer policy may be based on best docs. + return new BestBucketsDeferringCollector(context()); + } + + /** + * This method should be overridden by subclasses that want to defer + * calculation of a child aggregation until a first pass is complete and a + * set of buckets has been pruned. Deferring collection will require the + * recording of all doc/bucketIds from the first pass and then the sub class + * should call {@link #runDeferredCollections(long...)} for the selected set + * of buckets that survive the pruning. + * + * @param aggregator + * the child aggregator + * @return true if the aggregator should be deferred until a first pass at + * collection has completed + */ + protected boolean shouldDefer(Aggregator aggregator) { + return false; + } + + protected final void runDeferredCollections(long... 
bucketOrds) throws IOException { + // Being lenient here - ignore calls where there are no deferred + // collections to playback + if (recordingWrapper != null) { + recordingWrapper.replay(bucketOrds); + } + } + +} diff --git a/core/src/test/java/org/elasticsearch/index/mapper/AllFieldTypeTests.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/MultiBucketAggregationBuilder.java similarity index 70% rename from core/src/test/java/org/elasticsearch/index/mapper/AllFieldTypeTests.java rename to core/src/main/java/org/elasticsearch/search/aggregations/bucket/MultiBucketAggregationBuilder.java index 44e95b5dd7149..38f3d9b0dcab9 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/AllFieldTypeTests.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/MultiBucketAggregationBuilder.java @@ -16,14 +16,15 @@ * specific language governing permissions and limitations * under the License. */ -package org.elasticsearch.index.mapper; -import org.elasticsearch.index.mapper.AllFieldMapper; -import org.elasticsearch.index.mapper.MappedFieldType; +package org.elasticsearch.search.aggregations.bucket; + +import org.elasticsearch.search.aggregations.AggregationBuilder; + +/** + * Marker interface to indicate that the {@link AggregationBuilder} is for a + * multi-bucket aggregation. + */ +public interface MultiBucketAggregationBuilder { -public class AllFieldTypeTests extends FieldTypeTestCase { - @Override - protected MappedFieldType createDefaultFieldType() { - return new AllFieldMapper.AllFieldType(); - } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/SingleBucketAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/SingleBucketAggregator.java index f74df5d63b167..25d3e1e7188f9 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/SingleBucketAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/SingleBucketAggregator.java @@ -18,24 +18,10 @@ */ package org.elasticsearch.search.aggregations.bucket; -import org.elasticsearch.search.aggregations.Aggregator; -import org.elasticsearch.search.aggregations.AggregatorFactories; -import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.internal.SearchContext; - -import java.io.IOException; -import java.util.List; -import java.util.Map; - /** * A bucket aggregator that doesn't create new buckets. 
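The DeferableBucketAggregator javadoc above describes the deferral contract: on the first pass a deferring collector only records which document/bucket pairs were collected, and after the top buckets have been pruned the recorded hits are replayed for the surviving buckets only. The code below is a simplified, framework-free illustration of that record-and-replay idea using plain JDK types; DeferringSketch and Sink are hypothetical names, not Elasticsearch classes such as BestBucketsDeferringCollector.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Simplified sketch (not Elasticsearch code) of record-and-replay deferral:
// buffer (doc, bucket) pairs on the first pass, then replay only the buckets
// that survive pruning.
public class DeferringSketch {

    /** Hypothetical downstream collector, standing in for a deferred sub-aggregator. */
    interface Sink {
        void collect(int doc, long bucket);
    }

    private final List<long[]> recorded = new ArrayList<>(); // each entry is {doc, bucket}

    /** First pass: record instead of collecting. */
    public void collect(int doc, long bucket) {
        recorded.add(new long[] { doc, bucket });
    }

    /** Second pass: replay only the selected bucket ordinals into the real sink. */
    public void replay(Sink sink, long... selectedBuckets) {
        Arrays.sort(selectedBuckets);
        for (long[] hit : recorded) {
            if (Arrays.binarySearch(selectedBuckets, hit[1]) >= 0) {
                sink.collect((int) hit[0], hit[1]);
            }
        }
    }

    public static void main(String[] args) {
        DeferringSketch deferring = new DeferringSketch();
        deferring.collect(3, 0);
        deferring.collect(7, 1);
        deferring.collect(9, 0);
        // Suppose only bucket 0 survives pruning:
        deferring.replay((doc, bucket) -> System.out.println("doc=" + doc + " bucket=" + bucket), 0L);
    }
}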
*/ -public abstract class SingleBucketAggregator extends BucketsAggregator { - - protected SingleBucketAggregator(String name, AggregatorFactories factories, - SearchContext aggregationContext, Aggregator parent, - List pipelineAggregators, Map metaData) throws IOException { - super(name, factories, aggregationContext, parent, pipelineAggregators, metaData); - } +public interface SingleBucketAggregator { + int bucketDocCount(long bucketOrd); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/adjacency/AdjacencyMatrixAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/adjacency/AdjacencyMatrixAggregationBuilder.java index a2a0ceb321660..325e8b07ca625 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/adjacency/AdjacencyMatrixAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/adjacency/AdjacencyMatrixAggregationBuilder.java @@ -32,6 +32,7 @@ import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; +import org.elasticsearch.search.aggregations.bucket.MultiBucketAggregationBuilder; import org.elasticsearch.search.aggregations.bucket.adjacency.AdjacencyMatrixAggregator.KeyedFilter; import org.elasticsearch.search.internal.SearchContext; import org.elasticsearch.search.query.QueryPhaseExecutionException; @@ -46,7 +47,8 @@ import java.util.Map.Entry; import java.util.Objects; -public class AdjacencyMatrixAggregationBuilder extends AbstractAggregationBuilder { +public class AdjacencyMatrixAggregationBuilder extends AbstractAggregationBuilder + implements MultiBucketAggregationBuilder { public static final String NAME = "adjacency_matrix"; private static final String DEFAULT_SEPARATOR = "&"; @@ -187,7 +189,7 @@ protected AggregatorFactory doBuild(SearchContext context, AggregatorFactory< throws IOException { int maxFilters = context.indexShard().indexSettings().getMaxAdjacencyMatrixFilters(); if (filters.size() > maxFilters){ - throw new QueryPhaseExecutionException(context, + throw new IllegalArgumentException( "Number of filters is too large, must be less than or equal to: [" + maxFilters + "] but was [" + filters.size() + "]." 
+ "This limit can be set by changing the [" + IndexSettings.MAX_ADJACENCY_MATRIX_FILTERS_SETTING.getKey() diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregator.java index 46a9049711f85..fc4ac58fb15ac 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregator.java @@ -27,6 +27,7 @@ import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.LeafBucketCollector; import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; +import org.elasticsearch.search.aggregations.bucket.BucketsAggregator; import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import org.elasticsearch.search.internal.SearchContext; @@ -34,16 +35,17 @@ import java.io.IOException; import java.util.List; import java.util.Map; +import java.util.function.Supplier; /** * Aggregate all docs that match a filter. */ -public class FilterAggregator extends SingleBucketAggregator { +public class FilterAggregator extends BucketsAggregator implements SingleBucketAggregator { - private final Weight filter; + private final Supplier filter; public FilterAggregator(String name, - Weight filter, + Supplier filter, AggregatorFactories factories, SearchContext context, Aggregator parent, List pipelineAggregators, @@ -56,7 +58,7 @@ public FilterAggregator(String name, public LeafBucketCollector getLeafCollector(LeafReaderContext ctx, final LeafBucketCollector sub) throws IOException { // no need to provide deleted docs to the filter - final Bits bits = Lucene.asSequentialAccessBits(ctx.reader().maxDoc(), filter.scorerSupplier(ctx)); + final Bits bits = Lucene.asSequentialAccessBits(ctx.reader().maxDoc(), filter.get().scorerSupplier(ctx)); return new LeafBucketCollectorBase(sub, null) { @Override public void collect(int doc, long bucket) throws IOException { diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregatorFactory.java index 482bcb3d00951..4b54dccbf96c1 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregatorFactory.java @@ -23,6 +23,7 @@ import org.apache.lucene.search.Query; import org.apache.lucene.search.Weight; import org.elasticsearch.index.query.QueryBuilder; +import org.elasticsearch.search.aggregations.AggregationInitializationException; import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; @@ -35,20 +36,40 @@ public class FilterAggregatorFactory extends AggregatorFactory { - final Weight weight; + private Weight weight; + private Query filter; public FilterAggregatorFactory(String name, QueryBuilder filterBuilder, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactoriesBuilder, Map metaData) throws IOException { super(name, context, parent, subFactoriesBuilder, metaData); - IndexSearcher contextSearcher = context.searcher(); - Query filter = 
filterBuilder.toFilter(context.getQueryShardContext()); - weight = contextSearcher.createNormalizedWeight(filter, false); + filter = filterBuilder.toFilter(context.getQueryShardContext()); + } + + /** + * Returns the {@link Weight} for this filter aggregation, creating it if + * necessary. This is done lazily so that the {@link Weight} is only created + * if the aggregation collects documents reducing the overhead of the + * aggregation in the case where no documents are collected. + * + * Note that as aggregations are initialised and executed in a serial manner, + * no concurrency considerations are necessary here. + */ + public Weight getWeight() { + if (weight == null) { + IndexSearcher contextSearcher = context.searcher(); + try { + weight = contextSearcher.createNormalizedWeight(filter, false); + } catch (IOException e) { + throw new AggregationInitializationException("Failed to initialise filter", e); + } + } + return weight; } @Override public Aggregator createInternal(Aggregator parent, boolean collectsFromSingleBucket, List pipelineAggregators, Map metaData) throws IOException { - return new FilterAggregator(name, weight, factories, context, parent, pipelineAggregators, metaData); + return new FilterAggregator(name, () -> this.getWeight(), factories, context, parent, pipelineAggregators, metaData); } } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FiltersAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FiltersAggregationBuilder.java index f9f21b3cf622a..7f3f485d270d5 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FiltersAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FiltersAggregationBuilder.java @@ -31,8 +31,9 @@ import org.elasticsearch.search.aggregations.AbstractAggregationBuilder; import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; -import org.elasticsearch.search.aggregations.bucket.filter.FiltersAggregator.KeyedFilter; import org.elasticsearch.search.aggregations.AggregatorFactory; +import org.elasticsearch.search.aggregations.bucket.MultiBucketAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.filter.FiltersAggregator.KeyedFilter; import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; @@ -44,7 +45,8 @@ import static org.elasticsearch.index.query.AbstractQueryBuilder.parseInnerQueryBuilder; -public class FiltersAggregationBuilder extends AbstractAggregationBuilder { +public class FiltersAggregationBuilder extends AbstractAggregationBuilder + implements MultiBucketAggregationBuilder { public static final String NAME = "filters"; private static final ParseField FILTERS_FIELD = new ParseField("filters"); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FiltersAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FiltersAggregator.java index d488d092360d8..97724aa8b9735 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FiltersAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FiltersAggregator.java @@ -45,6 +45,7 @@ import java.util.List; import java.util.Map; import java.util.Objects; +import java.util.function.Supplier; public class FiltersAggregator extends BucketsAggregator { @@ -115,13 +116,13 @@ public boolean
equals(Object obj) { } private final String[] keys; - private Weight[] filters; + private Supplier filters; private final boolean keyed; private final boolean showOtherBucket; private final String otherBucketKey; private final int totalNumKeys; - public FiltersAggregator(String name, AggregatorFactories factories, String[] keys, Weight[] filters, boolean keyed, + public FiltersAggregator(String name, AggregatorFactories factories, String[] keys, Supplier filters, boolean keyed, String otherBucketKey, SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { super(name, factories, context, parent, pipelineAggregators, metaData); @@ -141,6 +142,7 @@ public FiltersAggregator(String name, AggregatorFactories factories, String[] ke public LeafBucketCollector getLeafCollector(LeafReaderContext ctx, final LeafBucketCollector sub) throws IOException { // no need to provide deleted docs to the filter + Weight[] filters = this.filters.get(); final Bits[] bits = new Bits[filters.length]; for (int i = 0; i < filters.length; ++i) { bits[i] = Lucene.asSequentialAccessBits(ctx.reader().maxDoc(), filters[i].scorerSupplier(ctx)); @@ -164,7 +166,7 @@ public void collect(int doc, long bucket) throws IOException { @Override public InternalAggregation buildAggregation(long owningBucketOrdinal) throws IOException { - List buckets = new ArrayList<>(filters.length); + List buckets = new ArrayList<>(keys.length); for (int i = 0; i < keys.length; i++) { long bucketOrd = bucketOrd(owningBucketOrdinal, i); InternalFilters.InternalBucket bucket = new InternalFilters.InternalBucket(keys[i], bucketDocCount(bucketOrd), @@ -184,7 +186,7 @@ public InternalAggregation buildAggregation(long owningBucketOrdinal) throws IOE @Override public InternalAggregation buildEmptyAggregation() { InternalAggregations subAggs = buildEmptySubAggregations(); - List buckets = new ArrayList<>(filters.length); + List buckets = new ArrayList<>(keys.length); for (int i = 0; i < keys.length; i++) { InternalFilters.InternalBucket bucket = new InternalFilters.InternalBucket(keys[i], 0, subAggs, keyed); buckets.add(bucket); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FiltersAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FiltersAggregatorFactory.java index 07c7af1d19d66..048042f05ff65 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FiltersAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FiltersAggregatorFactory.java @@ -22,6 +22,7 @@ import org.apache.lucene.search.IndexSearcher; import org.apache.lucene.search.Query; import org.apache.lucene.search.Weight; +import org.elasticsearch.search.aggregations.AggregationInitializationException; import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.AggregatorFactory; @@ -36,7 +37,8 @@ public class FiltersAggregatorFactory extends AggregatorFactory { private final String[] keys; - final Weight[] weights; + private final Query[] filters; + private Weight[] weights; private final boolean keyed; private final boolean otherBucket; private final String otherBucketKey; @@ -48,21 +50,43 @@ public FiltersAggregatorFactory(String name, List filters, boolean this.keyed = keyed; this.otherBucket = otherBucket; this.otherBucketKey = otherBucketKey; - IndexSearcher contextSearcher = 
context.searcher(); - weights = new Weight[filters.size()]; keys = new String[filters.size()]; + this.filters = new Query[filters.size()]; for (int i = 0; i < filters.size(); ++i) { KeyedFilter keyedFilter = filters.get(i); this.keys[i] = keyedFilter.key(); - Query filter = keyedFilter.filter().toFilter(context.getQueryShardContext()); - this.weights[i] = contextSearcher.createNormalizedWeight(filter, false); + this.filters[i] = keyedFilter.filter().toFilter(context.getQueryShardContext()); } } + /** + * Returns the {@link Weight}s for this filter aggregation, creating it if + * necessary. This is done lazily so that the {@link Weight}s are only + * created if the aggregation collects documents reducing the overhead of + * the aggregation in the case where no documents are collected. + * + * Note that as aggregations are initialised and executed in a serial manner, + * no concurrency considerations are necessary here. + */ + public Weight[] getWeights() { + if (weights == null) { + try { + IndexSearcher contextSearcher = context.searcher(); + weights = new Weight[filters.length]; + for (int i = 0; i < filters.length; ++i) { + this.weights[i] = contextSearcher.createNormalizedWeight(filters[i], false); + } + } catch (IOException e) { + throw new AggregationInitializationException("Failed to initialise filters for aggregation [" + name() + "]", e); + } + } + return weights; + } + @Override public Aggregator createInternal(Aggregator parent, boolean collectsFromSingleBucket, List pipelineAggregators, Map metaData) throws IOException { - return new FiltersAggregator(name, factories, keys, weights, keyed, otherBucket ? otherBucketKey : null, context, parent, + return new FiltersAggregator(name, factories, keys, () -> getWeights(), keyed, otherBucket ? otherBucketKey : null, context, parent, pipelineAggregators, metaData); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoGridAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoGridAggregationBuilder.java index 4722c86c98206..a457e9472090e 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoGridAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoGridAggregationBuilder.java @@ -24,11 +24,13 @@ import org.elasticsearch.ElasticsearchException; import org.elasticsearch.common.geo.GeoHashUtils; import org.elasticsearch.common.geo.GeoPoint; +import org.elasticsearch.common.geo.GeoUtils; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.support.XContentMapValues; import org.elasticsearch.index.fielddata.AbstractSortingNumericDocValues; import org.elasticsearch.index.fielddata.MultiGeoPointValues; import org.elasticsearch.index.fielddata.SortedBinaryDocValues; @@ -36,6 +38,7 @@ import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; import org.elasticsearch.search.aggregations.bucket.BucketUtils; +import org.elasticsearch.search.aggregations.bucket.MultiBucketAggregationBuilder; import org.elasticsearch.search.aggregations.support.ValueType; import org.elasticsearch.search.aggregations.support.ValuesSource; import
org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; @@ -48,7 +51,8 @@ import java.io.IOException; import java.util.Objects; -public class GeoGridAggregationBuilder extends ValuesSourceAggregationBuilder { +public class GeoGridAggregationBuilder extends ValuesSourceAggregationBuilder + implements MultiBucketAggregationBuilder { public static final String NAME = "geohash_grid"; public static final int DEFAULT_PRECISION = 5; public static final int DEFAULT_MAX_NUM_CELLS = 10000; @@ -57,7 +61,29 @@ public class GeoGridAggregationBuilder extends ValuesSourceAggregationBuilder(GeoGridAggregationBuilder.NAME); ValuesSourceParserHelper.declareGeoFields(PARSER, false, false); - PARSER.declareInt(GeoGridAggregationBuilder::precision, GeoHashGridParams.FIELD_PRECISION); + PARSER.declareField((parser, builder, context) -> { + XContentParser.Token token = parser.currentToken(); + if (token.equals(XContentParser.Token.VALUE_NUMBER)) { + builder.precision(XContentMapValues.nodeIntegerValue(parser.intValue())); + } else { + String precision = parser.text(); + try { + // we want to treat simple integer strings as precision levels, not distances + builder.precision(XContentMapValues.nodeIntegerValue(Integer.parseInt(precision))); + } catch (NumberFormatException e) { + // try to parse as a distance value + try { + builder.precision(GeoUtils.geoHashLevelsForPrecision(precision)); + } catch (NumberFormatException e2) { + // can happen when distance unit is unknown, in this case we simply want to know the reason + throw e2; + } catch (IllegalArgumentException e3) { + // this happens when distance too small, so precision > 12. We'd like to see the original string + throw new IllegalArgumentException("precision too high [" + precision + "]", e3); + } + } + } + }, GeoHashGridParams.FIELD_PRECISION, org.elasticsearch.common.xcontent.ObjectParser.ValueType.INT); PARSER.declareInt(GeoGridAggregationBuilder::size, GeoHashGridParams.FIELD_SIZE); PARSER.declareInt(GeoGridAggregationBuilder::shardSize, GeoHashGridParams.FIELD_SHARD_SIZE); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/global/GlobalAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/global/GlobalAggregator.java index e46b7623e3486..68e07c3657f8a 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/global/GlobalAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/global/GlobalAggregator.java @@ -23,6 +23,7 @@ import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.LeafBucketCollector; import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; +import org.elasticsearch.search.aggregations.bucket.BucketsAggregator; import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import org.elasticsearch.search.internal.SearchContext; @@ -31,7 +32,7 @@ import java.util.List; import java.util.Map; -public class GlobalAggregator extends SingleBucketAggregator { +public class GlobalAggregator extends BucketsAggregator implements SingleBucketAggregator { public GlobalAggregator(String name, AggregatorFactories subFactories, SearchContext aggregationContext, List pipelineAggregators, Map metaData) throws IOException { diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramAggregationBuilder.java 
b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramAggregationBuilder.java index d50eab9d8c0ae..dde5556e8518e 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramAggregationBuilder.java @@ -32,6 +32,7 @@ import org.elasticsearch.search.aggregations.BucketOrder; import org.elasticsearch.search.aggregations.InternalOrder; import org.elasticsearch.search.aggregations.InternalOrder.CompoundOrder; +import org.elasticsearch.search.aggregations.bucket.MultiBucketAggregationBuilder; import org.elasticsearch.search.aggregations.support.ValueType; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; @@ -53,8 +54,8 @@ /** * A builder for histograms on date fields. */ -public class DateHistogramAggregationBuilder - extends ValuesSourceAggregationBuilder { +public class DateHistogramAggregationBuilder extends ValuesSourceAggregationBuilder + implements MultiBucketAggregationBuilder { public static final String NAME = "date_histogram"; public static final Map DATE_FIELD_UNITS; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramAggregationBuilder.java index 2f36291f381c3..c746b77ef83f5 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramAggregationBuilder.java @@ -30,6 +30,7 @@ import org.elasticsearch.search.aggregations.BucketOrder; import org.elasticsearch.search.aggregations.InternalOrder; import org.elasticsearch.search.aggregations.InternalOrder.CompoundOrder; +import org.elasticsearch.search.aggregations.bucket.MultiBucketAggregationBuilder; import org.elasticsearch.search.aggregations.support.ValueType; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric; @@ -47,8 +48,8 @@ /** * A builder for histograms on numeric fields. */ -public class HistogramAggregationBuilder - extends ValuesSourceAggregationBuilder { +public class HistogramAggregationBuilder extends ValuesSourceAggregationBuilder + implements MultiBucketAggregationBuilder { public static final String NAME = "histogram"; private static final ObjectParser EXTENDED_BOUNDS_PARSER = new ObjectParser<>( diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/InternalHistogram.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/InternalHistogram.java index 21cd2347cc61b..aa94bb762596a 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/InternalHistogram.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/InternalHistogram.java @@ -321,8 +321,9 @@ protected boolean lessThan(IteratorAndCurrent a, IteratorAndCurrent b) { do { final IteratorAndCurrent top = pq.top(); - if (top.current.key != key) { - // the key changes, reduce what we already buffered and reset the buffer for current buckets + if (Double.compare(top.current.key, key) != 0) { + // The key changes, reduce what we already buffered and reset the buffer for current buckets. 
+ // Using Double.compare instead of != to handle NaN correctly. final Bucket reduced = currentBuckets.get(0).reduce(currentBuckets, reduceContext); if (reduced.getDocCount() >= minDocCount || reduceContext.isFinalReduce() == false) { reducedBuckets.add(reduced); @@ -335,7 +336,7 @@ protected boolean lessThan(IteratorAndCurrent a, IteratorAndCurrent b) { if (top.iterator.hasNext()) { final Bucket next = top.iterator.next(); - assert next.key > top.current.key : "shards must return data sorted by key"; + assert Double.compare(next.key, top.current.key) > 0 : "shards must return data sorted by key"; top.current = next; pq.updateTop(); } else { diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/MissingAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/MissingAggregator.java index 0f2217dfe269e..5864f5c5ca770 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/MissingAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/MissingAggregator.java @@ -25,6 +25,7 @@ import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.LeafBucketCollector; import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; +import org.elasticsearch.search.aggregations.bucket.BucketsAggregator; import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import org.elasticsearch.search.aggregations.support.ValuesSource; @@ -34,7 +35,7 @@ import java.util.List; import java.util.Map; -public class MissingAggregator extends SingleBucketAggregator { +public class MissingAggregator extends BucketsAggregator implements SingleBucketAggregator { private final ValuesSource valuesSource; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregator.java index 004c88d43f0e0..c7d721baa6798 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregator.java @@ -18,6 +18,8 @@ */ package org.elasticsearch.search.aggregations.bucket.nested; +import com.carrotsearch.hppc.LongArrayList; + import org.apache.lucene.index.IndexReaderContext; import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.ReaderUtil; @@ -36,6 +38,7 @@ import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.LeafBucketCollector; import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; +import org.elasticsearch.search.aggregations.bucket.BucketsAggregator; import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import org.elasticsearch.search.internal.SearchContext; @@ -44,20 +47,25 @@ import java.util.List; import java.util.Map; -class NestedAggregator extends SingleBucketAggregator { +class NestedAggregator extends BucketsAggregator implements SingleBucketAggregator { static final ParseField PATH_FIELD = new ParseField("path"); private final BitSetProducer parentFilter; private final Query childFilter; + private final boolean collectsFromSingleBucket; + + private BufferingNestedLeafBucketCollector 
bufferingNestedLeafBucketCollector; NestedAggregator(String name, AggregatorFactories factories, ObjectMapper parentObjectMapper, ObjectMapper childObjectMapper, - SearchContext context, Aggregator parentAggregator, - List pipelineAggregators, Map metaData) throws IOException { + SearchContext context, Aggregator parentAggregator, + List pipelineAggregators, Map metaData, + boolean collectsFromSingleBucket) throws IOException { super(name, factories, context, parentAggregator, pipelineAggregators, metaData); Query parentFilter = parentObjectMapper != null ? parentObjectMapper.nestedTypeFilter() : Queries.newNonNestedFilter(); this.parentFilter = context.bitsetFilterCache().getBitSetProducer(parentFilter); this.childFilter = childObjectMapper.nestedTypeFilter(); + this.collectsFromSingleBucket = collectsFromSingleBucket; } @Override @@ -70,26 +78,38 @@ public LeafBucketCollector getLeafCollector(final LeafReaderContext ctx, final L final BitSet parentDocs = parentFilter.getBitSet(ctx); final DocIdSetIterator childDocs = childDocsScorer != null ? childDocsScorer.iterator() : null; - return new LeafBucketCollectorBase(sub, null) { - @Override - public void collect(int parentDoc, long bucket) throws IOException { - // if parentDoc is 0 then this means that this parent doesn't have child docs (b/c these appear always before the parent - // doc), so we can skip: - if (parentDoc == 0 || parentDocs == null || childDocs == null) { - return; - } + if (collectsFromSingleBucket) { + return new LeafBucketCollectorBase(sub, null) { + @Override + public void collect(int parentDoc, long bucket) throws IOException { + // if parentDoc is 0 then this means that this parent doesn't have child docs (b/c these appear always before the parent + // doc), so we can skip: + if (parentDoc == 0 || parentDocs == null || childDocs == null) { + return; + } - final int prevParentDoc = parentDocs.prevSetBit(parentDoc - 1); - int childDocId = childDocs.docID(); - if (childDocId <= prevParentDoc) { - childDocId = childDocs.advance(prevParentDoc + 1); - } + final int prevParentDoc = parentDocs.prevSetBit(parentDoc - 1); + int childDocId = childDocs.docID(); + if (childDocId <= prevParentDoc) { + childDocId = childDocs.advance(prevParentDoc + 1); + } - for (; childDocId < parentDoc; childDocId = childDocs.nextDoc()) { - collectBucket(sub, childDocId, bucket); + for (; childDocId < parentDoc; childDocId = childDocs.nextDoc()) { + collectBucket(sub, childDocId, bucket); + } } - } - }; + }; + } else { + doPostCollection(); + return bufferingNestedLeafBucketCollector = new BufferingNestedLeafBucketCollector(sub, parentDocs, childDocs); + } + } + + @Override + protected void doPostCollection() throws IOException { + if (bufferingNestedLeafBucketCollector != null) { + bufferingNestedLeafBucketCollector.postCollect(); + } } @Override @@ -103,4 +123,63 @@ public InternalAggregation buildEmptyAggregation() { return new InternalNested(name, 0, buildEmptySubAggregations(), pipelineAggregators(), metaData()); } + class BufferingNestedLeafBucketCollector extends LeafBucketCollectorBase { + + final BitSet parentDocs; + final LeafBucketCollector sub; + final DocIdSetIterator childDocs; + final LongArrayList bucketBuffer = new LongArrayList(); + + int currentParentDoc = -1; + + BufferingNestedLeafBucketCollector(LeafBucketCollector sub, BitSet parentDocs, DocIdSetIterator childDocs) { + super(sub, null); + this.sub = sub; + this.parentDocs = parentDocs; + this.childDocs = childDocs; + } + + @Override + public void collect(int parentDoc, long 
bucket) throws IOException { + // if parentDoc is 0 then this means that this parent doesn't have child docs (b/c these appear always before the parent + // doc), so we can skip: + if (parentDoc == 0 || parentDocs == null || childDocs == null) { + return; + } + + if (currentParentDoc != parentDoc) { + processChildBuckets(currentParentDoc, bucketBuffer); + currentParentDoc = parentDoc; + } + bucketBuffer.add(bucket); + } + + void processChildBuckets(int parentDoc, LongArrayList buckets) throws IOException { + if (bucketBuffer.isEmpty()) { + return; + } + + + final int prevParentDoc = parentDocs.prevSetBit(parentDoc - 1); + int childDocId = childDocs.docID(); + if (childDocId <= prevParentDoc) { + childDocId = childDocs.advance(prevParentDoc + 1); + } + + for (; childDocId < parentDoc; childDocId = childDocs.nextDoc()) { + final long[] buffer = buckets.buffer; + final int size = buckets.size(); + for (int i = 0; i < size; i++) { + collectBucket(sub, childDocId, buffer[i]); + } + } + bucketBuffer.clear(); + } + + void postCollect() throws IOException { + processChildBuckets(currentParentDoc, bucketBuffer); + } + + } + } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregatorFactory.java index b491bf8ff0dc4..dfbe18ba87b4f 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregatorFactory.java @@ -48,13 +48,11 @@ class NestedAggregatorFactory extends AggregatorFactory @Override public Aggregator createInternal(Aggregator parent, boolean collectsFromSingleBucket, List pipelineAggregators, Map metaData) throws IOException { - if (collectsFromSingleBucket == false) { - return asMultiBucketAggregator(this, context, parent); - } if (childObjectMapper == null) { return new Unmapped(name, context, parent, pipelineAggregators, metaData); } - return new NestedAggregator(name, factories, parentObjectMapper, childObjectMapper, context, parent, pipelineAggregators, metaData); + return new NestedAggregator(name, factories, parentObjectMapper, childObjectMapper, context, parent, + pipelineAggregators, metaData, collectsFromSingleBucket); } private static final class Unmapped extends NonCollectingAggregator { diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregator.java index d52412ec1ecc3..cf45a09ef6193 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregator.java @@ -33,6 +33,7 @@ import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.LeafBucketCollector; import org.elasticsearch.search.aggregations.LeafBucketCollectorBase; +import org.elasticsearch.search.aggregations.bucket.BucketsAggregator; import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import org.elasticsearch.search.internal.SearchContext; @@ -41,7 +42,7 @@ import java.util.List; import java.util.Map; -public class ReverseNestedAggregator extends SingleBucketAggregator { +public class 
ReverseNestedAggregator extends BucketsAggregator implements SingleBucketAggregator { static final ParseField PATH_FIELD = new ParseField("path"); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/AbstractRangeBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/AbstractRangeBuilder.java index 079a6b123f094..9fe0409d0e10c 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/AbstractRangeBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/AbstractRangeBuilder.java @@ -24,6 +24,7 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.search.aggregations.bucket.MultiBucketAggregationBuilder; import org.elasticsearch.search.aggregations.bucket.range.RangeAggregator.Range; import org.elasticsearch.search.aggregations.support.ValuesSource; import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder; @@ -35,7 +36,7 @@ import java.util.function.Function; public abstract class AbstractRangeBuilder, R extends Range> - extends ValuesSourceAggregationBuilder { + extends ValuesSourceAggregationBuilder implements MultiBucketAggregationBuilder { protected final InternalRange.Factory rangeFactory; protected List ranges = new ArrayList<>(); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/IpRangeAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/IpRangeAggregationBuilder.java index 1d97f1886b4c1..05b05c2fc01d3 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/IpRangeAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/IpRangeAggregationBuilder.java @@ -22,11 +22,12 @@ import org.apache.lucene.util.BytesRef; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.ParsingException; +import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.network.InetAddresses; import org.elasticsearch.common.xcontent.ObjectParser; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.XContentParser.Token; @@ -108,7 +109,7 @@ private static Range parseRange(XContentParser parser) throws IOException { } } - public static class Range implements ToXContent { + public static class Range implements ToXContentObject { private final String key; private final String from; @@ -127,21 +128,12 @@ public static class Range implements ToXContent { } Range(String key, String mask) { - String[] splits = mask.split("/"); - if (splits.length != 2) { - throw new IllegalArgumentException("Expected [ip/prefix_length] but got [" + mask - + "], which contains zero or more than one [/]"); - } - InetAddress value = InetAddresses.forString(splits[0]); - int prefixLength = Integer.parseInt(splits[1]); - // copied from InetAddressPoint.newPrefixQuery - if (prefixLength < 0 || prefixLength > 8 * value.getAddress().length) { - throw new IllegalArgumentException("illegal prefixLength [" + prefixLength - + "] in [" + mask + "]. 
Must be 0-32 for IPv4 ranges, 0-128 for IPv6 ranges"); - } + final Tuple cidr = InetAddresses.parseCidr(mask); + final InetAddress address = cidr.v1(); + final int prefixLength = cidr.v2(); // create the lower value by zeroing out the host portion, upper value by filling it with all ones. - byte lower[] = value.getAddress(); - byte upper[] = value.getAddress(); + byte lower[] = address.getAddress(); + byte upper[] = address.getAddress(); for (int i = prefixLength; i < 8 * lower.length; i++) { int m = 1 << (7 - (i & 7)); lower[i >> 3] &= ~m; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeAggregator.java index a6268f75c0912..e502de6210b70 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeAggregator.java @@ -23,7 +23,8 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.XContentParserUtils; @@ -52,7 +53,7 @@ public class RangeAggregator extends BucketsAggregator { public static final ParseField RANGES_FIELD = new ParseField("ranges"); public static final ParseField KEYED_FIELD = new ParseField("keyed"); - public static class Range implements Writeable, ToXContent { + public static class Range implements Writeable, ToXContentObject { public static final ParseField KEY_FIELD = new ParseField("key"); public static final ParseField FROM_FIELD = new ParseField("from"); public static final ParseField TO_FIELD = new ParseField("to"); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/SamplerAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/SamplerAggregator.java index 7be81c08120c9..ba6cece729504 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/SamplerAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/SamplerAggregator.java @@ -26,6 +26,7 @@ import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.LeafBucketCollector; +import org.elasticsearch.search.aggregations.bucket.DeferableBucketAggregator; import org.elasticsearch.search.aggregations.bucket.DeferringBucketCollector; import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; @@ -47,7 +48,7 @@ * values would be preferable to users having to recreate this logic in a * 'script' e.g. to turn a datetime in milliseconds into a month key value. 
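The IpRangeAggregationBuilder.Range change above derives an address range from a CIDR mask by zeroing the host-portion bits for the lower bound and setting them for the upper bound. The standalone sketch below shows that bit manipulation using only JDK types; splitting the string on "/" is a deliberate simplification standing in for InetAddresses.parseCidr, and the class and method names are hypothetical.

import java.net.InetAddress;
import java.net.UnknownHostException;

// Standalone sketch (not Elasticsearch code): compute the lowest and highest
// address of a CIDR block by clearing/setting the host-portion bits.
public class CidrBoundsSketch {

    public static InetAddress[] bounds(String cidr) throws UnknownHostException {
        String[] parts = cidr.split("/");                     // simplified stand-in for InetAddresses.parseCidr
        InetAddress address = InetAddress.getByName(parts[0]); // literal IP, no DNS lookup
        int prefixLength = Integer.parseInt(parts[1]);

        byte[] lower = address.getAddress(); // getAddress() returns a fresh copy each call
        byte[] upper = address.getAddress();
        for (int i = prefixLength; i < 8 * lower.length; i++) {
            int m = 1 << (7 - (i & 7)); // mask for the i-th bit within its byte
            lower[i >> 3] &= ~m;        // clear host bit -> lowest address in the block
            upper[i >> 3] |= m;         // set host bit   -> highest address in the block
        }
        return new InetAddress[] { InetAddress.getByAddress(lower), InetAddress.getByAddress(upper) };
    }

    public static void main(String[] args) throws UnknownHostException {
        InetAddress[] range = bounds("192.168.1.77/24");
        System.out.println(range[0].getHostAddress() + " - " + range[1].getHostAddress());
        // prints: 192.168.1.0 - 192.168.1.255
    }
}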
*/ -public class SamplerAggregator extends SingleBucketAggregator { +public class SamplerAggregator extends DeferableBucketAggregator implements SingleBucketAggregator { public static final ParseField SHARD_SIZE_FIELD = new ParseField("shard_size"); public static final ParseField MAX_DOCS_PER_VALUE_FIELD = new ParseField("max_docs_per_value"); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTermsAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTermsAggregationBuilder.java index a14a9873db320..938ae4763d17f 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTermsAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTermsAggregationBuilder.java @@ -30,6 +30,7 @@ import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories.Builder; import org.elasticsearch.search.aggregations.AggregatorFactory; +import org.elasticsearch.search.aggregations.bucket.MultiBucketAggregationBuilder; import org.elasticsearch.search.aggregations.bucket.significant.heuristics.JLHScore; import org.elasticsearch.search.aggregations.bucket.significant.heuristics.SignificanceHeuristic; import org.elasticsearch.search.aggregations.bucket.significant.heuristics.SignificanceHeuristicParser; @@ -51,7 +52,8 @@ import static org.elasticsearch.index.query.AbstractQueryBuilder.parseInnerQueryBuilder; -public class SignificantTermsAggregationBuilder extends ValuesSourceAggregationBuilder { +public class SignificantTermsAggregationBuilder extends ValuesSourceAggregationBuilder + implements MultiBucketAggregationBuilder { public static final String NAME = "significant_terms"; static final ParseField BACKGROUND_FILTER = new ParseField("background_filter"); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTermsAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTermsAggregatorFactory.java index 969fbb38a5948..da87e59a6fdfd 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTermsAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTermsAggregatorFactory.java @@ -30,6 +30,8 @@ import org.apache.lucene.util.BytesRef; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.lease.Releasable; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.lucene.index.FilterableTermsEnum; import org.elasticsearch.common.lucene.index.FreqTermsEnum; import org.elasticsearch.index.mapper.MappedFieldType; @@ -53,12 +55,12 @@ import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; -import java.util.Arrays; import java.util.List; import java.util.Map; public class SignificantTermsAggregatorFactory extends ValuesSourceAggregatorFactory implements Releasable { + private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(SignificantTermsAggregatorFactory.class)); private final IncludeExclude includeExclude; private final String executionHint; @@ -202,17 +204,13 @@ protected Aggregator doCreateInternal(ValuesSource valuesSource, Aggregator pare if (valuesSource instanceof 
ValuesSource.Bytes) { ExecutionMode execution = null; if (executionHint != null) { - execution = ExecutionMode.fromString(executionHint); + execution = ExecutionMode.fromString(executionHint, DEPRECATION_LOGGER); } - if (!(valuesSource instanceof ValuesSource.Bytes.WithOrdinals)) { + if (valuesSource instanceof ValuesSource.Bytes.WithOrdinals == false) { execution = ExecutionMode.MAP; } if (execution == null) { - if (Aggregator.descendsFromBucketAggregator(parent)) { - execution = ExecutionMode.GLOBAL_ORDINALS_HASH; - } else { - execution = ExecutionMode.GLOBAL_ORDINALS; - } + execution = ExecutionMode.GLOBAL_ORDINALS; } assert execution != null; @@ -291,44 +289,35 @@ Aggregator create(String name, Map metaData) throws IOException { final IncludeExclude.OrdinalsFilter filter = includeExclude == null ? null : includeExclude.convertToOrdinalsFilter(format); + boolean remapGlobalOrd = true; + if (Aggregator.descendsFromBucketAggregator(parent) == false && + factories == AggregatorFactories.EMPTY && + includeExclude == null) { + /** + * We don't need to remap global ords iff this aggregator: + * - is not a child of a bucket aggregator AND + * - has no include/exclude rules AND + * - has no sub-aggregator + **/ + remapGlobalOrd = false; + } return new GlobalOrdinalsSignificantTermsAggregator(name, factories, (ValuesSource.Bytes.WithOrdinals.FieldData) valuesSource, format, bucketCountThresholds, filter, - aggregationContext, parent, false, significanceHeuristic, termsAggregatorFactory, pipelineAggregators, metaData); - - } - - }, - GLOBAL_ORDINALS_HASH(new ParseField("global_ordinals_hash")) { - - @Override - Aggregator create(String name, - AggregatorFactories factories, - ValuesSource valuesSource, - DocValueFormat format, - TermsAggregator.BucketCountThresholds bucketCountThresholds, - IncludeExclude includeExclude, - SearchContext aggregationContext, - Aggregator parent, - SignificanceHeuristic significanceHeuristic, - SignificantTermsAggregatorFactory termsAggregatorFactory, - List pipelineAggregators, - Map metaData) throws IOException { - - final IncludeExclude.OrdinalsFilter filter = includeExclude == null ? null : includeExclude.convertToOrdinalsFilter(format); - return new GlobalOrdinalsSignificantTermsAggregator(name, factories, - (ValuesSource.Bytes.WithOrdinals.FieldData) valuesSource, format, bucketCountThresholds, filter, aggregationContext, parent, - true, significanceHeuristic, termsAggregatorFactory, pipelineAggregators, metaData); + aggregationContext, parent, remapGlobalOrd, significanceHeuristic, termsAggregatorFactory, pipelineAggregators, metaData); } }; - public static ExecutionMode fromString(String value) { - for (ExecutionMode mode : values()) { - if (mode.parseField.match(value)) { - return mode; - } + public static ExecutionMode fromString(String value, final DeprecationLogger deprecationLogger) { + if ("global_ordinals".equals(value)) { + return GLOBAL_ORDINALS; + } else if ("global_ordinals_hash".equals(value)) { + deprecationLogger.deprecated("global_ordinals_hash is deprecated. 
Please use [global_ordinals] instead."); + return GLOBAL_ORDINALS; + } else if ("map".equals(value)) { + return MAP; } - throw new IllegalArgumentException("Unknown `execution_hint`: [" + value + "], expected any of " + Arrays.toString(values())); + throw new IllegalArgumentException("Unknown `execution_hint`: [" + value + "], expected any of [map, global_ordinals]"); } private final ParseField parseField; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/SignificanceHeuristic.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/SignificanceHeuristic.java index 7b6cf699741c6..4b5d8b3a30dd8 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/SignificanceHeuristic.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/SignificanceHeuristic.java @@ -20,7 +20,7 @@ package org.elasticsearch.search.aggregations.bucket.significant.heuristics; import org.elasticsearch.common.io.stream.NamedWriteable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.bucket.significant.SignificantTerms; import org.elasticsearch.search.internal.SearchContext; @@ -28,7 +28,7 @@ /** * Heuristic for that {@link SignificantTerms} uses to pick out significant terms. */ -public abstract class SignificanceHeuristic implements NamedWriteable, ToXContent { +public abstract class SignificanceHeuristic implements NamedWriteable, ToXContentFragment { /** * @param subsetFreq The frequency of the term in the selected sample * @param subsetSize The size of the selected sample (typically number of docs) diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/SignificanceHeuristicBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/SignificanceHeuristicBuilder.java index 6c669ec9850e0..fbf8c2a8c29ee 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/SignificanceHeuristicBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/SignificanceHeuristicBuilder.java @@ -20,8 +20,8 @@ package org.elasticsearch.search.aggregations.bucket.significant.heuristics; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentFragment; -public interface SignificanceHeuristicBuilder extends ToXContent { +public interface SignificanceHeuristicBuilder extends ToXContentFragment { } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/GlobalOrdinalsStringTermsAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/GlobalOrdinalsStringTermsAggregator.java index 6782abf2c2a5f..a9d8841dd4d0d 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/GlobalOrdinalsStringTermsAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/GlobalOrdinalsStringTermsAggregator.java @@ -78,20 +78,20 @@ public interface GlobalOrdLookupFunction { } public GlobalOrdinalsStringTermsAggregator(String name, AggregatorFactories factories, - ValuesSource.Bytes.WithOrdinals valuesSource, - BucketOrder order, - DocValueFormat format, - BucketCountThresholds 
bucketCountThresholds, - IncludeExclude.OrdinalsFilter includeExclude, - SearchContext context, - Aggregator parent, - boolean forceRemapGlobalOrds, - SubAggCollectionMode collectionMode, - boolean showTermDocCountError, - List pipelineAggregators, - Map metaData) throws IOException { + ValuesSource.Bytes.WithOrdinals valuesSource, + BucketOrder order, + DocValueFormat format, + BucketCountThresholds bucketCountThresholds, + IncludeExclude.OrdinalsFilter includeExclude, + SearchContext context, + Aggregator parent, + boolean remapGlobalOrds, + SubAggCollectionMode collectionMode, + boolean showTermDocCountError, + List pipelineAggregators, + Map metaData) throws IOException { super(name, factories, context, parent, order, format, bucketCountThresholds, collectionMode, showTermDocCountError, - pipelineAggregators, metaData); + pipelineAggregators, metaData); this.valuesSource = valuesSource; this.includeExclude = includeExclude; final IndexReader reader = context.searcher().getIndexReader(); @@ -100,18 +100,9 @@ public GlobalOrdinalsStringTermsAggregator(String name, AggregatorFactories fact this.valueCount = values.getValueCount(); this.lookupGlobalOrd = values::lookupOrd; this.acceptedGlobalOrdinals = includeExclude != null ? includeExclude.acceptedGlobalOrdinals(values) : null; - - /** - * Remap global ords to dense bucket ordinals if any sub-aggregator cannot be deferred. - * Sub-aggregators expect dense buckets and allocate memories based on this assumption. - * Deferred aggregators are safe because the selected ordinals are remapped when the buckets - * are replayed. - */ - boolean remapGlobalOrds = forceRemapGlobalOrds || Arrays.stream(subAggregators).anyMatch((a) -> shouldDefer(a) == false); this.bucketOrds = remapGlobalOrds ? new LongHash(1, context.bigArrays()) : null; } - boolean remapGlobalOrds() { return bucketOrds != null; } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregationBuilder.java index a1d958e846c98..6742632ddd50a 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregationBuilder.java @@ -31,6 +31,7 @@ import org.elasticsearch.search.aggregations.BucketOrder; import org.elasticsearch.search.aggregations.InternalOrder; import org.elasticsearch.search.aggregations.InternalOrder.CompoundOrder; +import org.elasticsearch.search.aggregations.bucket.MultiBucketAggregationBuilder; import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregator.BucketCountThresholds; import org.elasticsearch.search.aggregations.support.ValueType; import org.elasticsearch.search.aggregations.support.ValuesSource; @@ -45,7 +46,8 @@ import java.util.List; import java.util.Objects; -public class TermsAggregationBuilder extends ValuesSourceAggregationBuilder { +public class TermsAggregationBuilder extends ValuesSourceAggregationBuilder + implements MultiBucketAggregationBuilder { public static final String NAME = "terms"; public static final ParseField EXECUTION_HINT_FIELD_NAME = new ParseField("execution_hint"); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregator.java index ee8ed9ef6f480..8294986bcce8c 100644 --- 
a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregator.java @@ -35,6 +35,7 @@ import org.elasticsearch.search.aggregations.InternalOrder.Aggregation; import org.elasticsearch.search.aggregations.InternalOrder.CompoundOrder; import org.elasticsearch.search.aggregations.bucket.BucketsAggregator; +import org.elasticsearch.search.aggregations.bucket.DeferableBucketAggregator; import org.elasticsearch.search.aggregations.bucket.MultiBucketsAggregation.Bucket; import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator; import org.elasticsearch.search.aggregations.metrics.NumericMetricsAggregator; @@ -50,7 +51,7 @@ import java.util.Objects; import java.util.Set; -public abstract class TermsAggregator extends BucketsAggregator { +public abstract class TermsAggregator extends DeferableBucketAggregator { public static class BucketCountThresholds implements Writeable, ToXContentFragment { private long minDocCount; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregatorFactory.java index d2c124529204e..386a7da3e6436 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregatorFactory.java @@ -21,6 +21,8 @@ import org.apache.lucene.search.IndexSearcher; import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.search.DocValueFormat; import org.elasticsearch.search.aggregations.AggregationExecutionException; import org.elasticsearch.search.aggregations.Aggregator; @@ -41,18 +43,18 @@ import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; -import java.util.Arrays; import java.util.List; import java.util.Map; public class TermsAggregatorFactory extends ValuesSourceAggregatorFactory { + private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(TermsAggregatorFactory.class)); private final BucketOrder order; private final IncludeExclude includeExclude; private final String executionHint; private final SubAggCollectionMode collectMode; private final TermsAggregator.BucketCountThresholds bucketCountThresholds; - private boolean showTermDocCountError; + private final boolean showTermDocCountError; TermsAggregatorFactory(String name, ValuesSourceConfig config, @@ -124,61 +126,15 @@ protected Aggregator doCreateInternal(ValuesSource valuesSource, Aggregator pare if (valuesSource instanceof ValuesSource.Bytes) { ExecutionMode execution = null; if (executionHint != null) { - execution = ExecutionMode.fromString(executionHint); + execution = ExecutionMode.fromString(executionHint, DEPRECATION_LOGGER); } - // In some cases, using ordinals is just not supported: override it - if (!(valuesSource instanceof ValuesSource.Bytes.WithOrdinals)) { + if (valuesSource instanceof ValuesSource.Bytes.WithOrdinals == false) { execution = ExecutionMode.MAP; } - - final long maxOrd; - final double ratio; - if (execution == null || execution.needsGlobalOrdinals()) { - ValuesSource.Bytes.WithOrdinals valueSourceWithOrdinals = (ValuesSource.Bytes.WithOrdinals) valuesSource; - IndexSearcher indexSearcher = context.searcher(); 
- maxOrd = valueSourceWithOrdinals.globalMaxOrd(indexSearcher); - ratio = maxOrd / ((double) indexSearcher.getIndexReader().numDocs()); - } else { - maxOrd = -1; - ratio = -1; - } - - // Let's try to use a good default + final long maxOrd = getMaxOrd(valuesSource, context.searcher()); if (execution == null) { - // if there is a parent bucket aggregator the number of - // instances of this aggregator is going - // to be unbounded and most instances may only aggregate few - // documents, so use hashed based - // global ordinals to keep the bucket ords dense. - - // Additionally, if using partitioned terms the regular global - // ordinals would be sparse so we opt for hash - - // Finally if we are sorting by sub aggregations, then these - // aggregations cannot be deferred, so global_ordinals_hash is - // a safer choice as we won't use memory for sub aggregations - // for buckets that are not collected. - if (Aggregator.descendsFromBucketAggregator(parent) || - (includeExclude != null && includeExclude.isPartitionBased()) || - isAggregationSort(order)) { - execution = ExecutionMode.GLOBAL_ORDINALS_HASH; - } else { - if (factories == AggregatorFactories.EMPTY) { - if (ratio <= 0.5 && maxOrd <= 2048) { - // 0.5: At least we need reduce the number of global - // ordinals look-ups by half - // 2048: GLOBAL_ORDINALS_LOW_CARDINALITY has - // additional memory usage, which directly linked to - // maxOrd, so we need to limit. - execution = ExecutionMode.GLOBAL_ORDINALS_LOW_CARDINALITY; - } else { - execution = ExecutionMode.GLOBAL_ORDINALS; - } - } else { - execution = ExecutionMode.GLOBAL_ORDINALS; - } - } + execution = ExecutionMode.GLOBAL_ORDINALS; } SubAggCollectionMode cm = collectMode; if (cm == null) { @@ -247,6 +203,19 @@ static SubAggCollectionMode subAggCollectionMode(int expectedSize, long maxOrd) return SubAggCollectionMode.DEPTH_FIRST; } + /** + * Get the maximum global ordinal value for the provided {@link ValuesSource} or -1 + * if the values source is not an instance of {@link ValuesSource.Bytes.WithOrdinals}. + */ + static long getMaxOrd(ValuesSource source, IndexSearcher searcher) throws IOException { + if (source instanceof ValuesSource.Bytes.WithOrdinals) { + ValuesSource.Bytes.WithOrdinals valueSourceWithOrdinals = (ValuesSource.Bytes.WithOrdinals) source; + return valueSourceWithOrdinals.globalMaxOrd(searcher); + } else { + return -1; + } + } + public enum ExecutionMode { MAP(new ParseField("map")) { @@ -265,18 +234,10 @@ Aggregator create(String name, boolean showTermDocCountError, List pipelineAggregators, Map metaData) throws IOException { - final IncludeExclude.StringFilter filter = includeExclude == null ? null : includeExclude.convertToStringFilter(format); return new StringTermsAggregator(name, factories, valuesSource, order, format, bucketCountThresholds, filter, context, parent, subAggCollectMode, showTermDocCountError, pipelineAggregators, metaData); - } - - @Override - boolean needsGlobalOrdinals() { - return false; - } - }, GLOBAL_ORDINALS(new ParseField("global_ordinals")) { @@ -294,91 +255,55 @@ Aggregator create(String name, List pipelineAggregators, Map metaData) throws IOException { - final IncludeExclude.OrdinalsFilter filter = includeExclude == null ? 
null : includeExclude.convertToOrdinalsFilter(format); - return new GlobalOrdinalsStringTermsAggregator(name, factories, (ValuesSource.Bytes.WithOrdinals) valuesSource, order, - format, bucketCountThresholds, filter, context, parent, false, subAggCollectMode, showTermDocCountError, + final long maxOrd = getMaxOrd(valuesSource, context.searcher()); + assert maxOrd != -1; + final double ratio = maxOrd / ((double) context.searcher().getIndexReader().numDocs()); + if (factories == AggregatorFactories.EMPTY && + includeExclude == null && + Aggregator.descendsFromBucketAggregator(parent) == false && + ratio <= 0.5 && maxOrd <= 2048) { + /** + * We can use the low cardinality execution mode iff this aggregator: + * - has no sub-aggregator AND + * - is not a child of a bucket aggregator AND + * - we reduce the number of global ordinals look-ups by at least half (ratio <= 0.5) AND + * - the maximum global ordinal is less than 2048 (LOW_CARDINALITY has additional memory usage, + * which is directly linked to maxOrd, so we need to limit). + */ + return new GlobalOrdinalsStringTermsAggregator.LowCardinality(name, factories, (ValuesSource.Bytes.WithOrdinals) valuesSource, order, + format, bucketCountThresholds, context, parent, false, subAggCollectMode, showTermDocCountError, pipelineAggregators, metaData); - } - - @Override - boolean needsGlobalOrdinals() { - return true; - } - - }, - GLOBAL_ORDINALS_HASH(new ParseField("global_ordinals_hash")) { - - @Override - Aggregator create(String name, - AggregatorFactories factories, - ValuesSource valuesSource, - BucketOrder order, - DocValueFormat format, - TermsAggregator.BucketCountThresholds bucketCountThresholds, - IncludeExclude includeExclude, - SearchContext context, - Aggregator parent, - SubAggCollectionMode subAggCollectMode, - boolean showTermDocCountError, - List pipelineAggregators, - Map metaData) throws IOException { - + } final IncludeExclude.OrdinalsFilter filter = includeExclude == null ?
null : includeExclude.convertToOrdinalsFilter(format); - return new GlobalOrdinalsStringTermsAggregator(name, factories, (ValuesSource.Bytes.WithOrdinals) valuesSource, - order, format, bucketCountThresholds, filter, context, parent, true, subAggCollectMode, - showTermDocCountError, pipelineAggregators, metaData); - - } - - @Override - boolean needsGlobalOrdinals() { - return true; - } - }, - GLOBAL_ORDINALS_LOW_CARDINALITY(new ParseField("global_ordinals_low_cardinality")) { - - @Override - Aggregator create(String name, - AggregatorFactories factories, - ValuesSource valuesSource, - BucketOrder order, - DocValueFormat format, - TermsAggregator.BucketCountThresholds bucketCountThresholds, - IncludeExclude includeExclude, - SearchContext context, - Aggregator parent, - SubAggCollectionMode subAggCollectMode, - boolean showTermDocCountError, - List pipelineAggregators, - Map metaData) throws IOException { - - if (includeExclude != null || factories.countAggregators() > 0 - // we need the FieldData impl to be able to extract the - // segment to global ord mapping - || valuesSource.getClass() != ValuesSource.Bytes.FieldData.class) { - return GLOBAL_ORDINALS.create(name, factories, valuesSource, order, format, bucketCountThresholds, includeExclude, - context, parent, subAggCollectMode, showTermDocCountError, pipelineAggregators, metaData); + boolean remapGlobalOrds = true; + if (includeExclude == null && + Aggregator.descendsFromBucketAggregator(parent) == false && + (factories == AggregatorFactories.EMPTY || + (isAggregationSort(order) == false && subAggCollectMode == SubAggCollectionMode.BREADTH_FIRST))) { + /** + * We don't need to remap global ords iff this aggregator: + * - has no include/exclude rules AND + * - is not a child of a bucket aggregator AND + * - has no sub-aggregator or only sub-aggregator that can be deferred ({@link SubAggCollectionMode#BREADTH_FIRST}). 
+ **/ + remapGlobalOrds = false; } - return new GlobalOrdinalsStringTermsAggregator.LowCardinality(name, factories, - (ValuesSource.Bytes.WithOrdinals) valuesSource, order, format, bucketCountThresholds, context, parent, - false, subAggCollectMode, showTermDocCountError, pipelineAggregators, metaData); - - } - - @Override - boolean needsGlobalOrdinals() { - return true; + return new GlobalOrdinalsStringTermsAggregator(name, factories, (ValuesSource.Bytes.WithOrdinals) valuesSource, order, + format, bucketCountThresholds, filter, context, parent, remapGlobalOrds, subAggCollectMode, showTermDocCountError, + pipelineAggregators, metaData); } }; - public static ExecutionMode fromString(String value) { - for (ExecutionMode mode : values()) { - if (mode.parseField.match(value)) { - return mode; - } + public static ExecutionMode fromString(String value, final DeprecationLogger deprecationLogger) { + switch (value) { + case "global_ordinals": + return GLOBAL_ORDINALS; + case "map": + return MAP; + default: + throw new IllegalArgumentException("Unknown `execution_hint`: [" + value + "], expected any of [map, global_ordinals]"); } - throw new IllegalArgumentException("Unknown `execution_hint`: [" + value + "], expected any of " + Arrays.toString(values())); } private final ParseField parseField; @@ -401,8 +326,6 @@ abstract Aggregator create(String name, List pipelineAggregators, Map metaData) throws IOException; - abstract boolean needsGlobalOrdinals(); - @Override public String toString() { return parseField.getPreferredName(); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregatorFactory.java index 2d9e02d08cb01..aa7de3e1ab6e1 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregatorFactory.java @@ -70,6 +70,8 @@ public Aggregator createInternal(Aggregator parent, boolean collectsFromSingleBu params = deepCopyParams(params, context); } else { params = new HashMap<>(); + } + if (params.containsKey("_agg") == false) { params.put("_agg", new HashMap()); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/InternalStats.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/InternalStats.java index 172e3691127d1..6d7ae0cddc0df 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/InternalStats.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/InternalStats.java @@ -192,7 +192,7 @@ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) th builder.nullField(Fields.MIN); builder.nullField(Fields.MAX); builder.nullField(Fields.AVG); - builder.nullField(Fields.SUM); + builder.field(Fields.SUM, 0.0d); } otherStatsToXContent(builder, params); return builder; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/ParsedStats.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/ParsedStats.java index 239548ecdebc6..4c676cf227838 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/ParsedStats.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/ParsedStats.java @@ -109,7 +109,7 @@ protected XContentBuilder 
doXContentBody(XContentBuilder builder, Params params) builder.nullField(Fields.MIN); builder.nullField(Fields.MAX); builder.nullField(Fields.AVG); - builder.nullField(Fields.SUM); + builder.field(Fields.SUM, 0.0d); } otherStatsToXContent(builder, params); return builder; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregationBuilder.java index c5be9f3551b15..cede3ae9661db 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregationBuilder.java @@ -26,6 +26,7 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.script.Script; import org.elasticsearch.script.SearchScript; @@ -529,6 +530,17 @@ public TopHitsAggregationBuilder subAggregations(Builder subFactories) { @Override protected TopHitsAggregatorFactory doBuild(SearchContext context, AggregatorFactory parent, Builder subfactoriesBuilder) throws IOException { + long innerResultWindow = from() + size(); + int maxInnerResultWindow = context.mapperService().getIndexSettings().getMaxInnerResultWindow(); + if (innerResultWindow > maxInnerResultWindow) { + throw new IllegalArgumentException( + "Top hits result window is too large, the top hits aggregator [" + name + "]'s from + size must be less " + + "than or equal to: [" + maxInnerResultWindow + "] but was [" + innerResultWindow + + "]. This limit can be set by changing the [" + IndexSettings.MAX_INNER_RESULT_WINDOW_SETTING.getKey() + + "] index level setting." + ); + } + List fields = new ArrayList<>(); if (scriptFields != null) { for (ScriptField field : scriptFields) { diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregator.java index e26cf00671f2d..700acdf797a74 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregator.java @@ -19,6 +19,9 @@ package org.elasticsearch.search.aggregations.metrics.tophits; +import com.carrotsearch.hppc.LongObjectHashMap; +import com.carrotsearch.hppc.cursors.ObjectCursor; + import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.FieldDoc; import org.apache.lucene.search.LeafCollector; @@ -45,7 +48,7 @@ import org.elasticsearch.search.fetch.FetchSearchResult; import org.elasticsearch.search.internal.SearchContext; import org.elasticsearch.search.internal.SubSearchContext; -import org.elasticsearch.search.rescore.RescoreSearchContext; +import org.elasticsearch.search.rescore.RescoreContext; import org.elasticsearch.search.sort.SortAndFormats; import java.io.IOException; @@ -54,21 +57,11 @@ public class TopHitsAggregator extends MetricsAggregator { - /** Simple wrapper around a top-level collector and the current leaf collector. 
*/ - private static class TopDocsAndLeafCollector { - final TopDocsCollector topLevelCollector; - LeafCollector leafCollector; - - TopDocsAndLeafCollector(TopDocsCollector topLevelCollector) { - this.topLevelCollector = topLevelCollector; - } - } + private final FetchPhase fetchPhase; + private final SubSearchContext subSearchContext; + private final LongObjectPagedHashMap> topDocsCollectors; - final FetchPhase fetchPhase; - final SubSearchContext subSearchContext; - final LongObjectPagedHashMap topDocsCollectors; - - public TopHitsAggregator(FetchPhase fetchPhase, SubSearchContext subSearchContext, String name, SearchContext context, + TopHitsAggregator(FetchPhase fetchPhase, SubSearchContext subSearchContext, String name, SearchContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { super(name, context, parent, pipelineAggregators, metaData); this.fetchPhase = fetchPhase; @@ -88,13 +81,12 @@ public boolean needsScores() { } @Override - public LeafBucketCollector getLeafCollector(final LeafReaderContext ctx, - final LeafBucketCollector sub) throws IOException { - - for (LongObjectPagedHashMap.Cursor cursor : topDocsCollectors) { - cursor.value.leafCollector = cursor.value.topLevelCollector.getLeafCollector(ctx); - } - + public LeafBucketCollector getLeafCollector(LeafReaderContext ctx, LeafBucketCollector sub) throws IOException { + // Create leaf collectors here instead of at the aggregator level. Otherwise, if this collector is invoked + // during post collection, the leaf readers at the aggregator level have already been replaced with the next + // leaf readers, and post collection would then push doc ids of the previous segment, which causes assertions + // to trip or incorrect top docs to be computed. + final LongObjectHashMap leafCollectors = new LongObjectHashMap<>(1); return new LeafBucketCollectorBase(sub, null) { Scorer scorer; @@ -102,55 +94,63 @@ public LeafBucketCollector getLeafCollector(final LeafReaderContext ctx, @Override public void setScorer(Scorer scorer) throws IOException { this.scorer = scorer; - for (LongObjectPagedHashMap.Cursor cursor : topDocsCollectors) { - cursor.value.leafCollector.setScorer(scorer); - } super.setScorer(scorer); + for (ObjectCursor cursor : leafCollectors.values()) { + cursor.value.setScorer(scorer); + } } @Override public void collect(int docId, long bucket) throws IOException { - TopDocsAndLeafCollector collectors = topDocsCollectors.get(bucket); - if (collectors == null) { + TopDocsCollector topDocsCollector = topDocsCollectors.get(bucket); + if (topDocsCollector == null) { SortAndFormats sort = subSearchContext.sort(); int topN = subSearchContext.from() + subSearchContext.size(); if (sort == null) { - for (RescoreSearchContext rescoreContext : context.rescore()) { - topN = Math.max(rescoreContext.window(), topN); + for (RescoreContext rescoreContext : context.rescore()) { + topN = Math.max(rescoreContext.getWindowSize(), topN); } } // In the QueryPhase we don't need this protection, because it is build into the IndexSearcher, // but here we create collectors ourselves and we need prevent OOM because of crazy an offset and size.
topN = Math.min(topN, subSearchContext.searcher().getIndexReader().maxDoc()); - TopDocsCollector topLevelCollector; if (sort == null) { - topLevelCollector = TopScoreDocCollector.create(topN); + topDocsCollector = TopScoreDocCollector.create(topN); } else { - topLevelCollector = TopFieldCollector.create(sort.sort, topN, true, subSearchContext.trackScores(), - subSearchContext.trackScores()); + topDocsCollector = TopFieldCollector.create(sort.sort, topN, true, subSearchContext.trackScores(), + subSearchContext.trackScores()); + } + topDocsCollectors.put(bucket, topDocsCollector); + } + + final LeafCollector leafCollector; + final int key = leafCollectors.indexOf(bucket); + if (key < 0) { + leafCollector = topDocsCollector.getLeafCollector(ctx); + if (scorer != null) { + leafCollector.setScorer(scorer); } - collectors = new TopDocsAndLeafCollector(topLevelCollector); - collectors.leafCollector = collectors.topLevelCollector.getLeafCollector(ctx); - collectors.leafCollector.setScorer(scorer); - topDocsCollectors.put(bucket, collectors); + leafCollectors.indexInsert(key, bucket, leafCollector); + } else { + leafCollector = leafCollectors.indexGet(key); } - collectors.leafCollector.collect(docId); + leafCollector.collect(docId); } }; } @Override public InternalAggregation buildAggregation(long owningBucketOrdinal) { - TopDocsAndLeafCollector topDocsCollector = topDocsCollectors.get(owningBucketOrdinal); + TopDocsCollector topDocsCollector = topDocsCollectors.get(owningBucketOrdinal); final InternalTopHits topHits; if (topDocsCollector == null) { topHits = buildEmptyAggregation(); } else { - TopDocs topDocs = topDocsCollector.topLevelCollector.topDocs(); + TopDocs topDocs = topDocsCollector.topDocs(); if (subSearchContext.sort() == null) { - for (RescoreSearchContext ctx : context().rescore()) { + for (RescoreContext ctx : context().rescore()) { try { - topDocs = ctx.rescorer().rescore(topDocs, context, ctx); + topDocs = ctx.rescorer().rescore(topDocs, context.searcher(), ctx); } catch (IOException e) { throw new ElasticsearchException("Rescore TopHits Failed", e); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregatorFactory.java index 6a41cc97f8ec5..09c26b169e528 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregatorFactory.java @@ -51,7 +51,7 @@ public class TopHitsAggregatorFactory extends AggregatorFactory scriptFields; private final FetchSourceContext fetchSourceContext; - public TopHitsAggregatorFactory(String name, int from, int size, boolean explain, boolean version, boolean trackScores, + TopHitsAggregatorFactory(String name, int from, int size, boolean explain, boolean version, boolean trackScores, Optional sort, HighlightBuilder highlightBuilder, StoredFieldsContext storedFieldsContext, List docValueFields, List scriptFields, FetchSourceContext fetchSourceContext, SearchContext context, AggregatorFactory parent, AggregatorFactories.Builder subFactories, Map metaData) diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/AbstractPipelineAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/AbstractPipelineAggregationBuilder.java index 8d28195d55163..33c68aff26d56 100644 --- 
a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/AbstractPipelineAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/AbstractPipelineAggregationBuilder.java @@ -22,6 +22,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactory; import org.elasticsearch.search.aggregations.PipelineAggregationBuilder; @@ -80,7 +81,7 @@ public String type() { * configured) */ @Override - public final void validate(AggregatorFactory parent, AggregatorFactory[] factories, + public final void validate(AggregatorFactory parent, List factories, List pipelineAggregatorFactories) { doValidate(parent, factories, pipelineAggregatorFactories); } @@ -98,7 +99,7 @@ public final PipelineAggregator create() throws IOException { return aggregator; } - public void doValidate(AggregatorFactory parent, AggregatorFactory[] factories, + public void doValidate(AggregatorFactory parent, List factories, List pipelineAggregatorFactories) { } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/BucketMetricsPipelineAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/BucketMetricsPipelineAggregationBuilder.java index be34236d0448a..56db4310c9497 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/BucketMetricsPipelineAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/BucketMetricsPipelineAggregationBuilder.java @@ -23,16 +23,19 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.search.DocValueFormat; +import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactory; import org.elasticsearch.search.aggregations.PipelineAggregationBuilder; -import org.elasticsearch.search.aggregations.pipeline.BucketHelpers.GapPolicy; +import org.elasticsearch.search.aggregations.bucket.MultiBucketAggregationBuilder; import org.elasticsearch.search.aggregations.pipeline.AbstractPipelineAggregationBuilder; +import org.elasticsearch.search.aggregations.pipeline.BucketHelpers.GapPolicy; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import java.io.IOException; import java.util.List; import java.util.Map; import java.util.Objects; +import java.util.Optional; public abstract class BucketMetricsPipelineAggregationBuilder> extends AbstractPipelineAggregationBuilder { @@ -106,12 +109,29 @@ public GapPolicy gapPolicy() { protected abstract PipelineAggregator createInternal(Map metaData) throws IOException; @Override - public void doValidate(AggregatorFactory parent, AggregatorFactory[] aggFactories, + public void doValidate(AggregatorFactory parent, List aggBuilders, List pipelineAggregatorFactories) { if (bucketsPaths.length != 1) { throw new IllegalStateException(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName() + " must contain a single entry for aggregation [" + name + "]"); } + // Need to find the first agg name in the buckets path to check its a + // multi bucket agg: aggs are split with '>' and can optionally have a + // metric name after them by using '.' 
so need to split on both to get + // just the agg name + final String firstAgg = bucketsPaths[0].split("[>\\.]")[0]; + Optional aggBuilder = aggBuilders.stream().filter((builder) -> builder.getName().equals(firstAgg)) + .findAny(); + if (aggBuilder.isPresent()) { + if ((aggBuilder.get() instanceof MultiBucketAggregationBuilder) == false) { + throw new IllegalArgumentException("The first aggregation in " + PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName() + + " must be a multi-bucket aggregation for aggregation [" + name + "] found :" + + aggBuilder.get().getClass().getName() + " for buckets path: " + bucketsPaths[0]); + } + } else { + throw new IllegalArgumentException(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName() + + " aggregation does not exist for aggregation [" + name + "]: " + bucketsPaths[0]); + } } @Override diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/avg/AvgBucketPipelineAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/avg/AvgBucketPipelineAggregationBuilder.java index 28280edb87230..d9aa2ae0ebc4b 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/avg/AvgBucketPipelineAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/avg/AvgBucketPipelineAggregationBuilder.java @@ -22,14 +22,11 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.PipelineAggregationBuilder; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsParser; import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsPipelineAggregationBuilder; import java.io.IOException; -import java.util.List; import java.util.Map; public class AvgBucketPipelineAggregationBuilder extends BucketMetricsPipelineAggregationBuilder { @@ -56,15 +53,6 @@ protected PipelineAggregator createInternal(Map metaData) throws return new AvgBucketPipelineAggregator(name, bucketsPaths, gapPolicy(), formatter(), metaData); } - @Override - public void doValidate(AggregatorFactory parent, AggregatorFactory[] aggFactories, - List pipelineAggregatorFactories) { - if (bucketsPaths.length != 1) { - throw new IllegalStateException(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName() - + " must contain a single entry for aggregation [" + name + "]"); - } - } - @Override protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { return builder; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/max/MaxBucketPipelineAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/max/MaxBucketPipelineAggregationBuilder.java index 6044fdea14d3b..fc2e1cd3e23b5 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/max/MaxBucketPipelineAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/max/MaxBucketPipelineAggregationBuilder.java @@ -22,14 +22,11 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import 
org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.PipelineAggregationBuilder; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsParser; import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsPipelineAggregationBuilder; import java.io.IOException; -import java.util.List; import java.util.Map; public class MaxBucketPipelineAggregationBuilder extends BucketMetricsPipelineAggregationBuilder { @@ -56,15 +53,6 @@ protected PipelineAggregator createInternal(Map metaData) throws return new MaxBucketPipelineAggregator(name, bucketsPaths, gapPolicy(), formatter(), metaData); } - @Override - public void doValidate(AggregatorFactory parent, AggregatorFactory[] aggFactories, - List pipelineAggregatorFactories) { - if (bucketsPaths.length != 1) { - throw new IllegalStateException(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName() - + " must contain a single entry for aggregation [" + name + "]"); - } - } - @Override protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { return builder; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/min/MinBucketPipelineAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/min/MinBucketPipelineAggregationBuilder.java index f6533a65dbad6..75cf756441bef 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/min/MinBucketPipelineAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/min/MinBucketPipelineAggregationBuilder.java @@ -22,14 +22,11 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.PipelineAggregationBuilder; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsParser; import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsPipelineAggregationBuilder; import java.io.IOException; -import java.util.List; import java.util.Map; public class MinBucketPipelineAggregationBuilder extends BucketMetricsPipelineAggregationBuilder { @@ -56,15 +53,6 @@ protected PipelineAggregator createInternal(Map metaData) throws return new MinBucketPipelineAggregator(name, bucketsPaths, gapPolicy(), formatter(), metaData); } - @Override - public void doValidate(AggregatorFactory parent, AggregatorFactory[] aggFactories, - List pipelineAggregatorFactories) { - if (bucketsPaths.length != 1) { - throw new IllegalStateException(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName() - + " must contain a single entry for aggregation [" + name + "]"); - } - } - @Override protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { return builder; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/percentile/PercentilesBucketPipelineAggregationBuilder.java 
b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/percentile/PercentilesBucketPipelineAggregationBuilder.java index 266540aaa8d04..a6673d3a9dacd 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/percentile/PercentilesBucketPipelineAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/percentile/PercentilesBucketPipelineAggregationBuilder.java @@ -26,6 +26,7 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactory; import org.elasticsearch.search.aggregations.PipelineAggregationBuilder; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; @@ -42,7 +43,7 @@ public class PercentilesBucketPipelineAggregationBuilder extends BucketMetricsPipelineAggregationBuilder { public static final String NAME = "percentiles_bucket"; - private static final ParseField PERCENTS_FIELD = new ParseField("percents"); + public static final ParseField PERCENTS_FIELD = new ParseField("percents"); private double[] percents = new double[] { 1.0, 5.0, 25.0, 50.0, 75.0, 95.0, 99.0 }; @@ -94,12 +95,9 @@ protected PipelineAggregator createInternal(Map metaData) throws } @Override - public void doValidate(AggregatorFactory parent, AggregatorFactory[] aggFactories, + public void doValidate(AggregatorFactory parent, List aggFactories, List pipelineAggregatorFactories) { - if (bucketsPaths.length != 1) { - throw new IllegalStateException(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName() - + " must contain a single entry for aggregation [" + name + "]"); - } + super.doValidate(parent, aggFactories, pipelineAggregatorFactories); for (Double p : percents) { if (p == null || p < 0.0 || p > 100.0) { diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/stats/StatsBucketPipelineAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/stats/StatsBucketPipelineAggregationBuilder.java index ef0852f30af68..c472f1a3487e0 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/stats/StatsBucketPipelineAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/stats/StatsBucketPipelineAggregationBuilder.java @@ -22,15 +22,11 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.PipelineAggregationBuilder; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator.Parser; import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsParser; import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsPipelineAggregationBuilder; import java.io.IOException; -import java.util.List; import java.util.Map; public class StatsBucketPipelineAggregationBuilder extends BucketMetricsPipelineAggregationBuilder { @@ -58,15 +54,6 @@ protected PipelineAggregator createInternal(Map metaData) throws return new 
StatsBucketPipelineAggregator(name, bucketsPaths, gapPolicy(), formatter(), metaData); } - @Override - public void doValidate(AggregatorFactory parent, AggregatorFactory[] aggFactories, - List pipelineAggregatorFactories) { - if (bucketsPaths.length != 1) { - throw new IllegalStateException(Parser.BUCKETS_PATH.getPreferredName() - + " must contain a single entry for aggregation [" + name + "]"); - } - } - @Override protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { return builder; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/stats/extended/ExtendedStatsBucketPipelineAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/stats/extended/ExtendedStatsBucketPipelineAggregationBuilder.java index fb18c918b0ad8..00db3fabaa69d 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/stats/extended/ExtendedStatsBucketPipelineAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/stats/extended/ExtendedStatsBucketPipelineAggregationBuilder.java @@ -22,10 +22,10 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactory; import org.elasticsearch.search.aggregations.PipelineAggregationBuilder; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; -import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator.Parser; import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsPipelineAggregationBuilder; import java.io.IOException; @@ -82,12 +82,9 @@ protected PipelineAggregator createInternal(Map metaData) throws } @Override - public void doValidate(AggregatorFactory parent, AggregatorFactory[] aggFactories, + public void doValidate(AggregatorFactory parent, List aggBuilders, List pipelineAggregatorFactories) { - if (bucketsPaths.length != 1) { - throw new IllegalStateException(Parser.BUCKETS_PATH.getPreferredName() - + " must contain a single entry for aggregation [" + name + "]"); - } + super.doValidate(parent, aggBuilders, pipelineAggregatorFactories); if (sigma < 0.0 ) { throw new IllegalStateException(ExtendedStatsBucketParser.SIGMA.getPreferredName() diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/sum/SumBucketPipelineAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/sum/SumBucketPipelineAggregationBuilder.java index 604cba0505613..e415f3adc409a 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/sum/SumBucketPipelineAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/sum/SumBucketPipelineAggregationBuilder.java @@ -22,14 +22,11 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.PipelineAggregationBuilder; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import 
org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsParser; import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsPipelineAggregationBuilder; import java.io.IOException; -import java.util.List; import java.util.Map; public class SumBucketPipelineAggregationBuilder extends BucketMetricsPipelineAggregationBuilder { @@ -56,15 +53,6 @@ protected PipelineAggregator createInternal(Map metaData) throws return new SumBucketPipelineAggregator(name, bucketsPaths, gapPolicy(), formatter(), metaData); } - @Override - public void doValidate(AggregatorFactory parent, AggregatorFactory[] aggFactories, - List pipelineAggregatorFactories) { - if (bucketsPaths.length != 1) { - throw new IllegalStateException(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName() - + " must contain a single entry for aggregation [" + name + "]"); - } - } - @Override protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException { return builder; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/cumulativesum/CumulativeSumPipelineAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/cumulativesum/CumulativeSumPipelineAggregationBuilder.java index 7546b62426ee5..eaf34f7e70946 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/cumulativesum/CumulativeSumPipelineAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/cumulativesum/CumulativeSumPipelineAggregationBuilder.java @@ -25,6 +25,7 @@ import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.search.DocValueFormat; +import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactory; import org.elasticsearch.search.aggregations.PipelineAggregationBuilder; import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregatorFactory; @@ -96,7 +97,7 @@ protected PipelineAggregator createInternal(Map metaData) throws } @Override - public void doValidate(AggregatorFactory parent, AggregatorFactory[] aggFactories, + public void doValidate(AggregatorFactory parent, List aggFactories, List pipelineAggregatorFactories) { if (bucketsPaths.length != 1) { throw new IllegalStateException(BUCKETS_PATH.getPreferredName() diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/derivative/DerivativePipelineAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/derivative/DerivativePipelineAggregationBuilder.java index 1e722ac0bae4e..3d2494459f6bf 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/derivative/DerivativePipelineAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/derivative/DerivativePipelineAggregationBuilder.java @@ -28,6 +28,7 @@ import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.search.DocValueFormat; +import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactory; import org.elasticsearch.search.aggregations.PipelineAggregationBuilder; import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder; @@ -155,7 +156,7 @@ protected PipelineAggregator createInternal(Map metaData) throws } 
@Override - public void doValidate(AggregatorFactory parent, AggregatorFactory[] aggFactories, + public void doValidate(AggregatorFactory parent, List aggFactories, List pipelineAggregatoractories) { if (bucketsPaths.length != 1) { throw new IllegalStateException(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName() diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/MovAvgPipelineAggregationBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/MovAvgPipelineAggregationBuilder.java index f5f71b3ca75f8..4fca370ab3ffd 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/MovAvgPipelineAggregationBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/MovAvgPipelineAggregationBuilder.java @@ -27,6 +27,7 @@ import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.search.DocValueFormat; +import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.AggregatorFactory; import org.elasticsearch.search.aggregations.PipelineAggregationBuilder; import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregatorFactory; @@ -255,7 +256,7 @@ protected PipelineAggregator createInternal(Map metaData) throws } @Override - public void doValidate(AggregatorFactory parent, AggregatorFactory[] aggFactories, + public void doValidate(AggregatorFactory parent, List aggFactories, List pipelineAggregatoractories) { if (minimize != null && minimize && !model.canBeMinimized()) { // If the user asks to minimize, but this model doesn't support diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/MovAvgModelBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/MovAvgModelBuilder.java index 5d858b48d16eb..0c74ead985e32 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/MovAvgModelBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/models/MovAvgModelBuilder.java @@ -19,13 +19,13 @@ package org.elasticsearch.search.aggregations.pipeline.movavg.models; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentFragment; /** * Represents the common interface that all moving average models share. 
Moving * average models are used by the MovAvg aggregation */ -public interface MovAvgModelBuilder extends ToXContent { +public interface MovAvgModelBuilder extends ToXContentFragment { MovAvgModel build(); } diff --git a/core/src/main/java/org/elasticsearch/search/builder/SearchSourceBuilder.java b/core/src/main/java/org/elasticsearch/search/builder/SearchSourceBuilder.java index d1f5dcf5487eb..4654e2bb57fde 100644 --- a/core/src/main/java/org/elasticsearch/search/builder/SearchSourceBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/builder/SearchSourceBuilder.java @@ -19,8 +19,8 @@ package org.elasticsearch.search.builder; +import org.elasticsearch.ElasticsearchException; import org.elasticsearch.Version; -import org.elasticsearch.action.support.ToXContentToBytes; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.ParsingException; @@ -31,10 +31,12 @@ import org.elasticsearch.common.logging.DeprecationLogger; import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.index.query.QueryRewriteContext; import org.elasticsearch.index.query.Rewriteable; @@ -48,7 +50,7 @@ import org.elasticsearch.search.fetch.subphase.FetchSourceContext; import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder; import org.elasticsearch.search.internal.SearchContext; -import org.elasticsearch.search.rescore.RescoreBuilder; +import org.elasticsearch.search.rescore.RescorerBuilder; import org.elasticsearch.search.searchafter.SearchAfterBuilder; import org.elasticsearch.search.slice.SliceBuilder; import org.elasticsearch.search.sort.ScoreSortBuilder; @@ -72,7 +74,7 @@ * * @see org.elasticsearch.action.search.SearchRequest#source(SearchSourceBuilder) */ -public final class SearchSourceBuilder extends ToXContentToBytes implements Writeable, ToXContentObject, Rewriteable { +public final class SearchSourceBuilder implements Writeable, ToXContentObject, Rewriteable { private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(SearchSourceBuilder.class)); @@ -166,7 +168,7 @@ public static HighlightBuilder highlight() { private SuggestBuilder suggestBuilder; - private List rescoreBuilders; + private List rescoreBuilders; private List indexBoosts = new ArrayList<>(); @@ -200,7 +202,7 @@ public SearchSourceBuilder(StreamInput in) throws IOException { postQueryBuilder = in.readOptionalNamedWriteable(QueryBuilder.class); queryBuilder = in.readOptionalNamedWriteable(QueryBuilder.class); if (in.readBoolean()) { - rescoreBuilders = in.readNamedWriteableList(RescoreBuilder.class); + rescoreBuilders = in.readNamedWriteableList(RescorerBuilder.class); } if (in.readBoolean()) { scriptFields = in.readList(ScriptField::new); @@ -619,7 +621,7 @@ public SuggestBuilder suggest() { return suggestBuilder; } - public SearchSourceBuilder addRescorer(RescoreBuilder rescoreBuilder) { + public SearchSourceBuilder addRescorer(RescorerBuilder rescoreBuilder) { if (rescoreBuilders == null) { 
rescoreBuilders = new ArrayList<>(); } @@ -651,7 +653,7 @@ public boolean profile() { /** * Gets the bytes representing the rescore builders for this request. */ - public List rescores() { + public List rescores() { return rescoreBuilders; } @@ -889,7 +891,7 @@ public SearchSourceBuilder rewrite(QueryRewriteContext context) throws IOExcepti } List> sorts = Rewriteable.rewrite(this.sorts, context); - List rescoreBuilders = Rewriteable.rewrite(this.rescoreBuilders, context); + List rescoreBuilders = Rewriteable.rewrite(this.rescoreBuilders, context); HighlightBuilder highlightBuilder = this.highlightBuilder; if (highlightBuilder != null) { highlightBuilder = this.highlightBuilder.rewrite(context); @@ -917,7 +919,7 @@ public SearchSourceBuilder copyWithNewSlice(SliceBuilder slice) { */ private SearchSourceBuilder shallowCopy(QueryBuilder queryBuilder, QueryBuilder postQueryBuilder, AggregatorFactories.Builder aggregations, SliceBuilder slice, List> sorts, - List rescoreBuilders, HighlightBuilder highlightBuilder) { + List rescoreBuilders, HighlightBuilder highlightBuilder) { SearchSourceBuilder rewrittenBuilder = new SearchSourceBuilder(); rewrittenBuilder.aggregations = aggregations; rewrittenBuilder.explain = explain; @@ -1032,7 +1034,7 @@ public void parseXContent(XContentParser parser) throws IOException { sorts = new ArrayList<>(SortBuilder.fromXContent(parser)); } else if (RESCORE_FIELD.match(currentFieldName)) { rescoreBuilders = new ArrayList<>(); - rescoreBuilders.add(RescoreBuilder.parseFromXContent(parser)); + rescoreBuilders.add(RescorerBuilder.parseFromXContent(parser)); } else if (EXT_FIELD.match(currentFieldName)) { extBuilders = new ArrayList<>(); String extSectionName = null; @@ -1079,7 +1081,7 @@ public void parseXContent(XContentParser parser) throws IOException { } else if (RESCORE_FIELD.match(currentFieldName)) { rescoreBuilders = new ArrayList<>(); while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { - rescoreBuilders.add(RescoreBuilder.parseFromXContent(parser)); + rescoreBuilders.add(RescorerBuilder.parseFromXContent(parser)); } } else if (STATS_FIELD.match(currentFieldName)) { stats = new ArrayList<>(); @@ -1220,7 +1222,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws if (rescoreBuilders != null) { builder.startArray(RESCORE_FIELD.getPreferredName()); - for (RescoreBuilder rescoreBuilder : rescoreBuilders) { + for (RescorerBuilder rescoreBuilder : rescoreBuilders) { rescoreBuilder.toXContent(builder, params); } builder.endArray(); @@ -1329,7 +1331,8 @@ public boolean equals(Object obj) { } } - public static class ScriptField implements Writeable, ToXContent { + + public static class ScriptField implements Writeable, ToXContentFragment { private final boolean ignoreFailure; private final String fieldName; @@ -1485,4 +1488,17 @@ public boolean equals(Object obj) { && Objects.equals(collapse, other.collapse) && Objects.equals(trackTotalHits, other.trackTotalHits); } + + @Override + public String toString() { + return toString(EMPTY_PARAMS); + } + + public String toString(Params params) { + try { + return XContentHelper.toXContent(this, XContentType.JSON, params, true).utf8ToString(); + } catch (IOException e) { + throw new ElasticsearchException(e); + } + } } diff --git a/core/src/main/java/org/elasticsearch/search/dfs/DfsPhase.java b/core/src/main/java/org/elasticsearch/search/dfs/DfsPhase.java index 6be95a8bceb20..fa7e611348d78 100644 --- a/core/src/main/java/org/elasticsearch/search/dfs/DfsPhase.java +++ 
b/core/src/main/java/org/elasticsearch/search/dfs/DfsPhase.java @@ -22,18 +22,19 @@ import com.carrotsearch.hppc.ObjectHashSet; import com.carrotsearch.hppc.ObjectObjectHashMap; import com.carrotsearch.hppc.cursors.ObjectCursor; + import org.apache.lucene.index.IndexReaderContext; import org.apache.lucene.index.Term; import org.apache.lucene.index.TermContext; import org.apache.lucene.search.CollectionStatistics; import org.apache.lucene.search.TermStatistics; import org.elasticsearch.common.collect.HppcMaps; -import org.elasticsearch.search.SearchContextException; import org.elasticsearch.search.SearchPhase; import org.elasticsearch.search.internal.SearchContext; -import org.elasticsearch.search.rescore.RescoreSearchContext; +import org.elasticsearch.search.rescore.RescoreContext; import org.elasticsearch.tasks.TaskCancelledException; +import java.io.IOException; import java.util.AbstractSet; import java.util.Collection; import java.util.Iterator; @@ -53,8 +54,12 @@ public void execute(SearchContext context) { final ObjectHashSet termsSet = new ObjectHashSet<>(); try { context.searcher().createNormalizedWeight(context.query(), true).extractTerms(new DelegateSet(termsSet)); - for (RescoreSearchContext rescoreContext : context.rescore()) { - rescoreContext.rescorer().extractTerms(context, rescoreContext, new DelegateSet(termsSet)); + for (RescoreContext rescoreContext : context.rescore()) { + try { + rescoreContext.rescorer().extractTerms(context.searcher(), rescoreContext, new DelegateSet(termsSet)); + } catch (IOException e) { + throw new IllegalStateException("Failed to extract terms", e); + } } Term[] terms = termsSet.toArray(Term.class); diff --git a/core/src/main/java/org/elasticsearch/search/fetch/FetchPhase.java b/core/src/main/java/org/elasticsearch/search/fetch/FetchPhase.java index 8892a69f2dfc1..4596162970b9b 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/FetchPhase.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/FetchPhase.java @@ -40,6 +40,7 @@ import org.elasticsearch.index.fieldvisitor.FieldsVisitor; import org.elasticsearch.index.mapper.DocumentMapper; import org.elasticsearch.index.mapper.MappedFieldType; +import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.mapper.ObjectMapper; import org.elasticsearch.index.mapper.SourceFieldMapper; import org.elasticsearch.index.mapper.Uid; @@ -161,9 +162,15 @@ public void execute(SearchContext context) { fetchSubPhase.hitExecute(context, hitContext); } } + if (context.isCancelled()) { + throw new TaskCancelledException("cancelled"); + } for (FetchSubPhase fetchSubPhase : fetchSubPhases) { fetchSubPhase.hitsExecute(context, hits); + if (context.isCancelled()) { + throw new TaskCancelledException("cancelled"); + } } context.fetchResult().hits(new SearchHits(hits, context.queryResult().getTotalHits(), context.queryResult().getMaxScore())); @@ -246,7 +253,7 @@ private SearchHit createNestedSearchHit(SearchContext context, int nestedTopDocI ObjectMapper nestedObjectMapper = documentMapper.findNestedObjectMapper(nestedSubDocId, context, subReaderContext); assert nestedObjectMapper != null; SearchHit.NestedIdentity nestedIdentity = - getInternalNestedIdentity(context, nestedSubDocId, subReaderContext, documentMapper, nestedObjectMapper); + getInternalNestedIdentity(context, nestedSubDocId, subReaderContext, context.mapperService(), nestedObjectMapper); if (source != null) { Tuple> tuple = XContentHelper.convertToMap(source, true); @@ -262,18 +269,28 @@ private SearchHit 
createNestedSearchHit(SearchContext context, int nestedTopDocI String nestedPath = nested.getField().string(); current.put(nestedPath, new HashMap<>()); Object extractedValue = XContentMapValues.extractValue(nestedPath, sourceAsMap); - List> nestedParsedSource; + List nestedParsedSource; if (extractedValue instanceof List) { // nested field has an array value in the _source - nestedParsedSource = (List>) extractedValue; + nestedParsedSource = (List) extractedValue; } else if (extractedValue instanceof Map) { // nested field has an object value in the _source. This just means the nested field has just one inner object, // which is valid, but uncommon. - nestedParsedSource = Collections.singletonList((Map) extractedValue); + nestedParsedSource = Collections.singletonList(extractedValue); } else { throw new IllegalStateException("extracted source isn't an object or an array"); } - sourceAsMap = nestedParsedSource.get(nested.getOffset()); + if ((nestedParsedSource.get(0) instanceof Map) == false && + nestedObjectMapper.parentObjectMapperAreNested(context.mapperService()) == false) { + // When one of the parent objects are not nested then XContentMapValues.extractValue(...) extracts the values + // from two or more layers resulting in a list of list being returned. This is because nestedPath + // encapsulates two or more object layers in the _source. + // + // This is why only the first element of nestedParsedSource needs to be checked. + throw new IllegalArgumentException("Cannot execute inner hits. One or more parent object fields of nested field [" + + nestedObjectMapper.name() + "] are not nested. All parent fields need to be nested fields too"); + } + sourceAsMap = (Map) nestedParsedSource.get(nested.getOffset()); if (nested.getChild() == null) { current.put(nestedPath, sourceAsMap); } else { @@ -284,8 +301,6 @@ private SearchHit createNestedSearchHit(SearchContext context, int nestedTopDocI } context.lookup().source().setSource(nestedSourceAsMap); XContentType contentType = tuple.v1(); - BytesReference nestedSource = contentBuilder(contentType).map(nestedSourceAsMap).bytes(); - context.lookup().source().setSource(nestedSource); context.lookup().source().setSourceContentType(contentType); } return new SearchHit(nestedTopDocId, uid.id(), documentMapper.typeText(), nestedIdentity, searchFields); @@ -312,7 +327,8 @@ private Map getSearchFields(SearchContext context, int ne } private SearchHit.NestedIdentity getInternalNestedIdentity(SearchContext context, int nestedSubDocId, - LeafReaderContext subReaderContext, DocumentMapper documentMapper, + LeafReaderContext subReaderContext, + MapperService mapperService, ObjectMapper nestedObjectMapper) throws IOException { int currentParent = nestedSubDocId; ObjectMapper nestedParentObjectMapper; @@ -321,7 +337,7 @@ private SearchHit.NestedIdentity getInternalNestedIdentity(SearchContext context SearchHit.NestedIdentity nestedIdentity = null; do { Query parentFilter; - nestedParentObjectMapper = documentMapper.findParentObjectMapper(current); + nestedParentObjectMapper = current.getParentObjectMapper(mapperService); if (nestedParentObjectMapper != null) { if (nestedParentObjectMapper.nested().isNested() == false) { current = nestedParentObjectMapper; diff --git a/core/src/main/java/org/elasticsearch/search/fetch/ShardFetchRequest.java b/core/src/main/java/org/elasticsearch/search/fetch/ShardFetchRequest.java index dcea42e5ecb7f..ac71b84d54f34 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/ShardFetchRequest.java +++ 
b/core/src/main/java/org/elasticsearch/search/fetch/ShardFetchRequest.java @@ -56,25 +56,8 @@ public ShardFetchRequest(long id, IntArrayList list, ScoreDoc lastEmittedDoc) { this.lastEmittedDoc = lastEmittedDoc; } - public long id() { - return id; - } - - public int[] docIds() { - return docIds; - } - - public int docIdsSize() { - return size; - } - - public ScoreDoc lastEmittedDoc() { - return lastEmittedDoc; - } - - @Override - public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); + public ShardFetchRequest(StreamInput in) throws IOException { + super(in); id = in.readLong(); size = in.readVInt(); docIds = new int[size]; @@ -110,6 +93,27 @@ public void writeTo(StreamOutput out) throws IOException { } } + public long id() { + return id; + } + + public int[] docIds() { + return docIds; + } + + public int docIdsSize() { + return size; + } + + public ScoreDoc lastEmittedDoc() { + return lastEmittedDoc; + } + + @Override + public void readFrom(StreamInput in) throws IOException { + throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable"); + } + @Override public Task createTask(long id, String type, String action, TaskId parentTaskId) { return new SearchTask(id, type, action, getDescription(), parentTaskId); diff --git a/core/src/main/java/org/elasticsearch/search/fetch/ShardFetchSearchRequest.java b/core/src/main/java/org/elasticsearch/search/fetch/ShardFetchSearchRequest.java index fdfc582c95295..b81d9eded9cd6 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/ShardFetchSearchRequest.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/ShardFetchSearchRequest.java @@ -46,6 +46,17 @@ public ShardFetchSearchRequest(OriginalIndices originalIndices, long id, IntArra this.originalIndices = originalIndices; } + public ShardFetchSearchRequest(StreamInput in) throws IOException { + super(in); + originalIndices = OriginalIndices.readOriginalIndices(in); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + OriginalIndices.writeOriginalIndices(originalIndices, out); + } + @Override public String[] indices() { if (originalIndices == null) { @@ -64,13 +75,6 @@ public IndicesOptions indicesOptions() { @Override public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - originalIndices = OriginalIndices.readOriginalIndices(in); - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - OriginalIndices.writeOriginalIndices(originalIndices, out); + throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable"); } } diff --git a/core/src/main/java/org/elasticsearch/search/fetch/subphase/ExplainFetchSubPhase.java b/core/src/main/java/org/elasticsearch/search/fetch/subphase/ExplainFetchSubPhase.java index 5aabaf644e999..57d2ca9048de0 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/subphase/ExplainFetchSubPhase.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/subphase/ExplainFetchSubPhase.java @@ -22,7 +22,7 @@ import org.elasticsearch.search.fetch.FetchPhaseExecutionException; import org.elasticsearch.search.fetch.FetchSubPhase; import org.elasticsearch.search.internal.SearchContext; -import org.elasticsearch.search.rescore.RescoreSearchContext; +import org.elasticsearch.search.rescore.RescoreContext; import java.io.IOException; @@ -40,8 +40,8 @@ public void hitExecute(SearchContext context, HitContext hitContext) { final int topLevelDocId = 
hitContext.hit().docId(); Explanation explanation = context.searcher().explain(context.query(), topLevelDocId); - for (RescoreSearchContext rescore : context.rescore()) { - explanation = rescore.rescorer().explain(topLevelDocId, context, rescore, explanation); + for (RescoreContext rescore : context.rescore()) { + explanation = rescore.rescorer().explain(topLevelDocId, context.searcher(), rescore, explanation); } // we use the top level doc id, since we work with the top level searcher hitContext.hit().explanation(explanation); diff --git a/core/src/main/java/org/elasticsearch/search/fetch/subphase/FetchSourceSubPhase.java b/core/src/main/java/org/elasticsearch/search/fetch/subphase/FetchSourceSubPhase.java index 3171ca4b00834..da593d57b7739 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/subphase/FetchSourceSubPhase.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/subphase/FetchSourceSubPhase.java @@ -20,13 +20,20 @@ package org.elasticsearch.search.fetch.subphase; import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.BytesStreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.search.SearchHit; import org.elasticsearch.search.fetch.FetchSubPhase; import org.elasticsearch.search.internal.SearchContext; import org.elasticsearch.search.lookup.SourceLookup; import java.io.IOException; +import java.io.UncheckedIOException; +import java.util.Map; + +import static org.elasticsearch.common.xcontent.XContentFactory.contentBuilder; public final class FetchSourceSubPhase implements FetchSubPhase { @@ -35,22 +42,27 @@ public void hitExecute(SearchContext context, HitContext hitContext) { if (context.sourceRequested() == false) { return; } + final boolean nestedHit = hitContext.hit().getNestedIdentity() != null; SourceLookup source = context.lookup().source(); FetchSourceContext fetchSourceContext = context.fetchSourceContext(); assert fetchSourceContext.fetchSource(); - if (fetchSourceContext.includes().length == 0 && fetchSourceContext.excludes().length == 0) { - hitContext.hit().sourceRef(source.internalSourceRef()); - return; + if (nestedHit == false) { + if (fetchSourceContext.includes().length == 0 && fetchSourceContext.excludes().length == 0) { + hitContext.hit().sourceRef(source.internalSourceRef()); + return; + } + if (source.internalSourceRef() == null) { + throw new IllegalArgumentException("unable to fetch fields from _source field: _source is disabled in the mappings " + + "for index [" + context.indexShard().shardId().getIndexName() + "]"); + } } - if (source.internalSourceRef() == null) { - throw new IllegalArgumentException("unable to fetch fields from _source field: _source is disabled in the mappings " + - "for index [" + context.indexShard().shardId().getIndexName() + "]"); + Object value = source.filter(fetchSourceContext); + if (nestedHit) { + value = getNestedSource((Map) value, hitContext); } - - final Object value = source.filter(fetchSourceContext); try { - final int initialCapacity = Math.min(1024, source.internalSourceRef().length()); + final int initialCapacity = nestedHit ? 
1024 : Math.min(1024, source.internalSourceRef().length()); BytesStreamOutput streamOutput = new BytesStreamOutput(initialCapacity); XContentBuilder builder = new XContentBuilder(source.sourceContentType().xContent(), streamOutput); builder.value(value); @@ -58,6 +70,12 @@ public void hitExecute(SearchContext context, HitContext hitContext) { } catch (IOException e) { throw new ElasticsearchException("Error filtering source", e); } + } + private Map getNestedSource(Map sourceAsMap, HitContext hitContext) { + for (SearchHit.NestedIdentity o = hitContext.hit().getNestedIdentity(); o != null; o = o.getChild()) { + sourceAsMap = (Map) sourceAsMap.get(o.getField().string()); + } + return sourceAsMap; } } diff --git a/core/src/main/java/org/elasticsearch/search/fetch/subphase/InnerHitsContext.java b/core/src/main/java/org/elasticsearch/search/fetch/subphase/InnerHitsContext.java index 2fb8f6789cea7..d3b1da7c9376e 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/subphase/InnerHitsContext.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/subphase/InnerHitsContext.java @@ -25,7 +25,6 @@ import org.apache.lucene.search.ConjunctionDISI; import org.apache.lucene.search.DocIdSetIterator; import org.apache.lucene.search.LeafCollector; -import org.apache.lucene.search.Query; import org.apache.lucene.search.Scorer; import org.apache.lucene.search.ScorerSupplier; import org.apache.lucene.search.TopDocs; @@ -126,15 +125,15 @@ public static void intersect(Weight weight, Weight innerHitQueryWeight, Collecto if (scorerSupplier == null) { return; } - // use random access since this scorer will be consumed on a minority of documents - Scorer scorer = scorerSupplier.get(true); + // use low leadCost since this scorer will be consumed on a minority of documents + Scorer scorer = scorerSupplier.get(0); ScorerSupplier innerHitQueryScorerSupplier = innerHitQueryWeight.scorerSupplier(ctx); if (innerHitQueryScorerSupplier == null) { return; } - // use random access since this scorer will be consumed on a minority of documents - Scorer innerHitQueryScorer = innerHitQueryScorerSupplier.get(true); + // use low loadCost since this scorer will be consumed on a minority of documents + Scorer innerHitQueryScorer = innerHitQueryScorerSupplier.get(0); final LeafCollector leafCollector; try { diff --git a/core/src/main/java/org/elasticsearch/search/fetch/subphase/ParentFieldSubFetchPhase.java b/core/src/main/java/org/elasticsearch/search/fetch/subphase/ParentFieldSubFetchPhase.java index 93fc9ccb1f645..6015e3c90211d 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/subphase/ParentFieldSubFetchPhase.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/subphase/ParentFieldSubFetchPhase.java @@ -20,43 +20,79 @@ package org.elasticsearch.search.fetch.subphase; import org.apache.lucene.index.LeafReader; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.index.ReaderUtil; import org.apache.lucene.index.SortedDocValues; import org.apache.lucene.util.BytesRef; import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.common.document.DocumentField; +import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.mapper.ParentFieldMapper; +import org.elasticsearch.search.SearchHit; import org.elasticsearch.search.fetch.FetchSubPhase; import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; +import java.util.Arrays; import java.util.Collections; +import java.util.Comparator; import java.util.HashMap; +import 
java.util.HashSet; import java.util.Map; +import java.util.Set; public final class ParentFieldSubFetchPhase implements FetchSubPhase { - @Override - public void hitExecute(SearchContext context, HitContext hitContext) { + public void hitsExecute(SearchContext context, SearchHit[] hits) throws IOException { if (context.storedFieldsContext() != null && context.storedFieldsContext().fetchFields() == false) { return ; } - ParentFieldMapper parentFieldMapper = context.mapperService().documentMapper(hitContext.hit().getType()).parentFieldMapper(); - if (parentFieldMapper.active() == false) { - return; - } - String parentId = getParentId(parentFieldMapper, hitContext.reader(), hitContext.docId()); - if (parentId == null) { - // hit has no _parent field. Can happen for nested inner hits if parent hit is a p/c document. - return; + hits = hits.clone(); // don't modify the incoming hits + Arrays.sort(hits, Comparator.comparingInt(SearchHit::docId)); + + MapperService mapperService = context.mapperService(); + Set parentFields = new HashSet<>(); + for (SearchHit hit : hits) { + ParentFieldMapper parentFieldMapper = mapperService.documentMapper(hit.getType()).parentFieldMapper(); + if (parentFieldMapper.active()) { + parentFields.add(parentFieldMapper.name()); + } } - Map fields = hitContext.hit().fieldsOrNull(); - if (fields == null) { - fields = new HashMap<>(); - hitContext.hit().fields(fields); + int lastReaderId = -1; + Map docValuesMap = new HashMap<>(); + for (SearchHit hit : hits) { + ParentFieldMapper parentFieldMapper = mapperService.documentMapper(hit.getType()).parentFieldMapper(); + if (parentFieldMapper.active() == false) { + continue; + } + int readerId = ReaderUtil.subIndex(hit.docId(), context.searcher().getIndexReader().leaves()); + LeafReaderContext subReaderContext = context.searcher().getIndexReader().leaves().get(readerId); + if (lastReaderId != readerId) { + docValuesMap.clear(); + for (String field : parentFields) { + docValuesMap.put(field, subReaderContext.reader().getSortedDocValues(field)); + } + lastReaderId = readerId; + } + int docId = hit.docId() - subReaderContext.docBase; + SortedDocValues values = docValuesMap.get(parentFieldMapper.name()); + if (values != null && values.advanceExact(docId)) { + BytesRef binaryValue = values.binaryValue(); + String value = binaryValue.length > 0 ? binaryValue.utf8ToString() : null; + if (value == null) { + // hit has no _parent field. Can happen for nested inner hits if parent hit is a p/c document. 
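
The hunk above switches `ParentFieldSubFetchPhase` from a per-hit `hitExecute` to a batched `hitsExecute`: hits are cloned, sorted by Lucene doc ID, and one `SortedDocValues` iterator is reused per segment instead of being re-resolved for every hit. The same pattern is applied to the script-field and version sub-phases further down. The standalone sketch below illustrates only that access pattern using plain Lucene APIs; the class name and the field name are hypothetical and not part of this change.

---------------------------------------------------------------------------
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.ReaderUtil;
import org.apache.lucene.index.SortedDocValues;

/** Sketch of the per-segment batching pattern used by the new hitsExecute implementations. */
final class PerSegmentDocValuesReader {

    /** Returns one value per global doc ID, or null where the document has no value. */
    static List<String> readSortedDocValues(IndexReader reader, int[] globalDocIds, String field) throws IOException {
        int[] docs = globalDocIds.clone();   // don't modify the caller's array
        Arrays.sort(docs);                   // ascending doc IDs allow forward-only iteration per leaf
        List<String> values = new ArrayList<>(docs.length);
        int lastLeaf = -1;
        SortedDocValues dv = null;
        LeafReaderContext leaf = null;
        for (int doc : docs) {
            int leafIndex = ReaderUtil.subIndex(doc, reader.leaves());
            if (leafIndex != lastLeaf) {     // only switch doc values iterators on segment boundaries
                leaf = reader.leaves().get(leafIndex);
                dv = leaf.reader().getSortedDocValues(field);
                lastLeaf = leafIndex;
            }
            int segmentDoc = doc - leaf.docBase;
            if (dv != null && dv.advanceExact(segmentDoc)) {
                values.add(dv.binaryValue().utf8ToString());
            } else {
                values.add(null);
            }
        }
        return values;
    }
}
---------------------------------------------------------------------------
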
+ continue; + } + Map fields = hit.fieldsOrNull(); + if (fields == null) { + fields = new HashMap<>(); + hit.fields(fields); + } + fields.put(ParentFieldMapper.NAME, new DocumentField(ParentFieldMapper.NAME, Collections.singletonList(value))); + } } - fields.put(ParentFieldMapper.NAME, new DocumentField(ParentFieldMapper.NAME, Collections.singletonList(parentId))); } public static String getParentId(ParentFieldMapper fieldMapper, LeafReader reader, int docId) { @@ -72,5 +108,4 @@ public static String getParentId(ParentFieldMapper fieldMapper, LeafReader reade throw ExceptionsHelper.convertToElastic(e); } } - } diff --git a/core/src/main/java/org/elasticsearch/search/fetch/subphase/ScriptFieldsFetchSubPhase.java b/core/src/main/java/org/elasticsearch/search/fetch/subphase/ScriptFieldsFetchSubPhase.java index 82e0725ae1d94..c45734108f56d 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/subphase/ScriptFieldsFetchSubPhase.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/subphase/ScriptFieldsFetchSubPhase.java @@ -18,62 +18,87 @@ */ package org.elasticsearch.search.fetch.subphase; +import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.index.ReaderUtil; import org.elasticsearch.common.document.DocumentField; import org.elasticsearch.script.SearchScript; +import org.elasticsearch.search.SearchHit; import org.elasticsearch.search.fetch.FetchSubPhase; import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.ArrayList; +import java.util.Arrays; import java.util.Collection; import java.util.Collections; +import java.util.Comparator; import java.util.HashMap; import java.util.List; public final class ScriptFieldsFetchSubPhase implements FetchSubPhase { @Override - public void hitExecute(SearchContext context, HitContext hitContext) { + public void hitsExecute(SearchContext context, SearchHit[] hits) throws IOException { if (context.hasScriptFields() == false) { return; } - for (ScriptFieldsContext.ScriptField scriptField : context.scriptFields().fields()) { - /* Because this is called once per document we end up creating new ScriptDocValues for every document which is important because - * the values inside ScriptDocValues might be reused for different documents (Dates do this). 
*/ - SearchScript leafScript; - try { - leafScript = scriptField.script().newInstance(hitContext.readerContext()); - } catch (IOException e1) { - throw new IllegalStateException("Failed to load script", e1); - } - leafScript.setDocument(hitContext.docId()); - final Object value; - try { - value = leafScript.run(); - } catch (RuntimeException e) { - if (scriptField.ignoreException()) { - continue; - } - throw e; - } + hits = hits.clone(); // don't modify the incoming hits + Arrays.sort(hits, Comparator.comparingInt(SearchHit::docId)); - if (hitContext.hit().fieldsOrNull() == null) { - hitContext.hit().fields(new HashMap<>(2)); + int lastReaderId = -1; + SearchScript[] leafScripts = null; + List scriptFields = context.scriptFields().fields(); + final IndexReader reader = context.searcher().getIndexReader(); + for (SearchHit hit : hits) { + int readerId = ReaderUtil.subIndex(hit.docId(), reader.leaves()); + LeafReaderContext leafReaderContext = reader.leaves().get(readerId); + if (readerId != lastReaderId) { + leafScripts = createLeafScripts(leafReaderContext, scriptFields); + lastReaderId = readerId; } - - DocumentField hitField = hitContext.hit().getFields().get(scriptField.name()); - if (hitField == null) { - final List values; - if (value instanceof Collection) { - // TODO: use diamond operator once JI-9019884 is fixed - values = new ArrayList<>((Collection) value); - } else { - values = Collections.singletonList(value); + int docId = hit.docId() - leafReaderContext.docBase; + for (int i = 0; i < leafScripts.length; i++) { + leafScripts[i].setDocument(docId); + final Object value; + try { + value = leafScripts[i].run(); + } catch (RuntimeException e) { + if (scriptFields.get(i).ignoreException()) { + continue; + } + throw e; + } + if (hit.fieldsOrNull() == null) { + hit.fields(new HashMap<>(2)); } - hitField = new DocumentField(scriptField.name(), values); - hitContext.hit().getFields().put(scriptField.name(), hitField); + String scriptFieldName = scriptFields.get(i).name(); + DocumentField hitField = hit.getFields().get(scriptFieldName); + if (hitField == null) { + final List values; + if (value instanceof Collection) { + values = new ArrayList<>((Collection) value); + } else { + values = Collections.singletonList(value); + } + hitField = new DocumentField(scriptFieldName, values); + hit.getFields().put(scriptFieldName, hitField); + } + } + } + } + + private SearchScript[] createLeafScripts(LeafReaderContext context, + List scriptFields) { + SearchScript[] scripts = new SearchScript[scriptFields.size()]; + for (int i = 0; i < scripts.length; i++) { + try { + scripts[i] = scriptFields.get(i).script().newInstance(context); + } catch (IOException e1) { + throw new IllegalStateException("Failed to load script " + scriptFields.get(i).name(), e1); } } + return scripts; } } diff --git a/core/src/main/java/org/elasticsearch/search/fetch/subphase/VersionFetchSubPhase.java b/core/src/main/java/org/elasticsearch/search/fetch/subphase/VersionFetchSubPhase.java index 5f69230ca42fd..baa0c6e9551e4 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/subphase/VersionFetchSubPhase.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/subphase/VersionFetchSubPhase.java @@ -18,32 +18,45 @@ */ package org.elasticsearch.search.fetch.subphase; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.NumericDocValues; -import org.elasticsearch.ElasticsearchException; +import org.apache.lucene.index.ReaderUtil; import org.elasticsearch.common.lucene.uid.Versions; import 
org.elasticsearch.index.mapper.VersionFieldMapper; +import org.elasticsearch.search.SearchHit; import org.elasticsearch.search.fetch.FetchSubPhase; import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; +import java.util.Arrays; +import java.util.Comparator; public final class VersionFetchSubPhase implements FetchSubPhase { - @Override - public void hitExecute(SearchContext context, HitContext hitContext) { + public void hitsExecute(SearchContext context, SearchHit[] hits) throws IOException { if (context.version() == false || (context.storedFieldsContext() != null && context.storedFieldsContext().fetchFields() == false)) { return; } - long version = Versions.NOT_FOUND; - try { - NumericDocValues versions = hitContext.reader().getNumericDocValues(VersionFieldMapper.NAME); - if (versions != null && versions.advanceExact(hitContext.docId())) { + + hits = hits.clone(); // don't modify the incoming hits + Arrays.sort(hits, Comparator.comparingInt(SearchHit::docId)); + + int lastReaderId = -1; + NumericDocValues versions = null; + for (SearchHit hit : hits) { + int readerId = ReaderUtil.subIndex(hit.docId(), context.searcher().getIndexReader().leaves()); + LeafReaderContext subReaderContext = context.searcher().getIndexReader().leaves().get(readerId); + if (lastReaderId != readerId) { + versions = subReaderContext.reader().getNumericDocValues(VersionFieldMapper.NAME); + lastReaderId = readerId; + } + int docId = hit.docId() - subReaderContext.docBase; + long version = Versions.NOT_FOUND; + if (versions != null && versions.advanceExact(docId)) { version = versions.longValue(); } - } catch (IOException e) { - throw new ElasticsearchException("Could not retrieve version", e); + hit.version(version < 0 ? -1 : version); } - hitContext.hit().version(version < 0 ? 
-1 : version); } } diff --git a/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/AbstractHighlighterBuilder.java b/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/AbstractHighlighterBuilder.java index 88771ea203c8d..f8618db3048e2 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/AbstractHighlighterBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/AbstractHighlighterBuilder.java @@ -22,13 +22,14 @@ import org.apache.lucene.search.highlight.SimpleFragmenter; import org.apache.lucene.search.highlight.SimpleSpanFragmenter; import org.elasticsearch.Version; -import org.elasticsearch.action.support.ToXContentToBytes; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.ParsingException; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.query.QueryBuilder; @@ -50,8 +51,8 @@ * This abstract class holds parameters shared by {@link HighlightBuilder} and {@link HighlightBuilder.Field} * and provides the common setters, equality, hashCode calculation and common serialization */ -public abstract class AbstractHighlighterBuilder> extends ToXContentToBytes implements - Writeable, Rewriteable { +public abstract class AbstractHighlighterBuilder> + implements Writeable, Rewriteable, ToXContentObject { public static final ParseField PRE_TAGS_FIELD = new ParseField("pre_tags"); public static final ParseField POST_TAGS_FIELD = new ParseField("post_tags"); public static final ParseField FIELDS_FIELD = new ParseField("fields"); @@ -710,4 +711,9 @@ public final boolean equals(Object obj) { * fields only present in subclass should be checked for equality in the implementation */ protected abstract boolean doEquals(HB other); + + @Override + public String toString() { + return Strings.toString(this, true, true); + } } diff --git a/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/CustomQueryScorer.java b/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/CustomQueryScorer.java index da9bd1af695d8..da4e1d1b202c8 100644 --- a/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/CustomQueryScorer.java +++ b/core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/CustomQueryScorer.java @@ -25,6 +25,7 @@ import org.apache.lucene.search.highlight.WeightedSpanTerm; import org.apache.lucene.search.highlight.WeightedSpanTermExtractor; import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery; +import org.elasticsearch.index.search.ESToParentBlockJoinQuery; import java.io.IOException; import java.util.Map; @@ -86,6 +87,8 @@ protected void extract(Query query, float boost, Map t return; } else if (query instanceof FunctionScoreQuery) { super.extract(((FunctionScoreQuery) query).getSubQuery(), boost, terms); + } else if (query instanceof ESToParentBlockJoinQuery) { + super.extract(((ESToParentBlockJoinQuery) query).getChildQuery(), boost, terms); } else { super.extract(query, boost, terms); } diff --git a/core/src/main/java/org/elasticsearch/search/internal/FilteredSearchContext.java 
b/core/src/main/java/org/elasticsearch/search/internal/FilteredSearchContext.java index 28917ea270547..4f95fcc0195c0 100644 --- a/core/src/main/java/org/elasticsearch/search/internal/FilteredSearchContext.java +++ b/core/src/main/java/org/elasticsearch/search/internal/FilteredSearchContext.java @@ -51,7 +51,7 @@ import org.elasticsearch.search.lookup.SearchLookup; import org.elasticsearch.search.profile.Profilers; import org.elasticsearch.search.query.QuerySearchResult; -import org.elasticsearch.search.rescore.RescoreSearchContext; +import org.elasticsearch.search.rescore.RescoreContext; import org.elasticsearch.search.sort.SortAndFormats; import org.elasticsearch.search.suggest.SuggestionSearchContext; @@ -192,12 +192,12 @@ public void suggest(SuggestionSearchContext suggest) { } @Override - public List rescore() { + public List rescore() { return in.rescore(); } @Override - public void addRescore(RescoreSearchContext rescore) { + public void addRescore(RescoreContext rescore) { in.addRescore(rescore); } diff --git a/core/src/main/java/org/elasticsearch/search/internal/InternalScrollSearchRequest.java b/core/src/main/java/org/elasticsearch/search/internal/InternalScrollSearchRequest.java index f112c97dd0f63..d1fba0f761526 100644 --- a/core/src/main/java/org/elasticsearch/search/internal/InternalScrollSearchRequest.java +++ b/core/src/main/java/org/elasticsearch/search/internal/InternalScrollSearchRequest.java @@ -44,6 +44,19 @@ public InternalScrollSearchRequest(SearchScrollRequest request, long id) { this.scroll = request.scroll(); } + public InternalScrollSearchRequest(StreamInput in) throws IOException { + super(in); + id = in.readLong(); + scroll = in.readOptionalWriteable(Scroll::new); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeLong(id); + out.writeOptionalWriteable(scroll); + } + public long id() { return id; } @@ -59,16 +72,7 @@ public InternalScrollSearchRequest scroll(Scroll scroll) { @Override public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - id = in.readLong(); - scroll = in.readOptionalWriteable(Scroll::new); - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - out.writeLong(id); - out.writeOptionalWriteable(scroll); + throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable"); } @Override diff --git a/core/src/main/java/org/elasticsearch/search/internal/SearchContext.java b/core/src/main/java/org/elasticsearch/search/internal/SearchContext.java index a74926c33ec27..75290a75a8a90 100644 --- a/core/src/main/java/org/elasticsearch/search/internal/SearchContext.java +++ b/core/src/main/java/org/elasticsearch/search/internal/SearchContext.java @@ -58,7 +58,7 @@ import org.elasticsearch.search.lookup.SearchLookup; import org.elasticsearch.search.profile.Profilers; import org.elasticsearch.search.query.QuerySearchResult; -import org.elasticsearch.search.rescore.RescoreSearchContext; +import org.elasticsearch.search.rescore.RescoreContext; import org.elasticsearch.search.sort.SortAndFormats; import org.elasticsearch.search.suggest.SuggestionSearchContext; @@ -175,9 +175,9 @@ public InnerHitsContext innerHits() { /** * @return list of all rescore contexts. empty if there aren't any. 
*/ - public abstract List rescore(); + public abstract List rescore(); - public abstract void addRescore(RescoreSearchContext rescore); + public abstract void addRescore(RescoreContext rescore); public abstract boolean hasScriptFields(); diff --git a/core/src/main/java/org/elasticsearch/search/internal/ShardSearchTransportRequest.java b/core/src/main/java/org/elasticsearch/search/internal/ShardSearchTransportRequest.java index 1c2ac0e4d179c..76a13b7b02f24 100644 --- a/core/src/main/java/org/elasticsearch/search/internal/ShardSearchTransportRequest.java +++ b/core/src/main/java/org/elasticsearch/search/internal/ShardSearchTransportRequest.java @@ -62,6 +62,20 @@ public ShardSearchTransportRequest(OriginalIndices originalIndices, SearchReques this.originalIndices = originalIndices; } + public ShardSearchTransportRequest(StreamInput in) throws IOException { + super(in); + shardSearchLocalRequest = new ShardSearchLocalRequest(); + shardSearchLocalRequest.innerReadFrom(in); + originalIndices = OriginalIndices.readOriginalIndices(in); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + shardSearchLocalRequest.innerWriteTo(out, false); + OriginalIndices.writeOriginalIndices(originalIndices, out); + } + public void searchType(SearchType searchType) { shardSearchLocalRequest.setSearchType(searchType); } @@ -144,18 +158,7 @@ public Scroll scroll() { @Override public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - shardSearchLocalRequest = new ShardSearchLocalRequest(); - shardSearchLocalRequest.innerReadFrom(in); - originalIndices = OriginalIndices.readOriginalIndices(in); - - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - shardSearchLocalRequest.innerWriteTo(out, false); - OriginalIndices.writeOriginalIndices(originalIndices, out); + throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable"); } @Override diff --git a/core/src/main/java/org/elasticsearch/search/internal/SubSearchContext.java b/core/src/main/java/org/elasticsearch/search/internal/SubSearchContext.java index 0a6d684ccf525..8c8137f5e4345 100644 --- a/core/src/main/java/org/elasticsearch/search/internal/SubSearchContext.java +++ b/core/src/main/java/org/elasticsearch/search/internal/SubSearchContext.java @@ -31,7 +31,7 @@ import org.elasticsearch.search.fetch.subphase.ScriptFieldsContext; import org.elasticsearch.search.fetch.subphase.highlight.SearchContextHighlight; import org.elasticsearch.search.query.QuerySearchResult; -import org.elasticsearch.search.rescore.RescoreSearchContext; +import org.elasticsearch.search.rescore.RescoreContext; import org.elasticsearch.search.sort.SortAndFormats; import org.elasticsearch.search.suggest.SuggestionSearchContext; @@ -111,7 +111,7 @@ public void suggest(SuggestionSearchContext suggest) { } @Override - public void addRescore(RescoreSearchContext rescore) { + public void addRescore(RescoreContext rescore) { throw new UnsupportedOperationException("Not supported"); } diff --git a/core/src/main/java/org/elasticsearch/search/profile/SearchProfileShardResults.java b/core/src/main/java/org/elasticsearch/search/profile/SearchProfileShardResults.java index b7fa39c42f3ab..2c81c6aa784a2 100644 --- a/core/src/main/java/org/elasticsearch/search/profile/SearchProfileShardResults.java +++ b/core/src/main/java/org/elasticsearch/search/profile/SearchProfileShardResults.java @@ -22,7 +22,8 @@ import org.elasticsearch.common.io.stream.StreamInput; 
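
Several transport request classes touched in this patch (ShardFetchRequest, ShardFetchSearchRequest, InternalScrollSearchRequest, ShardSearchTransportRequest, QuerySearchRequest) follow the same Streamable-to-Writeable migration: deserialization moves into a constructor that takes a StreamInput, writeTo stays as the single serialization path, and the legacy readFrom is kept only to fail loudly. A minimal hypothetical request showing that shape (the class name and its single id field are made up for illustration, and it assumes, as the hunks above do, that the superclass already exposes a StreamInput constructor):

---------------------------------------------------------------------------
import java.io.IOException;

import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.transport.TransportRequest;

public class ExampleRequest extends TransportRequest {

    private long id;

    public ExampleRequest(long id) {
        this.id = id;
    }

    /** Read from a stream; must mirror writeTo exactly. */
    public ExampleRequest(StreamInput in) throws IOException {
        super(in);
        id = in.readLong();
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        super.writeTo(out);
        out.writeLong(id);
    }

    @Override
    public void readFrom(StreamInput in) throws IOException {
        // the Streamable path is intentionally dead once the StreamInput constructor exists
        throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable");
    }
}
---------------------------------------------------------------------------
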
import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.search.profile.aggregation.AggregationProfileShardResult; @@ -44,7 +45,7 @@ * A container class to hold all the profile results across all shards. Internally * holds a map of shard ID -> Profiled results */ -public final class SearchProfileShardResults implements Writeable, ToXContent{ +public final class SearchProfileShardResults implements Writeable, ToXContentFragment { private static final String SEARCHES_FIELD = "searches"; private static final String ID_FIELD = "id"; diff --git a/core/src/main/java/org/elasticsearch/search/profile/aggregation/InternalAggregationProfileTree.java b/core/src/main/java/org/elasticsearch/search/profile/aggregation/InternalAggregationProfileTree.java index f367595c84c87..f3e66c1a9fda9 100644 --- a/core/src/main/java/org/elasticsearch/search/profile/aggregation/InternalAggregationProfileTree.java +++ b/core/src/main/java/org/elasticsearch/search/profile/aggregation/InternalAggregationProfileTree.java @@ -32,10 +32,16 @@ protected AggregationProfileBreakdown createProfileBreakdown() { @Override protected String getTypeFromElement(Aggregator element) { + + // Anonymous classes (such as NonCollectingAggregator in TermsAgg) won't have a name, + // we need to get the super class + if (element.getClass().getSimpleName().isEmpty() == true) { + return element.getClass().getSuperclass().getSimpleName(); + } if (element instanceof MultiBucketAggregatorWrapper) { - return ((MultiBucketAggregatorWrapper) element).getWrappedClass().getName(); + return ((MultiBucketAggregatorWrapper) element).getWrappedClass().getSimpleName(); } - return element.getClass().getName(); + return element.getClass().getSimpleName(); } @Override diff --git a/core/src/main/java/org/elasticsearch/search/profile/aggregation/ProfilingAggregator.java b/core/src/main/java/org/elasticsearch/search/profile/aggregation/ProfilingAggregator.java index d96fbe0d86697..522910e0ab9eb 100644 --- a/core/src/main/java/org/elasticsearch/search/profile/aggregation/ProfilingAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/profile/aggregation/ProfilingAggregator.java @@ -110,4 +110,8 @@ public void postCollection() throws IOException { delegate.postCollection(); } + @Override + public String toString() { + return delegate.toString(); + } } diff --git a/core/src/main/java/org/elasticsearch/search/profile/query/InternalQueryProfileTree.java b/core/src/main/java/org/elasticsearch/search/profile/query/InternalQueryProfileTree.java index 013b7d3a506cd..6a69ea968f0bd 100644 --- a/core/src/main/java/org/elasticsearch/search/profile/query/InternalQueryProfileTree.java +++ b/core/src/main/java/org/elasticsearch/search/profile/query/InternalQueryProfileTree.java @@ -41,6 +41,11 @@ protected QueryProfileBreakdown createProfileBreakdown() { @Override protected String getTypeFromElement(Query query) { + // Anonymous classes won't have a name, + // we need to get the super class + if (query.getClass().getSimpleName().isEmpty() == true) { + return query.getClass().getSuperclass().getSimpleName(); + } return query.getClass().getSimpleName(); } diff --git 
a/core/src/main/java/org/elasticsearch/search/profile/query/ProfileWeight.java b/core/src/main/java/org/elasticsearch/search/profile/query/ProfileWeight.java index 7cb50b292194d..bd5fd5e23a2f0 100644 --- a/core/src/main/java/org/elasticsearch/search/profile/query/ProfileWeight.java +++ b/core/src/main/java/org/elasticsearch/search/profile/query/ProfileWeight.java @@ -54,7 +54,7 @@ public Scorer scorer(LeafReaderContext context) throws IOException { if (supplier == null) { return null; } - return supplier.get(false); + return supplier.get(Long.MAX_VALUE); } @Override @@ -75,10 +75,10 @@ public ScorerSupplier scorerSupplier(LeafReaderContext context) throws IOExcepti return new ScorerSupplier() { @Override - public Scorer get(boolean randomAccess) throws IOException { + public Scorer get(long loadCost) throws IOException { timer.start(); try { - return new ProfileScorer(weight, subQueryScorerSupplier.get(randomAccess), profile); + return new ProfileScorer(weight, subQueryScorerSupplier.get(loadCost), profile); } finally { timer.stop(); } diff --git a/core/src/main/java/org/elasticsearch/search/query/QueryCollectorContext.java b/core/src/main/java/org/elasticsearch/search/query/QueryCollectorContext.java index 792f79fd16550..acb679b2180f9 100644 --- a/core/src/main/java/org/elasticsearch/search/query/QueryCollectorContext.java +++ b/core/src/main/java/org/elasticsearch/search/query/QueryCollectorContext.java @@ -217,13 +217,11 @@ static QueryCollectorContext createEarlySortingTerminationCollectorContext(Index boolean trackTotalHits, boolean shouldCollect) { return new QueryCollectorContext(REASON_SEARCH_TERMINATE_AFTER_COUNT) { - private BooleanSupplier terminatedEarlySupplier; private IntSupplier countSupplier = null; @Override Collector create(Collector in) throws IOException { EarlyTerminatingSortingCollector sortingCollector = new EarlyTerminatingSortingCollector(in, indexSort, numHits); - terminatedEarlySupplier = sortingCollector::terminatedEarly; Collector collector = sortingCollector; if (trackTotalHits) { int count = shouldCollect ? 
-1 : shortcutTotalHitCount(reader, query); @@ -240,9 +238,6 @@ Collector create(Collector in) throws IOException { @Override void postProcess(QuerySearchResult result, boolean hasCollected) throws IOException { - if (terminatedEarlySupplier.getAsBoolean()) { - result.terminatedEarly(true); - } if (countSupplier != null) { final TopDocs topDocs = result.topDocs(); topDocs.totalHits = countSupplier.getAsInt(); diff --git a/core/src/main/java/org/elasticsearch/search/query/QueryPhase.java b/core/src/main/java/org/elasticsearch/search/query/QueryPhase.java index 9fdaae098b8de..500612974c851 100644 --- a/core/src/main/java/org/elasticsearch/search/query/QueryPhase.java +++ b/core/src/main/java/org/elasticsearch/search/query/QueryPhase.java @@ -283,7 +283,7 @@ static boolean execute(SearchContext searchContext, final IndexSearcher searcher ctx.postProcess(result, shouldCollect); } EsThreadPoolExecutor executor = (EsThreadPoolExecutor) - searchContext.indexShard().getThreadPool().executor(ThreadPool.Names.SEARCH);; + searchContext.indexShard().getThreadPool().executor(ThreadPool.Names.SEARCH); if (executor instanceof QueueResizingEsThreadPoolExecutor) { QueueResizingEsThreadPoolExecutor rExecutor = (QueueResizingEsThreadPoolExecutor) executor; queryResult.nodeQueueSize(rExecutor.getCurrentQueueSize()); diff --git a/core/src/main/java/org/elasticsearch/search/query/QuerySearchRequest.java b/core/src/main/java/org/elasticsearch/search/query/QuerySearchRequest.java index 86a9c70dc0be1..c893ed93046f0 100644 --- a/core/src/main/java/org/elasticsearch/search/query/QuerySearchRequest.java +++ b/core/src/main/java/org/elasticsearch/search/query/QuerySearchRequest.java @@ -52,6 +52,21 @@ public QuerySearchRequest(OriginalIndices originalIndices, long id, AggregatedDf this.originalIndices = originalIndices; } + public QuerySearchRequest(StreamInput in) throws IOException { + super(in); + id = in.readLong(); + dfs = readAggregatedDfs(in); + originalIndices = OriginalIndices.readOriginalIndices(in); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeLong(id); + dfs.writeTo(out); + OriginalIndices.writeOriginalIndices(originalIndices, out); + } + public long id() { return id; } @@ -72,18 +87,7 @@ public IndicesOptions indicesOptions() { @Override public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - id = in.readLong(); - dfs = readAggregatedDfs(in); - originalIndices = OriginalIndices.readOriginalIndices(in); - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - out.writeLong(id); - dfs.writeTo(out); - OriginalIndices.writeOriginalIndices(originalIndices, out); + throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable"); } @Override diff --git a/core/src/main/java/org/elasticsearch/search/query/TopDocsCollectorContext.java b/core/src/main/java/org/elasticsearch/search/query/TopDocsCollectorContext.java index 93c2aa17de6d9..0c14ef6a8157e 100644 --- a/core/src/main/java/org/elasticsearch/search/query/TopDocsCollectorContext.java +++ b/core/src/main/java/org/elasticsearch/search/query/TopDocsCollectorContext.java @@ -43,7 +43,7 @@ import org.elasticsearch.search.collapse.CollapseContext; import org.elasticsearch.search.internal.ScrollContext; import org.elasticsearch.search.internal.SearchContext; -import org.elasticsearch.search.rescore.RescoreSearchContext; +import org.elasticsearch.search.rescore.RescoreContext; import 
org.elasticsearch.search.sort.SortAndFormats; import java.io.IOException; @@ -283,14 +283,18 @@ static TopDocsCollectorContext createTopDocsCollectorContext(SearchContext searc return new ScrollingTopDocsCollectorContext(searchContext.scrollContext(), searchContext.sort(), numDocs, searchContext.trackScores(), searchContext.numberOfShards()); } else if (searchContext.collapse() != null) { + boolean trackScores = searchContext.sort() == null ? true : searchContext.trackScores(); int numDocs = Math.min(searchContext.from() + searchContext.size(), totalNumDocs); return new CollapsingTopDocsCollectorContext(searchContext.collapse(), - searchContext.sort(), numDocs, searchContext.trackScores()); + searchContext.sort(), numDocs, trackScores); } else { int numDocs = Math.min(searchContext.from() + searchContext.size(), totalNumDocs); final boolean rescore = searchContext.rescore().isEmpty() == false; - for (RescoreSearchContext rescoreContext : searchContext.rescore()) { - numDocs = Math.max(numDocs, rescoreContext.window()); + if (rescore) { + assert searchContext.sort() == null; + for (RescoreContext rescoreContext : searchContext.rescore()) { + numDocs = Math.max(numDocs, rescoreContext.getWindowSize()); + } } return new SimpleTopDocsCollectorContext(searchContext.sort(), searchContext.searchAfter(), diff --git a/core/src/main/java/org/elasticsearch/search/rescore/QueryRescorer.java b/core/src/main/java/org/elasticsearch/search/rescore/QueryRescorer.java index fe1b0577aa79f..d4cf05d542560 100644 --- a/core/src/main/java/org/elasticsearch/search/rescore/QueryRescorer.java +++ b/core/src/main/java/org/elasticsearch/search/rescore/QueryRescorer.java @@ -21,11 +21,10 @@ import org.apache.lucene.index.Term; import org.apache.lucene.search.Explanation; +import org.apache.lucene.search.IndexSearcher; import org.apache.lucene.search.Query; import org.apache.lucene.search.ScoreDoc; import org.apache.lucene.search.TopDocs; -import org.elasticsearch.search.internal.ContextIndexSearcher; -import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.Arrays; @@ -35,15 +34,9 @@ public final class QueryRescorer implements Rescorer { public static final Rescorer INSTANCE = new QueryRescorer(); - public static final String NAME = "query"; @Override - public String name() { - return NAME; - } - - @Override - public TopDocs rescore(TopDocs topDocs, SearchContext context, RescoreSearchContext rescoreContext) throws IOException { + public TopDocs rescore(TopDocs topDocs, IndexSearcher searcher, RescoreContext rescoreContext) throws IOException { assert rescoreContext != null; if (topDocs == null || topDocs.totalHits == 0 || topDocs.scoreDocs.length == 0) { @@ -66,20 +59,19 @@ protected float combine(float firstPassScore, boolean secondPassMatches, float s }; // First take top slice of incoming docs, to be rescored: - TopDocs topNFirstPass = topN(topDocs, rescoreContext.window()); + TopDocs topNFirstPass = topN(topDocs, rescoreContext.getWindowSize()); // Rescore them: - TopDocs rescored = rescorer.rescore(context.searcher(), topNFirstPass, rescoreContext.window()); + TopDocs rescored = rescorer.rescore(searcher, topNFirstPass, rescoreContext.getWindowSize()); // Splice back to non-topN hits and resort all of them: return combine(topDocs, rescored, (QueryRescoreContext) rescoreContext); } @Override - public Explanation explain(int topLevelDocId, SearchContext context, RescoreSearchContext rescoreContext, + public Explanation explain(int topLevelDocId, IndexSearcher searcher, 
RescoreContext rescoreContext, Explanation sourceExplanation) throws IOException { QueryRescoreContext rescore = (QueryRescoreContext) rescoreContext; - ContextIndexSearcher searcher = context.searcher(); if (sourceExplanation == null) { // this should not happen but just in case return Explanation.noMatch("nothing matched"); @@ -160,20 +152,17 @@ private TopDocs combine(TopDocs in, TopDocs resorted, QueryRescoreContext ctx) { return in; } - public static class QueryRescoreContext extends RescoreSearchContext { - - static final int DEFAULT_WINDOW_SIZE = 10; - - public QueryRescoreContext(QueryRescorer rescorer) { - super(NAME, DEFAULT_WINDOW_SIZE, rescorer); - this.scoreMode = QueryRescoreMode.Total; - } - + public static class QueryRescoreContext extends RescoreContext { private Query query; private float queryWeight = 1.0f; private float rescoreQueryWeight = 1.0f; private QueryRescoreMode scoreMode; + public QueryRescoreContext(int windowSize) { + super(windowSize, QueryRescorer.INSTANCE); + this.scoreMode = QueryRescoreMode.Total; + } + public void setQuery(Query query) { this.query = query; } @@ -212,12 +201,8 @@ public void setScoreMode(String scoreMode) { } @Override - public void extractTerms(SearchContext context, RescoreSearchContext rescoreContext, Set termsSet) { - try { - context.searcher().createNormalizedWeight(((QueryRescoreContext) rescoreContext).query(), false).extractTerms(termsSet); - } catch (IOException e) { - throw new IllegalStateException("Failed to extract terms", e); - } + public void extractTerms(IndexSearcher searcher, RescoreContext rescoreContext, Set termsSet) throws IOException { + searcher.createNormalizedWeight(((QueryRescoreContext) rescoreContext).query(), false).extractTerms(termsSet); } } diff --git a/core/src/main/java/org/elasticsearch/search/rescore/QueryRescorerBuilder.java b/core/src/main/java/org/elasticsearch/search/rescore/QueryRescorerBuilder.java index fba2a154e9194..2ee9399d9a051 100644 --- a/core/src/main/java/org/elasticsearch/search/rescore/QueryRescorerBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/rescore/QueryRescorerBuilder.java @@ -29,7 +29,6 @@ import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.index.query.QueryRewriteContext; import org.elasticsearch.index.query.QueryShardContext; -import org.elasticsearch.index.query.Rewriteable; import org.elasticsearch.search.rescore.QueryRescorer.QueryRescoreContext; import java.io.IOException; @@ -38,25 +37,15 @@ import static org.elasticsearch.index.query.AbstractQueryBuilder.parseInnerQueryBuilder; -public class QueryRescorerBuilder extends RescoreBuilder { - +public class QueryRescorerBuilder extends RescorerBuilder { public static final String NAME = "query"; - public static final float DEFAULT_RESCORE_QUERYWEIGHT = 1.0f; - public static final float DEFAULT_QUERYWEIGHT = 1.0f; - public static final QueryRescoreMode DEFAULT_SCORE_MODE = QueryRescoreMode.Total; - private final QueryBuilder queryBuilder; - private float rescoreQueryWeight = DEFAULT_RESCORE_QUERYWEIGHT; - private float queryWeight = DEFAULT_QUERYWEIGHT; - private QueryRescoreMode scoreMode = DEFAULT_SCORE_MODE; - - private static ParseField RESCORE_QUERY_FIELD = new ParseField("rescore_query"); - private static ParseField QUERY_WEIGHT_FIELD = new ParseField("query_weight"); - private static ParseField RESCORE_QUERY_WEIGHT_FIELD = new ParseField("rescore_query_weight"); - private static ParseField SCORE_MODE_FIELD = new ParseField("score_mode"); + private static final ParseField 
RESCORE_QUERY_FIELD = new ParseField("rescore_query"); + private static final ParseField QUERY_WEIGHT_FIELD = new ParseField("query_weight"); + private static final ParseField RESCORE_QUERY_WEIGHT_FIELD = new ParseField("rescore_query_weight"); + private static final ParseField SCORE_MODE_FIELD = new ParseField("score_mode"); private static final ObjectParser QUERY_RESCORE_PARSER = new ObjectParser<>(NAME, null); - static { QUERY_RESCORE_PARSER.declareObject(InnerBuilder::setQueryBuilder, (p, c) -> { try { @@ -70,6 +59,14 @@ public class QueryRescorerBuilder extends RescoreBuilder { QUERY_RESCORE_PARSER.declareString((struct, value) -> struct.setScoreMode(QueryRescoreMode.fromString(value)), SCORE_MODE_FIELD); } + public static final float DEFAULT_RESCORE_QUERYWEIGHT = 1.0f; + public static final float DEFAULT_QUERYWEIGHT = 1.0f; + public static final QueryRescoreMode DEFAULT_SCORE_MODE = QueryRescoreMode.Total; + private final QueryBuilder queryBuilder; + private float rescoreQueryWeight = DEFAULT_RESCORE_QUERYWEIGHT; + private float queryWeight = DEFAULT_QUERYWEIGHT; + private QueryRescoreMode scoreMode = DEFAULT_SCORE_MODE; + /** * Creates a new {@link QueryRescorerBuilder} instance * @param builder the query builder to build the rescore query from @@ -100,6 +97,11 @@ public void doWriteTo(StreamOutput out) throws IOException { out.writeFloat(queryWeight); } + @Override + public String getWriteableName() { + return NAME; + } + /** * @return the query used for this rescore query */ @@ -169,17 +171,13 @@ public static QueryRescorerBuilder fromXContent(XContentParser parser) throws IO } @Override - public QueryRescoreContext build(QueryShardContext context) throws IOException { - org.elasticsearch.search.rescore.QueryRescorer rescorer = new org.elasticsearch.search.rescore.QueryRescorer(); - QueryRescoreContext queryRescoreContext = new QueryRescoreContext(rescorer); + public QueryRescoreContext innerBuildContext(int windowSize, QueryShardContext context) throws IOException { + QueryRescoreContext queryRescoreContext = new QueryRescoreContext(windowSize); // query is rewritten at this point already queryRescoreContext.setQuery(queryBuilder.toQuery(context)); queryRescoreContext.setQueryWeight(this.queryWeight); queryRescoreContext.setRescoreQueryWeight(this.rescoreQueryWeight); queryRescoreContext.setScoreMode(this.scoreMode); - if (this.windowSize != null) { - queryRescoreContext.setWindowSize(this.windowSize); - } return queryRescoreContext; } @@ -205,11 +203,6 @@ public final boolean equals(Object obj) { Objects.equals(queryBuilder, other.queryBuilder); } - @Override - public String getWriteableName() { - return NAME; - } - /** * Helper to be able to use {@link ObjectParser}, since we need the inner query builder * for the constructor of {@link QueryRescorerBuilder}, but {@link ObjectParser} only @@ -248,7 +241,7 @@ void setScoreMode(QueryRescoreMode scoreMode) { } @Override - public RescoreBuilder rewrite(QueryRewriteContext ctx) throws IOException { + public QueryRescorerBuilder rewrite(QueryRewriteContext ctx) throws IOException { QueryBuilder rewrite = queryBuilder.rewrite(ctx); if (rewrite == queryBuilder) { return this; diff --git a/core/src/main/java/org/elasticsearch/search/rescore/RescoreSearchContext.java b/core/src/main/java/org/elasticsearch/search/rescore/RescoreContext.java similarity index 65% rename from core/src/main/java/org/elasticsearch/search/rescore/RescoreSearchContext.java rename to core/src/main/java/org/elasticsearch/search/rescore/RescoreContext.java index 
aa3c66b2fd8b2..75ce807a67f47 100644 --- a/core/src/main/java/org/elasticsearch/search/rescore/RescoreSearchContext.java +++ b/core/src/main/java/org/elasticsearch/search/rescore/RescoreContext.java @@ -19,37 +19,35 @@ package org.elasticsearch.search.rescore; - - -public class RescoreSearchContext { - - private int windowSize; - - private final String type; - +/** + * Context available to the rescore while it is running. Rescore + * implementations should extend this with any additional resources that + * they will need while rescoring. + */ +public class RescoreContext { + private final int windowSize; private final Rescorer rescorer; - public RescoreSearchContext(String type, int windowSize, Rescorer rescorer) { - super(); - this.type = type; + /** + * Build the context. + * @param rescorer the rescorer actually performing the rescore. + */ + public RescoreContext(int windowSize, Rescorer rescorer) { this.windowSize = windowSize; this.rescorer = rescorer; } - + + /** + * The rescorer to actually apply. + */ public Rescorer rescorer() { return rescorer; } - public String getType() { - return type; - } - - public void setWindowSize(int windowSize) { - this.windowSize = windowSize; - } - - public int window() { + /** + * Size of the window to rescore. + */ + public int getWindowSize() { return windowSize; } - } diff --git a/core/src/main/java/org/elasticsearch/search/rescore/RescorePhase.java b/core/src/main/java/org/elasticsearch/search/rescore/RescorePhase.java index d3d4c75cd7b5a..b8ce8f8118b50 100644 --- a/core/src/main/java/org/elasticsearch/search/rescore/RescorePhase.java +++ b/core/src/main/java/org/elasticsearch/search/rescore/RescorePhase.java @@ -19,6 +19,9 @@ package org.elasticsearch.search.rescore; +import org.apache.lucene.search.ScoreDoc; +import org.apache.lucene.search.Sort; +import org.apache.lucene.search.SortField; import org.apache.lucene.search.TopDocs; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.common.component.AbstractComponent; @@ -45,12 +48,33 @@ public void preProcess(SearchContext context) { public void execute(SearchContext context) { try { TopDocs topDocs = context.queryResult().topDocs(); - for (RescoreSearchContext ctx : context.rescore()) { - topDocs = ctx.rescorer().rescore(topDocs, context, ctx); + for (RescoreContext ctx : context.rescore()) { + topDocs = ctx.rescorer().rescore(topDocs, context.searcher(), ctx); + // It is the responsibility of the rescorer to sort the resulted top docs, + // here we only assert that this condition is met. + assert context.sort() == null && topDocsSortedByScore(topDocs): "topdocs should be sorted after rescore"; } context.queryResult().topDocs(topDocs, context.queryResult().sortValueFormats()); } catch (IOException e) { throw new ElasticsearchException("Rescore Phase Failed", e); } } + + /** + * Returns true if the provided docs are sorted by score. 
+ */ + private boolean topDocsSortedByScore(TopDocs topDocs) { + if (topDocs == null || topDocs.scoreDocs == null || topDocs.scoreDocs.length < 2) { + return true; + } + float lastScore = topDocs.scoreDocs[0].score; + for (int i = 1; i < topDocs.scoreDocs.length; i++) { + ScoreDoc doc = topDocs.scoreDocs[i]; + if (Float.compare(doc.score, lastScore) > 0) { + return false; + } + lastScore = doc.score; + } + return true; + } } diff --git a/core/src/main/java/org/elasticsearch/search/rescore/Rescorer.java b/core/src/main/java/org/elasticsearch/search/rescore/Rescorer.java index 43a9dd9c64115..023c9340e4b1d 100644 --- a/core/src/main/java/org/elasticsearch/search/rescore/Rescorer.java +++ b/core/src/main/java/org/elasticsearch/search/rescore/Rescorer.java @@ -21,9 +21,9 @@ import org.apache.lucene.index.Term; import org.apache.lucene.search.Explanation; +import org.apache.lucene.search.IndexSearcher; import org.apache.lucene.search.TopDocs; import org.elasticsearch.action.search.SearchType; -import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.util.Set; @@ -31,36 +31,34 @@ /** * A query rescorer interface used to re-rank the Top-K results of a previously * executed search. + * + * Subclasses should borrow heavily from {@link QueryRescorer} because it is + * fairly well behaved and documents that tradeoffs that it is making. There + * is also an {@code ExampleRescorer} that is worth looking at. */ public interface Rescorer { - - /** - * Returns the name of this rescorer - */ - String name(); - /** * Modifies the result of the previously executed search ({@link TopDocs}) - * in place based on the given {@link RescoreSearchContext}. + * in place based on the given {@link RescoreContext}. * * @param topDocs the result of the previously executed search - * @param context the current {@link SearchContext}. This will never be null. - * @param rescoreContext the {@link RescoreSearchContext}. This will never be null + * @param searcher the searcher used for this search. This will never be null. + * @param rescoreContext the {@link RescoreContext}. This will never be null * @throws IOException if an {@link IOException} occurs during rescoring */ - TopDocs rescore(TopDocs topDocs, SearchContext context, RescoreSearchContext rescoreContext) throws IOException; + TopDocs rescore(TopDocs topDocs, IndexSearcher searcher, RescoreContext rescoreContext) throws IOException; /** * Executes an {@link Explanation} phase on the rescorer. * * @param topLevelDocId the global / top-level document ID to explain - * @param context the explanation for the results being fed to this rescorer + * @param searcher the searcher used for this search. This will never be null. * @param rescoreContext context for this rescorer * @param sourceExplanation explanation of the source of the documents being fed into this rescore * @return the explain for the given top level document ID. 
* @throws IOException if an {@link IOException} occurs */ - Explanation explain(int topLevelDocId, SearchContext context, RescoreSearchContext rescoreContext, + Explanation explain(int topLevelDocId, IndexSearcher searcher, RescoreContext rescoreContext, Explanation sourceExplanation) throws IOException; /** @@ -68,17 +66,5 @@ Explanation explain(int topLevelDocId, SearchContext context, RescoreSearchConte * is executed in a distributed frequency collection roundtrip for * {@link SearchType#DFS_QUERY_THEN_FETCH} */ - void extractTerms(SearchContext context, RescoreSearchContext rescoreContext, Set termsSet); - - /* - * TODO: At this point we only have one implementation which modifies the - * TopDocs given. Future implementations might return actual results that - * contain information about the rescore context. For example a pair wise - * reranker might return the feature vector for the top N window in order to - * merge results on the callers side. For now we don't have a return type at - * all since something like this requires a more general refactoring how - * documents are merged since in such a case we don't really have a score - * per document rather a "X is more relevant than Y" relation - */ - + void extractTerms(IndexSearcher searcher, RescoreContext rescoreContext, Set termsSet) throws IOException; } diff --git a/core/src/main/java/org/elasticsearch/search/rescore/RescoreBuilder.java b/core/src/main/java/org/elasticsearch/search/rescore/RescorerBuilder.java similarity index 71% rename from core/src/main/java/org/elasticsearch/search/rescore/RescoreBuilder.java rename to core/src/main/java/org/elasticsearch/search/rescore/RescorerBuilder.java index 17e562e27ba88..531961e8d8f89 100644 --- a/core/src/main/java/org/elasticsearch/search/rescore/RescoreBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/rescore/RescorerBuilder.java @@ -19,42 +19,42 @@ package org.elasticsearch.search.rescore; -import org.elasticsearch.action.support.ToXContentToBytes; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.ParsingException; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.NamedWriteable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.index.query.Rewriteable; -import org.elasticsearch.search.rescore.QueryRescorer.QueryRescoreContext; import java.io.IOException; import java.util.Objects; /** - * The abstract base builder for instances of {@link RescoreBuilder}. + * The abstract base builder for instances of {@link RescorerBuilder}. */ -public abstract class RescoreBuilder> extends ToXContentToBytes implements NamedWriteable, - Rewriteable> { +public abstract class RescorerBuilder> + implements NamedWriteable, ToXContentObject, Rewriteable> { + public static final int DEFAULT_WINDOW_SIZE = 10; protected Integer windowSize; - private static ParseField WINDOW_SIZE_FIELD = new ParseField("window_size"); + private static final ParseField WINDOW_SIZE_FIELD = new ParseField("window_size"); /** * Construct an empty RescoreBuilder. */ - public RescoreBuilder() { + public RescorerBuilder() { } /** * Read from a stream. 
*/ - protected RescoreBuilder(StreamInput in) throws IOException { + protected RescorerBuilder(StreamInput in) throws IOException { windowSize = in.readOptionalVInt(); } @@ -76,9 +76,9 @@ public Integer windowSize() { return windowSize; } - public static RescoreBuilder parseFromXContent(XContentParser parser) throws IOException { + public static RescorerBuilder parseFromXContent(XContentParser parser) throws IOException { String fieldName = null; - RescoreBuilder rescorer = null; + RescorerBuilder rescorer = null; Integer windowSize = null; XContentParser.Token token; while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { @@ -91,12 +91,7 @@ public static RescoreBuilder parseFromXContent(XContentParser parser) throws throw new ParsingException(parser.getTokenLocation(), "rescore doesn't support [" + fieldName + "]"); } } else if (token == XContentParser.Token.START_OBJECT) { - // we only have QueryRescorer at this point - if (QueryRescorerBuilder.NAME.equals(fieldName)) { - rescorer = QueryRescorerBuilder.fromXContent(parser); - } else { - throw new ParsingException(parser.getTokenLocation(), "rescore doesn't support rescorer with name [" + fieldName + "]"); - } + rescorer = parser.namedObject(RescorerBuilder.class, fieldName, null); } else { throw new ParsingException(parser.getTokenLocation(), "unexpected token [" + token + "] after [" + fieldName + "]"); } @@ -123,12 +118,21 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws protected abstract void doXContent(XContentBuilder builder, Params params) throws IOException; - public abstract QueryRescoreContext build(QueryShardContext context) throws IOException; - - public static QueryRescorerBuilder queryRescorer(QueryBuilder queryBuilder) { - return new QueryRescorerBuilder(queryBuilder); + /** + * Build the {@linkplain RescoreContext} that will be used to actually + * execute the rescore against a particular shard. + */ + public final RescoreContext buildContext(QueryShardContext context) throws IOException { + int finalWindowSize = windowSize == null ? DEFAULT_WINDOW_SIZE : windowSize; + RescoreContext rescoreContext = innerBuildContext(finalWindowSize, context); + return rescoreContext; } + /** + * Extensions override this to build the context that they need for rescoring. 
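+     *
+     * For example, {@link QueryRescorerBuilder} implements it roughly as follows
+     * (condensed here for illustration; the real method also copies the query and
+     * rescore query weights and the score mode onto the context):
+     *
+     * <pre>
+     * protected QueryRescoreContext innerBuildContext(int windowSize, QueryShardContext context) throws IOException {
+     *     QueryRescoreContext ctx = new QueryRescoreContext(windowSize);
+     *     // the rescore query builder has already been rewritten at this point
+     *     ctx.setQuery(queryBuilder.toQuery(context));
+     *     return ctx;
+     * }
+     * </pre>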
+ */ + protected abstract RescoreContext innerBuildContext(int windowSize, QueryShardContext context) throws IOException; + @Override public int hashCode() { return Objects.hash(windowSize); @@ -142,8 +146,12 @@ public boolean equals(Object obj) { if (obj == null || getClass() != obj.getClass()) { return false; } - @SuppressWarnings("rawtypes") - RescoreBuilder other = (RescoreBuilder) obj; + RescorerBuilder other = (RescorerBuilder) obj; return Objects.equals(windowSize, other.windowSize); } + + @Override + public String toString() { + return Strings.toString(this, true, true); + } } diff --git a/core/src/main/java/org/elasticsearch/search/searchafter/SearchAfterBuilder.java b/core/src/main/java/org/elasticsearch/search/searchafter/SearchAfterBuilder.java index 8a19f254a8f3d..389b81ffcbad4 100644 --- a/core/src/main/java/org/elasticsearch/search/searchafter/SearchAfterBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/searchafter/SearchAfterBuilder.java @@ -30,7 +30,7 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.text.Text; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentParser; @@ -44,7 +44,7 @@ import java.util.List; import java.util.Objects; -public class SearchAfterBuilder implements ToXContent, Writeable { +public class SearchAfterBuilder implements ToXContentObject, Writeable { public static final ParseField SEARCH_AFTER = new ParseField("search_after"); private static final Object[] EMPTY_SORT_VALUES = new Object[0]; @@ -130,21 +130,25 @@ public static FieldDoc buildFieldDoc(SortAndFormats sort, Object[] values) { return new FieldDoc(Integer.MAX_VALUE, 0, fieldValues); } - private static SortField.Type extractSortType(SortField sortField) { - if (sortField instanceof SortedSetSortField) { + /** + * Returns the inner {@link SortField.Type} expected for this sort field. 
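+     * Comparator sources provided by Elasticsearch report their reduced type
+     * directly; otherwise the type is derived from the Lucene sort field, e.g. a
+     * geo distance sort is backed by a {@code LatLonPointSortField} and is mapped
+     * to {@link SortField.Type#DOUBLE}.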
+ */ + static SortField.Type extractSortType(SortField sortField) { + if (sortField.getComparatorSource() instanceof IndexFieldData.XFieldComparatorSource) { + return ((IndexFieldData.XFieldComparatorSource) sortField.getComparatorSource()).reducedType(); + } else if (sortField instanceof SortedSetSortField) { return SortField.Type.STRING; } else if (sortField instanceof SortedNumericSortField) { return ((SortedNumericSortField) sortField).getNumericType(); + } else if ("LatLonPointSortField".equals(sortField.getClass().getSimpleName())) { + // for geo distance sorting + return SortField.Type.DOUBLE; } else { return sortField.getType(); } } - private static Object convertValueFromSortField(Object value, SortField sortField, DocValueFormat format) { - if (sortField.getComparatorSource() instanceof IndexFieldData.XFieldComparatorSource) { - IndexFieldData.XFieldComparatorSource cmpSource = (IndexFieldData.XFieldComparatorSource) sortField.getComparatorSource(); - return convertValueFromSortType(sortField.getField(), cmpSource.reducedType(), value, format); - } + static Object convertValueFromSortField(Object value, SortField sortField, DocValueFormat format) { SortField.Type sortType = extractSortType(sortField); return convertValueFromSortType(sortField.getField(), sortType, value, format); } diff --git a/core/src/main/java/org/elasticsearch/search/slice/SliceBuilder.java b/core/src/main/java/org/elasticsearch/search/slice/SliceBuilder.java index b09252889c276..7a5da9df9aa36 100644 --- a/core/src/main/java/org/elasticsearch/search/slice/SliceBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/slice/SliceBuilder.java @@ -22,13 +22,13 @@ import org.apache.lucene.search.MatchAllDocsQuery; import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.Query; -import org.elasticsearch.action.support.ToXContentToBytes; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.fielddata.IndexFieldData; @@ -52,7 +52,7 @@ * Otherwise the provided field must be a numeric and doc_values must be enabled. In that case a * {@link org.elasticsearch.search.slice.DocValuesSliceQuery} is used to filter the results. 
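 * For example (illustrative only, not part of this change), the first of five
 * slices of a scrolled search over the default {@code _uid} field can be
 * requested with {@code new SliceBuilder(0, 5)} and attached to the request via
 * {@code SearchSourceBuilder#slice}.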
*/ -public class SliceBuilder extends ToXContentToBytes implements Writeable { +public class SliceBuilder implements Writeable, ToXContentObject { public static final ParseField FIELD_FIELD = new ParseField("field"); public static final ParseField ID_FIELD = new ParseField("id"); public static final ParseField MAX_FIELD = new ParseField("max"); @@ -254,4 +254,9 @@ public Query toFilter(QueryShardContext context, int shardId, int numShards) { } return new MatchAllDocsQuery(); } + + @Override + public String toString() { + return Strings.toString(this, true, true); + } } diff --git a/core/src/main/java/org/elasticsearch/search/sort/FieldSortBuilder.java b/core/src/main/java/org/elasticsearch/search/sort/FieldSortBuilder.java index 6086bf36e4936..529cb4e86ac88 100644 --- a/core/src/main/java/org/elasticsearch/search/sort/FieldSortBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/sort/FieldSortBuilder.java @@ -20,9 +20,12 @@ package org.elasticsearch.search.sort; import org.apache.lucene.search.SortField; +import org.elasticsearch.Version; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.ObjectParser.ValueType; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -41,10 +44,14 @@ import java.io.IOException; import java.util.Objects; +import static org.elasticsearch.search.sort.NestedSortBuilder.NESTED_FIELD; + /** * A sort builder to sort based on a document field. */ public class FieldSortBuilder extends SortBuilder { + private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(FieldSortBuilder.class)); + public static final String NAME = "field_sort"; public static final ParseField MISSING = new ParseField("missing"); public static final ParseField SORT_MODE = new ParseField("mode"); @@ -71,6 +78,8 @@ public class FieldSortBuilder extends SortBuilder { private String nestedPath; + private NestedSortBuilder nestedSort; + /** Copy constructor. */ public FieldSortBuilder(FieldSortBuilder template) { this(template.fieldName); @@ -82,6 +91,9 @@ public FieldSortBuilder(FieldSortBuilder template) { } this.setNestedFilter(template.getNestedFilter()); this.setNestedPath(template.getNestedPath()); + if (template.getNestedSort() != null) { + this.setNestedSort(template.getNestedSort()); + }; } /** @@ -108,6 +120,9 @@ public FieldSortBuilder(StreamInput in) throws IOException { order = in.readOptionalWriteable(SortOrder::readFromStream); sortMode = in.readOptionalWriteable(SortMode::readFromStream); unmappedType = in.readOptionalString(); + if (in.getVersion().onOrAfter(Version.V_6_1_0)) { + nestedSort = in.readOptionalWriteable(NestedSortBuilder::new); + } } @Override @@ -119,6 +134,9 @@ public void writeTo(StreamOutput out) throws IOException { out.writeOptionalWriteable(order); out.writeOptionalWriteable(sortMode); out.writeOptionalString(unmappedType); + if (out.getVersion().onOrAfter(Version.V_6_1_0)) { + out.writeOptionalWriteable(nestedSort); + } } /** Returns the document field this sort should be based on. */ @@ -187,10 +205,13 @@ public SortMode sortMode() { * Sets the nested filter that the nested objects should match with in order * to be taken into account for sorting. 
* - * TODO should the above getters and setters be deprecated/ changed in - * favour of real getters and setters? + * @deprecated set nested sort with {@link #setNestedSort(NestedSortBuilder)} and retrieve with {@link #getNestedSort()} */ + @Deprecated public FieldSortBuilder setNestedFilter(QueryBuilder nestedFilter) { + if (this.nestedSort != null) { + throw new IllegalArgumentException("Setting both nested_path/nested_filter and nested not allowed"); + } this.nestedFilter = nestedFilter; return this; } @@ -198,7 +219,10 @@ public FieldSortBuilder setNestedFilter(QueryBuilder nestedFilter) { /** * Returns the nested filter that the nested objects should match with in * order to be taken into account for sorting. + * + * @deprecated set nested sort with {@link #setNestedSort(NestedSortBuilder)} and retrieve with {@link #getNestedSort()} */ + @Deprecated public QueryBuilder getNestedFilter() { return this.nestedFilter; } @@ -207,8 +231,14 @@ public QueryBuilder getNestedFilter() { * Sets the nested path if sorting occurs on a field that is inside a nested * object. By default when sorting on a field inside a nested object, the * nearest upper nested object is selected as nested path. + * + * @deprecated set nested sort with {@link #setNestedSort(NestedSortBuilder)} and retrieve with {@link #getNestedSort()} */ + @Deprecated public FieldSortBuilder setNestedPath(String nestedPath) { + if (this.nestedSort != null) { + throw new IllegalArgumentException("Setting both nested_path/nested_filter and nested not allowed"); + } this.nestedPath = nestedPath; return this; } @@ -216,11 +246,34 @@ public FieldSortBuilder setNestedPath(String nestedPath) { /** * Returns the nested path if sorting occurs in a field that is inside a * nested object. + * @deprecated set nested sort with {@link #setNestedSort(NestedSortBuilder)} and retrieve with {@link #getNestedSort()} */ + @Deprecated public String getNestedPath() { return this.nestedPath; } + /** + * Returns the {@link NestedSortBuilder} + */ + public NestedSortBuilder getNestedSort() { + return this.nestedSort; + } + + /** + * Sets the {@link NestedSortBuilder} to be used for fields that are inside a nested + * object. The {@link NestedSortBuilder} takes a `path` argument and an optional + * nested filter that the nested objects should match with in + * order to be taken into account for sorting. + */ + public FieldSortBuilder setNestedSort(final NestedSortBuilder nestedSort) { + if (this.nestedFilter != null || this.nestedPath != null) { + throw new IllegalArgumentException("Setting both nested_path/nested_filter and nested not allowed"); + } + this.nestedSort = nestedSort; + return this; + } + @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(); @@ -241,6 +294,9 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws if (nestedPath != null) { builder.field(NESTED_PATH_FIELD.getPreferredName(), nestedPath); } + if (nestedSort != null) { + builder.field(NESTED_FIELD.getPreferredName(), nestedSort); + } builder.endObject(); builder.endObject(); return builder; @@ -274,7 +330,14 @@ public SortFieldAndFormat build(QueryShardContext context) throws IOException { localSortMode = reverse ? 
MultiValueMode.MAX : MultiValueMode.MIN; } - final Nested nested = resolveNested(context, nestedPath, nestedFilter); + final Nested nested; + if (nestedSort != null) { + // new nested sorts takes priority + nested = resolveNested(context, nestedSort); + } else { + nested = resolveNested(context, nestedPath, nestedFilter); + } + IndexFieldData fieldData = context.getForField(fieldType); if (fieldData instanceof IndexNumericFieldData == false && (sortMode == SortMode.SUM || sortMode == SortMode.AVG || sortMode == SortMode.MEDIAN)) { @@ -299,12 +362,13 @@ public boolean equals(Object other) { return (Objects.equals(this.fieldName, builder.fieldName) && Objects.equals(this.nestedFilter, builder.nestedFilter) && Objects.equals(this.nestedPath, builder.nestedPath) && Objects.equals(this.missing, builder.missing) && Objects.equals(this.order, builder.order) && Objects.equals(this.sortMode, builder.sortMode) - && Objects.equals(this.unmappedType, builder.unmappedType)); + && Objects.equals(this.unmappedType, builder.unmappedType) && Objects.equals(this.nestedSort, builder.nestedSort)); } @Override public int hashCode() { - return Objects.hash(this.fieldName, this.nestedFilter, this.nestedPath, this.missing, this.order, this.sortMode, this.unmappedType); + return Objects.hash(this.fieldName, this.nestedFilter, this.nestedPath, this.nestedSort, this.missing, this.order, this.sortMode, + this.unmappedType); } @Override @@ -329,22 +393,37 @@ public static FieldSortBuilder fromXContent(XContentParser parser, String fieldN static { PARSER.declareField(FieldSortBuilder::missing, p -> p.objectText(), MISSING, ValueType.VALUE); - PARSER.declareString(FieldSortBuilder::setNestedPath , NESTED_PATH_FIELD); + PARSER.declareString((fieldSortBuilder, nestedPath) -> { + DEPRECATION_LOGGER.deprecated("[nested_path] has been deprecated in favor of the [nested] parameter"); + fieldSortBuilder.setNestedPath(nestedPath); + }, NESTED_PATH_FIELD); PARSER.declareString(FieldSortBuilder::unmappedType , UNMAPPED_TYPE); PARSER.declareString((b, v) -> b.order(SortOrder.fromString(v)) , ORDER_FIELD); PARSER.declareString((b, v) -> b.sortMode(SortMode.fromString(v)), SORT_MODE); - PARSER.declareObject(FieldSortBuilder::setNestedFilter, (p, c) -> SortBuilder.parseNestedFilter(p), NESTED_FILTER_FIELD); + PARSER.declareObject(FieldSortBuilder::setNestedFilter, (p, c) -> { + DEPRECATION_LOGGER.deprecated("[nested_filter] has been deprecated in favour for the [nested] parameter"); + return SortBuilder.parseNestedFilter(p); + }, NESTED_FILTER_FIELD); + PARSER.declareObject(FieldSortBuilder::setNestedSort, (p, c) -> NestedSortBuilder.fromXContent(p), NESTED_FIELD); } @Override - public SortBuilder rewrite(QueryRewriteContext ctx) throws IOException { - if (nestedFilter == null) { + public FieldSortBuilder rewrite(QueryRewriteContext ctx) throws IOException { + if (nestedFilter == null && nestedSort == null) { return this; } - QueryBuilder rewrite = nestedFilter.rewrite(ctx); - if (nestedFilter == rewrite) { - return this; + if (nestedFilter != null) { + QueryBuilder rewrite = nestedFilter.rewrite(ctx); + if (nestedFilter == rewrite) { + return this; + } + return new FieldSortBuilder(this).setNestedFilter(rewrite); + } else { + NestedSortBuilder rewrite = nestedSort.rewrite(ctx); + if (nestedSort == rewrite) { + return this; + } + return new FieldSortBuilder(this).setNestedSort(rewrite); } - return new FieldSortBuilder(this).setNestedFilter(rewrite); } } diff --git 
a/core/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortBuilder.java b/core/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortBuilder.java index 358adf6390e68..8dcd2fc766f4e 100644 --- a/core/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortBuilder.java @@ -27,6 +27,7 @@ import org.apache.lucene.search.SortField; import org.apache.lucene.util.BitSet; import org.elasticsearch.ElasticsearchParseException; +import org.elasticsearch.Version; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.geo.GeoDistance; @@ -34,6 +35,8 @@ import org.elasticsearch.common.geo.GeoUtils; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.unit.DistanceUnit; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; @@ -61,10 +64,14 @@ import java.util.Objects; import static org.elasticsearch.index.query.AbstractQueryBuilder.parseInnerQueryBuilder; +import static org.elasticsearch.search.sort.NestedSortBuilder.NESTED_FIELD; + /** * A geo distance based sorting on a geo point like field. */ public class GeoDistanceSortBuilder extends SortBuilder { + private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(GeoDistanceSortBuilder.class)); + public static final String NAME = "_geo_distance"; public static final String ALTERNATIVE_NAME = "_geoDistance"; public static final GeoValidationMethod DEFAULT_VALIDATION = GeoValidationMethod.DEFAULT; @@ -85,6 +92,8 @@ public class GeoDistanceSortBuilder extends SortBuilder private QueryBuilder nestedFilter; private String nestedPath; + private NestedSortBuilder nestedSort; + private GeoValidationMethod validation = DEFAULT_VALIDATION; /** @@ -141,6 +150,7 @@ public GeoDistanceSortBuilder(String fieldName, String ... geohashes) { this.nestedFilter = original.nestedFilter; this.nestedPath = original.nestedPath; this.validation = original.validation; + this.nestedSort = original.nestedSort; } /** @@ -156,6 +166,9 @@ public GeoDistanceSortBuilder(StreamInput in) throws IOException { sortMode = in.readOptionalWriteable(SortMode::readFromStream); nestedFilter = in.readOptionalNamedWriteable(QueryBuilder.class); nestedPath = in.readOptionalString(); + if (in.getVersion().onOrAfter(Version.V_6_1_0)) { + nestedSort = in.readOptionalWriteable(NestedSortBuilder::new); + } validation = GeoValidationMethod.readFromStream(in); } @@ -169,6 +182,9 @@ public void writeTo(StreamOutput out) throws IOException { out.writeOptionalWriteable(sortMode); out.writeOptionalNamedWriteable(nestedFilter); out.writeOptionalString(nestedPath); + if (out.getVersion().onOrAfter(Version.V_6_1_0)) { + out.writeOptionalWriteable(nestedSort); + } validation.writeTo(out); } @@ -284,10 +300,17 @@ public SortMode sortMode() { } /** - * Sets the nested filter that the nested objects should match with in order to be taken into account - * for sorting. - */ + * Sets the nested filter that the nested objects should match with in order to + * be taken into account for sorting. 
+ * + * @deprecated set nested sort with {@link #setNestedSort(NestedSortBuilder)} + * and retrieve with {@link #getNestedSort()} + **/ + @Deprecated public GeoDistanceSortBuilder setNestedFilter(QueryBuilder nestedFilter) { + if (this.nestedSort != null) { + throw new IllegalArgumentException("Setting both nested_path/nested_filter and nested not allowed"); + } this.nestedFilter = nestedFilter; return this; } @@ -295,7 +318,10 @@ public GeoDistanceSortBuilder setNestedFilter(QueryBuilder nestedFilter) { /** * Returns the nested filter that the nested objects should match with in order to be taken into account * for sorting. + * @deprecated set nested sort with {@link #setNestedSort(NestedSortBuilder)} + * and retrieve with {@link #getNestedSort()} **/ + @Deprecated public QueryBuilder getNestedFilter() { return this.nestedFilter; } @@ -303,8 +329,14 @@ public QueryBuilder getNestedFilter() { /** * Sets the nested path if sorting occurs on a field that is inside a nested object. By default when sorting on a * field inside a nested object, the nearest upper nested object is selected as nested path. - */ + * @deprecated set nested sort with {@link #setNestedSort(NestedSortBuilder)} + * and retrieve with {@link #getNestedSort()} + **/ + @Deprecated public GeoDistanceSortBuilder setNestedPath(String nestedPath) { + if (this.nestedSort != null) { + throw new IllegalArgumentException("Setting both nested_path/nested_filter and nested not allowed"); + } this.nestedPath = nestedPath; return this; } @@ -312,11 +344,35 @@ public GeoDistanceSortBuilder setNestedPath(String nestedPath) { /** * Returns the nested path if sorting occurs on a field that is inside a nested object. By default when sorting on a * field inside a nested object, the nearest upper nested object is selected as nested path. - */ + * @deprecated set nested sort with {@link #setNestedSort(NestedSortBuilder)} + * and retrieve with {@link #getNestedSort()} + **/ + @Deprecated public String getNestedPath() { return this.nestedPath; } + /** + * Returns the {@link NestedSortBuilder} + */ + public NestedSortBuilder getNestedSort() { + return this.nestedSort; + } + + /** + * Sets the {@link NestedSortBuilder} to be used for fields that are inside a nested + * object. The {@link NestedSortBuilder} takes a `path` argument and an optional + * nested filter that the nested objects should match with in + * order to be taken into account for sorting. 
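+     *
+     * A typical call might look like the following (an illustrative sketch; the
+     * {@code offices} path, the {@code offices.location} field and the filter are
+     * assumed example mappings, not part of this change):
+     *
+     * <pre>
+     * SortBuilders.geoDistanceSort("offices.location", 40.7, -74.0)
+     *     .setNestedSort(new NestedSortBuilder("offices")
+     *         .setFilter(QueryBuilders.termQuery("offices.open", true)));
+     * </pre>
+     *
+     * Deeper levels of nesting can be expressed by chaining
+     * {@link NestedSortBuilder#setNestedSort(NestedSortBuilder)}.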
+ */ + public GeoDistanceSortBuilder setNestedSort(final NestedSortBuilder nestedSort) { + if (this.nestedFilter != null || this.nestedPath != null) { + throw new IllegalArgumentException("Setting both nested_path/nested_filter and nested not allowed"); + } + this.nestedSort = nestedSort; + return this; + } + @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(); @@ -342,6 +398,9 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws if (nestedFilter != null) { builder.field(NESTED_FILTER_FIELD.getPreferredName(), nestedFilter, params); } + if (nestedSort != null) { + builder.field(NESTED_FIELD.getPreferredName(), nestedSort); + } builder.field(VALIDATION_METHOD_FIELD.getPreferredName(), validation); builder.endObject(); @@ -373,14 +432,15 @@ public boolean equals(Object object) { Objects.equals(order, other.order) && Objects.equals(nestedFilter, other.nestedFilter) && Objects.equals(nestedPath, other.nestedPath) && - Objects.equals(validation, other.validation); + Objects.equals(validation, other.validation) && + Objects.equals(nestedSort, other.nestedSort); } @Override public int hashCode() { return Objects.hash(this.fieldName, this.points, this.geoDistance, this.unit, this.sortMode, this.order, this.nestedFilter, - this.nestedPath, this.validation); + this.nestedPath, this.validation, this.nestedSort); } /** @@ -402,6 +462,7 @@ public static GeoDistanceSortBuilder fromXContent(XContentParser parser, String SortMode sortMode = null; QueryBuilder nestedFilter = null; String nestedPath = null; + NestedSortBuilder nestedSort = null; GeoValidationMethod validation = null; XContentParser.Token token; @@ -415,7 +476,10 @@ public static GeoDistanceSortBuilder fromXContent(XContentParser parser, String fieldName = currentName; } else if (token == XContentParser.Token.START_OBJECT) { if (NESTED_FILTER_FIELD.match(currentName)) { + DEPRECATION_LOGGER.deprecated("[nested_filter] has been deprecated in favour of the [nested] parameter"); nestedFilter = parseInnerQueryBuilder(parser); + } else if (NESTED_FIELD.match(currentName)) { + nestedSort = NestedSortBuilder.fromXContent(parser); } else { // the json in the format of -> field : { lat : 30, lon : 12 } if (fieldName != null && fieldName.equals(currentName) == false) { @@ -442,6 +506,7 @@ public static GeoDistanceSortBuilder fromXContent(XContentParser parser, String } else if (SORTMODE_FIELD.match(currentName)) { sortMode = SortMode.fromString(parser.text()); } else if (NESTED_PATH_FIELD.match(currentName)) { + DEPRECATION_LOGGER.deprecated("[nested_path] has been deprecated in favour of the [nested] parameter"); nestedPath = parser.text(); } else if (token == Token.VALUE_STRING){ if (fieldName != null && fieldName.equals(currentName) == false) { @@ -482,6 +547,9 @@ public static GeoDistanceSortBuilder fromXContent(XContentParser parser, String result.setNestedFilter(nestedFilter); } result.setNestedPath(nestedPath); + if (nestedSort != null) { + result.setNestedSort(nestedSort); + } if (validation != null) { result.validation(validation); } @@ -531,7 +599,14 @@ public SortFieldAndFormat build(QueryShardContext context) throws IOException { + "] for geo distance based sort"); } final IndexGeoPointFieldData geoIndexFieldData = context.getForField(fieldType); - final Nested nested = resolveNested(context, nestedPath, nestedFilter); + + final Nested nested; + if (nestedSort != null) { + // new nested sorts takes priority + nested = 
resolveNested(context, nestedSort); + } else { + nested = resolveNested(context, nestedPath, nestedFilter); + } if (geoIndexFieldData.getClass() == LatLonPointDVIndexFieldData.class // only works with 5.x geo_point && nested == null @@ -544,7 +619,8 @@ public SortFieldAndFormat build(QueryShardContext context) throws IOException { DocValueFormat.RAW); } - IndexFieldData.XFieldComparatorSource geoDistanceComparatorSource = new IndexFieldData.XFieldComparatorSource() { + IndexFieldData.XFieldComparatorSource geoDistanceComparatorSource = new IndexFieldData.XFieldComparatorSource(null, finalSortMode, + nested) { @Override public SortField.Type reducedType() { @@ -555,11 +631,10 @@ public SortField.Type reducedType() { public FieldComparator newComparator(String fieldname, int numHits, int sortPos, boolean reversed) { return new FieldComparator.DoubleComparator(numHits, null, null) { @Override - protected NumericDocValues getNumericDocValues(LeafReaderContext context, String field) - throws IOException { + protected NumericDocValues getNumericDocValues(LeafReaderContext context, String field) throws IOException { final MultiGeoPointValues geoPointValues = geoIndexFieldData.load(context).getGeoPointValues(); - final SortedNumericDoubleValues distanceValues = GeoUtils.distanceValues(geoDistance, unit, - geoPointValues, localPoints); + final SortedNumericDoubleValues distanceValues = GeoUtils.distanceValues(geoDistance, unit, geoPointValues, + localPoints); final NumericDoubleValues selectedValues; if (nested == null) { selectedValues = finalSortMode.select(distanceValues, Double.POSITIVE_INFINITY); @@ -573,7 +648,6 @@ protected NumericDocValues getNumericDocValues(LeafReaderContext context, String } }; } - }; return new SortFieldAndFormat(new SortField(fieldName, geoDistanceComparatorSource, reverse), @@ -607,14 +681,22 @@ static void parseGeoPoints(XContentParser parser, List geoPoints) thro } @Override - public SortBuilder rewrite(QueryRewriteContext ctx) throws IOException { - if (nestedFilter == null) { + public GeoDistanceSortBuilder rewrite(QueryRewriteContext ctx) throws IOException { + if (nestedFilter == null && nestedSort == null) { return this; } - QueryBuilder rewrite = nestedFilter.rewrite(ctx); - if (nestedFilter == rewrite) { - return this; + if (nestedFilter != null) { + QueryBuilder rewrite = nestedFilter.rewrite(ctx); + if (nestedFilter == rewrite) { + return this; + } + return new GeoDistanceSortBuilder(this).setNestedFilter(rewrite); + } else { + NestedSortBuilder rewrite = nestedSort.rewrite(ctx); + if (nestedSort == rewrite) { + return this; + } + return new GeoDistanceSortBuilder(this).setNestedSort(rewrite); } - return new GeoDistanceSortBuilder(this).setNestedFilter(rewrite); } } diff --git a/core/src/main/java/org/elasticsearch/search/sort/NestedSortBuilder.java b/core/src/main/java/org/elasticsearch/search/sort/NestedSortBuilder.java new file mode 100644 index 0000000000000..a6ad028403453 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/search/sort/NestedSortBuilder.java @@ -0,0 +1,172 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.search.sort; + +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.xcontent.ToXContentObject; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.index.query.QueryBuilder; +import org.elasticsearch.index.query.QueryRewriteContext; + +import java.io.IOException; +import java.util.Objects; + +import static org.elasticsearch.search.sort.SortBuilder.parseNestedFilter; + +public class NestedSortBuilder implements Writeable, ToXContentObject { + public static final ParseField NESTED_FIELD = new ParseField("nested"); + public static final ParseField PATH_FIELD = new ParseField("path"); + public static final ParseField FILTER_FIELD = new ParseField("filter"); + + private final String path; + private QueryBuilder filter; + private NestedSortBuilder nestedSort; + + public NestedSortBuilder(String path) { + this.path = path; + } + + public NestedSortBuilder(StreamInput in) throws IOException { + path = in.readOptionalString(); + filter = in.readOptionalNamedWriteable(QueryBuilder.class); + nestedSort = in.readOptionalWriteable(NestedSortBuilder::new); + } + + public String getPath() { + return path; + } + + public QueryBuilder getFilter() { + return filter; + } + + public NestedSortBuilder setFilter(final QueryBuilder filter) { + this.filter = filter; + return this; + } + + public NestedSortBuilder getNestedSort() { + return nestedSort; + } + + public NestedSortBuilder setNestedSort(final NestedSortBuilder nestedSortBuilder) { + this.nestedSort = nestedSortBuilder; + return this; + } + + /** + * Write this object's fields to a {@linkplain StreamOutput}. 
+ */ + @Override + public void writeTo(final StreamOutput out) throws IOException { + out.writeOptionalString(path); + out.writeOptionalNamedWriteable(filter); + out.writeOptionalWriteable(nestedSort); + } + + @Override + public XContentBuilder toXContent(final XContentBuilder builder, final Params params) throws IOException { + builder.startObject(); + if (path != null) { + builder.field(PATH_FIELD.getPreferredName(), path); + } + if (filter != null) { + builder.field(FILTER_FIELD.getPreferredName(), filter); + } + if (nestedSort != null) { + builder.field(NESTED_FIELD.getPreferredName(), nestedSort); + } + builder.endObject(); + return builder; + } + + public static NestedSortBuilder fromXContent(XContentParser parser) throws IOException { + String path = null; + QueryBuilder filter = null; + NestedSortBuilder nestedSort = null; + + XContentParser.Token token = parser.currentToken(); + if (token == XContentParser.Token.START_OBJECT) { + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + String currentName = parser.currentName(); + parser.nextToken(); + if (currentName.equals(PATH_FIELD.getPreferredName())) { + path = parser.text(); + } else if (currentName.equals(FILTER_FIELD.getPreferredName())) { + filter = parseNestedFilter(parser); + } else if (currentName.equals(NESTED_FIELD.getPreferredName())) { + nestedSort = NestedSortBuilder.fromXContent(parser); + } else { + throw new IllegalArgumentException("malformed nested sort format, unknown field name [" + currentName + "]"); + } + } else { + throw new IllegalArgumentException("malformed nested sort format, only field names are allowed"); + } + } + } else { + throw new IllegalArgumentException("malformed nested sort format, must start with an object"); + } + + return new NestedSortBuilder(path).setFilter(filter).setNestedSort(nestedSort); + } + + @Override + public boolean equals(final Object obj) { + if (this == obj) { + return true; + } + if (obj == null || getClass() != obj.getClass()) { + return false; + } + NestedSortBuilder that = (NestedSortBuilder) obj; + return Objects.equals(path, that.path) + && Objects.equals(filter, that.filter) + && Objects.equals(nestedSort, that.nestedSort); + } + + @Override + public int hashCode() { + return Objects.hash(path, filter, nestedSort); + } + + public NestedSortBuilder rewrite(QueryRewriteContext ctx) throws IOException { + if (filter == null && nestedSort == null) { + return this; + } + QueryBuilder rewriteFilter = this.filter; + NestedSortBuilder rewriteNested = this.nestedSort; + if (filter != null) { + rewriteFilter = filter.rewrite(ctx); + } + if (nestedSort != null) { + rewriteNested = nestedSort.rewrite(ctx); + } + if (rewriteFilter != this.filter || rewriteNested != this.nestedSort) { + return new NestedSortBuilder(this.path).setFilter(rewriteFilter).setNestedSort(rewriteNested); + } else { + return this; + } + } +} diff --git a/core/src/main/java/org/elasticsearch/search/sort/ScoreSortBuilder.java b/core/src/main/java/org/elasticsearch/search/sort/ScoreSortBuilder.java index 2a6eb3a561720..c27979fdc748e 100644 --- a/core/src/main/java/org/elasticsearch/search/sort/ScoreSortBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/sort/ScoreSortBuilder.java @@ -124,7 +124,7 @@ public String getWriteableName() { } @Override - public SortBuilder rewrite(QueryRewriteContext ctx) throws IOException { + public ScoreSortBuilder rewrite(QueryRewriteContext ctx) throws IOException { return this; } } diff --git 
a/core/src/main/java/org/elasticsearch/search/sort/ScriptSortBuilder.java b/core/src/main/java/org/elasticsearch/search/sort/ScriptSortBuilder.java index 1e605dc397afb..331988a183fa9 100644 --- a/core/src/main/java/org/elasticsearch/search/sort/ScriptSortBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/sort/ScriptSortBuilder.java @@ -25,10 +25,13 @@ import org.apache.lucene.search.SortField; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.BytesRefBuilder; +import org.elasticsearch.Version; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.xcontent.ConstructingObjectParser; import org.elasticsearch.common.xcontent.ObjectParser.ValueType; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -56,11 +59,13 @@ import java.util.Objects; import static org.elasticsearch.common.xcontent.ConstructingObjectParser.constructorArg; +import static org.elasticsearch.search.sort.NestedSortBuilder.NESTED_FIELD; /** * Script sort builder allows to sort based on a custom script expression. */ public class ScriptSortBuilder extends SortBuilder { + private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(ScriptSortBuilder.class)); public static final String NAME = "_script"; public static final ParseField TYPE_FIELD = new ParseField("type"); @@ -77,6 +82,8 @@ public class ScriptSortBuilder extends SortBuilder { private String nestedPath; + private NestedSortBuilder nestedSort; + /** * Constructs a script sort builder with the given script. * @@ -100,6 +107,7 @@ public ScriptSortBuilder(Script script, ScriptSortType type) { this.sortMode = original.sortMode; this.nestedFilter = original.nestedFilter; this.nestedPath = original.nestedPath; + this.nestedSort = original.nestedSort; } /** @@ -112,6 +120,9 @@ public ScriptSortBuilder(StreamInput in) throws IOException { sortMode = in.readOptionalWriteable(SortMode::readFromStream); nestedPath = in.readOptionalString(); nestedFilter = in.readOptionalNamedWriteable(QueryBuilder.class); + if (in.getVersion().onOrAfter(Version.V_6_1_0)) { + nestedSort = in.readOptionalWriteable(NestedSortBuilder::new); + } } @Override @@ -122,6 +133,9 @@ public void writeTo(StreamOutput out) throws IOException { out.writeOptionalWriteable(sortMode); out.writeOptionalString(nestedPath); out.writeOptionalNamedWriteable(nestedFilter); + if (out.getVersion().onOrAfter(Version.V_6_1_0)) { + out.writeOptionalWriteable(nestedSort); + } } /** @@ -162,15 +176,24 @@ public SortMode sortMode() { /** * Sets the nested filter that the nested objects should match with in order to be taken into account * for sorting. + * + * @deprecated set nested sort with {@link #setNestedSort(NestedSortBuilder)} and retrieve with {@link #getNestedSort()} */ + @Deprecated public ScriptSortBuilder setNestedFilter(QueryBuilder nestedFilter) { + if (this.nestedSort != null) { + throw new IllegalArgumentException("Setting both nested_path/nested_filter and nested not allowed"); + } this.nestedFilter = nestedFilter; return this; } /** * Gets the nested filter. 
+ * + * @deprecated set nested sort with {@link #setNestedSort(NestedSortBuilder)} and retrieve with {@link #getNestedSort()} */ + @Deprecated public QueryBuilder getNestedFilter() { return this.nestedFilter; } @@ -178,19 +201,49 @@ public QueryBuilder getNestedFilter() { /** * Sets the nested path if sorting occurs on a field that is inside a nested object. For sorting by script this * needs to be specified. + * + * @deprecated set nested sort with {@link #setNestedSort(NestedSortBuilder)} and retrieve with {@link #getNestedSort()} */ + @Deprecated public ScriptSortBuilder setNestedPath(String nestedPath) { + if (this.nestedSort != null) { + throw new IllegalArgumentException("Setting both nested_path/nested_filter and nested not allowed"); + } this.nestedPath = nestedPath; return this; } /** * Gets the nested path. + * + * @deprecated set nested sort with {@link #setNestedSort(NestedSortBuilder)} and retrieve with {@link #getNestedSort()} */ + @Deprecated public String getNestedPath() { return this.nestedPath; } + /** + * Returns the {@link NestedSortBuilder} + */ + public NestedSortBuilder getNestedSort() { + return this.nestedSort; + } + + /** + * Sets the {@link NestedSortBuilder} to be used for fields that are inside a nested + * object. The {@link NestedSortBuilder} takes a `path` argument and an optional + * nested filter that the nested objects should match with in + * order to be taken into account for sorting. + */ + public ScriptSortBuilder setNestedSort(final NestedSortBuilder nestedSort) { + if (this.nestedFilter != null || this.nestedPath != null) { + throw new IllegalArgumentException("Setting both nested_path/nested_filter and nested not allowed"); + } + this.nestedSort = nestedSort; + return this; + } + @Override public XContentBuilder toXContent(XContentBuilder builder, Params builderParams) throws IOException { builder.startObject(); @@ -207,6 +260,9 @@ public XContentBuilder toXContent(XContentBuilder builder, Params builderParams) if (nestedFilter != null) { builder.field(NESTED_FILTER_FIELD.getPreferredName(), nestedFilter, builderParams); } + if (nestedSort != null) { + builder.field(NESTED_FIELD.getPreferredName(), nestedSort); + } builder.endObject(); builder.endObject(); return builder; @@ -221,8 +277,15 @@ public XContentBuilder toXContent(XContentBuilder builder, Params builderParams) PARSER.declareField(constructorArg(), p -> ScriptSortType.fromString(p.text()), TYPE_FIELD, ValueType.STRING); PARSER.declareString((b, v) -> b.order(SortOrder.fromString(v)), ORDER_FIELD); PARSER.declareString((b, v) -> b.sortMode(SortMode.fromString(v)), SORTMODE_FIELD); - PARSER.declareString(ScriptSortBuilder::setNestedPath , NESTED_PATH_FIELD); - PARSER.declareObject(ScriptSortBuilder::setNestedFilter, (p,c) -> SortBuilder.parseNestedFilter(p), NESTED_FILTER_FIELD); + PARSER.declareString((fieldSortBuilder, nestedPath) -> { + DEPRECATION_LOGGER.deprecated("[nested_path] has been deprecated in favor of the [nested] parameter"); + fieldSortBuilder.setNestedPath(nestedPath); + }, NESTED_PATH_FIELD); + PARSER.declareObject(ScriptSortBuilder::setNestedFilter, (p, c) -> { + DEPRECATION_LOGGER.deprecated("[nested_filter] has been deprecated in favour for the [nested] parameter"); + return SortBuilder.parseNestedFilter(p); + }, NESTED_FILTER_FIELD); + PARSER.declareObject(ScriptSortBuilder::setNestedSort, (p, c) -> NestedSortBuilder.fromXContent(p), NESTED_FIELD); } /** @@ -253,7 +316,14 @@ public SortFieldAndFormat build(QueryShardContext context) throws IOException { valueMode = 
reverse ? MultiValueMode.MAX : MultiValueMode.MIN; } - final Nested nested = resolveNested(context, nestedPath, nestedFilter); + final Nested nested; + if (nestedSort != null) { + // new nested sorts takes priority + nested = resolveNested(context, nestedSort); + } else { + nested = resolveNested(context, nestedPath, nestedFilter); + } + final IndexFieldData.XFieldComparatorSource fieldComparatorSource; switch (type) { case STRING: @@ -329,12 +399,13 @@ public boolean equals(Object object) { Objects.equals(order, other.order) && Objects.equals(sortMode, other.sortMode) && Objects.equals(nestedFilter, other.nestedFilter) && - Objects.equals(nestedPath, other.nestedPath); + Objects.equals(nestedPath, other.nestedPath) && + Objects.equals(nestedSort, other.nestedSort); } @Override public int hashCode() { - return Objects.hash(script, type, order, sortMode, nestedFilter, nestedPath); + return Objects.hash(script, type, order, sortMode, nestedFilter, nestedPath, nestedSort); } @Override @@ -379,14 +450,22 @@ public String toString() { } @Override - public SortBuilder rewrite(QueryRewriteContext ctx) throws IOException { - if (nestedFilter == null) { + public ScriptSortBuilder rewrite(QueryRewriteContext ctx) throws IOException { + if (nestedFilter == null && nestedSort == null) { return this; } - QueryBuilder rewrite = nestedFilter.rewrite(ctx); - if (nestedFilter == rewrite) { - return this; + if (nestedFilter != null) { + QueryBuilder rewrite = nestedFilter.rewrite(ctx); + if (nestedFilter == rewrite) { + return this; + } + return new ScriptSortBuilder(this).setNestedFilter(rewrite); + } else { + NestedSortBuilder rewrite = nestedSort.rewrite(ctx); + if (nestedSort == rewrite) { + return this; + } + return new ScriptSortBuilder(this).setNestedSort(rewrite); } - return new ScriptSortBuilder(this).setNestedFilter(rewrite); } } diff --git a/core/src/main/java/org/elasticsearch/search/sort/SortBuilder.java b/core/src/main/java/org/elasticsearch/search/sort/SortBuilder.java index 4f786d59b4bc1..61130fc34d6b8 100644 --- a/core/src/main/java/org/elasticsearch/search/sort/SortBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/sort/SortBuilder.java @@ -22,17 +22,19 @@ import org.apache.lucene.search.Query; import org.apache.lucene.search.Sort; import org.apache.lucene.search.SortField; -import org.apache.lucene.search.join.BitSetProducer; -import org.elasticsearch.action.support.ToXContentToBytes; +import org.apache.lucene.search.join.ScoreMode; +import org.apache.lucene.search.join.ToChildBlockJoinQuery; +import org.apache.lucene.search.join.ToParentBlockJoinQuery; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.ParsingException; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.NamedWriteable; import org.elasticsearch.common.lucene.search.Queries; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested; import org.elasticsearch.index.mapper.ObjectMapper; import org.elasticsearch.index.query.QueryBuilder; -import org.elasticsearch.index.query.QueryRewriteContext; import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.index.query.QueryShardException; import org.elasticsearch.index.query.Rewriteable; @@ -49,8 +51,7 @@ import static java.util.Collections.unmodifiableMap; import static org.elasticsearch.index.query.AbstractQueryBuilder.parseInnerQueryBuilder; 
-public abstract class SortBuilder> extends ToXContentToBytes implements NamedWriteable, - Rewriteable>{ +public abstract class SortBuilder> implements NamedWriteable, ToXContentObject, Rewriteable> { protected SortOrder order = SortOrder.ASC; @@ -179,28 +180,85 @@ public static Optional buildSort(List> sortBuilde } protected static Nested resolveNested(QueryShardContext context, String nestedPath, QueryBuilder nestedFilter) throws IOException { - Nested nested = null; - if (nestedPath != null) { - BitSetProducer rootDocumentsFilter = context.bitsetFilter(Queries.newNonNestedFilter()); - ObjectMapper nestedObjectMapper = context.getObjectMapper(nestedPath); - if (nestedObjectMapper == null) { - throw new QueryShardException(context, "[nested] failed to find nested object under path [" + nestedPath + "]"); + NestedSortBuilder nestedSortBuilder = new NestedSortBuilder(nestedPath); + nestedSortBuilder.setFilter(nestedFilter); + return resolveNested(context, nestedSortBuilder); + } + + protected static Nested resolveNested(QueryShardContext context, NestedSortBuilder nestedSort) throws IOException { + return resolveNested(context, nestedSort, null); + } + + private static Nested resolveNested(QueryShardContext context, NestedSortBuilder nestedSort, Nested nested) throws IOException { + if (nestedSort == null || nestedSort.getPath() == null) { + return null; + } + + String nestedPath = nestedSort.getPath(); + QueryBuilder nestedFilter = nestedSort.getFilter(); + NestedSortBuilder nestedNestedSort = nestedSort.getNestedSort(); + + // verify our nested path + ObjectMapper nestedObjectMapper = context.getObjectMapper(nestedPath); + + if (nestedObjectMapper == null) { + throw new QueryShardException(context, "[nested] failed to find nested object under path [" + nestedPath + "]"); + } + if (!nestedObjectMapper.nested().isNested()) { + throw new QueryShardException(context, "[nested] nested object under path [" + nestedPath + "] is not of nested type"); + } + + // get our parent query which will determines our parent documents + Query parentQuery; + ObjectMapper objectMapper = context.nestedScope().getObjectMapper(); + if (objectMapper == null) { + parentQuery = Queries.newNonNestedFilter(); + } else { + parentQuery = objectMapper.nestedTypeFilter(); + } + + // get our child query, potentially applying a users filter + Query childQuery; + try { + context.nestedScope().nextLevel(nestedObjectMapper); + if (nestedFilter != null) { + assert nestedFilter == Rewriteable.rewrite(nestedFilter, context) : "nested filter is not rewritten"; + if (nested == null) { + // this is for back-compat, original single level nested sorting never applied a nested type filter + childQuery = nestedFilter.toFilter(context); + } else { + childQuery = Queries.filtered(nestedObjectMapper.nestedTypeFilter(), nestedFilter.toFilter(context)); + } + } else { + childQuery = nestedObjectMapper.nestedTypeFilter(); } - if (!nestedObjectMapper.nested().isNested()) { - throw new QueryShardException(context, "[nested] nested object under path [" + nestedPath + "] is not of nested type"); + } finally { + context.nestedScope().previousLevel(); + } + + // apply filters from the previous nested level + if (nested != null) { + parentQuery = Queries.filtered(parentQuery, + new ToParentBlockJoinQuery(nested.getInnerQuery(), nested.getRootFilter(), ScoreMode.None)); + + if (objectMapper != null) { + childQuery = Queries.filtered(childQuery, + new ToChildBlockJoinQuery(nested.getInnerQuery(), 
context.bitsetFilter(objectMapper.nestedTypeFilter()))); } - Query innerDocumentsQuery; - if (nestedFilter != null) { + } + + // wrap up our parent and child and either process the next level of nesting or return + final Nested innerNested = new Nested(context.bitsetFilter(parentQuery), childQuery); + if (nestedNestedSort != null) { + try { context.nestedScope().nextLevel(nestedObjectMapper); - assert nestedFilter == Rewriteable.rewrite(nestedFilter, context) : "nested filter is not rewritten"; - innerDocumentsQuery = nestedFilter.toFilter(context); + return resolveNested(context, nestedNestedSort, innerNested); + } finally { context.nestedScope().previousLevel(); - } else { - innerDocumentsQuery = nestedObjectMapper.nestedTypeFilter(); } - nested = new Nested(rootDocumentsFilter, innerDocumentsQuery); + } else { + return innerNested; } - return nested; } protected static QueryBuilder parseNestedFilter(XContentParser parser) { @@ -215,4 +273,9 @@ protected static QueryBuilder parseNestedFilter(XContentParser parser) { private interface Parser> { T fromXContent(XContentParser parser, String elementName) throws IOException; } + + @Override + public String toString() { + return Strings.toString(this, true, true); + } } diff --git a/core/src/main/java/org/elasticsearch/search/suggest/Suggest.java b/core/src/main/java/org/elasticsearch/search/suggest/Suggest.java index f55554a4457aa..c743eb259e96f 100644 --- a/core/src/main/java/org/elasticsearch/search/suggest/Suggest.java +++ b/core/src/main/java/org/elasticsearch/search/suggest/Suggest.java @@ -29,7 +29,7 @@ import org.elasticsearch.common.text.Text; import org.elasticsearch.common.xcontent.ConstructingObjectParser; import org.elasticsearch.common.xcontent.ObjectParser; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; @@ -61,7 +61,7 @@ /** * Top level suggest result, containing the result for each suggestion. */ -public class Suggest implements Iterable>>, Streamable, ToXContent { +public class Suggest implements Iterable>>, Streamable, ToXContentFragment { public static final String NAME = "suggest"; @@ -234,7 +234,7 @@ public List filter(Class suggestionType) { /** * The suggestion responses corresponding with the suggestions in the request. 
*/ - public static class Suggestion implements Iterable, Streamable, ToXContent { + public static class Suggestion implements Iterable, Streamable, ToXContentFragment { private static final String NAME = "suggestion"; diff --git a/core/src/main/java/org/elasticsearch/search/suggest/SuggestBuilder.java b/core/src/main/java/org/elasticsearch/search/suggest/SuggestBuilder.java index fec2b0f3bf31d..10d155eea02d1 100644 --- a/core/src/main/java/org/elasticsearch/search/suggest/SuggestBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/suggest/SuggestBuilder.java @@ -18,14 +18,15 @@ */ package org.elasticsearch.search.suggest; -import org.elasticsearch.action.support.ToXContentToBytes; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.ParsingException; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.lucene.BytesRefs; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.query.QueryShardContext; @@ -44,7 +45,7 @@ * Suggesting works by suggesting terms/phrases that appear in the suggest text that are similar compared * to the terms in provided text. These suggestions are based on several options described in this class. */ -public class SuggestBuilder extends ToXContentToBytes implements Writeable { +public class SuggestBuilder implements Writeable, ToXContentObject { protected static final ParseField GLOBAL_TEXT_FIELD = new ParseField("text"); private String globalText; @@ -200,4 +201,9 @@ public boolean equals(Object other) { public int hashCode() { return Objects.hash(globalText, suggestions); } + + @Override + public String toString() { + return Strings.toString(this, true, true); + } } diff --git a/core/src/main/java/org/elasticsearch/search/suggest/SuggestionBuilder.java b/core/src/main/java/org/elasticsearch/search/suggest/SuggestionBuilder.java index 01fb76f5da373..62d93db405a2a 100644 --- a/core/src/main/java/org/elasticsearch/search/suggest/SuggestionBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/suggest/SuggestionBuilder.java @@ -27,7 +27,8 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.lucene.BytesRefs; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.mapper.MappedFieldType; @@ -41,7 +42,7 @@ /** * Base class for the different suggestion implementations. 
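Many of the interface changes in this patch swap the bare ToXContent marker for either ToXContentObject or ToXContentFragment. A minimal illustration of the distinction, using an invented class: an object renders a complete, self-contained JSON object, while a fragment only contributes fields to an object its caller has already opened.

import java.io.IOException;
import org.elasticsearch.common.xcontent.ToXContentFragment;
import org.elasticsearch.common.xcontent.XContentBuilder;

public class TookStats implements ToXContentFragment {
    private final long tookMillis = 12;

    @Override
    public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
        // no startObject()/endObject(): the enclosing object is owned by the caller
        return builder.field("took_millis", tookMillis);
    }
}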
*/ -public abstract class SuggestionBuilder> implements NamedWriteable, ToXContent { +public abstract class SuggestionBuilder> implements NamedWriteable, ToXContentFragment { protected final String field; protected String text; diff --git a/core/src/main/java/org/elasticsearch/search/suggest/completion/CompletionSuggester.java b/core/src/main/java/org/elasticsearch/search/suggest/completion/CompletionSuggester.java index 0b127b2eeef7d..5690acd7abd97 100644 --- a/core/src/main/java/org/elasticsearch/search/suggest/completion/CompletionSuggester.java +++ b/core/src/main/java/org/elasticsearch/search/suggest/completion/CompletionSuggester.java @@ -18,17 +18,16 @@ */ package org.elasticsearch.search.suggest.completion; +import org.apache.lucene.analysis.CharArraySet; import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.BulkScorer; import org.apache.lucene.search.CollectionTerminatedException; import org.apache.lucene.search.IndexSearcher; import org.apache.lucene.search.Weight; -import org.apache.lucene.search.suggest.Lookup; import org.apache.lucene.search.suggest.document.CompletionQuery; import org.apache.lucene.search.suggest.document.TopSuggestDocs; import org.apache.lucene.search.suggest.document.TopSuggestDocsCollector; import org.apache.lucene.util.CharsRefBuilder; -import org.apache.lucene.util.PriorityQueue; import org.elasticsearch.common.text.Text; import org.elasticsearch.index.mapper.CompletionFieldMapper; import org.elasticsearch.search.suggest.Suggest; @@ -53,12 +52,14 @@ protected Suggest.Suggestion getContexts() { } } - private static final class SuggestDocPriorityQueue extends PriorityQueue { + private final Map docsMap; - SuggestDocPriorityQueue(int maxSize) { - super(maxSize); - } - - @Override - protected boolean lessThan(SuggestDoc a, SuggestDoc b) { - if (a.score == b.score) { - int cmp = Lookup.CHARSEQUENCE_COMPARATOR.compare(a.key, b.key); - if (cmp == 0) { - // prefer smaller doc id, in case of a tie - return a.doc > b.doc; - } else { - return cmp > 0; - } - } - return a.score < b.score; - } - - public SuggestDoc[] getResults() { - int size = size(); - SuggestDoc[] res = new SuggestDoc[size]; - for (int i = size - 1; i >= 0; i--) { - res[i] = pop(); - } - return res; - } - } - - private final int num; - private final SuggestDocPriorityQueue pq; - private final Map scoreDocMap; - - // TODO: expose dup removal - - TopDocumentsCollector(int num) { - super(1, false); // TODO hack, we don't use the underlying pq, so we allocate a size of 1 - this.num = num; - this.scoreDocMap = new LinkedHashMap<>(num); - this.pq = new SuggestDocPriorityQueue(num); - } - - @Override - public int getCountToCollect() { - // This is only needed because we initialize - // the base class with 1 instead of the actual num - return num; - } - - - @Override - protected void doSetNextReader(LeafReaderContext context) throws IOException { - super.doSetNextReader(context); - updateResults(); - } - - private void updateResults() { - for (SuggestDoc suggestDoc : scoreDocMap.values()) { - if (pq.insertWithOverflow(suggestDoc) == suggestDoc) { - break; - } - } - scoreDocMap.clear(); + TopDocumentsCollector(int num, boolean skipDuplicates) { + super(Math.max(1, num), skipDuplicates); + this.docsMap = new LinkedHashMap<>(num); } @Override public void collect(int docID, CharSequence key, CharSequence context, float score) throws IOException { - if (scoreDocMap.containsKey(docID)) { - SuggestDoc suggestDoc = scoreDocMap.get(docID); - suggestDoc.add(key, context, score); - } else 
if (scoreDocMap.size() <= num) { - scoreDocMap.put(docID, new SuggestDoc(docBase + docID, key, context, score)); + int globalDoc = docID + docBase; + if (docsMap.containsKey(globalDoc)) { + docsMap.get(globalDoc).add(key, context, score); } else { - throw new CollectionTerminatedException(); + docsMap.put(globalDoc, new SuggestDoc(globalDoc, key, context, score)); + super.collect(docID, key, context, score); } } @Override public TopSuggestDocs get() throws IOException { - updateResults(); // to empty the last set of collected suggest docs - TopSuggestDocs.SuggestScoreDoc[] suggestScoreDocs = pq.getResults(); - if (suggestScoreDocs.length > 0) { - return new TopSuggestDocs(suggestScoreDocs.length, suggestScoreDocs, suggestScoreDocs[0].score); - } else { + TopSuggestDocs entries = super.get(); + if (entries.scoreDocs.length == 0) { return TopSuggestDocs.EMPTY; } + // The parent class returns suggestions, not documents, and dedup only the surface form (without contexts). + // The following code groups suggestions matching different contexts by document id and dedup the surface form + contexts + // if needed (skip_duplicates). + int size = entries.scoreDocs.length; + final List suggestDocs = new ArrayList(size); + final CharArraySet seenSurfaceForms = doSkipDuplicates() ? new CharArraySet(size, false) : null; + for (TopSuggestDocs.SuggestScoreDoc suggestEntry : entries.scoreLookupDocs()) { + final SuggestDoc suggestDoc; + if (docsMap != null) { + suggestDoc = docsMap.get(suggestEntry.doc); + } else { + suggestDoc = new SuggestDoc(suggestEntry.doc, suggestEntry.key, suggestEntry.context, suggestEntry.score); + } + if (doSkipDuplicates()) { + if (seenSurfaceForms.contains(suggestDoc.key)) { + continue; + } + seenSurfaceForms.add(suggestDoc.key); + } + suggestDocs.add(suggestDoc); + } + return new TopSuggestDocs((int) entries.totalHits, + suggestDocs.toArray(new TopSuggestDocs.SuggestScoreDoc[0]), entries.getMaxScore()); } } } diff --git a/core/src/main/java/org/elasticsearch/search/suggest/completion/CompletionSuggestion.java b/core/src/main/java/org/elasticsearch/search/suggest/completion/CompletionSuggestion.java index 229b77aad2850..6d5cd9e588a36 100644 --- a/core/src/main/java/org/elasticsearch/search/suggest/completion/CompletionSuggestion.java +++ b/core/src/main/java/org/elasticsearch/search/suggest/completion/CompletionSuggestion.java @@ -18,8 +18,10 @@ */ package org.elasticsearch.search.suggest.completion; +import org.apache.lucene.analysis.CharArraySet; import org.apache.lucene.search.ScoreDoc; import org.apache.lucene.search.suggest.Lookup; +import org.elasticsearch.Version; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -68,11 +70,36 @@ public final class CompletionSuggestion extends Suggest.Suggestion> toRe // the global top size entries are collected from the shard results // using a priority queue OptionPriorityQueue priorityQueue = new OptionPriorityQueue(leader.getSize(), COMPARATOR); + // Dedup duplicate suggestions (based on the surface form) if skip duplicates is activated + final CharArraySet seenSurfaceForms = leader.skipDuplicates ? 
new CharArraySet(leader.getSize(), false) : null; for (Suggest.Suggestion suggestion : toReduce) { assert suggestion.getName().equals(name) : "name should be identical across all suggestions"; for (Entry.Option option : ((CompletionSuggestion) suggestion).getOptions()) { + if (leader.skipDuplicates) { + assert ((CompletionSuggestion) suggestion).skipDuplicates; + String text = option.getText().string(); + if (seenSurfaceForms.contains(text)) { + continue; + } + seenSurfaceForms.add(text); + } if (option == priorityQueue.insertWithOverflow(option)) { // if the current option has overflown from pq, // we can assume all of the successive options @@ -157,7 +194,7 @@ public static CompletionSuggestion reduceTo(List> toRe } } } - final CompletionSuggestion suggestion = new CompletionSuggestion(leader.getName(), leader.getSize()); + final CompletionSuggestion suggestion = new CompletionSuggestion(leader.getName(), leader.getSize(), leader.skipDuplicates); final Entry entry = new Entry(leaderEntry.getText(), leaderEntry.getOffset(), leaderEntry.getLength()); Collections.addAll(entry.getOptions(), priorityQueue.get()); suggestion.addTerm(entry); diff --git a/core/src/main/java/org/elasticsearch/search/suggest/completion/CompletionSuggestionBuilder.java b/core/src/main/java/org/elasticsearch/search/suggest/completion/CompletionSuggestionBuilder.java index 462aa8e271bad..224204bfc8dd3 100644 --- a/core/src/main/java/org/elasticsearch/search/suggest/completion/CompletionSuggestionBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/suggest/completion/CompletionSuggestionBuilder.java @@ -19,6 +19,7 @@ package org.elasticsearch.search.suggest.completion; import org.elasticsearch.ElasticsearchParseException; +import org.elasticsearch.Version; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.StreamInput; @@ -57,6 +58,7 @@ public class CompletionSuggestionBuilder extends SuggestionBuilderfalse. 
+ */ + public CompletionSuggestionBuilder skipDuplicates(boolean skipDuplicates) { + this.skipDuplicates = skipDuplicates; + return this; + } + private static class InnerBuilder extends CompletionSuggestionBuilder { private String field; @@ -231,6 +257,9 @@ protected XContentBuilder innerToXContent(XContentBuilder builder, Params params if (regexOptions != null) { regexOptions.toXContent(builder, params); } + if (skipDuplicates) { + builder.field(SKIP_DUPLICATES_FIELD.getPreferredName(), skipDuplicates); + } if (contextBytes != null) { builder.rawField(CONTEXTS_FIELD.getPreferredName(), contextBytes); } @@ -255,8 +284,12 @@ public SuggestionContext build(QueryShardContext context) throws IOException { // copy over common settings to each suggestion builder final MapperService mapperService = context.getMapperService(); populateCommonFields(mapperService, suggestionContext); + suggestionContext.setSkipDuplicates(skipDuplicates); suggestionContext.setFuzzyOptions(fuzzyOptions); suggestionContext.setRegexOptions(regexOptions); + if (shardSize != null) { + suggestionContext.setShardSize(shardSize); + } MappedFieldType mappedFieldType = mapperService.fullName(suggestionContext.getField()); if (mappedFieldType == null || mappedFieldType instanceof CompletionFieldMapper.CompletionFieldType == false) { throw new IllegalArgumentException("Field [" + suggestionContext.getField() + "] is not a completion suggest field"); @@ -302,13 +335,14 @@ public String getWriteableName() { @Override protected boolean doEquals(CompletionSuggestionBuilder other) { - return Objects.equals(fuzzyOptions, other.fuzzyOptions) && + return skipDuplicates == other.skipDuplicates && + Objects.equals(fuzzyOptions, other.fuzzyOptions) && Objects.equals(regexOptions, other.regexOptions) && Objects.equals(contextBytes, other.contextBytes); } @Override protected int doHashCode() { - return Objects.hash(fuzzyOptions, regexOptions, contextBytes); + return Objects.hash(fuzzyOptions, regexOptions, contextBytes, skipDuplicates); } } diff --git a/core/src/main/java/org/elasticsearch/search/suggest/completion/CompletionSuggestionContext.java b/core/src/main/java/org/elasticsearch/search/suggest/completion/CompletionSuggestionContext.java index b12b90de107ed..e7c0b45745bd6 100644 --- a/core/src/main/java/org/elasticsearch/search/suggest/completion/CompletionSuggestionContext.java +++ b/core/src/main/java/org/elasticsearch/search/suggest/completion/CompletionSuggestionContext.java @@ -40,6 +40,7 @@ protected CompletionSuggestionContext(QueryShardContext shardContext) { private CompletionFieldMapper.CompletionFieldType fieldType; private FuzzyOptions fuzzyOptions; private RegexOptions regexOptions; + private boolean skipDuplicates; private Map> queryContexts = Collections.emptyMap(); CompletionFieldMapper.CompletionFieldType getFieldType() { @@ -62,6 +63,10 @@ void setQueryContexts(Map> que this.queryContexts = queryContexts; } + void setSkipDuplicates(boolean skipDuplicates) { + this.skipDuplicates = skipDuplicates; + } + public FuzzyOptions getFuzzyOptions() { return fuzzyOptions; } @@ -74,6 +79,10 @@ public Map> getQueryContexts() return queryContexts; } + public boolean isSkipDuplicates() { + return skipDuplicates; + } + CompletionQuery toQuery() { CompletionFieldMapper.CompletionFieldType fieldType = getFieldType(); final CompletionQuery query; diff --git a/core/src/main/java/org/elasticsearch/search/suggest/completion/RegexOptions.java b/core/src/main/java/org/elasticsearch/search/suggest/completion/RegexOptions.java index 
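For context, a short usage sketch of the skip_duplicates option wired in above; surface forms are then de-duplicated both in the shard-level collector and in the reduce phase. The suggestion name and completion-mapped field name are illustrative only.

import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.suggest.SuggestBuilder;
import org.elasticsearch.search.suggest.SuggestBuilders;

static SearchSourceBuilder suggestSource() {
    return new SearchSourceBuilder()
            .suggest(new SuggestBuilder()
                    .addSuggestion("song-suggest",
                            SuggestBuilders.completionSuggestion("suggest")  // completion field (assumed name)
                                    .prefix("nir")
                                    .skipDuplicates(true)                    // drop duplicate surface forms
                                    .size(5)));
}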
f330322a9f065..814fca29b0f8f 100644 --- a/core/src/main/java/org/elasticsearch/search/suggest/completion/RegexOptions.java +++ b/core/src/main/java/org/elasticsearch/search/suggest/completion/RegexOptions.java @@ -27,7 +27,8 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.xcontent.ObjectParser; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.query.RegexpFlag; @@ -37,7 +38,7 @@ /** * Regular expression options for completion suggester */ -public class RegexOptions implements ToXContent, Writeable { +public class RegexOptions implements ToXContentFragment, Writeable { static final ParseField REGEX_OPTIONS = new ParseField("regex"); private static final ParseField FLAGS_VALUE = new ParseField("flags", "flags_value"); private static final ParseField MAX_DETERMINIZED_STATES = new ParseField("max_determinized_states"); diff --git a/core/src/main/java/org/elasticsearch/search/suggest/phrase/CandidateScorer.java b/core/src/main/java/org/elasticsearch/search/suggest/phrase/CandidateScorer.java index d24ce6b3c29a0..3928a16b7c9a0 100644 --- a/core/src/main/java/org/elasticsearch/search/suggest/phrase/CandidateScorer.java +++ b/core/src/main/java/org/elasticsearch/search/suggest/phrase/CandidateScorer.java @@ -93,7 +93,7 @@ public void findCandidates(CandidateSet[] candidates, Candidate[] path, int ord, private void updateTop(CandidateSet[] candidates, Candidate[] path, PriorityQueue corrections, double cutoffScore, double score) throws IOException { score = Math.exp(score); - assert Math.abs(score - score(path, candidates)) < 0.00001; + assert Math.abs(score - score(path, candidates)) < 0.00001 : "cur_score=" + score + ", path_score=" + score(path,candidates); if (score > cutoffScore) { if (corrections.size() < maxNumCorrections) { Candidate[] c = new Candidate[candidates.length]; diff --git a/core/src/main/java/org/elasticsearch/search/suggest/phrase/LaplaceScorer.java b/core/src/main/java/org/elasticsearch/search/suggest/phrase/LaplaceScorer.java index 562da44846652..d9797a4207e22 100644 --- a/core/src/main/java/org/elasticsearch/search/suggest/phrase/LaplaceScorer.java +++ b/core/src/main/java/org/elasticsearch/search/suggest/phrase/LaplaceScorer.java @@ -38,10 +38,15 @@ final class LaplaceScorer extends WordScorer { return this.alpha; } + @Override + protected double scoreUnigram(Candidate word) throws IOException { + return (alpha + frequency(word.term)) / (vocabluarySize + alpha * numTerms); + } + @Override protected double scoreBigram(Candidate word, Candidate w_1) throws IOException { join(separator, spare, w_1.term, word.term); - return (alpha + frequency(spare.get())) / (alpha + w_1.frequency + vocabluarySize); + return (alpha + frequency(spare.get())) / (w_1.frequency + alpha * numTerms); } @Override @@ -49,7 +54,7 @@ protected double scoreTrigram(Candidate word, Candidate w_1, Candidate w_2) thro join(separator, spare, w_2.term, w_1.term, word.term); long trigramCount = frequency(spare.get()); join(separator, spare, w_1.term, word.term); - return (alpha + trigramCount) / (alpha + frequency(spare.get()) + vocabluarySize); + return (alpha + trigramCount) / (frequency(spare.get()) + alpha * numTerms); } diff --git 
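The LaplaceScorer change above replaces the ad-hoc denominator with standard add-alpha smoothing: the bigram count is normalized by the preceding term's frequency plus alpha times the vocabulary size (numTerms). A stripped-down sketch of the smoothed bigram estimate, with plain long counts standing in for the TermsEnum lookups:

/**
 * Add-alpha (Laplace) smoothed bigram probability:
 * P(w | w_1) = (count(w_1 w) + alpha) / (count(w_1) + alpha * |V|)
 */
static double smoothedBigram(long bigramCount, long precedingCount, long vocabularySize, double alpha) {
    return (alpha + bigramCount) / (precedingCount + alpha * vocabularySize);
}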
a/core/src/main/java/org/elasticsearch/search/suggest/phrase/PhraseSuggestionBuilder.java b/core/src/main/java/org/elasticsearch/search/suggest/phrase/PhraseSuggestionBuilder.java index 9cad283c9363d..77f8fb55aa74e 100644 --- a/core/src/main/java/org/elasticsearch/search/suggest/phrase/PhraseSuggestionBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/suggest/phrase/PhraseSuggestionBuilder.java @@ -27,7 +27,7 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.lucene.BytesRefs; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.XContentParser.Token; @@ -728,7 +728,7 @@ protected int doHashCode() { /** * {@link CandidateGenerator} interface. */ - public interface CandidateGenerator extends Writeable, ToXContent { + public interface CandidateGenerator extends Writeable, ToXContentObject { String getType(); PhraseSuggestionContext.DirectCandidateGenerator build(MapperService mapperService) throws IOException; diff --git a/core/src/main/java/org/elasticsearch/search/suggest/phrase/SmoothingModel.java b/core/src/main/java/org/elasticsearch/search/suggest/phrase/SmoothingModel.java index 6fd7d6ea5397a..f0fbc81ecd7f0 100644 --- a/core/src/main/java/org/elasticsearch/search/suggest/phrase/SmoothingModel.java +++ b/core/src/main/java/org/elasticsearch/search/suggest/phrase/SmoothingModel.java @@ -21,14 +21,15 @@ import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.io.stream.NamedWriteable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.search.suggest.phrase.WordScorer.WordScorerFactory; import java.io.IOException; -public abstract class SmoothingModel implements NamedWriteable, ToXContent { +public abstract class SmoothingModel implements NamedWriteable, ToXContentFragment { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { diff --git a/core/src/main/java/org/elasticsearch/search/suggest/phrase/WordScorer.java b/core/src/main/java/org/elasticsearch/search/suggest/phrase/WordScorer.java index 32d4feb4b27a2..22515489ee252 100644 --- a/core/src/main/java/org/elasticsearch/search/suggest/phrase/WordScorer.java +++ b/core/src/main/java/org/elasticsearch/search/suggest/phrase/WordScorer.java @@ -40,8 +40,8 @@ public abstract class WordScorer { protected final double realWordLikelyhood; protected final BytesRefBuilder spare = new BytesRefBuilder(); protected final BytesRef separator; + protected final long numTerms; private final TermsEnum termsEnum; - private final long numTerms; private final boolean useTotalTermFreq; public WordScorer(IndexReader reader, String field, double realWordLikelyHood, BytesRef separator) throws IOException { @@ -57,7 +57,11 @@ public WordScorer(IndexReader reader, Terms terms, String field, double realWord final long vocSize = terms.getSumTotalTermFreq(); this.vocabluarySize = vocSize == -1 ? 
reader.maxDoc() : vocSize; this.useTotalTermFreq = vocSize != -1; - this.numTerms = terms.size(); + // terms.size() might be -1 if it's a MultiTerms instance. In that case, + // use reader.maxDoc() as an approximation. This also protects from + // division by zero, by scoreUnigram. + final long nTerms = terms.size(); + this.numTerms = nTerms == -1 ? reader.maxDoc() : nTerms; this.termsEnum = new FreqTermsEnum(reader, field, !useTotalTermFreq, useTotalTermFreq, null, BigArrays.NON_RECYCLING_INSTANCE); // non recycling for now this.reader = reader; this.realWordLikelyhood = realWordLikelyHood; diff --git a/core/src/main/java/org/elasticsearch/snapshots/RestoreInfo.java b/core/src/main/java/org/elasticsearch/snapshots/RestoreInfo.java index b7db7d25d8b16..36e80501fc1b1 100644 --- a/core/src/main/java/org/elasticsearch/snapshots/RestoreInfo.java +++ b/core/src/main/java/org/elasticsearch/snapshots/RestoreInfo.java @@ -21,7 +21,8 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.RestStatus; @@ -35,7 +36,7 @@ *


    * Returned as part of {@link org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse} */ -public class RestoreInfo implements ToXContent, Streamable { +public class RestoreInfo implements ToXContentObject, Streamable { private String name; diff --git a/core/src/main/java/org/elasticsearch/snapshots/RestoreService.java b/core/src/main/java/org/elasticsearch/snapshots/RestoreService.java index e6e6bc82173bf..c92400bf35fab 100644 --- a/core/src/main/java/org/elasticsearch/snapshots/RestoreService.java +++ b/core/src/main/java/org/elasticsearch/snapshots/RestoreService.java @@ -82,6 +82,7 @@ import java.util.Objects; import java.util.Optional; import java.util.Set; +import java.util.function.Predicate; import java.util.stream.Collectors; import static java.util.Collections.unmodifiableSet; @@ -386,40 +387,45 @@ private IndexMetaData updateIndexSettings(IndexMetaData indexMetaData, Settings } Settings normalizedChangeSettings = Settings.builder().put(changeSettings).normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX).build(); IndexMetaData.Builder builder = IndexMetaData.builder(indexMetaData); - Map settingsMap = new HashMap<>(indexMetaData.getSettings().getAsMap()); + Settings settings = indexMetaData.getSettings(); + Set keyFilters = new HashSet<>(); List simpleMatchPatterns = new ArrayList<>(); for (String ignoredSetting : ignoreSettings) { if (!Regex.isSimpleMatchPattern(ignoredSetting)) { if (UNREMOVABLE_SETTINGS.contains(ignoredSetting)) { throw new SnapshotRestoreException(snapshot, "cannot remove setting [" + ignoredSetting + "] on restore"); } else { - settingsMap.remove(ignoredSetting); + keyFilters.add(ignoredSetting); } } else { simpleMatchPatterns.add(ignoredSetting); } } - if (!simpleMatchPatterns.isEmpty()) { - String[] removePatterns = simpleMatchPatterns.toArray(new String[simpleMatchPatterns.size()]); - Iterator> iterator = settingsMap.entrySet().iterator(); - while (iterator.hasNext()) { - Map.Entry entry = iterator.next(); - if (UNREMOVABLE_SETTINGS.contains(entry.getKey()) == false) { - if (Regex.simpleMatch(removePatterns, entry.getKey())) { - iterator.remove(); + Predicate settingsFilter = k -> { + if (UNREMOVABLE_SETTINGS.contains(k) == false) { + for (String filterKey : keyFilters) { + if (k.equals(filterKey)) { + return false; + } + } + for (String pattern : simpleMatchPatterns) { + if (Regex.simpleMatch(pattern, k)) { + return false; } } } - } - for(Map.Entry entry : normalizedChangeSettings.getAsMap().entrySet()) { - if (UNMODIFIABLE_SETTINGS.contains(entry.getKey())) { - throw new SnapshotRestoreException(snapshot, "cannot modify setting [" + entry.getKey() + "] on restore"); - } else { - settingsMap.put(entry.getKey(), entry.getValue()); - } - } - - return builder.settings(Settings.builder().put(settingsMap)).build(); + return true; + }; + Settings.Builder settingsBuilder = Settings.builder() + .put(settings.filter(settingsFilter)) + .put(normalizedChangeSettings.filter(k -> { + if (UNMODIFIABLE_SETTINGS.contains(k)) { + throw new SnapshotRestoreException(snapshot, "cannot modify setting [" + k + "] on restore"); + } else { + return true; + } + })); + return builder.settings(settingsBuilder).build(); } private void restoreGlobalStateIfRequested(MetaData.Builder mdBuilder) { diff --git a/core/src/main/java/org/elasticsearch/snapshots/SnapshotId.java b/core/src/main/java/org/elasticsearch/snapshots/SnapshotId.java index ffd7547099c66..b80dfd94d759b 100644 --- a/core/src/main/java/org/elasticsearch/snapshots/SnapshotId.java +++ 
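The updateIndexSettings rewrite above replaces manual map surgery with predicates passed to Settings.filter. A condensed sketch of the same idea, assuming a set of exact keys and simple wildcard patterns to ignore (the concrete setting names are invented):

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.function.Predicate;

import org.elasticsearch.common.regex.Regex;
import org.elasticsearch.common.settings.Settings;

static Settings restoreSettings(Settings snapshotSettings, Settings requestedChanges) {
    Set<String> ignoredKeys = new HashSet<>(Arrays.asList("index.refresh_interval"));
    String[] ignoredPatterns = new String[] {"index.routing.allocation.*"};
    Predicate<String> keep = key -> ignoredKeys.contains(key) == false
            && Regex.simpleMatch(ignoredPatterns, key) == false;
    return Settings.builder()
            .put(snapshotSettings.filter(keep))   // drop ignored settings from the snapshot
            .put(requestedChanges)                // then overlay the requested modifications
            .build();
}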
b/core/src/main/java/org/elasticsearch/snapshots/SnapshotId.java @@ -22,7 +22,8 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; @@ -32,7 +33,7 @@ /** * SnapshotId - snapshot name + snapshot UUID */ -public final class SnapshotId implements Comparable, Writeable, ToXContent { +public final class SnapshotId implements Comparable, Writeable, ToXContentObject { private static final String NAME = "name"; private static final String UUID = "uuid"; diff --git a/core/src/main/java/org/elasticsearch/snapshots/SnapshotsService.java b/core/src/main/java/org/elasticsearch/snapshots/SnapshotsService.java index 037db4d5caf66..0804e69e46e23 100644 --- a/core/src/main/java/org/elasticsearch/snapshots/SnapshotsService.java +++ b/core/src/main/java/org/elasticsearch/snapshots/SnapshotsService.java @@ -425,6 +425,15 @@ public void onFailure(String source, Exception e) { removeSnapshotFromClusterState(snapshot.snapshot(), null, e, new CleanupAfterErrorListener(snapshot, true, userCreateSnapshotListener, e)); } + @Override + public void onNoLongerMaster(String source) { + // We are not longer a master - we shouldn't try to do any cleanup + // The new master will take care of it + logger.warn("[{}] failed to create snapshot - no longer a master", snapshot.snapshot().getSnapshotId()); + userCreateSnapshotListener.onFailure( + new SnapshotException(snapshot.snapshot(), "master changed during snapshot initialization")); + } + @Override public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) { // The userCreateSnapshotListener.onResponse() notifies caller that the snapshot was accepted @@ -473,6 +482,10 @@ public void onFailure(Exception e) { cleanupAfterError(e); } + public void onNoLongerMaster(String source) { + userCreateSnapshotListener.onFailure(e); + } + private void cleanupAfterError(Exception exception) { if(snapshotCreated) { try { @@ -628,7 +641,8 @@ private SnapshotShardFailure findShardFailure(List shardFa public void applyClusterState(ClusterChangedEvent event) { try { if (event.localNodeMaster()) { - if (event.nodesRemoved()) { + // We don't remove old master when master flips anymore. 
So, we need to check for change in master + if (event.nodesRemoved() || event.previousState().nodes().isLocalNodeElectedMaster() == false) { processSnapshotsOnRemovedNodes(event); } if (event.routingTableChanged()) { @@ -981,7 +995,7 @@ private void removeSnapshotFromClusterState(final Snapshot snapshot, final Snaps * @param listener listener to notify when snapshot information is removed from the cluster state */ private void removeSnapshotFromClusterState(final Snapshot snapshot, final SnapshotInfo snapshotInfo, final Exception failure, - @Nullable ActionListener listener) { + @Nullable CleanupAfterErrorListener listener) { clusterService.submitStateUpdateTask("remove snapshot metadata", new ClusterStateUpdateTask() { @Override @@ -1013,6 +1027,13 @@ public void onFailure(String source, Exception e) { } } + @Override + public void onNoLongerMaster(String source) { + if (listener != null) { + listener.onNoLongerMaster(source); + } + } + @Override public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) { for (SnapshotCompletionListener listener : snapshotCompletionListeners) { @@ -1183,9 +1204,16 @@ public void onSnapshotCompletion(Snapshot completedSnapshot, SnapshotInfo snapsh if (completedSnapshot.equals(snapshot)) { logger.debug("deleted snapshot completed - deleting files"); removeListener(this); - threadPool.executor(ThreadPool.Names.SNAPSHOT).execute(() -> - deleteSnapshot(completedSnapshot.getRepository(), completedSnapshot.getSnapshotId().getName(), - listener, true) + threadPool.executor(ThreadPool.Names.SNAPSHOT).execute(() -> { + try { + deleteSnapshot(completedSnapshot.getRepository(), completedSnapshot.getSnapshotId().getName(), + listener, true); + + } catch (Exception ex) { + logger.warn((Supplier) () -> + new ParameterizedMessage("[{}] failed to delete snapshot", snapshot), ex); + } + } ); } } diff --git a/core/src/main/java/org/elasticsearch/tasks/Task.java b/core/src/main/java/org/elasticsearch/tasks/Task.java index bc2e8418141dc..e59970b84ee47 100644 --- a/core/src/main/java/org/elasticsearch/tasks/Task.java +++ b/core/src/main/java/org/elasticsearch/tasks/Task.java @@ -24,6 +24,7 @@ import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.common.io.stream.NamedWriteable; import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import java.io.IOException; @@ -146,7 +147,7 @@ public Status getStatus() { return null; } - public interface Status extends ToXContent, NamedWriteable {} + public interface Status extends ToXContentObject, NamedWriteable {} public TaskResult result(DiscoveryNode node, Exception error) throws IOException { return new TaskResult(taskInfo(node.getId(), true), error); diff --git a/core/src/main/java/org/elasticsearch/tasks/TaskInfo.java b/core/src/main/java/org/elasticsearch/tasks/TaskInfo.java index bfd2addb9c598..d0fd66703e09e 100644 --- a/core/src/main/java/org/elasticsearch/tasks/TaskInfo.java +++ b/core/src/main/java/org/elasticsearch/tasks/TaskInfo.java @@ -26,7 +26,8 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.xcontent.ConstructingObjectParser; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; @@ -45,7 +46,7 @@ * and use 
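The SnapshotsService changes above add an onNoLongerMaster hook so that a master failover during snapshot creation fails the caller instead of attempting cleanup on a node that no longer owns the cluster state. A bare-bones sketch of that pattern; the task name, the listener, and the surrounding method are hypothetical:

import org.elasticsearch.action.ActionListener;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.ClusterStateUpdateTask;
import org.elasticsearch.cluster.NotMasterException;
import org.elasticsearch.cluster.service.ClusterService;

static void submitExampleTask(ClusterService clusterService, ActionListener<Void> listener) {
    clusterService.submitStateUpdateTask("example-task", new ClusterStateUpdateTask() {
        @Override
        public ClusterState execute(ClusterState currentState) {
            return currentState;   // no-op; a real task would mutate snapshot metadata here
        }

        @Override
        public void onNoLongerMaster(String source) {
            // the newly elected master owns any follow-up work, so just fail the caller
            listener.onFailure(new NotMasterException("no longer master during [" + source + "]"));
        }

        @Override
        public void onFailure(String source, Exception e) {
            listener.onFailure(e);
        }
    });
}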
in APIs. Instead, immutable and streamable TaskInfo objects are used to represent * snapshot information about currently running tasks. */ -public final class TaskInfo implements Writeable, ToXContent { +public final class TaskInfo implements Writeable, ToXContentFragment { private final TaskId taskId; private final String type; diff --git a/core/src/main/java/org/elasticsearch/tasks/TaskResult.java b/core/src/main/java/org/elasticsearch/tasks/TaskResult.java index ba80879b5df8f..e845203c275e2 100644 --- a/core/src/main/java/org/elasticsearch/tasks/TaskResult.java +++ b/core/src/main/java/org/elasticsearch/tasks/TaskResult.java @@ -29,6 +29,8 @@ import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.xcontent.ConstructingObjectParser; import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentHelper; @@ -47,7 +49,7 @@ * Information about a running task or a task that stored its result. Running tasks just have a {@link #getTask()} while * tasks with stored result will have either a {@link #getError()} or {@link #getResponse()}. */ -public final class TaskResult implements Writeable, ToXContent { +public final class TaskResult implements Writeable, ToXContentObject { private final boolean completed; private final TaskInfo task; @Nullable diff --git a/core/src/main/java/org/elasticsearch/threadpool/Scheduler.java b/core/src/main/java/org/elasticsearch/threadpool/Scheduler.java new file mode 100644 index 0000000000000..2901fc1f7a8ed --- /dev/null +++ b/core/src/main/java/org/elasticsearch/threadpool/Scheduler.java @@ -0,0 +1,209 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.threadpool; + +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.util.concurrent.AbstractRunnable; +import org.elasticsearch.common.util.concurrent.EsAbortPolicy; +import org.elasticsearch.common.util.concurrent.EsExecutors; +import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException; + +import java.util.concurrent.ScheduledFuture; +import java.util.concurrent.ScheduledThreadPoolExecutor; +import java.util.concurrent.TimeUnit; +import java.util.function.Consumer; + +/** + * Scheduler that allows to schedule one-shot and periodic commands. 
+ */ +public interface Scheduler { + + static ScheduledThreadPoolExecutor initScheduler(Settings settings) { + ScheduledThreadPoolExecutor scheduler = new ScheduledThreadPoolExecutor(1, + EsExecutors.daemonThreadFactory(settings, "scheduler"), new EsAbortPolicy()); + scheduler.setExecuteExistingDelayedTasksAfterShutdownPolicy(false); + scheduler.setContinueExistingPeriodicTasksAfterShutdownPolicy(false); + scheduler.setRemoveOnCancelPolicy(true); + return scheduler; + } + + static boolean terminate(ScheduledThreadPoolExecutor scheduledThreadPoolExecutor, long timeout, TimeUnit timeUnit) { + scheduledThreadPoolExecutor.shutdown(); + if (awaitTermination(scheduledThreadPoolExecutor, timeout, timeUnit)) { + return true; + } + // last resort + scheduledThreadPoolExecutor.shutdownNow(); + return awaitTermination(scheduledThreadPoolExecutor, timeout, timeUnit); + } + + static boolean awaitTermination(final ScheduledThreadPoolExecutor scheduledThreadPoolExecutor, + final long timeout, final TimeUnit timeUnit) { + try { + if (scheduledThreadPoolExecutor.awaitTermination(timeout, timeUnit)) { + return true; + } + } catch (InterruptedException e) { + Thread.currentThread().interrupt(); + } + return false; + } + + /** + * Does nothing by default but can be used by subclasses to save the current thread context and wraps the command in a Runnable + * that restores that context before running the command. + */ + default Runnable preserveContext(Runnable command) { + return command; + } + + /** + * Schedules a one-shot command to be run after a given delay. The command is not run in the context of the calling thread. + * To preserve the context of the calling thread you may call {@link #preserveContext(Runnable)} on the runnable before passing + * it to this method. + * The command runs on scheduler thread. Do not run blocking calls on the scheduler thread. Subclasses may allow + * to execute on a different executor, in which case blocking calls are allowed. + * + * @param delay delay before the task executes + * @param executor the name of the executor that has to execute this task. Ignored in the default implementation but can be used + * by subclasses that support multiple executors. + * @param command the command to run + * @return a ScheduledFuture who's get will return when the task has been added to its target thread pool and throws an exception if + * the task is canceled before it was added to its target thread pool. Once the task has been added to its target thread pool + * the ScheduledFuture cannot interact with it. + * @throws EsRejectedExecutionException if the task cannot be scheduled for execution + */ + ScheduledFuture schedule(TimeValue delay, String executor, Runnable command); + + /** + * Schedules a periodic action that runs on scheduler thread. Do not run blocking calls on the scheduler thread. Subclasses may allow + * to execute on a different executor, in which case blocking calls are allowed. + * + * @param command the action to take + * @param interval the delay interval + * @param executor the name of the executor that has to execute this task. Ignored in the default implementation but can be used + * by subclasses that support multiple executors. + * @return a {@link Cancellable} that can be used to cancel the subsequent runs of the command. If the command is running, it will + * not be interrupted. 
+ */ + default Cancellable scheduleWithFixedDelay(Runnable command, TimeValue interval, String executor) { + return new ReschedulingRunnable(command, interval, executor, this, (e) -> {}, (e) -> {}); + } + + /** + * This interface represents an object whose execution may be cancelled during runtime. + */ + interface Cancellable { + + /** + * Cancel the execution of this object. This method is idempotent. + */ + void cancel(); + + /** + * Check if the execution has been cancelled + * @return true if cancelled + */ + boolean isCancelled(); + } + + /** + * This class encapsulates the scheduling of a {@link Runnable} that needs to be repeated on a interval. For example, checking a value + * for cleanup every second could be done by passing in a Runnable that can perform the check and the specified interval between + * executions of this runnable. NOTE: the runnable is only rescheduled to run again after completion of the runnable. + * + * For this class, completion means that the call to {@link Runnable#run()} returned or an exception was thrown and caught. In + * case of an exception, this class will log the exception and reschedule the runnable for its next execution. This differs from the + * {@link ScheduledThreadPoolExecutor#scheduleWithFixedDelay(Runnable, long, long, TimeUnit)} semantics as an exception there would + * terminate the rescheduling of the runnable. + */ + final class ReschedulingRunnable extends AbstractRunnable implements Cancellable { + + private final Runnable runnable; + private final TimeValue interval; + private final String executor; + private final Scheduler scheduler; + private final Consumer rejectionConsumer; + private final Consumer failureConsumer; + + private volatile boolean run = true; + + /** + * Creates a new rescheduling runnable and schedules the first execution to occur after the interval specified + * + * @param runnable the {@link Runnable} that should be executed periodically + * @param interval the time interval between executions + * @param executor the executor where this runnable should be scheduled to run + * @param scheduler the {@link Scheduler} instance to use for scheduling + */ + ReschedulingRunnable(Runnable runnable, TimeValue interval, String executor, Scheduler scheduler, + Consumer rejectionConsumer, Consumer failureConsumer) { + this.runnable = runnable; + this.interval = interval; + this.executor = executor; + this.scheduler = scheduler; + this.rejectionConsumer = rejectionConsumer; + this.failureConsumer = failureConsumer; + scheduler.schedule(interval, executor, this); + } + + @Override + public void cancel() { + run = false; + } + + @Override + public boolean isCancelled() { + return run == false; + } + + @Override + public void doRun() { + // always check run here since this may have been cancelled since the last execution and we do not want to run + if (run) { + runnable.run(); + } + } + + @Override + public void onFailure(Exception e) { + failureConsumer.accept(e); + } + + @Override + public void onRejection(Exception e) { + run = false; + rejectionConsumer.accept(e); + } + + @Override + public void onAfter() { + // if this has not been cancelled reschedule it to run again + if (run) { + try { + scheduler.schedule(interval, executor, this); + } catch (final EsRejectedExecutionException e) { + onRejection(e); + } + } + } + } +} diff --git a/core/src/main/java/org/elasticsearch/threadpool/ThreadPool.java b/core/src/main/java/org/elasticsearch/threadpool/ThreadPool.java index a05ffd117abe7..e179650aeef03 100644 --- 
a/core/src/main/java/org/elasticsearch/threadpool/ThreadPool.java +++ b/core/src/main/java/org/elasticsearch/threadpool/ThreadPool.java @@ -33,10 +33,7 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.SizeValue; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.common.util.concurrent.AbstractRunnable; -import org.elasticsearch.common.util.concurrent.EsAbortPolicy; import org.elasticsearch.common.util.concurrent.EsExecutors; -import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException; import org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor; import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.common.util.concurrent.XRejectedExecutionHandler; @@ -64,7 +61,7 @@ import static java.util.Collections.unmodifiableMap; -public class ThreadPool extends AbstractComponent implements Closeable { +public class ThreadPool extends AbstractComponent implements Scheduler, Closeable { public static class Names { public static final String SAME = "same"; @@ -143,8 +140,6 @@ public static ThreadPoolType fromType(String type) { private Map executors = new HashMap<>(); - private final ScheduledThreadPoolExecutor scheduler; - private final CachedTimeThread cachedTimeThread; static final ExecutorService DIRECT_EXECUTOR = EsExecutors.newDirectExecutorService(); @@ -153,6 +148,8 @@ public static ThreadPoolType fromType(String type) { private final Map builders; + private final ScheduledThreadPoolExecutor scheduler; + public Collection builders() { return Collections.unmodifiableCollection(builders.values()); } @@ -210,12 +207,7 @@ public ThreadPool(final Settings settings, final ExecutorBuilder... customBui executors.put(Names.SAME, new ExecutorHolder(DIRECT_EXECUTOR, new Info(Names.SAME, ThreadPoolType.DIRECT))); this.executors = unmodifiableMap(executors); - - this.scheduler = new ScheduledThreadPoolExecutor(1, EsExecutors.daemonThreadFactory(settings, "scheduler"), new EsAbortPolicy()); - this.scheduler.setExecuteExistingDelayedTasksAfterShutdownPolicy(false); - this.scheduler.setContinueExistingPeriodicTasksAfterShutdownPolicy(false); - this.scheduler.setRemoveOnCancelPolicy(true); - + this.scheduler = Scheduler.initScheduler(settings); TimeValue estimatedTimeInterval = ESTIMATED_TIME_INTERVAL_SETTING.get(settings); this.cachedTimeThread = new CachedTimeThread(EsExecutors.threadName(settings, "[timer]"), estimatedTimeInterval.millis()); this.cachedTimeThread.start(); @@ -329,25 +321,6 @@ public ExecutorService executor(String name) { return holder.executor(); } - public ScheduledExecutorService scheduler() { - return this.scheduler; - } - - /** - * Schedules a periodic action that runs on the specified thread pool. - * - * @param command the action to take - * @param interval the delay interval - * @param executor The name of the thread pool on which to execute this task. {@link Names#SAME} means "execute on the scheduler thread", - * which there is only one of. Executing blocking or long running code on the {@link Names#SAME} thread pool should never - * be done as it can cause issues with the cluster - * @return a {@link Cancellable} that can be used to cancel the subsequent runs of the command. If the command is running, it will - * not be interrupted. 
- */ - public Cancellable scheduleWithFixedDelay(Runnable command, TimeValue interval, String executor) { - return new ReschedulingRunnable(command, interval, executor, this); - } - /** * Schedules a one-shot command to run after a given delay. The command is not run in the context of the calling thread. To preserve the * context of the calling thread you may call threadPool.getThreadContext().preserveContext on the runnable before passing @@ -361,13 +334,30 @@ public Cancellable scheduleWithFixedDelay(Runnable command, TimeValue interval, * @return a ScheduledFuture who's get will return when the task is has been added to its target thread pool and throw an exception if * the task is canceled before it was added to its target thread pool. Once the task has been added to its target thread pool * the ScheduledFuture will cannot interact with it. - * @throws EsRejectedExecutionException if the task cannot be scheduled for execution + * @throws org.elasticsearch.common.util.concurrent.EsRejectedExecutionException if the task cannot be scheduled for execution */ public ScheduledFuture schedule(TimeValue delay, String executor, Runnable command) { if (!Names.SAME.equals(executor)) { command = new ThreadedRunnable(command, executor(executor)); } - return scheduler.schedule(new LoggingRunnable(command), delay.millis(), TimeUnit.MILLISECONDS); + return scheduler.schedule(new ThreadPool.LoggingRunnable(command), delay.millis(), TimeUnit.MILLISECONDS); + } + + @Override + public Cancellable scheduleWithFixedDelay(Runnable command, TimeValue interval, String executor) { + return new ReschedulingRunnable(command, interval, executor, this, + (e) -> { + if (logger.isDebugEnabled()) { + logger.debug((Supplier) () -> new ParameterizedMessage("scheduled task [{}] was rejected on thread pool [{}]", + command, executor), e); + } + }, + (e) -> logger.warn((Supplier) () -> new ParameterizedMessage("failed to run scheduled task [{}] on thread pool [{}]", + command, executor), e)); + } + + public Runnable preserveContext(Runnable command) { + return getThreadContext().preserveContext(command); } public void shutdown() { @@ -376,7 +366,7 @@ public void shutdown() { scheduler.shutdown(); for (ExecutorHolder executor : executors.values()) { if (executor.executor() instanceof ThreadPoolExecutor) { - ((ThreadPoolExecutor) executor.executor()).shutdown(); + executor.executor().shutdown(); } } } @@ -387,7 +377,7 @@ public void shutdownNow() { scheduler.shutdownNow(); for (ExecutorHolder executor : executors.values()) { if (executor.executor() instanceof ThreadPoolExecutor) { - ((ThreadPoolExecutor) executor.executor()).shutdownNow(); + executor.executor().shutdownNow(); } } } @@ -396,14 +386,17 @@ public boolean awaitTermination(long timeout, TimeUnit unit) throws InterruptedE boolean result = scheduler.awaitTermination(timeout, unit); for (ExecutorHolder executor : executors.values()) { if (executor.executor() instanceof ThreadPoolExecutor) { - result &= ((ThreadPoolExecutor) executor.executor()).awaitTermination(timeout, unit); + result &= executor.executor().awaitTermination(timeout, unit); } } - cachedTimeThread.join(unit.toMillis(timeout)); return result; } + public ScheduledExecutorService scheduler() { + return this.scheduler; + } + /** * Constrains a value between minimum and maximum values * (inclusive). 
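With the scheduling machinery extracted into the Scheduler interface, ThreadPool keeps the same periodic behavior but injects its rejection and failure logging as consumers. A small usage sketch of the periodic API, assuming an existing ThreadPool and an arbitrary 30-second interval:

import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.threadpool.Scheduler.Cancellable;
import org.elasticsearch.threadpool.ThreadPool;

static Cancellable startPeriodicCleanup(ThreadPool threadPool) {
    Runnable cleanup = () -> { /* expire cached entries, sweep temp files, ... */ };
    // rescheduled only after each run finishes; a thrown exception is logged and does not stop future runs
    return threadPool.scheduleWithFixedDelay(cleanup, TimeValue.timeValueSeconds(30), ThreadPool.Names.GENERIC);
}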
@@ -726,7 +719,9 @@ public static boolean terminate(ThreadPool pool, long timeout, TimeUnit timeUnit if (pool != null) { try { pool.shutdown(); - if (awaitTermination(pool, timeout, timeUnit)) return true; + if (awaitTermination(pool, timeout, timeUnit)) { + return true; + } // last resort pool.shutdownNow(); return awaitTermination(pool, timeout, timeUnit); @@ -738,11 +733,11 @@ public static boolean terminate(ThreadPool pool, long timeout, TimeUnit timeUnit } private static boolean awaitTermination( - final ThreadPool pool, + final ThreadPool threadPool, final long timeout, final TimeUnit timeUnit) { try { - if (pool.awaitTermination(timeout, timeUnit)) { + if (threadPool.awaitTermination(timeout, timeUnit)) { return true; } } catch (InterruptedException e) { @@ -760,102 +755,6 @@ public ThreadContext getThreadContext() { return threadContext; } - /** - * This interface represents an object whose execution may be cancelled during runtime. - */ - public interface Cancellable { - - /** - * Cancel the execution of this object. This method is idempotent. - */ - void cancel(); - - /** - * Check if the execution has been cancelled - * @return true if cancelled - */ - boolean isCancelled(); - } - - /** - * This class encapsulates the scheduling of a {@link Runnable} that needs to be repeated on a interval. For example, checking a value - * for cleanup every second could be done by passing in a Runnable that can perform the check and the specified interval between - * executions of this runnable. NOTE: the runnable is only rescheduled to run again after completion of the runnable. - * - * For this class, completion means that the call to {@link Runnable#run()} returned or an exception was thrown and caught. In - * case of an exception, this class will log the exception and reschedule the runnable for its next execution. This differs from the - * {@link ScheduledThreadPoolExecutor#scheduleWithFixedDelay(Runnable, long, long, TimeUnit)} semantics as an exception there would - * terminate the rescheduling of the runnable. 
- */ - static final class ReschedulingRunnable extends AbstractRunnable implements Cancellable { - - private final Runnable runnable; - private final TimeValue interval; - private final String executor; - private final ThreadPool threadPool; - - private volatile boolean run = true; - - /** - * Creates a new rescheduling runnable and schedules the first execution to occur after the interval specified - * - * @param runnable the {@link Runnable} that should be executed periodically - * @param interval the time interval between executions - * @param executor the executor where this runnable should be scheduled to run - * @param threadPool the {@link ThreadPool} instance to use for scheduling - */ - ReschedulingRunnable(Runnable runnable, TimeValue interval, String executor, ThreadPool threadPool) { - this.runnable = runnable; - this.interval = interval; - this.executor = executor; - this.threadPool = threadPool; - threadPool.schedule(interval, executor, this); - } - - @Override - public void cancel() { - run = false; - } - - @Override - public boolean isCancelled() { - return run == false; - } - - @Override - public void doRun() { - // always check run here since this may have been cancelled since the last execution and we do not want to run - if (run) { - runnable.run(); - } - } - - @Override - public void onFailure(Exception e) { - threadPool.logger.warn((Supplier) () -> new ParameterizedMessage("failed to run scheduled task [{}] on thread pool [{}]", runnable.toString(), executor), e); - } - - @Override - public void onRejection(Exception e) { - run = false; - if (threadPool.logger.isDebugEnabled()) { - threadPool.logger.debug((Supplier) () -> new ParameterizedMessage("scheduled task [{}] was rejected on thread pool [{}]", runnable, executor), e); - } - } - - @Override - public void onAfter() { - // if this has not been cancelled reschedule it to run again - if (run) { - try { - threadPool.schedule(interval, executor, this); - } catch (final EsRejectedExecutionException e) { - onRejection(e); - } - } - } - } - public static boolean assertNotScheduleThread(String reason) { assert Thread.currentThread().getName().contains("scheduler") == false : "Expected current thread [" + Thread.currentThread() + "] to not be the scheduler thread. 
Reason: [" + reason + "]"; diff --git a/core/src/main/java/org/elasticsearch/threadpool/ThreadPoolInfo.java b/core/src/main/java/org/elasticsearch/threadpool/ThreadPoolInfo.java index 70c0f2c959820..3adb487b8b5e9 100644 --- a/core/src/main/java/org/elasticsearch/threadpool/ThreadPoolInfo.java +++ b/core/src/main/java/org/elasticsearch/threadpool/ThreadPoolInfo.java @@ -22,7 +22,8 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; @@ -30,7 +31,7 @@ import java.util.Iterator; import java.util.List; -public class ThreadPoolInfo implements Writeable, Iterable, ToXContent { +public class ThreadPoolInfo implements Writeable, Iterable, ToXContentFragment { private final List infos; diff --git a/core/src/main/java/org/elasticsearch/threadpool/ThreadPoolStats.java b/core/src/main/java/org/elasticsearch/threadpool/ThreadPoolStats.java index ead076fc83b1f..6c5042f27259d 100644 --- a/core/src/main/java/org/elasticsearch/threadpool/ThreadPoolStats.java +++ b/core/src/main/java/org/elasticsearch/threadpool/ThreadPoolStats.java @@ -23,6 +23,7 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; @@ -30,9 +31,9 @@ import java.util.Iterator; import java.util.List; -public class ThreadPoolStats implements Writeable, ToXContent, Iterable { +public class ThreadPoolStats implements Writeable, ToXContentFragment, Iterable { - public static class Stats implements Writeable, ToXContent, Comparable { + public static class Stats implements Writeable, ToXContentFragment, Comparable { private final String name; private final int threads; diff --git a/core/src/main/java/org/elasticsearch/transport/RemoteClusterConnection.java b/core/src/main/java/org/elasticsearch/transport/RemoteClusterConnection.java index 39fb515984f7a..c08bf9b737e95 100644 --- a/core/src/main/java/org/elasticsearch/transport/RemoteClusterConnection.java +++ b/core/src/main/java/org/elasticsearch/transport/RemoteClusterConnection.java @@ -230,15 +230,19 @@ public String executor() { } }); }; - if (connectedNodes.size() == 0) { - // just in case if we are not connected for some reason we try to connect and if we fail we have to notify the listener - // this will cause some back pressure on the search end and eventually will cause rejections but that's fine - // we can't proceed with a search on a cluster level. - // in the future we might want to just skip the remote nodes in such a case but that can already be implemented on the - // caller end since they provide the listener. - ensureConnected(ActionListener.wrap((x) -> runnable.run(), listener::onFailure)); - } else { - runnable.run(); + try { + if (connectedNodes.size() == 0) { + // just in case if we are not connected for some reason we try to connect and if we fail we have to notify the listener + // this will cause some back pressure on the search end and eventually will cause rejections but that's fine + // we can't proceed with a search on a cluster level. 
+ // in the future we might want to just skip the remote nodes in such a case but that can already be implemented on the + // caller end since they provide the listener. + ensureConnected(ActionListener.wrap((x) -> runnable.run(), listener::onFailure)); + } else { + runnable.run(); + } + } catch (Exception ex) { + listener.onFailure(ex); } } diff --git a/core/src/main/java/org/elasticsearch/transport/RemoteConnectionInfo.java b/core/src/main/java/org/elasticsearch/transport/RemoteConnectionInfo.java index 53e6d220da1a5..a05da795a81ef 100644 --- a/core/src/main/java/org/elasticsearch/transport/RemoteConnectionInfo.java +++ b/core/src/main/java/org/elasticsearch/transport/RemoteConnectionInfo.java @@ -23,7 +23,8 @@ import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.transport.TransportAddress; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; @@ -34,7 +35,7 @@ * This class encapsulates all remote cluster information to be rendered on * _remote/info requests. */ -public final class RemoteConnectionInfo implements ToXContent, Writeable { +public final class RemoteConnectionInfo implements ToXContentFragment, Writeable { final List seedNodes; final List httpAddresses; final int connectionsPerCluster; diff --git a/core/src/main/java/org/elasticsearch/transport/RequestHandlerRegistry.java b/core/src/main/java/org/elasticsearch/transport/RequestHandlerRegistry.java index 2e56ff91021a6..fc1af1d876ade 100644 --- a/core/src/main/java/org/elasticsearch/transport/RequestHandlerRegistry.java +++ b/core/src/main/java/org/elasticsearch/transport/RequestHandlerRegistry.java @@ -19,6 +19,8 @@ package org.elasticsearch.transport; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.tasks.Task; import org.elasticsearch.tasks.TaskManager; @@ -32,15 +34,14 @@ public class RequestHandlerRegistry { private final boolean forceExecution; private final boolean canTripCircuitBreaker; private final String executor; - private final Supplier requestFactory; private final TaskManager taskManager; + private final Writeable.Reader requestReader; - public RequestHandlerRegistry(String action, Supplier requestFactory, TaskManager taskManager, + public RequestHandlerRegistry(String action, Writeable.Reader requestReader, TaskManager taskManager, TransportRequestHandler handler, String executor, boolean forceExecution, boolean canTripCircuitBreaker) { this.action = action; - this.requestFactory = requestFactory; - assert newRequest() != null; + this.requestReader = requestReader; this.handler = handler; this.forceExecution = forceExecution; this.canTripCircuitBreaker = canTripCircuitBreaker; @@ -52,8 +53,8 @@ public String getAction() { return action; } - public Request newRequest() { - return requestFactory.get(); + public Request newRequest(StreamInput in) throws IOException { + return requestReader.read(in); } public void processMessageReceived(Request request, TransportChannel channel) throws Exception { diff --git a/core/src/main/java/org/elasticsearch/transport/TcpTransport.java b/core/src/main/java/org/elasticsearch/transport/TcpTransport.java index 689a54fc8daa5..62ad2b58fb78e 100644 --- 
a/core/src/main/java/org/elasticsearch/transport/TcpTransport.java +++ b/core/src/main/java/org/elasticsearch/transport/TcpTransport.java @@ -110,7 +110,6 @@ import static java.util.Collections.unmodifiableMap; import static org.elasticsearch.common.settings.Setting.affixKeySetting; import static org.elasticsearch.common.settings.Setting.boolSetting; -import static org.elasticsearch.common.settings.Setting.groupSetting; import static org.elasticsearch.common.settings.Setting.intSetting; import static org.elasticsearch.common.settings.Setting.listSetting; import static org.elasticsearch.common.settings.Setting.timeSetting; @@ -184,7 +183,7 @@ public abstract class TcpTransport extends AbstractLifecycleComponent i key -> intSetting(key, -1, -1, Setting.Property.NodeScope)); private static final long NINETY_PER_HEAP_SIZE = (long) (JvmInfo.jvmInfo().getMem().getHeapMax().getBytes() * 0.9); - private static final int PING_DATA_SIZE = -1; + public static final int PING_DATA_SIZE = -1; private final CircuitBreakerService circuitBreakerService; // package visibility for tests protected final ScheduledPing scheduledPing; @@ -194,7 +193,7 @@ public abstract class TcpTransport extends AbstractLifecycleComponent i protected final NetworkService networkService; protected final Set profileSettings; - protected volatile TransportServiceAdapter transportServiceAdapter; + protected volatile TransportService transportService; // node id to actual channel protected final ConcurrentMap connectedNodes = newConcurrentMap(); @@ -270,11 +269,11 @@ public CircuitBreaker getInFlightRequestBreaker() { } @Override - public void transportServiceAdapter(TransportServiceAdapter service) { + public void setTransportService(TransportService service) { if (service.getRequestHandler(HANDSHAKE_ACTION_NAME) != null) { throw new IllegalStateException(HANDSHAKE_ACTION_NAME + " is a reserved request handler and must not be registered"); } - this.transportServiceAdapter = service; + this.transportService = service; } private static class HandshakeResponseHandler implements TransportResponseHandler { @@ -442,9 +441,10 @@ public Channel channel(TransportRequestOptions.Type type) { public void close() throws IOException { if (closed.compareAndSet(false, true)) { try { - closeChannels(Arrays.stream(channels).filter(Objects::nonNull).collect(Collectors.toList()), false); + closeChannels(Arrays.stream(channels).filter(Objects::nonNull).collect(Collectors.toList()), false, + lifecycle.stopped()); } finally { - transportServiceAdapter.onConnectionClosed(this); + transportService.onConnectionClosed(this); } } } @@ -500,7 +500,7 @@ public void connectToNode(DiscoveryNode node, ConnectionProfile connectionProfil logger.debug("connected to node [{}]", node); } try { - transportServiceAdapter.onNodeConnected(node); + transportService.onNodeConnected(node); } finally { if (nodeChannels.isClosed()) { // we got closed concurrently due to a disconnect or some other event on the channel. 
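// Editor's sketch (not part of this patch): the new third argument threaded through
// closeChannels(channels, blocking, doNotLinger) is documented in the updated javadoc below as
// "whether we abort the connection on RST instead of FIN". One plausible way a concrete transport
// implementation could honor that flag is to close with SO_LINGER set to zero; the helper class
// and method below are hypothetical and use a plain java.net.Socket purely for illustration.
import java.io.IOException;
import java.net.Socket;

final class AbortingCloseExample {
    static void close(Socket socket, boolean doNotLinger) throws IOException {
        if (doNotLinger) {
            // SO_LINGER enabled with a zero timeout makes close() reset the connection (RST)
            // instead of performing the normal FIN/ACK shutdown handshake.
            socket.setSoLinger(true, 0);
        }
        socket.close();
    }
}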
@@ -512,7 +512,7 @@ public void connectToNode(DiscoveryNode node, ConnectionProfile connectionProfil // try to remove it first either way one of the two wins even if the callback has run before we even added the // tuple to the map since in that case we remove it here again if (connectedNodes.remove(node, nodeChannels)) { - transportServiceAdapter.onNodeDisconnected(node); + transportService.onNodeDisconnected(node); } throw new NodeNotConnectedException(node, "connection concurrently closed"); } @@ -597,8 +597,11 @@ public final NodeChannels openConnection(DiscoveryNode node, ConnectionProfile c connectTimeout : connectionProfile.getHandshakeTimeout(); final Version version = executeHandshake(node, channel, handshakeTimeout); nodeChannels = new NodeChannels(nodeChannels, version); // clone the channels - we now have the correct version - transportServiceAdapter.onConnectionOpened(nodeChannels); + transportService.onConnectionOpened(nodeChannels); connectionRef.set(nodeChannels); + if (Arrays.stream(nodeChannels.channels).allMatch(this::isOpen) == false) { + throw new ConnectTransportException(node, "a channel closed while connecting"); + } success = true; return nodeChannels; } catch (ConnectTransportException e) { @@ -625,7 +628,7 @@ private void disconnectFromNodeCloseAndNotify(DiscoveryNode node, NodeChannels n if (closeLock.readLock().tryLock()) { try { if (connectedNodes.remove(node, nodeChannels)) { - transportServiceAdapter.onNodeDisconnected(node); + transportService.onNodeDisconnected(node); } } finally { closeLock.readLock().unlock(); @@ -640,7 +643,7 @@ private void disconnectFromNodeCloseAndNotify(DiscoveryNode node, NodeChannels n protected final void closeChannelWhileHandlingExceptions(final Channel channel) { if (isOpen(channel)) { try { - closeChannels(Collections.singletonList(channel), false); + closeChannels(Collections.singletonList(channel), false, false); } catch (IOException e) { logger.warn("failed to close channel", e); } @@ -665,7 +668,7 @@ public void disconnectFromNode(DiscoveryNode node) { } finally { closeLock.readLock().unlock(); if (nodeChannels != null) { // if we found it and removed it we close and notify - IOUtils.closeWhileHandlingException(nodeChannels, () -> transportServiceAdapter.onNodeDisconnected(node)); + IOUtils.closeWhileHandlingException(nodeChannels, () -> transportService.onNodeDisconnected(node)); } } } @@ -902,11 +905,9 @@ protected final void doStop() { // first stop to accept any incoming connections so nobody can connect to this transport for (Map.Entry> entry : serverChannels.entrySet()) { try { - closeChannels(entry.getValue(), true); + closeChannels(entry.getValue(), true, false); } catch (Exception e) { - logger.debug( - (Supplier) () -> new ParameterizedMessage( - "Error closing serverChannel for profile [{}]", entry.getKey()), e); + logger.warn(new ParameterizedMessage("Error closing serverChannel for profile [{}]", entry.getKey()), e); } } // we are holding a write lock so nobody modifies the connectedNodes / openConnections map - it's safe to first close @@ -916,7 +917,7 @@ protected final void doStop() { Map.Entry next = iterator.next(); try { IOUtils.closeWhileHandlingException(next.getValue()); - transportServiceAdapter.onNodeDisconnected(next.getKey()); + transportService.onNodeDisconnected(next.getKey()); } finally { iterator.remove(); } @@ -975,7 +976,7 @@ protected void onException(Channel channel, Exception e) { @Override protected void innerInnerOnResponse(Channel channel) { try { - 
closeChannels(Collections.singletonList(channel), false); + closeChannels(Collections.singletonList(channel), false, false); } catch (IOException e1) { logger.debug("failed to close httpOnTransport channel", e1); } @@ -984,7 +985,7 @@ protected void innerInnerOnResponse(Channel channel) { @Override protected void innerOnFailure(Exception e) { try { - closeChannels(Collections.singletonList(channel), false); + closeChannels(Collections.singletonList(channel), false, false); } catch (IOException e1) { e.addSuppressed(e1); logger.debug("failed to close httpOnTransport channel", e1); @@ -1021,8 +1022,9 @@ protected void innerOnFailure(Exception e) { * * @param channels the channels to close * @param blocking whether the channels should be closed synchronously + * @param doNotLinger whether we abort the connection on RST instead of FIN */ - protected abstract void closeChannels(List channels, boolean blocking) throws IOException; + protected abstract void closeChannels(List channels, boolean blocking, boolean doNotLinger) throws IOException; /** * Sends message to channel. The listener's onResponse method will be called when the send is complete unless an exception @@ -1033,7 +1035,18 @@ protected void innerOnFailure(Exception e) { */ protected abstract void sendMessage(Channel channel, BytesReference reference, ActionListener listener); - protected abstract NodeChannels connectToChannels(DiscoveryNode node, ConnectionProfile connectionProfile, + /** + * Connect to the node with channels as defined by the specified connection profile. Implementations must invoke the specified channel + * close callback when a channel is closed. + * + * @param node the node to connect to + * @param connectionProfile the connection profile + * @param onChannelClose callback to invoke when a channel is closed + * @return the channels + * @throws IOException if an I/O exception occurs while opening channels + */ + protected abstract NodeChannels connectToChannels(DiscoveryNode node, + ConnectionProfile connectionProfile, Consumer onChannelClose) throws IOException; /** @@ -1078,7 +1091,7 @@ private void sendRequestToChannel(final DiscoveryNode node, final Channel target final TransportRequestOptions finalOptions = options; // this might be called in a different thread SendListener onRequestSent = new SendListener(stream, - () -> transportServiceAdapter.onRequestSent(node, requestId, action, request, finalOptions), message.length()); + () -> transportService.onRequestSent(node, requestId, action, request, finalOptions), message.length()); internalSendMessage(targetChannel, message, onRequestSent); addedReleaseListener = true; } finally { @@ -1125,7 +1138,7 @@ public void sendErrorResponse(Version nodeVersion, Channel channel, final Except final BytesReference header = buildHeader(requestId, status, nodeVersion, bytes.length()); CompositeBytesReference message = new CompositeBytesReference(header, bytes); SendListener onResponseSent = new SendListener(null, - () -> transportServiceAdapter.onResponseSent(requestId, action, error), message.length()); + () -> transportService.onResponseSent(requestId, action, error), message.length()); internalSendMessage(channel, message, onResponseSent); } } @@ -1160,7 +1173,7 @@ private void sendResponse(Version nodeVersion, Channel channel, final TransportR final TransportResponseOptions finalOptions = options; // this might be called in a different thread SendListener listener = new SendListener(stream, - () -> transportServiceAdapter.onResponseSent(requestId, action, response, 
finalOptions), message.length()); + () -> transportService.onResponseSent(requestId, action, response, finalOptions), message.length()); internalSendMessage(channel, message, listener); addedReleaseListener = true; } finally { @@ -1356,14 +1369,14 @@ public final void messageReceived(BytesReference reference, Channel channel, Str if (isHandshake) { handler = pendingHandshakes.remove(requestId); } else { - TransportResponseHandler theHandler = transportServiceAdapter.onResponseReceived(requestId); + TransportResponseHandler theHandler = transportService.onResponseReceived(requestId); if (theHandler == null && TransportStatus.isError(status)) { handler = pendingHandshakes.remove(requestId); } else { handler = theHandler; } } - // ignore if its null, the adapter logs it + // ignore if its null, the service logs it if (handler != null) { if (TransportStatus.isError(status)) { handlerResponseError(streamIn, handler); @@ -1456,7 +1469,7 @@ private void handleException(final TransportResponseHandler handler, Throwable e protected String handleRequest(Channel channel, String profileName, final StreamInput stream, long requestId, int messageLengthBytes, Version version, InetSocketAddress remoteAddress, byte status) throws IOException { final String action = stream.readString(); - transportServiceAdapter.onRequestReceived(requestId, action); + transportService.onRequestReceived(requestId, action); TransportChannel transportChannel = null; try { if (TransportStatus.isHandshake(status)) { @@ -1464,7 +1477,7 @@ protected String handleRequest(Channel channel, String profileName, final Stream sendResponse(version, channel, response, requestId, HANDSHAKE_ACTION_NAME, TransportResponseOptions.EMPTY, TransportStatus.setHandshake((byte) 0)); } else { - final RequestHandlerRegistry reg = transportServiceAdapter.getRequestHandler(action); + final RequestHandlerRegistry reg = transportService.getRequestHandler(action); if (reg == null) { throw new ActionNotFoundTransportException(action); } @@ -1475,9 +1488,8 @@ protected String handleRequest(Channel channel, String profileName, final Stream } transportChannel = new TcpTransportChannel<>(this, channel, transportName, action, requestId, version, profileName, messageLengthBytes); - final TransportRequest request = reg.newRequest(); + final TransportRequest request = reg.newRequest(stream); request.remoteAddress(new TransportAddress(remoteAddress)); - request.readFrom(stream); // in case we throw an exception, i.e. when the limit is hit, we don't want to verify validateRequest(stream, requestId, action); threadPool.executor(reg.getExecutor()).execute(new RequestHandler(reg, request, transportChannel)); diff --git a/core/src/main/java/org/elasticsearch/transport/Transport.java b/core/src/main/java/org/elasticsearch/transport/Transport.java index 5d22e156d9d13..b3471b942dae2 100644 --- a/core/src/main/java/org/elasticsearch/transport/Transport.java +++ b/core/src/main/java/org/elasticsearch/transport/Transport.java @@ -40,7 +40,7 @@ public interface Transport extends LifecycleComponent { Setting TRANSPORT_TCP_COMPRESS = Setting.boolSetting("transport.tcp.compress", false, Property.NodeScope); - void transportServiceAdapter(TransportServiceAdapter service); + void setTransportService(TransportService service); /** * The address the transport is bound on. 
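// Editor's sketch (not part of this patch): the changes in this series replace the
// Supplier<Request> + request.readFrom(stream) pattern with a Writeable.Reader<Request>, so that
// requests are deserialized through a StreamInput-accepting constructor and registered via the
// new registerRequestHandler overload. The request class, field, action name, and handler below
// are hypothetical, shown only to illustrate the new shape.
import java.io.IOException;

import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.transport.TransportRequest;

public class ExampleRequest extends TransportRequest {

    private String resource;

    public ExampleRequest(String resource) {
        this.resource = resource;
    }

    // deserialization now happens in this constructor instead of readFrom(StreamInput)
    public ExampleRequest(StreamInput in) throws IOException {
        super(in);
        resource = in.readString();
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        super.writeTo(out);
        out.writeString(resource);
    }
}

// Registration can then hand the constructor to the transport service as the reader, e.g. (sketch):
// transportService.registerRequestHandler("internal:example/action", ThreadPool.Names.SAME,
//         ExampleRequest::new, new ExampleRequestHandler());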
diff --git a/core/src/main/java/org/elasticsearch/transport/TransportActionProxy.java b/core/src/main/java/org/elasticsearch/transport/TransportActionProxy.java index 5259fca507e49..8c48f08874350 100644 --- a/core/src/main/java/org/elasticsearch/transport/TransportActionProxy.java +++ b/core/src/main/java/org/elasticsearch/transport/TransportActionProxy.java @@ -18,14 +18,16 @@ */ package org.elasticsearch.transport; -import org.apache.logging.log4j.util.Supplier; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.threadpool.ThreadPool; import java.io.IOException; import java.io.UncheckedIOException; +import java.util.function.Function; +import java.util.function.Supplier; /** * TransportActionProxy allows an arbitrary action to be executed on a defined target node while the initial request is sent to a second @@ -40,19 +42,21 @@ private static class ProxyRequestHandler implements Tran private final TransportService service; private final String action; - private final Supplier responseFactory; + private final Function> responseFunction; - ProxyRequestHandler(TransportService service, String action, Supplier responseFactory) { + ProxyRequestHandler(TransportService service, String action, Function> responseFunction) { this.service = service; this.action = action; - this.responseFactory = responseFactory; + this.responseFunction = responseFunction; } @Override public void messageReceived(T request, TransportChannel channel) throws Exception { DiscoveryNode targetNode = request.targetNode; TransportRequest wrappedRequest = request.wrapped; - service.sendRequest(targetNode, action, wrappedRequest, new ProxyResponseHandler<>(channel, responseFactory)); + service.sendRequest(targetNode, action, wrappedRequest, + new ProxyResponseHandler<>(channel, responseFunction.apply(wrappedRequest))); } } @@ -97,11 +101,11 @@ public String executor() { static class ProxyRequest extends TransportRequest { T wrapped; - Supplier supplier; + Writeable.Reader reader; DiscoveryNode targetNode; - ProxyRequest(Supplier supplier) { - this.supplier = supplier; + ProxyRequest(Writeable.Reader reader) { + this.reader = reader; } ProxyRequest(T wrapped, DiscoveryNode targetNode) { @@ -113,8 +117,7 @@ static class ProxyRequest extends TransportRequest { public void readFrom(StreamInput in) throws IOException { super.readFrom(in); targetNode = new DiscoveryNode(in); - wrapped = supplier.get(); - wrapped.readFrom(in); + wrapped = reader.read(in); } @Override @@ -126,12 +129,24 @@ public void writeTo(StreamOutput out) throws IOException { } /** - * Registers a proxy request handler that allows to forward requests for the given action to another node. + * Registers a proxy request handler that allows to forward requests for the given action to another node. 
To be used when the + * response type changes based on the upcoming request (quite rare) + */ + public static void registerProxyAction(TransportService service, String action, + Function> responseFunction) { + RequestHandlerRegistry requestHandler = service.getRequestHandler(action); + service.registerRequestHandler(getProxyAction(action), () -> new ProxyRequest(requestHandler::newRequest), ThreadPool.Names.SAME, + true, false, new ProxyRequestHandler<>(service, action, responseFunction)); + } + + /** + * Registers a proxy request handler that allows to forward requests for the given action to another node. To be used when the + * response type is always the same (most of the cases). */ public static void registerProxyAction(TransportService service, String action, Supplier responseSupplier) { RequestHandlerRegistry requestHandler = service.getRequestHandler(action); service.registerRequestHandler(getProxyAction(action), () -> new ProxyRequest(requestHandler::newRequest), ThreadPool.Names.SAME, - true, false, new ProxyRequestHandler<>(service, action, responseSupplier)); + true, false, new ProxyRequestHandler<>(service, action, request -> responseSupplier)); } private static final String PROXY_ACTION_PREFIX = "internal:transport/proxy/"; diff --git a/core/src/main/java/org/elasticsearch/transport/TransportInfo.java b/core/src/main/java/org/elasticsearch/transport/TransportInfo.java index fbabf49b65d24..f8a75db65f37e 100644 --- a/core/src/main/java/org/elasticsearch/transport/TransportInfo.java +++ b/core/src/main/java/org/elasticsearch/transport/TransportInfo.java @@ -24,14 +24,15 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.transport.BoundTransportAddress; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; import java.util.HashMap; import java.util.Map; -public class TransportInfo implements Writeable, ToXContent { +public class TransportInfo implements Writeable, ToXContentFragment { private BoundTransportAddress address; private Map profileAddresses; diff --git a/core/src/main/java/org/elasticsearch/transport/TransportMessage.java b/core/src/main/java/org/elasticsearch/transport/TransportMessage.java index fa21a51ba2d3f..ecaca73b2db57 100644 --- a/core/src/main/java/org/elasticsearch/transport/TransportMessage.java +++ b/core/src/main/java/org/elasticsearch/transport/TransportMessage.java @@ -22,11 +22,12 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; +import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.transport.TransportAddress; import java.io.IOException; -public abstract class TransportMessage implements Streamable { +public abstract class TransportMessage implements Streamable, Writeable { private TransportAddress remoteAddress; diff --git a/core/src/main/java/org/elasticsearch/transport/TransportRequest.java b/core/src/main/java/org/elasticsearch/transport/TransportRequest.java index c42ec24ad15a6..d6072fc9d0aa5 100644 --- a/core/src/main/java/org/elasticsearch/transport/TransportRequest.java +++ b/core/src/main/java/org/elasticsearch/transport/TransportRequest.java @@ -39,6 +39,10 @@ public static class Empty extends 
TransportRequest { public TransportRequest() { } + public TransportRequest(StreamInput in) throws IOException { + parentTaskId = TaskId.readFromStream(in); + } + /** * Set a reference to task that created this request. */ diff --git a/core/src/main/java/org/elasticsearch/transport/TransportService.java b/core/src/main/java/org/elasticsearch/transport/TransportService.java index 13034355366cf..adf87e3195fda 100644 --- a/core/src/main/java/org/elasticsearch/transport/TransportService.java +++ b/core/src/main/java/org/elasticsearch/transport/TransportService.java @@ -32,8 +32,9 @@ import org.elasticsearch.common.component.AbstractLifecycleComponent; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Streamable; +import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.logging.Loggers; -import org.elasticsearch.common.metrics.MeanMetric; import org.elasticsearch.common.regex.Regex; import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Setting; @@ -62,6 +63,7 @@ import java.util.Objects; import java.util.concurrent.CopyOnWriteArrayList; import java.util.concurrent.CountDownLatch; +import java.util.concurrent.ExecutorService; import java.util.concurrent.ScheduledFuture; import java.util.function.Function; import java.util.function.Predicate; @@ -104,8 +106,6 @@ protected boolean removeEldestEntry(Map.Entry eldest) { } }); - private final TransportService.Adapter adapter; - public static final TransportInterceptor NOOP_TRANSPORT_INTERCEPTOR = new TransportInterceptor() {}; // tracer log @@ -145,8 +145,8 @@ public void close() throws IOException { /** * Build the service. * - * @param clusterSettings if non null the the {@linkplain TransportService} will register with the {@link ClusterSettings} for settings - * updates for {@link #TRACE_LOG_EXCLUDE_SETTING} and {@link #TRACE_LOG_INCLUDE_SETTING}. + * @param clusterSettings if non null, the {@linkplain TransportService} will register with the {@link ClusterSettings} for settings + * updates for {@link #TRACE_LOG_EXCLUDE_SETTING} and {@link #TRACE_LOG_INCLUDE_SETTING}. */ public TransportService(Settings settings, Transport transport, ThreadPool threadPool, TransportInterceptor transportInterceptor, Function localNodeFactory, @Nullable ClusterSettings clusterSettings) { @@ -158,7 +158,6 @@ public TransportService(Settings settings, Transport transport, ThreadPool threa setTracerLogInclude(TRACE_LOG_INCLUDE_SETTING.get(settings)); setTracerLogExclude(TRACE_LOG_EXCLUDE_SETTING.get(settings)); tracerLog = Loggers.getLogger(logger, ".tracer"); - adapter = createAdapter(); taskManager = createTaskManager(); this.interceptor = transportInterceptor; this.asyncSender = interceptor.interceptSender(this::sendRequestInternal); @@ -185,14 +184,19 @@ public TaskManager getTaskManager() { return taskManager; } - protected Adapter createAdapter() { - return new Adapter(); - } - protected TaskManager createTaskManager() { return new TaskManager(settings); } + /** + * The executor service for this transport service. 
+ * + * @return the executor service + */ + protected ExecutorService getExecutorService() { + return threadPool.generic(); + } + void setTracerLogInclude(List tracerLogInclude) { this.tracerLogInclude = tracerLogInclude.toArray(Strings.EMPTY_ARRAY); } @@ -203,7 +207,7 @@ void setTracerLogExclude(List tracerLogExclude) { @Override protected void doStart() { - transport.transportServiceAdapter(adapter); + transport.setTransportService(this); transport.start(); if (transport.boundAddress() != null && logger.isInfoEnabled()) { @@ -238,7 +242,7 @@ protected void doStop() { if (holderToNotify != null) { // callback that an exception happened, but on a different thread since we don't // want handlers to worry about stack overflows - threadPool.generic().execute(new AbstractRunnable() { + getExecutorService().execute(new AbstractRunnable() { @Override public void onRejection(Exception e) { // if we get rejected during node shutdown we don't wanna bubble it up @@ -630,11 +634,11 @@ protected void doRun() throws Exception { } private void sendLocalRequest(long requestId, final String action, final TransportRequest request, TransportRequestOptions options) { - final DirectResponseChannel channel = new DirectResponseChannel(logger, localNode, action, requestId, adapter, threadPool); + final DirectResponseChannel channel = new DirectResponseChannel(logger, localNode, action, requestId, this, threadPool); try { - adapter.onRequestSent(localNode, requestId, action, request, options); - adapter.onRequestReceived(requestId, action); - final RequestHandlerRegistry reg = adapter.getRequestHandler(action); + onRequestSent(localNode, requestId, action, request, options); + onRequestReceived(requestId, action); + final RequestHandlerRegistry reg = getRequestHandler(action); if (reg == null) { throw new ActionNotFoundTransportException("Action [" + action + "] not found"); } @@ -709,7 +713,24 @@ public void registerRequestHandler(String act String executor, TransportRequestHandler handler) { handler = interceptor.interceptHandler(action, executor, false, handler); RequestHandlerRegistry reg = new RequestHandlerRegistry<>( - action, requestFactory, taskManager, handler, executor, false, true); + action, Streamable.newWriteableReader(requestFactory), taskManager, handler, executor, false, true); + registerRequestHandler(reg); + } + + /** + * Registers a new request handler + * + * @param action The action the request handler is associated with + * @param requestReader a callable to be used construct new instances for streaming + * @param executor The executor the request handling will be executed on + * @param handler The handler itself that implements the request handling + */ + public void registerRequestHandler(String action, String executor, + Writeable.Reader requestReader, + TransportRequestHandler handler) { + handler = interceptor.interceptHandler(action, executor, false, handler); + RequestHandlerRegistry reg = new RequestHandlerRegistry<>( + action, requestReader, taskManager, handler, executor, false, true); registerRequestHandler(reg); } @@ -729,7 +750,28 @@ public void registerRequestHandler(String act TransportRequestHandler handler) { handler = interceptor.interceptHandler(action, executor, forceExecution, handler); RequestHandlerRegistry reg = new RequestHandlerRegistry<>( - action, request, taskManager, handler, executor, forceExecution, canTripCircuitBreaker); + action, Streamable.newWriteableReader(request), taskManager, handler, executor, forceExecution, canTripCircuitBreaker); + 
registerRequestHandler(reg); + } + + /** + * Registers a new request handler + * + * @param action The action the request handler is associated with + * @param requestReader The request class that will be used to construct new instances for streaming + * @param executor The executor the request handling will be executed on + * @param forceExecution Force execution on the executor queue and never reject it + * @param canTripCircuitBreaker Check the request size and raise an exception in case the limit is breached. + * @param handler The handler itself that implements the request handling + */ + public void registerRequestHandler(String action, + String executor, boolean forceExecution, + boolean canTripCircuitBreaker, + Writeable.Reader requestReader, + TransportRequestHandler handler) { + handler = interceptor.interceptHandler(action, executor, forceExecution, handler); + RequestHandlerRegistry reg = new RequestHandlerRegistry<>( + action, requestReader, taskManager, handler, executor, forceExecution, canTripCircuitBreaker); registerRequestHandler(reg); } @@ -742,177 +784,171 @@ private void registerRequestHandler(RequestHa } } - protected RequestHandlerRegistry getRequestHandler(String action) { - return requestHandlers.get(action); + /** called by the {@link Transport} implementation once a request has been sent */ + void onRequestSent(DiscoveryNode node, long requestId, String action, TransportRequest request, + TransportRequestOptions options) { + if (traceEnabled() && shouldTraceAction(action)) { + traceRequestSent(node, requestId, action, options); + } } - protected class Adapter implements TransportServiceAdapter { + protected boolean traceEnabled() { + return tracerLog.isTraceEnabled(); + } - @Override - public void onRequestSent(DiscoveryNode node, long requestId, String action, TransportRequest request, - TransportRequestOptions options) { - if (traceEnabled() && shouldTraceAction(action)) { - traceRequestSent(node, requestId, action, options); - } + /** called by the {@link Transport} implementation once a response was sent to calling node */ + void onResponseSent(long requestId, String action, TransportResponse response, TransportResponseOptions options) { + if (traceEnabled() && shouldTraceAction(action)) { + traceResponseSent(requestId, action); } + } - protected boolean traceEnabled() { - return tracerLog.isTraceEnabled(); + /** called by the {@link Transport} implementation after an exception was sent as a response to an incoming request */ + void onResponseSent(long requestId, String action, Exception e) { + if (traceEnabled() && shouldTraceAction(action)) { + traceResponseSent(requestId, action, e); } + } - @Override - public void onResponseSent(long requestId, String action, TransportResponse response, TransportResponseOptions options) { - if (traceEnabled() && shouldTraceAction(action)) { - traceResponseSent(requestId, action); - } - } + protected void traceResponseSent(long requestId, String action, Exception e) { + tracerLog.trace( + (org.apache.logging.log4j.util.Supplier) + () -> new ParameterizedMessage("[{}][{}] sent error response", requestId, action), e); + } - @Override - public void onResponseSent(long requestId, String action, Exception e) { - if (traceEnabled() && shouldTraceAction(action)) { - traceResponseSent(requestId, action, e); - } + /** + * called by the {@link Transport} implementation when an incoming request arrives but before + * any parsing of it has happened (with the exception of the requestId and action) + */ + void onRequestReceived(long 
requestId, String action) { + try { + blockIncomingRequestsLatch.await(); + } catch (InterruptedException e) { + logger.trace("interrupted while waiting for incoming requests block to be removed"); } - - protected void traceResponseSent(long requestId, String action, Exception e) { - tracerLog.trace( - (org.apache.logging.log4j.util.Supplier) - () -> new ParameterizedMessage("[{}][{}] sent error response", requestId, action), e); + if (traceEnabled() && shouldTraceAction(action)) { + traceReceivedRequest(requestId, action); } + } - @Override - public void onRequestReceived(long requestId, String action) { - try { - blockIncomingRequestsLatch.await(); - } catch (InterruptedException e) { - logger.trace("interrupted while waiting for incoming requests block to be removed"); - } - if (traceEnabled() && shouldTraceAction(action)) { - traceReceivedRequest(requestId, action); - } - } + public RequestHandlerRegistry getRequestHandler(String action) { + return requestHandlers.get(action); + } - @Override - public RequestHandlerRegistry getRequestHandler(String action) { - return requestHandlers.get(action); - } + /** + * called by the {@link Transport} implementation when a response or an exception has been received for a previously + * sent request (before any processing or deserialization was done). Returns the appropriate response handler or null if not + * found. + */ + public TransportResponseHandler onResponseReceived(final long requestId) { + RequestHolder holder = clientHandlers.remove(requestId); - @Override - public TransportResponseHandler onResponseReceived(final long requestId) { - RequestHolder holder = clientHandlers.remove(requestId); + if (holder == null) { + checkForTimeout(requestId); + return null; + } + holder.cancelTimeout(); + if (traceEnabled() && shouldTraceAction(holder.action())) { + traceReceivedResponse(requestId, holder.connection().getNode(), holder.action()); + } + return holder.handler(); + } - if (holder == null) { - checkForTimeout(requestId); - return null; - } - holder.cancelTimeout(); - if (traceEnabled() && shouldTraceAction(holder.action())) { - traceReceivedResponse(requestId, holder.connection().getNode(), holder.action()); - } - return holder.handler(); - } - - protected void checkForTimeout(long requestId) { - // lets see if its in the timeout holder, but sync on mutex to make sure any ongoing timeout handling has finished - final DiscoveryNode sourceNode; - final String action; - assert clientHandlers.get(requestId) == null; - TimeoutInfoHolder timeoutInfoHolder = timeoutInfoHandlers.remove(requestId); - if (timeoutInfoHolder != null) { - long time = System.currentTimeMillis(); - logger.warn("Received response for a request that has timed out, sent [{}ms] ago, timed out [{}ms] ago, " + + private void checkForTimeout(long requestId) { + // lets see if its in the timeout holder, but sync on mutex to make sure any ongoing timeout handling has finished + final DiscoveryNode sourceNode; + final String action; + assert clientHandlers.get(requestId) == null; + TimeoutInfoHolder timeoutInfoHolder = timeoutInfoHandlers.remove(requestId); + if (timeoutInfoHolder != null) { + long time = System.currentTimeMillis(); + logger.warn("Received response for a request that has timed out, sent [{}ms] ago, timed out [{}ms] ago, " + "action [{}], node [{}], id [{}]", time - timeoutInfoHolder.sentTime(), time - timeoutInfoHolder.timeoutTime(), - timeoutInfoHolder.action(), timeoutInfoHolder.node(), requestId); - action = timeoutInfoHolder.action(); - sourceNode = 
timeoutInfoHolder.node(); - } else { - logger.warn("Transport response handler not found of id [{}]", requestId); - action = null; - sourceNode = null; - } - // call tracer out of lock - if (traceEnabled() == false) { - return; - } - if (action == null) { - assert sourceNode == null; - traceUnresolvedResponse(requestId); - } else if (shouldTraceAction(action)) { - traceReceivedResponse(requestId, sourceNode, action); - } + timeoutInfoHolder.action(), timeoutInfoHolder.node(), requestId); + action = timeoutInfoHolder.action(); + sourceNode = timeoutInfoHolder.node(); + } else { + logger.warn("Transport response handler not found of id [{}]", requestId); + action = null; + sourceNode = null; } - - @Override - public void onNodeConnected(final DiscoveryNode node) { - // capture listeners before spawning the background callback so the following pattern won't trigger a call - // connectToNode(); connection is completed successfully - // addConnectionListener(); this listener shouldn't be called - final Stream listenersToNotify = TransportService.this.connectionListeners.stream(); - threadPool.generic().execute(() -> listenersToNotify.forEach(listener -> listener.onNodeConnected(node))); + // call tracer out of lock + if (traceEnabled() == false) { + return; } - - @Override - public void onConnectionOpened(Transport.Connection connection) { - // capture listeners before spawning the background callback so the following pattern won't trigger a call - // connectToNode(); connection is completed successfully - // addConnectionListener(); this listener shouldn't be called - final Stream listenersToNotify = TransportService.this.connectionListeners.stream(); - threadPool.generic().execute(() -> listenersToNotify.forEach(listener -> listener.onConnectionOpened(connection))); + if (action == null) { + assert sourceNode == null; + traceUnresolvedResponse(requestId); + } else if (shouldTraceAction(action)) { + traceReceivedResponse(requestId, sourceNode, action); } + } - @Override - public void onNodeDisconnected(final DiscoveryNode node) { - try { - threadPool.generic().execute( () -> { - for (final TransportConnectionListener connectionListener : connectionListeners) { - connectionListener.onNodeDisconnected(node); - } - }); - } catch (EsRejectedExecutionException ex) { - logger.debug("Rejected execution on NodeDisconnected", ex); - } + void onNodeConnected(final DiscoveryNode node) { + // capture listeners before spawning the background callback so the following pattern won't trigger a call + // connectToNode(); connection is completed successfully + // addConnectionListener(); this listener shouldn't be called + final Stream listenersToNotify = TransportService.this.connectionListeners.stream(); + getExecutorService().execute(() -> listenersToNotify.forEach(listener -> listener.onNodeConnected(node))); + } + + void onConnectionOpened(Transport.Connection connection) { + // capture listeners before spawning the background callback so the following pattern won't trigger a call + // connectToNode(); connection is completed successfully + // addConnectionListener(); this listener shouldn't be called + final Stream listenersToNotify = TransportService.this.connectionListeners.stream(); + getExecutorService().execute(() -> listenersToNotify.forEach(listener -> listener.onConnectionOpened(connection))); + } + + public void onNodeDisconnected(final DiscoveryNode node) { + try { + getExecutorService().execute( () -> { + for (final TransportConnectionListener connectionListener : connectionListeners) { + 
connectionListener.onNodeDisconnected(node); + } + }); + } catch (EsRejectedExecutionException ex) { + logger.debug("Rejected execution on NodeDisconnected", ex); } + } - @Override - public void onConnectionClosed(Transport.Connection connection) { - try { - for (Map.Entry entry : clientHandlers.entrySet()) { - RequestHolder holder = entry.getValue(); - if (holder.connection().getCacheKey().equals(connection.getCacheKey())) { - final RequestHolder holderToNotify = clientHandlers.remove(entry.getKey()); - if (holderToNotify != null) { - // callback that an exception happened, but on a different thread since we don't - // want handlers to worry about stack overflows - threadPool.generic().execute(() -> holderToNotify.handler().handleException(new NodeDisconnectedException( - connection.getNode(), holderToNotify.action()))); - } + void onConnectionClosed(Transport.Connection connection) { + try { + for (Map.Entry entry : clientHandlers.entrySet()) { + RequestHolder holder = entry.getValue(); + if (holder.connection().getCacheKey().equals(connection.getCacheKey())) { + final RequestHolder holderToNotify = clientHandlers.remove(entry.getKey()); + if (holderToNotify != null) { + // callback that an exception happened, but on a different thread since we don't + // want handlers to worry about stack overflows + getExecutorService().execute(() -> holderToNotify.handler().handleException(new NodeDisconnectedException( + connection.getNode(), holderToNotify.action()))); } } - } catch (EsRejectedExecutionException ex) { - logger.debug("Rejected execution on onConnectionClosed", ex); } + } catch (EsRejectedExecutionException ex) { + logger.debug("Rejected execution on onConnectionClosed", ex); } + } - protected void traceReceivedRequest(long requestId, String action) { - tracerLog.trace("[{}][{}] received request", requestId, action); - } - - protected void traceResponseSent(long requestId, String action) { - tracerLog.trace("[{}][{}] sent response", requestId, action); - } + protected void traceReceivedRequest(long requestId, String action) { + tracerLog.trace("[{}][{}] received request", requestId, action); + } - protected void traceReceivedResponse(long requestId, DiscoveryNode sourceNode, String action) { - tracerLog.trace("[{}][{}] received response from [{}]", requestId, action, sourceNode); - } + protected void traceResponseSent(long requestId, String action) { + tracerLog.trace("[{}][{}] sent response", requestId, action); + } - protected void traceUnresolvedResponse(long requestId) { - tracerLog.trace("[{}] received response but can't resolve it to a request", requestId); - } + protected void traceReceivedResponse(long requestId, DiscoveryNode sourceNode, String action) { + tracerLog.trace("[{}][{}] received response from [{}]", requestId, action, sourceNode); + } - protected void traceRequestSent(DiscoveryNode node, long requestId, String action, TransportRequestOptions options) { - tracerLog.trace("[{}][{}] sent to [{}] (timeout: [{}])", requestId, action, node, options.timeout()); - } + protected void traceUnresolvedResponse(long requestId) { + tracerLog.trace("[{}] received response but can't resolve it to a request", requestId); + } + protected void traceRequestSent(DiscoveryNode node, long requestId, String action, TransportRequestOptions options) { + tracerLog.trace("[{}][{}] sent to [{}] (timeout: [{}])", requestId, action, node, options.timeout()); } class TimeoutHandler implements Runnable { @@ -1078,16 +1114,16 @@ static class DirectResponseChannel implements TransportChannel { 
final DiscoveryNode localNode; private final String action; private final long requestId; - final TransportServiceAdapter adapter; + final TransportService service; final ThreadPool threadPool; DirectResponseChannel(Logger logger, DiscoveryNode localNode, String action, long requestId, - TransportServiceAdapter adapter, ThreadPool threadPool) { + TransportService service, ThreadPool threadPool) { this.logger = logger; this.localNode = localNode; this.action = action; this.requestId = requestId; - this.adapter = adapter; + this.service = service; this.threadPool = threadPool; } @@ -1108,9 +1144,9 @@ public void sendResponse(TransportResponse response) throws IOException { @Override public void sendResponse(final TransportResponse response, TransportResponseOptions options) throws IOException { - adapter.onResponseSent(requestId, action, response, options); - final TransportResponseHandler handler = adapter.onResponseReceived(requestId); - // ignore if its null, the adapter logs it + service.onResponseSent(requestId, action, response, options); + final TransportResponseHandler handler = service.onResponseReceived(requestId); + // ignore if its null, the service logs it if (handler != null) { final String executor = handler.executor(); if (ThreadPool.Names.SAME.equals(executor)) { @@ -1132,9 +1168,9 @@ protected void processResponse(TransportResponseHandler handler, TransportRespon @Override public void sendResponse(Exception exception) throws IOException { - adapter.onResponseSent(requestId, action, exception); - final TransportResponseHandler handler = adapter.onResponseReceived(requestId); - // ignore if its null, the adapter logs it + service.onResponseSent(requestId, action, exception); + final TransportResponseHandler handler = service.onResponseReceived(requestId); + // ignore if its null, the service logs it if (handler != null) { final RemoteTransportException rtx = wrapInRemote(exception); final String executor = handler.executor(); diff --git a/core/src/main/java/org/elasticsearch/transport/TransportServiceAdapter.java b/core/src/main/java/org/elasticsearch/transport/TransportServiceAdapter.java deleted file mode 100644 index 24a71a99998a4..0000000000000 --- a/core/src/main/java/org/elasticsearch/transport/TransportServiceAdapter.java +++ /dev/null @@ -1,49 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.transport; - -import org.elasticsearch.cluster.node.DiscoveryNode; - -public interface TransportServiceAdapter extends TransportConnectionListener { - - /** called by the {@link Transport} implementation once a request has been sent */ - void onRequestSent(DiscoveryNode node, long requestId, String action, TransportRequest request, TransportRequestOptions options); - - /** called by the {@link Transport} implementation once a response was sent to calling node */ - void onResponseSent(long requestId, String action, TransportResponse response, TransportResponseOptions options); - - /** called by the {@link Transport} implementation after an exception was sent as a response to an incoming request */ - void onResponseSent(long requestId, String action, Exception e); - - /** - * called by the {@link Transport} implementation when a response or an exception has been received for a previously - * sent request (before any processing or deserialization was done). Returns the appropriate response handler or null if not - * found. - */ - TransportResponseHandler onResponseReceived(long requestId); - - /** - * called by the {@link Transport} implementation when an incoming request arrives but before - * any parsing of it has happened (with the exception of the requestId and action) - */ - void onRequestReceived(long requestId, String action); - - RequestHandlerRegistry getRequestHandler(String action); -} diff --git a/core/src/main/java/org/elasticsearch/transport/TransportStats.java b/core/src/main/java/org/elasticsearch/transport/TransportStats.java index 78e692939b774..e911d2e7aa771 100644 --- a/core/src/main/java/org/elasticsearch/transport/TransportStats.java +++ b/core/src/main/java/org/elasticsearch/transport/TransportStats.java @@ -23,12 +23,13 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.unit.ByteSizeValue; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; -public class TransportStats implements Writeable, ToXContent { +public class TransportStats implements Writeable, ToXContentFragment { private final long serverOpen; private final long rxCount; diff --git a/core/src/main/java/org/elasticsearch/transport/TransportStatus.java b/core/src/main/java/org/elasticsearch/transport/TransportStatus.java index 39472cbe3cd0d..2f5f6d6bd9bb5 100644 --- a/core/src/main/java/org/elasticsearch/transport/TransportStatus.java +++ b/core/src/main/java/org/elasticsearch/transport/TransportStatus.java @@ -19,7 +19,7 @@ package org.elasticsearch.transport; -final class TransportStatus { +public final class TransportStatus { private static final byte STATUS_REQRES = 1 << 0; private static final byte STATUS_ERROR = 1 << 1; diff --git a/core/src/main/java/org/elasticsearch/watcher/ResourceWatcherService.java b/core/src/main/java/org/elasticsearch/watcher/ResourceWatcherService.java index f897fdfa749e8..54b24a86cc385 100644 --- a/core/src/main/java/org/elasticsearch/watcher/ResourceWatcherService.java +++ b/core/src/main/java/org/elasticsearch/watcher/ResourceWatcherService.java @@ -25,7 +25,7 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.threadpool.ThreadPool; -import 
org.elasticsearch.threadpool.ThreadPool.Cancellable; +import org.elasticsearch.threadpool.Scheduler.Cancellable; import org.elasticsearch.threadpool.ThreadPool.Names; import java.io.IOException; diff --git a/core/src/main/java/org/joda/time/format/StrictISODateTimeFormat.java b/core/src/main/java/org/joda/time/format/StrictISODateTimeFormat.java index f2be8c9118068..4533b78b84add 100644 --- a/core/src/main/java/org/joda/time/format/StrictISODateTimeFormat.java +++ b/core/src/main/java/org/joda/time/format/StrictISODateTimeFormat.java @@ -32,7 +32,7 @@ * All methods have been marked with an "// ES change" commentary * * In case you compare this with the original ISODateTimeFormat, make sure you use a diff - * call, that ignores whitespaces/tabs/indendetations like 'diff -b' + * call, that ignores whitespaces/tabs/indentations like 'diff -b' */ /** * Factory that creates instances of DateTimeFormatter based on the ISO8601 standard. diff --git a/core/src/main/resources/org/elasticsearch/bootstrap/security.policy b/core/src/main/resources/org/elasticsearch/bootstrap/security.policy index 9fcdf4e65586f..7268e0f72380b 100644 --- a/core/src/main/resources/org/elasticsearch/bootstrap/security.policy +++ b/core/src/main/resources/org/elasticsearch/bootstrap/security.policy @@ -31,7 +31,7 @@ grant codeBase "${codebase.securesm-1.1.jar}" { //// Very special jar permissions: //// These are dangerous permissions that we don't want to grant to everything. -grant codeBase "${codebase.lucene-core-7.0.0-snapshot-a128fcb.jar}" { +grant codeBase "${codebase.lucene-core-7.1.0.jar}" { // needed to allow MMapDirectory's "unmap hack" (die unmap hack, die) // java 8 package permission java.lang.RuntimePermission "accessClassInPackage.sun.misc"; @@ -42,7 +42,7 @@ grant codeBase "${codebase.lucene-core-7.0.0-snapshot-a128fcb.jar}" { permission java.lang.RuntimePermission "accessDeclaredMembers"; }; -grant codeBase "${codebase.lucene-misc-7.0.0-snapshot-a128fcb.jar}" { +grant codeBase "${codebase.lucene-misc-7.1.0.jar}" { // needed to allow shard shrinking to use hard-links if possible via lucenes HardlinkCopyDirectoryWrapper permission java.nio.file.LinkPermission "hard"; }; @@ -129,4 +129,6 @@ grant { permission java.io.FilePermission "/sys/fs/cgroup/cpu/-", "read"; permission java.io.FilePermission "/sys/fs/cgroup/cpuacct", "read"; permission java.io.FilePermission "/sys/fs/cgroup/cpuacct/-", "read"; + permission java.io.FilePermission "/sys/fs/cgroup/memory", "read"; + permission java.io.FilePermission "/sys/fs/cgroup/memory/-", "read"; }; diff --git a/core/src/main/resources/org/elasticsearch/bootstrap/test-framework.policy b/core/src/main/resources/org/elasticsearch/bootstrap/test-framework.policy index 61456a0206739..453621b138e0a 100644 --- a/core/src/main/resources/org/elasticsearch/bootstrap/test-framework.policy +++ b/core/src/main/resources/org/elasticsearch/bootstrap/test-framework.policy @@ -33,7 +33,7 @@ grant codeBase "${codebase.securemock-1.2.jar}" { permission java.lang.reflect.ReflectPermission "suppressAccessChecks"; }; -grant codeBase "${codebase.lucene-test-framework-7.0.0-snapshot-a128fcb.jar}" { +grant codeBase "${codebase.lucene-test-framework-7.1.0.jar}" { // needed by RamUsageTester permission java.lang.reflect.ReflectPermission "suppressAccessChecks"; // needed for testing hardlinks in StoreRecoveryTests since we install MockFS @@ -63,20 +63,13 @@ grant codeBase "${codebase.mocksocket-1.2.jar}" { permission java.net.SocketPermission "*", "accept,connect"; }; -grant codeBase 
"${codebase.elasticsearch-rest-client-7.0.0-alpha1-SNAPSHOT.jar}" { +grant codeBase "${codebase.elasticsearch-rest-client}" { // rest makes socket connections for rest tests permission java.net.SocketPermission "*", "connect"; // rest client uses system properties which gets the default proxy permission java.net.NetPermission "getProxySelector"; }; -// IDEs need this because they do not play nicely with removing artifacts on projects, -// so we keep it in here for IDE test support -grant codeBase "${codebase.elasticsearch-rest-client-7.0.0-alpha1-SNAPSHOT-deps.jar}" { - // rest makes socket connections for rest tests - permission java.net.SocketPermission "*", "connect"; -}; - grant codeBase "${codebase.httpcore-nio-4.4.5.jar}" { // httpcore makes socket connections for rest tests permission java.net.SocketPermission "*", "connect"; diff --git a/core/src/test/java/org/apache/lucene/grouping/CollapsingTopDocsCollectorTests.java b/core/src/test/java/org/apache/lucene/grouping/CollapsingTopDocsCollectorTests.java index aef354a04951f..4352f16c05f2b 100644 --- a/core/src/test/java/org/apache/lucene/grouping/CollapsingTopDocsCollectorTests.java +++ b/core/src/test/java/org/apache/lucene/grouping/CollapsingTopDocsCollectorTests.java @@ -54,6 +54,8 @@ import java.util.List; import java.util.Set; +import static org.hamcrest.core.IsEqual.equalTo; + public class CollapsingTopDocsCollectorTests extends ESTestCase { private static class SegmentSearcher extends IndexSearcher { private final List ctx; @@ -82,12 +84,15 @@ interface CollapsingDocValuesProducer { } void assertSearchCollapse(CollapsingDocValuesProducer dvProducers, boolean numeric) throws IOException { - assertSearchCollapse(dvProducers, numeric, true); - assertSearchCollapse(dvProducers, numeric, false); + assertSearchCollapse(dvProducers, numeric, true, true); + assertSearchCollapse(dvProducers, numeric, true, false); + assertSearchCollapse(dvProducers, numeric, false, true); + assertSearchCollapse(dvProducers, numeric, false, false); } private void assertSearchCollapse(CollapsingDocValuesProducer dvProducers, - boolean numeric, boolean multivalued) throws IOException { + boolean numeric, boolean multivalued, + boolean trackMaxScores) throws IOException { final int numDocs = randomIntBetween(1000, 2000); int maxGroup = randomIntBetween(2, 500); final Directory dir = newDirectory(); @@ -118,14 +123,14 @@ private void assertSearchCollapse(CollapsingDocValuesProd final CollapsingTopDocsCollector collapsingCollector; if (numeric) { collapsingCollector = - CollapsingTopDocsCollector.createNumeric(collapseField.getField(), sort, expectedNumGroups, false); + CollapsingTopDocsCollector.createNumeric(collapseField.getField(), sort, expectedNumGroups, trackMaxScores); } else { collapsingCollector = - CollapsingTopDocsCollector.createKeyword(collapseField.getField(), sort, expectedNumGroups, false); + CollapsingTopDocsCollector.createKeyword(collapseField.getField(), sort, expectedNumGroups, trackMaxScores); } TopFieldCollector topFieldCollector = - TopFieldCollector.create(sort, totalHits, true, false, false); + TopFieldCollector.create(sort, totalHits, true, trackMaxScores, trackMaxScores); searcher.search(new MatchAllDocsQuery(), collapsingCollector); searcher.search(new MatchAllDocsQuery(), topFieldCollector); @@ -136,6 +141,11 @@ private void assertSearchCollapse(CollapsingDocValuesProd assertEquals(totalHits, collapseTopFieldDocs.totalHits); assertEquals(totalHits, topDocs.scoreDocs.length); assertEquals(totalHits, topDocs.totalHits); + if 
(trackMaxScores) { + assertThat(collapseTopFieldDocs.getMaxScore(), equalTo(topDocs.getMaxScore())); + } else { + assertThat(collapseTopFieldDocs.getMaxScore(), equalTo(Float.NaN)); + } Set seen = new HashSet<>(); // collapse field is the last sort @@ -186,14 +196,14 @@ private void assertSearchCollapse(CollapsingDocValuesProd } final CollapseTopFieldDocs[] shardHits = new CollapseTopFieldDocs[subSearchers.length]; - final Weight weight = searcher.createNormalizedWeight(new MatchAllDocsQuery(), false); + final Weight weight = searcher.createNormalizedWeight(new MatchAllDocsQuery(), true); for (int shardIDX = 0; shardIDX < subSearchers.length; shardIDX++) { final SegmentSearcher subSearcher = subSearchers[shardIDX]; final CollapsingTopDocsCollector c; if (numeric) { - c = CollapsingTopDocsCollector.createNumeric(collapseField.getField(), sort, expectedNumGroups, false); + c = CollapsingTopDocsCollector.createNumeric(collapseField.getField(), sort, expectedNumGroups, trackMaxScores); } else { - c = CollapsingTopDocsCollector.createKeyword(collapseField.getField(), sort, expectedNumGroups, false); + c = CollapsingTopDocsCollector.createKeyword(collapseField.getField(), sort, expectedNumGroups, trackMaxScores); } subSearcher.search(weight, c); shardHits[shardIDX] = c.getTopDocs(); diff --git a/core/src/test/java/org/apache/lucene/search/uhighlight/CustomUnifiedHighlighterTests.java b/core/src/test/java/org/apache/lucene/search/uhighlight/CustomUnifiedHighlighterTests.java index a2fe5d453ded3..a3e292d0ea23b 100644 --- a/core/src/test/java/org/apache/lucene/search/uhighlight/CustomUnifiedHighlighterTests.java +++ b/core/src/test/java/org/apache/lucene/search/uhighlight/CustomUnifiedHighlighterTests.java @@ -43,7 +43,6 @@ import org.apache.lucene.search.highlight.DefaultEncoder; import org.apache.lucene.store.Directory; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.lucene.all.AllTermQuery; import org.elasticsearch.common.lucene.search.MultiPhrasePrefixQuery; import org.elasticsearch.test.ESTestCase; @@ -148,18 +147,6 @@ public void testMultiPhrasePrefixQuery() throws Exception { BreakIterator.getSentenceInstance(Locale.ROOT), 0, outputs); } - public void testAllTermQuery() throws Exception { - final String[] inputs = { - "The quick brown fox." - }; - final String[] outputs = { - "The quick brown fox." - }; - AllTermQuery query = new AllTermQuery(new Term("text", "fox")); - assertHighlightOneDoc("text", inputs, new StandardAnalyzer(), query, Locale.ROOT, - BreakIterator.getSentenceInstance(Locale.ROOT), 0, outputs); - } - public void testCommonTermsQuery() throws Exception { final String[] inputs = { "The quick brown fox." 
diff --git a/core/src/test/java/org/elasticsearch/action/DocWriteResponseTests.java b/core/src/test/java/org/elasticsearch/action/DocWriteResponseTests.java index bb1f2d2a637f5..36a72178bbad6 100644 --- a/core/src/test/java/org/elasticsearch/action/DocWriteResponseTests.java +++ b/core/src/test/java/org/elasticsearch/action/DocWriteResponseTests.java @@ -25,7 +25,7 @@ import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.json.JsonXContent; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.test.ESTestCase; @@ -42,7 +42,7 @@ public void testGetLocation() { new ShardId("index", "uuid", 0), "type", "id", - SequenceNumbersService.UNASSIGNED_SEQ_NO, + SequenceNumbers.UNASSIGNED_SEQ_NO, 17, 0, Result.CREATED) {}; @@ -56,7 +56,7 @@ public void testGetLocationNonAscii() { new ShardId("index", "uuid", 0), "type", "❤", - SequenceNumbersService.UNASSIGNED_SEQ_NO, + SequenceNumbers.UNASSIGNED_SEQ_NO, 17, 0, Result.CREATED) {}; @@ -70,7 +70,7 @@ public void testGetLocationWithSpaces() { new ShardId("index", "uuid", 0), "type", "a b", - SequenceNumbersService.UNASSIGNED_SEQ_NO, + SequenceNumbers.UNASSIGNED_SEQ_NO, 17, 0, Result.CREATED) {}; @@ -88,7 +88,7 @@ public void testToXContentDoesntIncludeForcedRefreshUnlessForced() throws IOExce new ShardId("index", "uuid", 0), "type", "id", - SequenceNumbersService.UNASSIGNED_SEQ_NO, + SequenceNumbers.UNASSIGNED_SEQ_NO, 17, 0, Result.CREATED) { diff --git a/core/src/test/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainActionTests.java b/core/src/test/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainActionTests.java index f9785df6495b2..5732d5cc987ca 100644 --- a/core/src/test/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainActionTests.java +++ b/core/src/test/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainActionTests.java @@ -52,7 +52,7 @@ public void testInitializingOrRelocatingShardExplanation() throws Exception { ClusterState clusterState = ClusterStateCreationUtils.state("idx", randomBoolean(), shardRoutingState); ShardRouting shard = clusterState.getRoutingTable().index("idx").shard(0).primaryShard(); RoutingAllocation allocation = new RoutingAllocation(new AllocationDeciders(Settings.EMPTY, Collections.emptyList()), - clusterState.getRoutingNodes(), clusterState, null, System.nanoTime(), randomBoolean()); + clusterState.getRoutingNodes(), clusterState, null, System.nanoTime()); ClusterAllocationExplanation cae = TransportClusterAllocationExplainAction.explainShard(shard, allocation, null, randomBoolean(), new TestGatewayAllocator(), new ShardsAllocator() { @Override @@ -165,6 +165,6 @@ public void testFindShardAssignedToNode() { } private static RoutingAllocation routingAllocation(ClusterState clusterState) { - return new RoutingAllocation(NOOP_DECIDERS, clusterState.getRoutingNodes(), clusterState, null, System.nanoTime(), randomBoolean()); + return new RoutingAllocation(NOOP_DECIDERS, clusterState.getRoutingNodes(), clusterState, null, System.nanoTime()); } } diff --git a/core/src/test/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainRequestTests.java b/core/src/test/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainRequestTests.java 
index 0c47caa54665c..c549ab017c1ff 100644 --- a/core/src/test/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainRequestTests.java +++ b/core/src/test/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainRequestTests.java @@ -33,8 +33,7 @@ public void testSerialization() throws Exception { BytesStreamOutput output = new BytesStreamOutput(); request.writeTo(output); - ClusterAllocationExplainRequest actual = new ClusterAllocationExplainRequest(); - actual.readFrom(output.bytes().streamInput()); + ClusterAllocationExplainRequest actual = new ClusterAllocationExplainRequest(output.bytes().streamInput()); assertEquals(request.getIndex(), actual.getIndex()); assertEquals(request.getShard(), actual.getShard()); assertEquals(request.isPrimary(), actual.isPrimary()); diff --git a/core/src/test/java/org/elasticsearch/action/admin/cluster/node/stats/NodeStatsTests.java b/core/src/test/java/org/elasticsearch/action/admin/cluster/node/stats/NodeStatsTests.java index 9591e31b2b6dc..d9aed454732e1 100644 --- a/core/src/test/java/org/elasticsearch/action/admin/cluster/node/stats/NodeStatsTests.java +++ b/core/src/test/java/org/elasticsearch/action/admin/cluster/node/stats/NodeStatsTests.java @@ -24,6 +24,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.discovery.DiscoveryStats; import org.elasticsearch.discovery.zen.PendingClusterStateStats; +import org.elasticsearch.discovery.zen.PublishClusterStateStats; import org.elasticsearch.http.HttpStats; import org.elasticsearch.indices.breaker.AllCircuitBreakerStats; import org.elasticsearch.indices.breaker.CircuitBreakerStats; @@ -32,6 +33,8 @@ import org.elasticsearch.monitor.jvm.JvmStats; import org.elasticsearch.monitor.os.OsStats; import org.elasticsearch.monitor.process.ProcessStats; +import org.elasticsearch.node.AdaptiveSelectionStats; +import org.elasticsearch.node.ResponseCollectorService; import org.elasticsearch.script.ScriptStats; import org.elasticsearch.test.ESTestCase; import org.elasticsearch.test.VersionUtils; @@ -46,6 +49,7 @@ import java.util.List; import java.util.Map; +import static com.carrotsearch.randomizedtesting.RandomizedTest.randomLongBetween; import static java.util.Collections.emptyMap; import static java.util.Collections.emptySet; @@ -96,6 +100,12 @@ public void testSerialization() throws IOException { assertEquals( nodeStats.getOs().getCgroup().getCpuStat().getTimeThrottledNanos(), deserializedNodeStats.getOs().getCgroup().getCpuStat().getTimeThrottledNanos()); + assertEquals( + nodeStats.getOs().getCgroup().getMemoryLimitInBytes(), + deserializedNodeStats.getOs().getCgroup().getMemoryLimitInBytes()); + assertEquals( + nodeStats.getOs().getCgroup().getMemoryUsageInBytes(), + deserializedNodeStats.getOs().getCgroup().getMemoryUsageInBytes()); assertArrayEquals(nodeStats.getOs().getCpu().getLoadAverage(), deserializedNodeStats.getOs().getCpu().getLoadAverage(), 0); } @@ -272,6 +282,22 @@ public void testSerialization() throws IOException { assertEquals(stats.getIngestCount(), deserializedStats.getIngestCount()); } } + AdaptiveSelectionStats adaptiveStats = nodeStats.getAdaptiveSelectionStats(); + AdaptiveSelectionStats deserializedAdaptiveStats = deserializedNodeStats.getAdaptiveSelectionStats(); + if (adaptiveStats == null) { + assertNull(deserializedAdaptiveStats); + } else { + assertEquals(adaptiveStats.getOutgoingConnections(), deserializedAdaptiveStats.getOutgoingConnections()); + assertEquals(adaptiveStats.getRanks(), 
deserializedAdaptiveStats.getRanks()); + adaptiveStats.getComputedStats().forEach((k, v) -> { + ResponseCollectorService.ComputedNodeStats aStats = adaptiveStats.getComputedStats().get(k); + ResponseCollectorService.ComputedNodeStats bStats = deserializedAdaptiveStats.getComputedStats().get(k); + assertEquals(aStats.nodeId, bStats.nodeId); + assertEquals(aStats.queueSize, bStats.queueSize, 0.01); + assertEquals(aStats.serviceTime, bStats.serviceTime, 0.01); + assertEquals(aStats.responseTime, bStats.responseTime, 0.01); + }); + } } } } @@ -294,7 +320,10 @@ private static NodeStats createNodeStats() { randomAlphaOfLength(8), randomNonNegativeLong(), randomNonNegativeLong(), - new OsStats.Cgroup.CpuStat(randomNonNegativeLong(), randomNonNegativeLong(), randomNonNegativeLong()))); + new OsStats.Cgroup.CpuStat(randomNonNegativeLong(), randomNonNegativeLong(), randomNonNegativeLong()), + randomAlphaOfLength(8), + Long.toString(randomNonNegativeLong()), + Long.toString(randomNonNegativeLong()))); } ProcessStats processStats = frequently() ? new ProcessStats( @@ -381,9 +410,20 @@ private static NodeStats createNodeStats() { } allCircuitBreakerStats = new AllCircuitBreakerStats(circuitBreakerStatsArray); } - ScriptStats scriptStats = frequently() ? new ScriptStats(randomNonNegativeLong(), randomNonNegativeLong()) : null; - DiscoveryStats discoveryStats = frequently() ? new DiscoveryStats(randomBoolean() ? new PendingClusterStateStats(randomInt(), - randomInt(), randomInt()) : null) : null; + ScriptStats scriptStats = frequently() ? + new ScriptStats(randomNonNegativeLong(), randomNonNegativeLong(), randomNonNegativeLong()) : null; + DiscoveryStats discoveryStats = frequently() + ? new DiscoveryStats( + randomBoolean() + ? new PendingClusterStateStats(randomInt(), randomInt(), randomInt()) + : null, + randomBoolean() + ? 
new PublishClusterStateStats( + randomNonNegativeLong(), + randomNonNegativeLong(), + randomNonNegativeLong()) + : null) + : null; IngestStats ingestStats = null; if (frequently()) { IngestStats.Stats totalStats = new IngestStats.Stats(randomNonNegativeLong(), randomNonNegativeLong(), randomNonNegativeLong(), @@ -397,8 +437,31 @@ private static NodeStats createNodeStats() { } ingestStats = new IngestStats(totalStats, statsPerPipeline); } + AdaptiveSelectionStats adaptiveSelectionStats = null; + if (frequently()) { + int numNodes = randomIntBetween(0,10); + Map nodeConnections = new HashMap<>(); + Map nodeStats = new HashMap<>(); + for (int i = 0; i < numNodes; i++) { + String nodeId = randomAlphaOfLengthBetween(3, 10); + // add outgoing connection info + if (frequently()) { + nodeConnections.put(nodeId, randomLongBetween(0, 100)); + } + // add node calculations + if (frequently()) { + ResponseCollectorService.ComputedNodeStats stats = new ResponseCollectorService.ComputedNodeStats(nodeId, + randomIntBetween(1,10), randomIntBetween(0, 2000), + randomDoubleBetween(1.0, 10000000.0, true), + randomDoubleBetween(1.0, 10000000.0, true)); + nodeStats.put(nodeId, stats); + } + } + adaptiveSelectionStats = new AdaptiveSelectionStats(nodeConnections, nodeStats); + } //TODO NodeIndicesStats are not tested here, way too complicated to create, also they need to be migrated to Writeable yet - return new NodeStats(node, randomNonNegativeLong(), null, osStats, processStats, jvmStats, threadPoolStats, fsInfo, - transportStats, httpStats, allCircuitBreakerStats, scriptStats, discoveryStats, ingestStats); + return new NodeStats(node, randomNonNegativeLong(), null, osStats, processStats, jvmStats, threadPoolStats, + fsInfo, transportStats, httpStats, allCircuitBreakerStats, scriptStats, discoveryStats, + ingestStats, adaptiveSelectionStats); } } diff --git a/core/src/test/java/org/elasticsearch/action/admin/cluster/node/tasks/TaskManagerTestCase.java b/core/src/test/java/org/elasticsearch/action/admin/cluster/node/tasks/TaskManagerTestCase.java index de5c6690a34c0..8927fed567ed9 100644 --- a/core/src/test/java/org/elasticsearch/action/admin/cluster/node/tasks/TaskManagerTestCase.java +++ b/core/src/test/java/org/elasticsearch/action/admin/cluster/node/tasks/TaskManagerTestCase.java @@ -174,10 +174,10 @@ public TestNode(String name, ThreadPool threadPool, Settings settings) { return discoveryNode.get(); }; transportService = new TransportService(settings, - new MockTcpTransport(settings, threadPool, BigArrays.NON_RECYCLING_INSTANCE, new NoneCircuitBreakerService(), + new MockTcpTransport(settings, threadPool, BigArrays.NON_RECYCLING_INSTANCE, new NoneCircuitBreakerService(), new NamedWriteableRegistry(ClusterModule.getNamedWriteables()), new NetworkService(Collections.emptyList())), - threadPool, TransportService.NOOP_TRANSPORT_INTERCEPTOR, boundTransportAddressDiscoveryNodeFunction, null) { + threadPool, TransportService.NOOP_TRANSPORT_INTERCEPTOR, boundTransportAddressDiscoveryNodeFunction, null) { @Override protected TaskManager createTaskManager() { if (MockTaskManager.USE_MOCK_TASK_MANAGER_SETTING.get(settings)) { diff --git a/core/src/test/java/org/elasticsearch/action/admin/cluster/node/tasks/TestTaskPlugin.java b/core/src/test/java/org/elasticsearch/action/admin/cluster/node/tasks/TestTaskPlugin.java index f113f49a4158a..88674bfec74d8 100644 --- a/core/src/test/java/org/elasticsearch/action/admin/cluster/node/tasks/TestTaskPlugin.java +++ 
b/core/src/test/java/org/elasticsearch/action/admin/cluster/node/tasks/TestTaskPlugin.java @@ -45,7 +45,8 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.plugins.ActionPlugin; import org.elasticsearch.plugins.Plugin; @@ -108,7 +109,7 @@ public NodeResponse(DiscoveryNode node) { } } - public static class NodesResponse extends BaseNodesResponse implements ToXContent { + public static class NodesResponse extends BaseNodesResponse implements ToXContentFragment { NodesResponse() { diff --git a/core/src/test/java/org/elasticsearch/action/admin/cluster/reroute/ClusterRerouteTests.java b/core/src/test/java/org/elasticsearch/action/admin/cluster/reroute/ClusterRerouteTests.java index a9054879941fa..d3a0d12a85332 100644 --- a/core/src/test/java/org/elasticsearch/action/admin/cluster/reroute/ClusterRerouteTests.java +++ b/core/src/test/java/org/elasticsearch/action/admin/cluster/reroute/ClusterRerouteTests.java @@ -150,9 +150,9 @@ public void onFailure(Exception e) { assertNotSame(newState, clusterState); // dry-run=false clusterState = newState; routingTable = clusterState.routingTable(); - assertEquals(routingTable.index("idx").shards().size(), 1); - assertEquals(routingTable.index("idx").shard(0).shards().get(0).state(), INITIALIZING); - assertEquals(routingTable.index("idx").shard(0).shards().get(0).unassignedInfo().getNumFailedAllocations(), retries); + assertEquals(1, routingTable.index("idx").shards().size()); + assertEquals(INITIALIZING, routingTable.index("idx").shard(0).shards().get(0).state()); + assertEquals(0, routingTable.index("idx").shard(0).shards().get(0).unassignedInfo().getNumFailedAllocations()); } private ClusterState createInitialClusterState(AllocationService service) { diff --git a/core/src/test/java/org/elasticsearch/action/admin/cluster/settings/SettingsUpdaterTests.java b/core/src/test/java/org/elasticsearch/action/admin/cluster/settings/SettingsUpdaterTests.java index ad03d4b001db6..19dd64e6324ca 100644 --- a/core/src/test/java/org/elasticsearch/action/admin/cluster/settings/SettingsUpdaterTests.java +++ b/core/src/test/java/org/elasticsearch/action/admin/cluster/settings/SettingsUpdaterTests.java @@ -23,10 +23,15 @@ import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.routing.allocation.allocator.BalancedShardsAllocator; import org.elasticsearch.common.settings.ClusterSettings; +import org.elasticsearch.common.settings.Setting; +import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.test.ESTestCase; +import java.util.Set; import java.util.concurrent.atomic.AtomicReference; +import java.util.stream.Collectors; +import java.util.stream.Stream; public class SettingsUpdaterTests extends ESTestCase { @@ -132,4 +137,30 @@ public void testClusterBlock() { assertEquals(clusterState.blocks().global().size(), 0); } + + public void testDeprecationLogging() { + Setting deprecatedSetting = + Setting.simpleString("deprecated.setting", Property.Dynamic, Property.NodeScope, Property.Deprecated); + final Settings settings = Settings.builder().put("deprecated.setting", "foo").build(); + final Set> settingsSet = 
+ Stream.concat(ClusterSettings.BUILT_IN_CLUSTER_SETTINGS.stream(), Stream.of(deprecatedSetting)).collect(Collectors.toSet()); + final ClusterSettings clusterSettings = new ClusterSettings(settings, settingsSet); + clusterSettings.addSettingsUpdateConsumer(deprecatedSetting, s -> {}); + final SettingsUpdater settingsUpdater = new SettingsUpdater(clusterSettings); + final ClusterState clusterState = + ClusterState.builder(new ClusterName("foo")).metaData(MetaData.builder().persistentSettings(settings).build()).build(); + + final Settings toApplyDebug = Settings.builder().put("logger.org.elasticsearch", "debug").build(); + final ClusterState afterDebug = settingsUpdater.updateSettings(clusterState, toApplyDebug, Settings.EMPTY); + assertSettingDeprecationsAndWarnings(new Setting[] { deprecatedSetting }); + + final Settings toApplyUnset = Settings.builder().putNull("logger.org.elasticsearch").build(); + final ClusterState afterUnset = settingsUpdater.updateSettings(afterDebug, toApplyUnset, Settings.EMPTY); + assertSettingDeprecationsAndWarnings(new Setting[] { deprecatedSetting }); + + // we also check that if no settings are changed, deprecation logging still occurs + settingsUpdater.updateSettings(afterUnset, toApplyUnset, Settings.EMPTY); + assertSettingDeprecationsAndWarnings(new Setting[] { deprecatedSetting }); + } + } diff --git a/core/src/test/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsRequestTests.java b/core/src/test/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsRequestTests.java index e21635596b9e7..d8b9e2f5b5e03 100644 --- a/core/src/test/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsRequestTests.java +++ b/core/src/test/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsRequestTests.java @@ -60,8 +60,7 @@ public void testSerialization() throws Exception { request.writeTo(out); try (StreamInput in = out.bytes().streamInput()) { in.setVersion(version); - ClusterSearchShardsRequest deserialized = new ClusterSearchShardsRequest(); - deserialized.readFrom(in); + ClusterSearchShardsRequest deserialized = new ClusterSearchShardsRequest(in); assertArrayEquals(request.indices(), deserialized.indices()); assertSame(request.indicesOptions(), deserialized.indicesOptions()); assertEquals(request.routing(), deserialized.routing()); diff --git a/core/src/test/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotStatusTests.java b/core/src/test/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotStatusTests.java new file mode 100644 index 0000000000000..481bf5579e2ab --- /dev/null +++ b/core/src/test/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotStatusTests.java @@ -0,0 +1,135 @@ +package org.elasticsearch.action.admin.cluster.snapshots.status; + +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. 
See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +import org.elasticsearch.cluster.SnapshotsInProgress; +import org.elasticsearch.common.UUIDs; +import org.elasticsearch.index.shard.ShardId; +import org.elasticsearch.snapshots.Snapshot; +import org.elasticsearch.snapshots.SnapshotId; +import org.elasticsearch.test.ESTestCase; + +import java.util.ArrayList; +import java.util.List; + + +public class SnapshotStatusTests extends ESTestCase { + + + public void testToString() throws Exception { + SnapshotsInProgress.State state = randomFrom(SnapshotsInProgress.State.values()); + String uuid = UUIDs.randomBase64UUID(); + SnapshotId id = new SnapshotId("test-snap", uuid); + Snapshot snapshot = new Snapshot("test-repo", id); + + String indexName = randomAlphaOfLengthBetween(3, 50); + int shardId = randomInt(); + ShardId testShardId = ShardId.fromString("[" + indexName + "][" + shardId + "]"); + SnapshotIndexShardStage shardStage = randomFrom(SnapshotIndexShardStage.values()); + SnapshotIndexShardStatus snapshotIndexShardStatus = new SnapshotIndexShardStatus(testShardId, shardStage); + List snapshotIndexShardStatuses = new ArrayList<>(); + snapshotIndexShardStatuses.add(snapshotIndexShardStatus); + SnapshotStatus status = new SnapshotStatus(snapshot, state, snapshotIndexShardStatuses); + + int initializingShards = 0; + int startedShards = 0; + int finalizingShards = 0; + int doneShards = 0; + int failedShards = 0; + int totalShards = 1; + + switch (shardStage) { + case INIT: + initializingShards++; + break; + case STARTED: + startedShards++; + break; + case FINALIZE: + finalizingShards++; + break; + case DONE: + doneShards++; + break; + case FAILURE: + failedShards++; + break; + default: + break; + } + + String expected = "{\n" + + " \"snapshot\" : \"test-snap\",\n" + + " \"repository\" : \"test-repo\",\n" + + " \"uuid\" : \"" + uuid + "\",\n" + + " \"state\" : \"" + state.toString() + "\",\n" + + " \"shards_stats\" : {\n" + + " \"initializing\" : " + initializingShards + ",\n" + + " \"started\" : " + startedShards + ",\n" + + " \"finalizing\" : " + finalizingShards + ",\n" + + " \"done\" : " + doneShards + ",\n" + + " \"failed\" : " + failedShards + ",\n" + + " \"total\" : " + totalShards + "\n" + + " },\n" + + " \"stats\" : {\n" + + " \"number_of_files\" : 0,\n" + + " \"processed_files\" : 0,\n" + + " \"total_size_in_bytes\" : 0,\n" + + " \"processed_size_in_bytes\" : 0,\n" + + " \"start_time_in_millis\" : 0,\n" + + " \"time_in_millis\" : 0\n" + + " },\n" + + " \"indices\" : {\n" + + " \"" + indexName + "\" : {\n" + + " \"shards_stats\" : {\n" + + " \"initializing\" : " + initializingShards + ",\n" + + " \"started\" : " + startedShards + ",\n" + + " \"finalizing\" : " + finalizingShards + ",\n" + + " \"done\" : " + doneShards + ",\n" + + " \"failed\" : " + failedShards + ",\n" + + " \"total\" : " + totalShards + "\n" + + " },\n" + + " \"stats\" : {\n" + + " \"number_of_files\" : 0,\n" + + " \"processed_files\" : 0,\n" + + " \"total_size_in_bytes\" : 0,\n" + + " \"processed_size_in_bytes\" : 0,\n" + + " \"start_time_in_millis\" : 0,\n" + + " \"time_in_millis\" : 0\n" + + " },\n" + + " \"shards\" : {\n" + + " \"" + shardId + "\" : {\n" + + " \"stage\" : \"" + shardStage.toString() + "\",\n" + + " \"stats\" : {\n" + + " \"number_of_files\" : 0,\n" + + " \"processed_files\" : 0,\n" + + " \"total_size_in_bytes\" : 0,\n" + + " \"processed_size_in_bytes\" : 0,\n" + + " \"start_time_in_millis\" : 0,\n" + + " \"time_in_millis\" : 0\n" + + " 
}\n" + + " }\n" + + " }\n" + + " }\n" + + " }\n" + + "}"; + assertEquals(expected, status.toString()); + } +} diff --git a/core/src/test/java/org/elasticsearch/action/admin/cluster/state/ClusterStateRequestTests.java b/core/src/test/java/org/elasticsearch/action/admin/cluster/state/ClusterStateRequestTests.java index b515829b72ac5..8c77ccfef90ce 100644 --- a/core/src/test/java/org/elasticsearch/action/admin/cluster/state/ClusterStateRequestTests.java +++ b/core/src/test/java/org/elasticsearch/action/admin/cluster/state/ClusterStateRequestTests.java @@ -47,8 +47,7 @@ public void testSerialization() throws Exception { StreamInput streamInput = output.bytes().streamInput(); streamInput.setVersion(testVersion); - ClusterStateRequest deserializedCSRequest = new ClusterStateRequest(); - deserializedCSRequest.readFrom(streamInput); + ClusterStateRequest deserializedCSRequest = new ClusterStateRequest(streamInput); assertThat(deserializedCSRequest.routingTable(), equalTo(clusterStateRequest.routingTable())); assertThat(deserializedCSRequest.metaData(), equalTo(clusterStateRequest.metaData())); diff --git a/core/src/test/java/org/elasticsearch/action/admin/cluster/stats/ClusterStatsIT.java b/core/src/test/java/org/elasticsearch/action/admin/cluster/stats/ClusterStatsIT.java index 26a882f00456f..4bb6a5f3a8c41 100644 --- a/core/src/test/java/org/elasticsearch/action/admin/cluster/stats/ClusterStatsIT.java +++ b/core/src/test/java/org/elasticsearch/action/admin/cluster/stats/ClusterStatsIT.java @@ -122,7 +122,7 @@ public void testIndicesShardStats() throws ExecutionException, InterruptedExcept ClusterStatsResponse response = client().admin().cluster().prepareClusterStats().get(); assertThat(response.getStatus(), Matchers.equalTo(ClusterHealthStatus.GREEN)); - prepareCreate("test1").setSettings("number_of_shards", 2, "number_of_replicas", 1).get(); + prepareCreate("test1").setSettings(Settings.builder().put("number_of_shards", 2).put("number_of_replicas", 1)).get(); response = client().admin().cluster().prepareClusterStats().get(); assertThat(response.getStatus(), Matchers.equalTo(ClusterHealthStatus.YELLOW)); @@ -140,7 +140,7 @@ public void testIndicesShardStats() throws ExecutionException, InterruptedExcept assertThat(response.indicesStats.getDocs().getCount(), Matchers.equalTo(1L)); assertShardStats(response.getIndicesStats().getShards(), 1, 4, 2, 1.0); - prepareCreate("test2").setSettings("number_of_shards", 3, "number_of_replicas", 0).get(); + prepareCreate("test2").setSettings(Settings.builder().put("number_of_shards", 3).put("number_of_replicas", 0)).get(); ensureGreen(); response = client().admin().cluster().prepareClusterStats().get(); assertThat(response.getStatus(), Matchers.equalTo(ClusterHealthStatus.GREEN)); diff --git a/core/src/test/java/org/elasticsearch/action/admin/cluster/storedscripts/GetStoredScriptRequestTests.java b/core/src/test/java/org/elasticsearch/action/admin/cluster/storedscripts/GetStoredScriptRequestTests.java index bd12d58b1cbcd..756b7f1e5f688 100644 --- a/core/src/test/java/org/elasticsearch/action/admin/cluster/storedscripts/GetStoredScriptRequestTests.java +++ b/core/src/test/java/org/elasticsearch/action/admin/cluster/storedscripts/GetStoredScriptRequestTests.java @@ -38,8 +38,7 @@ public void testGetIndexedScriptRequestSerialization() throws IOException { StreamInput in = out.bytes().streamInput(); in.setVersion(out.getVersion()); - GetStoredScriptRequest request2 = new GetStoredScriptRequest(); - request2.readFrom(in); + GetStoredScriptRequest request2 = new 
GetStoredScriptRequest(in); assertThat(request2.id(), equalTo(request.id())); } diff --git a/core/src/test/java/org/elasticsearch/action/admin/indices/TransportAnalyzeActionTests.java b/core/src/test/java/org/elasticsearch/action/admin/indices/TransportAnalyzeActionTests.java index 54c7ba3aab084..90857da0be089 100644 --- a/core/src/test/java/org/elasticsearch/action/admin/indices/TransportAnalyzeActionTests.java +++ b/core/src/test/java/org/elasticsearch/action/admin/indices/TransportAnalyzeActionTests.java @@ -28,6 +28,7 @@ import org.elasticsearch.common.UUIDs; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; +import org.elasticsearch.env.TestEnvironment; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.analysis.AbstractCharFilterFactory; import org.elasticsearch.index.analysis.AbstractTokenFilterFactory; @@ -36,12 +37,10 @@ import org.elasticsearch.index.analysis.IndexAnalyzers; import org.elasticsearch.index.analysis.PreConfiguredCharFilter; import org.elasticsearch.index.analysis.TokenFilterFactory; -import org.elasticsearch.index.mapper.AllFieldMapper; import org.elasticsearch.indices.analysis.AnalysisModule; import org.elasticsearch.indices.analysis.AnalysisModule.AnalysisProvider; import org.elasticsearch.indices.analysis.AnalysisModuleTests.AppendCharFilter; import org.elasticsearch.plugins.AnalysisPlugin; -import static org.elasticsearch.plugins.AnalysisPlugin.requriesAnalysisSettings; import org.elasticsearch.test.ESTestCase; import org.elasticsearch.test.IndexSettingsModule; @@ -74,9 +73,9 @@ public void setUp() throws Exception { .put("index.analysis.analyzer.custom_analyzer.tokenizer", "standard") .put("index.analysis.analyzer.custom_analyzer.filter", "mock") .put("index.analysis.normalizer.my_normalizer.type", "custom") - .putArray("index.analysis.normalizer.my_normalizer.filter", "lowercase").build(); + .putList("index.analysis.normalizer.my_normalizer.filter", "lowercase").build(); IndexSettings idxSettings = IndexSettingsModule.newIndexSettings("index", indexSettings); - environment = new Environment(settings); + environment = TestEnvironment.newEnvironment(settings); AnalysisPlugin plugin = new AnalysisPlugin() { class MockFactory extends AbstractTokenFilterFactory { MockFactory(IndexSettings indexSettings, Environment env, String name, Settings settings) { @@ -127,7 +126,7 @@ public void testNoIndexAnalyzers() throws IOException { AnalyzeRequest request = new AnalyzeRequest(); request.text("the quick brown fox"); request.analyzer("standard"); - AnalyzeResponse analyze = TransportAnalyzeAction.analyze(request, AllFieldMapper.NAME, null, null, registry, environment); + AnalyzeResponse analyze = TransportAnalyzeAction.analyze(request, "text", null, null, registry, environment); List tokens = analyze.getTokens(); assertEquals(4, tokens.size()); @@ -136,7 +135,7 @@ public void testNoIndexAnalyzers() throws IOException { request.text("the qu1ck brown fox"); request.tokenizer("standard"); request.addTokenFilter("mock"); - analyze = TransportAnalyzeAction.analyze(request, AllFieldMapper.NAME, null, randomBoolean() ? indexAnalyzers : null, registry, environment); + analyze = TransportAnalyzeAction.analyze(request, "text", null, randomBoolean() ? 
indexAnalyzers : null, registry, environment); tokens = analyze.getTokens(); assertEquals(3, tokens.size()); assertEquals("qu1ck", tokens.get(0).getTerm()); @@ -148,7 +147,7 @@ public void testNoIndexAnalyzers() throws IOException { request.text("the qu1ck brown fox"); request.tokenizer("standard"); request.addCharFilter("append_foo"); - analyze = TransportAnalyzeAction.analyze(request, AllFieldMapper.NAME, null, randomBoolean() ? indexAnalyzers : null, registry, environment); + analyze = TransportAnalyzeAction.analyze(request, "text", null, randomBoolean() ? indexAnalyzers : null, registry, environment); tokens = analyze.getTokens(); assertEquals(4, tokens.size()); assertEquals("the", tokens.get(0).getTerm()); @@ -162,7 +161,7 @@ public void testNoIndexAnalyzers() throws IOException { request.tokenizer("standard"); request.addCharFilter("append"); request.text("the qu1ck brown fox"); - analyze = TransportAnalyzeAction.analyze(request, AllFieldMapper.NAME, null, randomBoolean() ? indexAnalyzers : null, registry, environment); + analyze = TransportAnalyzeAction.analyze(request, "text", null, randomBoolean() ? indexAnalyzers : null, registry, environment); tokens = analyze.getTokens(); assertEquals(4, tokens.size()); assertEquals("the", tokens.get(0).getTerm()); @@ -175,7 +174,7 @@ public void testFillsAttributes() throws IOException { AnalyzeRequest request = new AnalyzeRequest(); request.analyzer("standard"); request.text("the 1 brown fox"); - AnalyzeResponse analyze = TransportAnalyzeAction.analyze(request, AllFieldMapper.NAME, null, null, registry, environment); + AnalyzeResponse analyze = TransportAnalyzeAction.analyze(request, "text", null, null, registry, environment); List tokens = analyze.getTokens(); assertEquals(4, tokens.size()); assertEquals("the", tokens.get(0).getTerm()); @@ -207,7 +206,7 @@ public void testWithIndexAnalyzers() throws IOException { AnalyzeRequest request = new AnalyzeRequest(); request.text("the quick brown fox"); request.analyzer("custom_analyzer"); - AnalyzeResponse analyze = TransportAnalyzeAction.analyze(request, AllFieldMapper.NAME, null, indexAnalyzers, registry, environment); + AnalyzeResponse analyze = TransportAnalyzeAction.analyze(request, "text", null, indexAnalyzers, registry, environment); List tokens = analyze.getTokens(); assertEquals(3, tokens.size()); assertEquals("quick", tokens.get(0).getTerm()); @@ -215,7 +214,7 @@ public void testWithIndexAnalyzers() throws IOException { assertEquals("fox", tokens.get(2).getTerm()); request.analyzer("standard"); - analyze = TransportAnalyzeAction.analyze(request, AllFieldMapper.NAME, null, indexAnalyzers, registry, environment); + analyze = TransportAnalyzeAction.analyze(request, "text", null, indexAnalyzers, registry, environment); tokens = analyze.getTokens(); assertEquals(4, tokens.size()); assertEquals("the", tokens.get(0).getTerm()); @@ -226,7 +225,7 @@ public void testWithIndexAnalyzers() throws IOException { // Switch the analyzer out for just a tokenizer request.analyzer(null); request.tokenizer("standard"); - analyze = TransportAnalyzeAction.analyze(request, AllFieldMapper.NAME, null, indexAnalyzers, registry, environment); + analyze = TransportAnalyzeAction.analyze(request, "text", null, indexAnalyzers, registry, environment); tokens = analyze.getTokens(); assertEquals(4, tokens.size()); assertEquals("the", tokens.get(0).getTerm()); @@ -236,7 +235,7 @@ public void testWithIndexAnalyzers() throws IOException { // Now try applying our token filter request.addTokenFilter("mock"); - analyze = 
TransportAnalyzeAction.analyze(request, AllFieldMapper.NAME, null, indexAnalyzers, registry, environment); + analyze = TransportAnalyzeAction.analyze(request, "text", null, indexAnalyzers, registry, environment); tokens = analyze.getTokens(); assertEquals(3, tokens.size()); assertEquals("quick", tokens.get(0).getTerm()); @@ -250,7 +249,7 @@ public void testGetIndexAnalyserWithoutIndexAnalyzers() throws IOException { new AnalyzeRequest() .analyzer("custom_analyzer") .text("the qu1ck brown fox-dog"), - AllFieldMapper.NAME, null, null, registry, environment)); + "text", null, null, registry, environment)); assertEquals(e.getMessage(), "failed to find global analyzer [custom_analyzer]"); } @@ -261,7 +260,7 @@ public void testUnknown() throws IOException { new AnalyzeRequest() .analyzer("foobar") .text("the qu1ck brown fox"), - AllFieldMapper.NAME, null, notGlobal ? indexAnalyzers : null, registry, environment)); + "text", null, notGlobal ? indexAnalyzers : null, registry, environment)); if (notGlobal) { assertEquals(e.getMessage(), "failed to find analyzer [foobar]"); } else { @@ -273,7 +272,7 @@ public void testUnknown() throws IOException { new AnalyzeRequest() .tokenizer("foobar") .text("the qu1ck brown fox"), - AllFieldMapper.NAME, null, notGlobal ? indexAnalyzers : null, registry, environment)); + "text", null, notGlobal ? indexAnalyzers : null, registry, environment)); if (notGlobal) { assertEquals(e.getMessage(), "failed to find tokenizer under [foobar]"); } else { @@ -286,7 +285,7 @@ public void testUnknown() throws IOException { .tokenizer("whitespace") .addTokenFilter("foobar") .text("the qu1ck brown fox"), - AllFieldMapper.NAME, null, notGlobal ? indexAnalyzers : null, registry, environment)); + "text", null, notGlobal ? indexAnalyzers : null, registry, environment)); if (notGlobal) { assertEquals(e.getMessage(), "failed to find token filter under [foobar]"); } else { @@ -300,7 +299,7 @@ public void testUnknown() throws IOException { .addTokenFilter("lowercase") .addCharFilter("foobar") .text("the qu1ck brown fox"), - AllFieldMapper.NAME, null, notGlobal ? indexAnalyzers : null, registry, environment)); + "text", null, notGlobal ? 
indexAnalyzers : null, registry, environment)); if (notGlobal) { assertEquals(e.getMessage(), "failed to find char filter under [foobar]"); } else { @@ -312,7 +311,7 @@ public void testUnknown() throws IOException { new AnalyzeRequest() .normalizer("foobar") .text("the qu1ck brown fox"), - AllFieldMapper.NAME, null, indexAnalyzers, registry, environment)); + "text", null, indexAnalyzers, registry, environment)); assertEquals(e.getMessage(), "failed to find normalizer under [foobar]"); } @@ -321,7 +320,7 @@ public void testNonPreBuildTokenFilter() throws IOException { request.tokenizer("whitespace"); request.addTokenFilter("stop"); // stop token filter is not prebuilt in AnalysisModule#setupPreConfiguredTokenFilters() request.text("the quick brown fox"); - AnalyzeResponse analyze = TransportAnalyzeAction.analyze(request, AllFieldMapper.NAME, null, indexAnalyzers, registry, environment); + AnalyzeResponse analyze = TransportAnalyzeAction.analyze(request, "text", null, indexAnalyzers, registry, environment); List tokens = analyze.getTokens(); assertEquals(3, tokens.size()); assertEquals("quick", tokens.get(0).getTerm()); @@ -333,7 +332,7 @@ public void testNormalizerWithIndex() throws IOException { AnalyzeRequest request = new AnalyzeRequest("index"); request.normalizer("my_normalizer"); request.text("ABc"); - AnalyzeResponse analyze = TransportAnalyzeAction.analyze(request, AllFieldMapper.NAME, null, indexAnalyzers, registry, environment); + AnalyzeResponse analyze = TransportAnalyzeAction.analyze(request, "text", null, indexAnalyzers, registry, environment); List tokens = analyze.getTokens(); assertEquals(1, tokens.size()); diff --git a/core/src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequestBuilderTests.java b/core/src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequestBuilderTests.java index 1ae60f42eb63d..2dd8a8343c501 100644 --- a/core/src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequestBuilderTests.java +++ b/core/src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequestBuilderTests.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.admin.indices.create; +import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; @@ -31,6 +32,7 @@ import java.io.ByteArrayOutputStream; import java.io.IOException; import java.util.HashMap; +import java.util.Locale; import java.util.Map; public class CreateIndexRequestBuilderTests extends ESTestCase { @@ -58,16 +60,23 @@ public void tearDown() throws Exception { */ public void testSetSource() throws IOException { CreateIndexRequestBuilder builder = new CreateIndexRequestBuilder(this.testClient, CreateIndexAction.INSTANCE); - builder.setSource("{\""+KEY+"\" : \""+VALUE+"\"}", XContentType.JSON); + + ElasticsearchParseException e = expectThrows(ElasticsearchParseException.class, + () -> {builder.setSource("{\""+KEY+"\" : \""+VALUE+"\"}", XContentType.JSON);}); + assertEquals(String.format(Locale.ROOT, "unknown key [%s] for create index", KEY), e.getMessage()); + + builder.setSource("{\"settings\" : {\""+KEY+"\" : \""+VALUE+"\"}}", XContentType.JSON); assertEquals(VALUE, builder.request().settings().get(KEY)); - XContentBuilder xContent = XContentFactory.jsonBuilder().startObject().field(KEY, VALUE).endObject(); + XContentBuilder xContent = XContentFactory.jsonBuilder().startObject() + 
.startObject("settings").field(KEY, VALUE).endObject().endObject(); xContent.close(); builder.setSource(xContent); assertEquals(VALUE, builder.request().settings().get(KEY)); ByteArrayOutputStream docOut = new ByteArrayOutputStream(); - XContentBuilder doc = XContentFactory.jsonBuilder(docOut).startObject().field(KEY, VALUE).endObject(); + XContentBuilder doc = XContentFactory.jsonBuilder(docOut).startObject() + .startObject("settings").field(KEY, VALUE).endObject().endObject(); doc.close(); builder.setSource(docOut.toByteArray(), XContentType.JSON); assertEquals(VALUE, builder.request().settings().get(KEY)); @@ -83,7 +92,7 @@ public void testSetSource() throws IOException { */ public void testSetSettings() throws IOException { CreateIndexRequestBuilder builder = new CreateIndexRequestBuilder(this.testClient, CreateIndexAction.INSTANCE); - builder.setSettings(KEY, VALUE); + builder.setSettings(Settings.builder().put(KEY, VALUE)); assertEquals(VALUE, builder.request().settings().get(KEY)); builder.setSettings("{\""+KEY+"\" : \""+VALUE+"\"}", XContentType.JSON); diff --git a/core/src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequestTests.java b/core/src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequestTests.java index 74a87497181e5..4acdfd636bf61 100644 --- a/core/src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequestTests.java +++ b/core/src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequestTests.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.admin.indices.create; +import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.io.stream.BytesStreamOutput; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.xcontent.XContentType; @@ -45,4 +46,27 @@ public void testSerialization() throws IOException { } } } + + public void testTopLevelKeys() throws IOException { + String createIndex = + "{\n" + + " \"FOO_SHOULD_BE_ILLEGAL_HERE\": {\n" + + " \"BAR_IS_THE_SAME\": 42\n" + + " },\n" + + " \"mappings\": {\n" + + " \"test\": {\n" + + " \"properties\": {\n" + + " \"field1\": {\n" + + " \"type\": \"text\"\n" + + " }\n" + + " }\n" + + " }\n" + + " }\n" + + "}"; + + CreateIndexRequest request = new CreateIndexRequest(); + ElasticsearchParseException e = expectThrows(ElasticsearchParseException.class, + () -> {request.source(createIndex, XContentType.JSON);}); + assertEquals("unknown key [FOO_SHOULD_BE_ILLEGAL_HERE] for create index", e.getMessage()); + } } diff --git a/core/src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexResponseTests.java b/core/src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexResponseTests.java index 588659335e499..b0fdae9ca62b9 100644 --- a/core/src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexResponseTests.java +++ b/core/src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexResponseTests.java @@ -20,12 +20,19 @@ package org.elasticsearch.action.admin.indices.create; import org.elasticsearch.Version; +import org.elasticsearch.common.Strings; +import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.BytesStreamOutput; import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.test.ESTestCase; import 
java.io.IOException; +import static org.elasticsearch.test.XContentTestUtils.insertRandomFields; + public class CreateIndexResponseTests extends ESTestCase { public void testSerialization() throws IOException { @@ -62,4 +69,59 @@ public void testSerializationWithOldVersion() throws IOException { } } } + + public void testToXContent() { + CreateIndexResponse response = new CreateIndexResponse(true, false, "index_name"); + String output = Strings.toString(response); + assertEquals("{\"acknowledged\":true,\"shards_acknowledged\":false,\"index\":\"index_name\"}", output); + } + + public void testToAndFromXContent() throws IOException { + doFromXContentTestWithRandomFields(false); + } + + /** + * This test adds random fields and objects to the xContent rendered out to + * ensure we can parse it back to be forward compatible with additions to + * the xContent + */ + public void testFromXContentWithRandomFields() throws IOException { + doFromXContentTestWithRandomFields(true); + } + + private void doFromXContentTestWithRandomFields(boolean addRandomFields) throws IOException { + + final CreateIndexResponse createIndexResponse = createTestItem(); + + boolean humanReadable = randomBoolean(); + final XContentType xContentType = randomFrom(XContentType.values()); + BytesReference originalBytes = toShuffledXContent(createIndexResponse, xContentType, ToXContent.EMPTY_PARAMS, humanReadable); + + BytesReference mutated; + if (addRandomFields) { + mutated = insertRandomFields(xContentType, originalBytes, null, random()); + } else { + mutated = originalBytes; + } + CreateIndexResponse parsedCreateIndexResponse; + try (XContentParser parser = createParser(xContentType.xContent(), mutated)) { + parsedCreateIndexResponse = CreateIndexResponse.fromXContent(parser); + assertNull(parser.nextToken()); + } + + assertEquals(createIndexResponse.index(), parsedCreateIndexResponse.index()); + assertEquals(createIndexResponse.isShardsAcked(), parsedCreateIndexResponse.isShardsAcked()); + assertEquals(createIndexResponse.isAcknowledged(), parsedCreateIndexResponse.isAcknowledged()); + } + + /** + * Returns a random {@link CreateIndexResponse}. 
+ */ + private static CreateIndexResponse createTestItem() throws IOException { + boolean acknowledged = randomBoolean(); + boolean shardsAcked = acknowledged && randomBoolean(); + String index = randomAlphaOfLength(5); + + return new CreateIndexResponse(acknowledged, shardsAcked, index); + } } diff --git a/core/src/test/java/org/elasticsearch/action/admin/indices/create/ShrinkIndexIT.java b/core/src/test/java/org/elasticsearch/action/admin/indices/create/ShrinkIndexIT.java index 3c2e10d181b58..982b9456b8cdf 100644 --- a/core/src/test/java/org/elasticsearch/action/admin/indices/create/ShrinkIndexIT.java +++ b/core/src/test/java/org/elasticsearch/action/admin/indices/create/ShrinkIndexIT.java @@ -105,7 +105,7 @@ public void testCreateShrinkIndexToN() { .put("index.blocks.write", true)).get(); ensureGreen(); // now merge source into a 4 shard index - assertAcked(client().admin().indices().prepareShrinkIndex("source", "first_shrink") + assertAcked(client().admin().indices().prepareResizeIndex("source", "first_shrink") .setSettings(Settings.builder() .put("index.number_of_replicas", 0) .put("index.number_of_shards", shardSplits[1]).build()).get()); @@ -127,7 +127,7 @@ public void testCreateShrinkIndexToN() { .put("index.blocks.write", true)).get(); ensureGreen(); // now merge source into a 2 shard index - assertAcked(client().admin().indices().prepareShrinkIndex("first_shrink", "second_shrink") + assertAcked(client().admin().indices().prepareResizeIndex("first_shrink", "second_shrink") .setSettings(Settings.builder() .put("index.number_of_replicas", 0) .put("index.number_of_shards", shardSplits[2]).build()).get()); @@ -211,7 +211,7 @@ public void testShrinkIndexPrimaryTerm() throws Exception { // now merge source into target final Settings shrinkSettings = Settings.builder().put("index.number_of_replicas", 0).put("index.number_of_shards", numberOfTargetShards).build(); - assertAcked(client().admin().indices().prepareShrinkIndex("source", "target").setSettings(shrinkSettings).get()); + assertAcked(client().admin().indices().prepareResizeIndex("source", "target").setSettings(shrinkSettings).get()); ensureGreen(); @@ -264,7 +264,7 @@ public void testCreateShrinkIndex() { // now merge source into a single shard index final boolean createWithReplicas = randomBoolean(); - assertAcked(client().admin().indices().prepareShrinkIndex("source", "target") + assertAcked(client().admin().indices().prepareResizeIndex("source", "target") .setSettings(Settings.builder().put("index.number_of_replicas", createWithReplicas ? 
1 : 0).build()).get()); ensureGreen(); @@ -350,7 +350,7 @@ public void testCreateShrinkIndexFails() throws Exception { ensureGreen(); // now merge source into a single shard index - client().admin().indices().prepareShrinkIndex("source", "target") + client().admin().indices().prepareResizeIndex("source", "target") .setWaitForActiveShards(ActiveShardCount.NONE) .setSettings(Settings.builder() .put("index.routing.allocation.exclude._name", mergeNode) // we manually exclude the merge node to forcefully fuck it up @@ -436,16 +436,16 @@ public void testCreateShrinkWithIndexSort() throws Exception { // check that index sort cannot be set on the target index IllegalArgumentException exc = expectThrows(IllegalArgumentException.class, - () -> client().admin().indices().prepareShrinkIndex("source", "target") + () -> client().admin().indices().prepareResizeIndex("source", "target") .setSettings(Settings.builder() .put("index.number_of_replicas", 0) .put("index.number_of_shards", "2") .put("index.sort.field", "foo") .build()).get()); - assertThat(exc.getMessage(), containsString("can't override index sort when shrinking index")); + assertThat(exc.getMessage(), containsString("can't override index sort when resizing an index")); // check that the index sort order of `source` is correctly applied to the `target` - assertAcked(client().admin().indices().prepareShrinkIndex("source", "target") + assertAcked(client().admin().indices().prepareResizeIndex("source", "target") .setSettings(Settings.builder() .put("index.number_of_replicas", 0) .put("index.number_of_shards", "2").build()).get()); diff --git a/core/src/test/java/org/elasticsearch/action/admin/indices/create/SplitIndexIT.java b/core/src/test/java/org/elasticsearch/action/admin/indices/create/SplitIndexIT.java new file mode 100644 index 0000000000000..ebd647d0e02fd --- /dev/null +++ b/core/src/test/java/org/elasticsearch/action/admin/indices/create/SplitIndexIT.java @@ -0,0 +1,463 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.action.admin.indices.create; + +import org.apache.lucene.search.Sort; +import org.apache.lucene.search.SortField; +import org.apache.lucene.search.SortedSetSelector; +import org.apache.lucene.search.SortedSetSortField; +import org.elasticsearch.Version; +import org.elasticsearch.action.admin.cluster.state.ClusterStateRequest; +import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse; +import org.elasticsearch.action.admin.indices.settings.get.GetSettingsResponse; +import org.elasticsearch.action.admin.indices.shrink.ResizeType; +import org.elasticsearch.action.admin.indices.stats.CommonStats; +import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse; +import org.elasticsearch.action.admin.indices.stats.ShardStats; +import org.elasticsearch.action.get.GetResponse; +import org.elasticsearch.action.index.IndexRequest; +import org.elasticsearch.action.index.IndexRequestBuilder; +import org.elasticsearch.action.search.SearchResponse; +import org.elasticsearch.client.Client; +import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.cluster.routing.Murmur3HashFunction; +import org.elasticsearch.cluster.routing.ShardRouting; +import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider; +import org.elasticsearch.common.collect.ImmutableOpenMap; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.index.Index; +import org.elasticsearch.index.IndexService; +import org.elasticsearch.index.engine.SegmentsStats; +import org.elasticsearch.index.query.TermsQueryBuilder; +import org.elasticsearch.index.seqno.SeqNoStats; +import org.elasticsearch.index.shard.IndexShard; +import org.elasticsearch.indices.IndicesService; +import org.elasticsearch.plugins.Plugin; +import org.elasticsearch.test.ESIntegTestCase; +import org.elasticsearch.test.InternalSettingsPlugin; +import org.elasticsearch.test.VersionUtils; + +import java.util.Arrays; +import java.util.Collection; +import java.util.HashSet; +import java.util.List; +import java.util.Set; +import java.util.stream.IntStream; + +import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked; +import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount; +import static org.hamcrest.Matchers.containsString; +import static org.hamcrest.Matchers.equalTo; +import static org.hamcrest.Matchers.greaterThanOrEqualTo; + + +public class SplitIndexIT extends ESIntegTestCase { + + @Override + protected Collection> nodePlugins() { + return Arrays.asList(InternalSettingsPlugin.class); + } + + public void testCreateSplitIndexToN() { + int[][] possibleShardSplits = new int[][] {{2,4,8}, {3, 6, 12}, {1, 2, 4}}; + int[] shardSplits = randomFrom(possibleShardSplits); + assertEquals(shardSplits[0], (shardSplits[0] * shardSplits[1]) / shardSplits[1]); + assertEquals(shardSplits[1], (shardSplits[1] * shardSplits[2]) / shardSplits[2]); + internalCluster().ensureAtLeastNumDataNodes(2); + final boolean useRouting = randomBoolean(); + final boolean useMixedRouting = useRouting ? 
randomBoolean() : false; + CreateIndexRequestBuilder createInitialIndex = prepareCreate("source"); + final int routingShards = shardSplits[2] * randomIntBetween(1, 10); + Settings.Builder settings = Settings.builder().put(indexSettings()) + .put("number_of_shards", shardSplits[0]) + .put("index.number_of_routing_shards", routingShards); + if (useRouting && useMixedRouting == false && randomBoolean()) { + settings.put("index.routing_partition_size", randomIntBetween(1, routingShards - 1)); + createInitialIndex.addMapping("t1", "_routing", "required=true"); + } + logger.info("use routing {} use mixed routing {}", useRouting, useMixedRouting); + createInitialIndex.setSettings(settings).get(); + + int numDocs = randomIntBetween(10, 50); + String[] routingValue = new String[numDocs]; + for (int i = 0; i < numDocs; i++) { + IndexRequestBuilder builder = client().prepareIndex("source", "t1", Integer.toString(i)) + .setSource("{\"foo\" : \"bar\", \"i\" : " + i + "}", XContentType.JSON); + if (useRouting) { + String routing = randomRealisticUnicodeOfCodepointLengthBetween(1, 10); + if (useMixedRouting && randomBoolean()) { + routingValue[i] = null; + } else { + routingValue[i] = routing; + } + builder.setRouting(routingValue[i]); + } + builder.get(); + } + + if (randomBoolean()) { + for (int i = 0; i < numDocs; i++) { // let's introduce some updates / deletes on the index + if (randomBoolean()) { + IndexRequestBuilder builder = client().prepareIndex("source", "t1", Integer.toString(i)) + .setSource("{\"foo\" : \"bar\", \"i\" : " + i + "}", XContentType.JSON); + if (useRouting) { + builder.setRouting(routingValue[i]); + } + builder.get(); + } + } + } + + ImmutableOpenMap dataNodes = client().admin().cluster().prepareState().get().getState().nodes() + .getDataNodes(); + assertTrue("at least 2 nodes but was: " + dataNodes.size(), dataNodes.size() >= 2); + ensureYellow(); + client().admin().indices().prepareUpdateSettings("source") + .setSettings(Settings.builder() + .put("index.blocks.write", true)).get(); + ensureGreen(); + assertAcked(client().admin().indices().prepareResizeIndex("source", "first_split") + .setResizeType(ResizeType.SPLIT) + .setSettings(Settings.builder() + .put("index.number_of_replicas", 0) + .put("index.number_of_shards", shardSplits[1]).build()).get()); + ensureGreen(); + assertHitCount(client().prepareSearch("first_split").setSize(100).setQuery(new TermsQueryBuilder("foo", "bar")).get(), numDocs); + + for (int i = 0; i < numDocs; i++) { // now update + IndexRequestBuilder builder = client().prepareIndex("first_split", "t1", Integer.toString(i)) + .setSource("{\"foo\" : \"bar\", \"i\" : " + i + "}", XContentType.JSON); + if (useRouting) { + builder.setRouting(routingValue[i]); + } + builder.get(); + } + flushAndRefresh(); + assertHitCount(client().prepareSearch("first_split").setSize(100).setQuery(new TermsQueryBuilder("foo", "bar")).get(), numDocs); + assertHitCount(client().prepareSearch("source").setSize(100).setQuery(new TermsQueryBuilder("foo", "bar")).get(), numDocs); + for (int i = 0; i < numDocs; i++) { + GetResponse getResponse = client().prepareGet("first_split", "t1", Integer.toString(i)).setRouting(routingValue[i]).get(); + assertTrue(getResponse.isExists()); + } + + client().admin().indices().prepareUpdateSettings("first_split") + .setSettings(Settings.builder() + .put("index.blocks.write", true)).get(); + ensureGreen(); + // now split source into a new index + assertAcked(client().admin().indices().prepareResizeIndex("first_split", "second_split") + 
.setResizeType(ResizeType.SPLIT) + .setSettings(Settings.builder() + .put("index.number_of_replicas", 0) + .put("index.number_of_shards", shardSplits[2]).build()).get()); + ensureGreen(); + assertHitCount(client().prepareSearch("second_split").setSize(100).setQuery(new TermsQueryBuilder("foo", "bar")).get(), numDocs); + // let it be allocated anywhere and bump replicas + client().admin().indices().prepareUpdateSettings("second_split") + .setSettings(Settings.builder() + .put("index.number_of_replicas", 1)).get(); + ensureGreen(); + assertHitCount(client().prepareSearch("second_split").setSize(100).setQuery(new TermsQueryBuilder("foo", "bar")).get(), numDocs); + + for (int i = 0; i < numDocs; i++) { // now update + IndexRequestBuilder builder = client().prepareIndex("second_split", "t1", Integer.toString(i)) + .setSource("{\"foo\" : \"bar\", \"i\" : " + i + "}", XContentType.JSON); + if (useRouting) { + builder.setRouting(routingValue[i]); + } + builder.get(); + } + flushAndRefresh(); + for (int i = 0; i < numDocs; i++) { + GetResponse getResponse = client().prepareGet("second_split", "t1", Integer.toString(i)).setRouting(routingValue[i]).get(); + assertTrue(getResponse.isExists()); + } + assertHitCount(client().prepareSearch("second_split").setSize(100).setQuery(new TermsQueryBuilder("foo", "bar")).get(), numDocs); + assertHitCount(client().prepareSearch("first_split").setSize(100).setQuery(new TermsQueryBuilder("foo", "bar")).get(), numDocs); + assertHitCount(client().prepareSearch("source").setSize(100).setQuery(new TermsQueryBuilder("foo", "bar")).get(), numDocs); + + assertAllUniqueDocs(client().prepareSearch("second_split").setSize(100) + .setQuery(new TermsQueryBuilder("foo", "bar")).get(), numDocs); + assertAllUniqueDocs(client().prepareSearch("first_split").setSize(100) + .setQuery(new TermsQueryBuilder("foo", "bar")).get(), numDocs); + assertAllUniqueDocs(client().prepareSearch("source").setSize(100) + .setQuery(new TermsQueryBuilder("foo", "bar")).get(), numDocs); + + } + + public void assertAllUniqueDocs(SearchResponse response, int numDocs) { + Set ids = new HashSet<>(); + for (int i = 0; i < response.getHits().getHits().length; i++) { + String id = response.getHits().getHits()[i].getId(); + assertTrue("found ID "+ id + " more than once", ids.add(id)); + } + assertEquals(numDocs, ids.size()); + } + + public void testSplitIndexPrimaryTerm() throws Exception { + final List factors = Arrays.asList(1, 2, 4, 8); + final List numberOfShardsFactors = randomSubsetOf(scaledRandomIntBetween(1, factors.size()), factors); + final int numberOfShards = randomSubsetOf(numberOfShardsFactors).stream().reduce(1, (x, y) -> x * y); + final int numberOfTargetShards = numberOfShardsFactors.stream().reduce(2, (x, y) -> x * y); + internalCluster().ensureAtLeastNumDataNodes(2); + prepareCreate("source").setSettings(Settings.builder().put(indexSettings()) + .put("number_of_shards", numberOfShards) + .put("index.number_of_routing_shards", numberOfTargetShards)).get(); + + final ImmutableOpenMap dataNodes = + client().admin().cluster().prepareState().get().getState().nodes().getDataNodes(); + assertThat(dataNodes.size(), greaterThanOrEqualTo(2)); + ensureYellow(); + + // fail random primary shards to force primary terms to increase + final Index source = resolveIndex("source"); + final int iterations = scaledRandomIntBetween(0, 16); + for (int i = 0; i < iterations; i++) { + final String node = randomSubsetOf(1, internalCluster().nodesInclude("source")).get(0); + final IndicesService indexServices = 
internalCluster().getInstance(IndicesService.class, node); + final IndexService indexShards = indexServices.indexServiceSafe(source); + for (final Integer shardId : indexShards.shardIds()) { + final IndexShard shard = indexShards.getShard(shardId); + if (shard.routingEntry().primary() && randomBoolean()) { + disableAllocation("source"); + shard.failShard("test", new Exception("test")); + // this can not succeed until the shard is failed and a replica is promoted + int id = 0; + while (true) { + // find an ID that routes to the right shard, we will only index to the shard that saw a primary failure + final String s = Integer.toString(id); + final int hash = Math.floorMod(Murmur3HashFunction.hash(s), numberOfShards); + if (hash == shardId) { + final IndexRequest request = + new IndexRequest("source", "type", s).source("{ \"f\": \"" + s + "\"}", XContentType.JSON); + client().index(request).get(); + break; + } else { + id++; + } + } + enableAllocation("source"); + ensureGreen(); + } + } + } + + final Settings.Builder prepareSplitSettings = Settings.builder().put("index.blocks.write", true); + client().admin().indices().prepareUpdateSettings("source").setSettings(prepareSplitSettings).get(); + ensureYellow(); + + final IndexMetaData indexMetaData = indexMetaData(client(), "source"); + final long beforeSplitPrimaryTerm = IntStream.range(0, numberOfShards).mapToLong(indexMetaData::primaryTerm).max().getAsLong(); + + // now split source into target + final Settings splitSettings = + Settings.builder().put("index.number_of_replicas", 0).put("index.number_of_shards", numberOfTargetShards).build(); + assertAcked(client().admin().indices().prepareResizeIndex("source", "target") + .setResizeType(ResizeType.SPLIT) + .setSettings(splitSettings).get()); + + ensureGreen(); + + final IndexMetaData aftersplitIndexMetaData = indexMetaData(client(), "target"); + for (int shardId = 0; shardId < numberOfTargetShards; shardId++) { + assertThat(aftersplitIndexMetaData.primaryTerm(shardId), equalTo(beforeSplitPrimaryTerm + 1)); + } + } + + private static IndexMetaData indexMetaData(final Client client, final String index) { + final ClusterStateResponse clusterStateResponse = client.admin().cluster().state(new ClusterStateRequest()).actionGet(); + return clusterStateResponse.getState().metaData().index(index); + } + + public void testCreateSplitIndex() { + internalCluster().ensureAtLeastNumDataNodes(2); + Version version = VersionUtils.randomVersionBetween(random(), Version.V_6_0_0_rc2, Version.CURRENT); + prepareCreate("source").setSettings(Settings.builder().put(indexSettings()) + .put("number_of_shards", 1) + .put("index.version.created", version) + .put("index.number_of_routing_shards", 2) + ).get(); + final int docs = randomIntBetween(0, 128); + for (int i = 0; i < docs; i++) { + client().prepareIndex("source", "type") + .setSource("{\"foo\" : \"bar\", \"i\" : " + i + "}", XContentType.JSON).get(); + } + ImmutableOpenMap dataNodes = + client().admin().cluster().prepareState().get().getState().nodes().getDataNodes(); + assertTrue("at least 2 nodes but was: " + dataNodes.size(), dataNodes.size() >= 2); + // ensure all shards are allocated otherwise the ensure green below might not succeed since we require the merge node + // if we change the setting too quickly we will end up with one replica unassigned which can't be assigned anymore due + // to the require._name below. + ensureGreen(); + // relocate all shards to one node such that we can merge it. 
+ client().admin().indices().prepareUpdateSettings("source") + .setSettings(Settings.builder() + .put("index.blocks.write", true)).get(); + ensureGreen(); + + final IndicesStatsResponse sourceStats = client().admin().indices().prepareStats("source").setSegments(true).get(); + + // disable rebalancing to be able to capture the right stats. balancing can move the target primary + // making it hard to pin point the source shards. + client().admin().cluster().prepareUpdateSettings().setTransientSettings(Settings.builder().put( + EnableAllocationDecider.CLUSTER_ROUTING_REBALANCE_ENABLE_SETTING.getKey(), "none" + )).get(); + try { + + final boolean createWithReplicas = randomBoolean(); + assertAcked(client().admin().indices().prepareResizeIndex("source", "target") + .setResizeType(ResizeType.SPLIT) + .setSettings(Settings.builder() + .put("index.number_of_replicas", createWithReplicas ? 1 : 0) + .put("index.number_of_shards", 2).build()).get()); + ensureGreen(); + + final ClusterState state = client().admin().cluster().prepareState().get().getState(); + DiscoveryNode mergeNode = state.nodes().get(state.getRoutingTable().index("target").shard(0).primaryShard().currentNodeId()); + logger.info("split node {}", mergeNode); + + final long maxSeqNo = Arrays.stream(sourceStats.getShards()) + .filter(shard -> shard.getShardRouting().currentNodeId().equals(mergeNode.getId())) + .map(ShardStats::getSeqNoStats).mapToLong(SeqNoStats::getMaxSeqNo).max().getAsLong(); + final long maxUnsafeAutoIdTimestamp = Arrays.stream(sourceStats.getShards()) + .filter(shard -> shard.getShardRouting().currentNodeId().equals(mergeNode.getId())) + .map(ShardStats::getStats) + .map(CommonStats::getSegments) + .mapToLong(SegmentsStats::getMaxUnsafeAutoIdTimestamp) + .max() + .getAsLong(); + + final IndicesStatsResponse targetStats = client().admin().indices().prepareStats("target").get(); + for (final ShardStats shardStats : targetStats.getShards()) { + final SeqNoStats seqNoStats = shardStats.getSeqNoStats(); + final ShardRouting shardRouting = shardStats.getShardRouting(); + assertThat("failed on " + shardRouting, seqNoStats.getMaxSeqNo(), equalTo(maxSeqNo)); + assertThat("failed on " + shardRouting, seqNoStats.getLocalCheckpoint(), equalTo(maxSeqNo)); + assertThat("failed on " + shardRouting, + shardStats.getStats().getSegments().getMaxUnsafeAutoIdTimestamp(), equalTo(maxUnsafeAutoIdTimestamp)); + } + + final int size = docs > 0 ? 
2 * docs : 1; + assertHitCount(client().prepareSearch("target").setSize(size).setQuery(new TermsQueryBuilder("foo", "bar")).get(), docs); + + if (createWithReplicas == false) { + // bump replicas + client().admin().indices().prepareUpdateSettings("target") + .setSettings(Settings.builder() + .put("index.number_of_replicas", 1)).get(); + ensureGreen(); + assertHitCount(client().prepareSearch("target").setSize(size).setQuery(new TermsQueryBuilder("foo", "bar")).get(), docs); + } + + for (int i = docs; i < 2 * docs; i++) { + client().prepareIndex("target", "type") + .setSource("{\"foo\" : \"bar\", \"i\" : " + i + "}", XContentType.JSON).get(); + } + flushAndRefresh(); + assertHitCount(client().prepareSearch("target").setSize(2 * size).setQuery(new TermsQueryBuilder("foo", "bar")).get(), + 2 * docs); + assertHitCount(client().prepareSearch("source").setSize(size).setQuery(new TermsQueryBuilder("foo", "bar")).get(), docs); + GetSettingsResponse target = client().admin().indices().prepareGetSettings("target").get(); + assertEquals(version, target.getIndexToSettings().get("target").getAsVersion("index.version.created", null)); + } finally { + // clean up + client().admin().cluster().prepareUpdateSettings().setTransientSettings(Settings.builder().put( + EnableAllocationDecider.CLUSTER_ROUTING_REBALANCE_ENABLE_SETTING.getKey(), (String)null + )).get(); + } + + } + + public void testCreateSplitWithIndexSort() throws Exception { + SortField expectedSortField = new SortedSetSortField("id", true, SortedSetSelector.Type.MAX); + expectedSortField.setMissingValue(SortedSetSortField.STRING_FIRST); + Sort expectedIndexSort = new Sort(expectedSortField); + internalCluster().ensureAtLeastNumDataNodes(2); + prepareCreate("source") + .setSettings( + Settings.builder() + .put(indexSettings()) + .put("sort.field", "id") + .put("index.number_of_routing_shards", 16) + .put("sort.order", "desc") + .put("number_of_shards", 2) + .put("number_of_replicas", 0) + ) + .addMapping("type", "id", "type=keyword,doc_values=true") + .get(); + for (int i = 0; i < 20; i++) { + client().prepareIndex("source", "type", Integer.toString(i)) + .setSource("{\"foo\" : \"bar\", \"id\" : " + i + "}", XContentType.JSON).get(); + } + ImmutableOpenMap dataNodes = client().admin().cluster().prepareState().get().getState().nodes() + .getDataNodes(); + assertTrue("at least 2 nodes but was: " + dataNodes.size(), dataNodes.size() >= 2); + DiscoveryNode[] discoveryNodes = dataNodes.values().toArray(DiscoveryNode.class); + String mergeNode = discoveryNodes[0].getName(); + // ensure all shards are allocated otherwise the ensure green below might not succeed since we require the merge node + // if we change the setting too quickly we will end up with one replica unassigned which can't be assigned anymore due + // to the require._name below. 
+        ensureGreen();
+
+        flushAndRefresh();
+        assertSortedSegments("source", expectedIndexSort);
+
+        client().admin().indices().prepareUpdateSettings("source")
+            .setSettings(Settings.builder()
+                .put("index.blocks.write", true)).get();
+        ensureYellow();
+
+        // check that index sort cannot be set on the target index
+        IllegalArgumentException exc = expectThrows(IllegalArgumentException.class,
+            () -> client().admin().indices().prepareResizeIndex("source", "target")
+                .setResizeType(ResizeType.SPLIT)
+                .setSettings(Settings.builder()
+                    .put("index.number_of_replicas", 0)
+                    .put("index.number_of_shards", 4)
+                    .put("index.sort.field", "foo")
+                    .build()).get());
+        assertThat(exc.getMessage(), containsString("can't override index sort when resizing an index"));
+
+        // check that the index sort order of `source` is correctly applied to the `target`
+        assertAcked(client().admin().indices().prepareResizeIndex("source", "target")
+            .setResizeType(ResizeType.SPLIT)
+            .setSettings(Settings.builder()
+                .put("index.number_of_replicas", 0)
+                .put("index.number_of_shards", 4).build()).get());
+        ensureGreen();
+        flushAndRefresh();
+        GetSettingsResponse settingsResponse =
+            client().admin().indices().prepareGetSettings("target").execute().actionGet();
+        assertEquals(settingsResponse.getSetting("target", "index.sort.field"), "id");
+        assertEquals(settingsResponse.getSetting("target", "index.sort.order"), "desc");
+        assertSortedSegments("target", expectedIndexSort);
+
+        // ... and that the index sort is also applied to updates
+        for (int i = 20; i < 40; i++) {
+            client().prepareIndex("target", "type")
+                .setSource("{\"foo\" : \"bar\", \"i\" : " + i + "}", XContentType.JSON).get();
+        }
+        flushAndRefresh();
+        assertSortedSegments("target", expectedIndexSort);
+    }
+}
diff --git a/core/src/test/java/org/elasticsearch/action/admin/indices/delete/DeleteIndexResponseTests.java b/core/src/test/java/org/elasticsearch/action/admin/indices/delete/DeleteIndexResponseTests.java
new file mode 100755
index 0000000000000..4e036319ad95e
--- /dev/null
+++ b/core/src/test/java/org/elasticsearch/action/admin/indices/delete/DeleteIndexResponseTests.java
@@ -0,0 +1,85 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.action.admin.indices.delete;
+
+import org.elasticsearch.common.Strings;
+import org.elasticsearch.common.bytes.BytesReference;
+import org.elasticsearch.common.xcontent.ToXContent;
+import org.elasticsearch.common.xcontent.XContentParser;
+import org.elasticsearch.common.xcontent.XContentType;
+import org.elasticsearch.test.ESTestCase;
+
+import java.io.IOException;
+
+import static org.elasticsearch.test.XContentTestUtils.insertRandomFields;
+
+public class DeleteIndexResponseTests extends ESTestCase {
+
+    public void testToXContent() {
+        DeleteIndexResponse response = new DeleteIndexResponse(true);
+        String output = Strings.toString(response);
+        assertEquals("{\"acknowledged\":true}", output);
+    }
+
+    public void testToAndFromXContent() throws IOException {
+        doFromXContentTestWithRandomFields(false);
+    }
+
+    /**
+     * This test adds random fields and objects to the xContent rendered out to
+     * ensure we can parse it back to be forward compatible with additions to
+     * the xContent
+     */
+    public void testFromXContentWithRandomFields() throws IOException {
+        doFromXContentTestWithRandomFields(true);
+    }
+
+    private void doFromXContentTestWithRandomFields(boolean addRandomFields) throws IOException {
+
+        final DeleteIndexResponse deleteIndexResponse = createTestItem();
+
+        boolean humanReadable = randomBoolean();
+        final XContentType xContentType = randomFrom(XContentType.values());
+        BytesReference originalBytes = toShuffledXContent(deleteIndexResponse, xContentType, ToXContent.EMPTY_PARAMS, humanReadable);
+
+        BytesReference mutated;
+        if (addRandomFields) {
+            mutated = insertRandomFields(xContentType, originalBytes, null, random());
+        } else {
+            mutated = originalBytes;
+        }
+        DeleteIndexResponse parsedDeleteIndexResponse;
+        try (XContentParser parser = createParser(xContentType.xContent(), mutated)) {
+            parsedDeleteIndexResponse = DeleteIndexResponse.fromXContent(parser);
+            assertNull(parser.nextToken());
+        }
+
+        assertEquals(deleteIndexResponse.isAcknowledged(), parsedDeleteIndexResponse.isAcknowledged());
+    }
+
+    /**
+     * Returns a random {@link DeleteIndexResponse}.
+ */
+    private static DeleteIndexResponse createTestItem() throws IOException {
+        boolean acknowledged = randomBoolean();
+
+        return new DeleteIndexResponse(acknowledged);
+    }
+}
diff --git a/core/src/test/java/org/elasticsearch/action/admin/indices/rollover/ConditionTests.java b/core/src/test/java/org/elasticsearch/action/admin/indices/rollover/ConditionTests.java
index 95f186ba0e566..a4e6cdfade7ef 100644
--- a/core/src/test/java/org/elasticsearch/action/admin/indices/rollover/ConditionTests.java
+++ b/core/src/test/java/org/elasticsearch/action/admin/indices/rollover/ConditionTests.java
@@ -19,6 +19,8 @@
 
 package org.elasticsearch.action.admin.indices.rollover;
 
+import org.elasticsearch.common.unit.ByteSizeUnit;
+import org.elasticsearch.common.unit.ByteSizeValue;
 import org.elasticsearch.common.unit.TimeValue;
 import org.elasticsearch.test.ESTestCase;
 
@@ -30,12 +32,12 @@ public void testMaxAge() throws Exception {
         final MaxAgeCondition maxAgeCondition = new MaxAgeCondition(TimeValue.timeValueHours(1));
 
         long indexCreatedMatch = System.currentTimeMillis() - TimeValue.timeValueMinutes(61).getMillis();
-        Condition.Result evaluate = maxAgeCondition.evaluate(new Condition.Stats(0, indexCreatedMatch));
+        Condition.Result evaluate = maxAgeCondition.evaluate(new Condition.Stats(0, indexCreatedMatch, randomByteSize()));
         assertThat(evaluate.condition, equalTo(maxAgeCondition));
         assertThat(evaluate.matched, equalTo(true));
 
         long indexCreatedNotMatch = System.currentTimeMillis() - TimeValue.timeValueMinutes(59).getMillis();
-        evaluate = maxAgeCondition.evaluate(new Condition.Stats(0, indexCreatedNotMatch));
+        evaluate = maxAgeCondition.evaluate(new Condition.Stats(0, indexCreatedNotMatch, randomByteSize()));
         assertThat(evaluate.condition, equalTo(maxAgeCondition));
         assertThat(evaluate.matched, equalTo(false));
     }
@@ -44,13 +46,33 @@ public void testMaxDocs() throws Exception {
         final MaxDocsCondition maxDocsCondition = new MaxDocsCondition(100L);
 
         long maxDocsMatch = randomIntBetween(100, 1000);
-        Condition.Result evaluate = maxDocsCondition.evaluate(new Condition.Stats(maxDocsMatch, 0));
+        Condition.Result evaluate = maxDocsCondition.evaluate(new Condition.Stats(maxDocsMatch, 0, randomByteSize()));
         assertThat(evaluate.condition, equalTo(maxDocsCondition));
         assertThat(evaluate.matched, equalTo(true));
 
         long maxDocsNotMatch = randomIntBetween(0, 99);
-        evaluate = maxDocsCondition.evaluate(new Condition.Stats(0, maxDocsNotMatch));
+        evaluate = maxDocsCondition.evaluate(new Condition.Stats(0, maxDocsNotMatch, randomByteSize()));
         assertThat(evaluate.condition, equalTo(maxDocsCondition));
         assertThat(evaluate.matched, equalTo(false));
     }
+
+    public void testMaxSize() throws Exception {
+        MaxSizeCondition maxSizeCondition = new MaxSizeCondition(new ByteSizeValue(randomIntBetween(10, 20), ByteSizeUnit.MB));
+
+        Condition.Result result = maxSizeCondition.evaluate(new Condition.Stats(randomNonNegativeLong(), randomNonNegativeLong(),
+            new ByteSizeValue(0, ByteSizeUnit.MB)));
+        assertThat(result.matched, equalTo(false));
+
+        result = maxSizeCondition.evaluate(new Condition.Stats(randomNonNegativeLong(), randomNonNegativeLong(),
+            new ByteSizeValue(randomIntBetween(0, 9), ByteSizeUnit.MB)));
+        assertThat(result.matched, equalTo(false));
+
+        result = maxSizeCondition.evaluate(new Condition.Stats(randomNonNegativeLong(), randomNonNegativeLong(),
+            new ByteSizeValue(randomIntBetween(20, 1000), ByteSizeUnit.MB)));
+        assertThat(result.matched, equalTo(true));
+    }
+
+    private ByteSizeValue randomByteSize() {
+        return new
ByteSizeValue(randomNonNegativeLong(), ByteSizeUnit.BYTES); + } } diff --git a/core/src/test/java/org/elasticsearch/action/admin/indices/rollover/RolloverIT.java b/core/src/test/java/org/elasticsearch/action/admin/indices/rollover/RolloverIT.java index c449147cbbd5e..c047611f71932 100644 --- a/core/src/test/java/org/elasticsearch/action/admin/indices/rollover/RolloverIT.java +++ b/core/src/test/java/org/elasticsearch/action/admin/indices/rollover/RolloverIT.java @@ -19,13 +19,15 @@ package org.elasticsearch.action.admin.indices.rollover; +import org.elasticsearch.ResourceAlreadyExistsException; import org.elasticsearch.action.admin.indices.alias.Alias; import org.elasticsearch.action.admin.indices.settings.get.GetSettingsResponse; import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.unit.ByteSizeUnit; +import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.ResourceAlreadyExistsException; import org.elasticsearch.plugins.Plugin; import org.elasticsearch.test.ESIntegTestCase; import org.elasticsearch.test.InternalSettingsPlugin; @@ -36,9 +38,15 @@ import java.util.Collection; import java.util.Collections; import java.util.Map; +import java.util.Set; +import java.util.stream.Collectors; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked; +import static org.hamcrest.Matchers.containsInAnyOrder; import static org.hamcrest.Matchers.equalTo; +import static org.hamcrest.Matchers.everyItem; +import static org.hamcrest.Matchers.hasProperty; +import static org.hamcrest.Matchers.is; @ESIntegTestCase.ClusterScope(scope = ESIntegTestCase.Scope.TEST) public class RolloverIT extends ESIntegTestCase { @@ -128,15 +136,23 @@ public void testRolloverConditionsNotMet() throws Exception { index("test_index-0", "type1", "1", "field", "value"); flush("test_index-0"); final RolloverResponse response = client().admin().indices().prepareRolloverIndex("test_alias") + .addMaxIndexSizeCondition(new ByteSizeValue(10, ByteSizeUnit.MB)) .addMaxIndexAgeCondition(TimeValue.timeValueHours(4)).get(); assertThat(response.getOldIndex(), equalTo("test_index-0")); assertThat(response.getNewIndex(), equalTo("test_index-000001")); assertThat(response.isDryRun(), equalTo(false)); assertThat(response.isRolledOver(), equalTo(false)); - assertThat(response.getConditionStatus().size(), equalTo(1)); - final Map.Entry conditionEntry = response.getConditionStatus().iterator().next(); - assertThat(conditionEntry.getKey(), equalTo(new MaxAgeCondition(TimeValue.timeValueHours(4)).toString())); - assertThat(conditionEntry.getValue(), equalTo(false)); + assertThat(response.getConditionStatus().size(), equalTo(2)); + + + assertThat(response.getConditionStatus(), everyItem(hasProperty("value", is(false)))); + Set conditions = response.getConditionStatus().stream() + .map(Map.Entry::getKey) + .collect(Collectors.toSet()); + assertThat(conditions, containsInAnyOrder( + new MaxSizeCondition(new ByteSizeValue(10, ByteSizeUnit.MB)).toString(), + new MaxAgeCondition(TimeValue.timeValueHours(4)).toString())); + final ClusterState state = client().admin().cluster().prepareState().get().getState(); final IndexMetaData oldIndex = state.metaData().index("test_index-0"); assertTrue(oldIndex.getAliases().containsKey("test_alias")); @@ -218,4 +234,47 @@ public void testRolloverWithDateMath() { 
assertThat(response.isRolledOver(), equalTo(true)); assertThat(response.getConditionStatus().size(), equalTo(0)); } + + public void testRolloverMaxSize() throws Exception { + assertAcked(prepareCreate("test-1").addAlias(new Alias("test_alias")).get()); + int numDocs = randomIntBetween(10, 20); + for (int i = 0; i < numDocs; i++) { + index("test-1", "doc", Integer.toString(i), "field", "foo-" + i); + } + flush("test-1"); + refresh("test_alias"); + + // A large max_size + { + final RolloverResponse response = client().admin().indices() + .prepareRolloverIndex("test_alias") + .addMaxIndexSizeCondition(new ByteSizeValue(randomIntBetween(100, 50 * 1024), ByteSizeUnit.MB)) + .get(); + assertThat(response.getOldIndex(), equalTo("test-1")); + assertThat(response.getNewIndex(), equalTo("test-000002")); + assertThat("No rollover with a large max_size condition", response.isRolledOver(), equalTo(false)); + } + + // A small max_size + { + final RolloverResponse response = client().admin().indices() + .prepareRolloverIndex("test_alias") + .addMaxIndexSizeCondition(new ByteSizeValue(randomIntBetween(1, 20), ByteSizeUnit.BYTES)) + .get(); + assertThat(response.getOldIndex(), equalTo("test-1")); + assertThat(response.getNewIndex(), equalTo("test-000002")); + assertThat("Should rollover with a small max_size condition", response.isRolledOver(), equalTo(true)); + } + + // An empty index + { + final RolloverResponse response = client().admin().indices() + .prepareRolloverIndex("test_alias") + .addMaxIndexSizeCondition(new ByteSizeValue(randomNonNegativeLong(), ByteSizeUnit.BYTES)) + .get(); + assertThat(response.getOldIndex(), equalTo("test-000002")); + assertThat(response.getNewIndex(), equalTo("test-000003")); + assertThat("No rollover with an empty index", response.isRolledOver(), equalTo(false)); + } + } } diff --git a/core/src/test/java/org/elasticsearch/action/admin/indices/rollover/RolloverRequestTests.java b/core/src/test/java/org/elasticsearch/action/admin/indices/rollover/RolloverRequestTests.java index 920ba2e9715f0..290ba79af0738 100644 --- a/core/src/test/java/org/elasticsearch/action/admin/indices/rollover/RolloverRequestTests.java +++ b/core/src/test/java/org/elasticsearch/action/admin/indices/rollover/RolloverRequestTests.java @@ -19,17 +19,38 @@ package org.elasticsearch.action.admin.indices.rollover; +import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.io.stream.BytesStreamOutput; +import org.elasticsearch.common.io.stream.NamedWriteableAwareStreamInput; +import org.elasticsearch.common.io.stream.NamedWriteableRegistry; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.unit.ByteSizeUnit; +import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.indices.IndicesModule; import org.elasticsearch.test.ESTestCase; +import org.junit.Before; +import java.util.Collections; +import java.util.List; import java.util.Set; +import java.util.stream.Collectors; import static org.hamcrest.Matchers.equalTo; public class RolloverRequestTests extends ESTestCase { + private NamedWriteableRegistry writeableRegistry; + + @Override + @Before + public void setUp() throws Exception { + super.setUp(); + writeableRegistry = new NamedWriteableRegistry(new IndicesModule(Collections.emptyList()).getNamedWriteables()); + } + public void testConditionsParsing() throws 
Exception { final RolloverRequest request = new RolloverRequest(randomAlphaOfLength(10), randomAlphaOfLength(10)); final XContentBuilder builder = XContentFactory.jsonBuilder() @@ -37,11 +58,12 @@ public void testConditionsParsing() throws Exception { .startObject("conditions") .field("max_age", "10d") .field("max_docs", 100) + .field("max_size", "45gb") .endObject() .endObject(); RolloverRequest.PARSER.parse(createParser(builder), request, null); Set conditions = request.getConditions(); - assertThat(conditions.size(), equalTo(2)); + assertThat(conditions.size(), equalTo(3)); for (Condition condition : conditions) { if (condition instanceof MaxAgeCondition) { MaxAgeCondition maxAgeCondition = (MaxAgeCondition) condition; @@ -49,6 +71,9 @@ public void testConditionsParsing() throws Exception { } else if (condition instanceof MaxDocsCondition) { MaxDocsCondition maxDocsCondition = (MaxDocsCondition) condition; assertThat(maxDocsCondition.value, equalTo(100L)); + } else if (condition instanceof MaxSizeCondition) { + MaxSizeCondition maxSizeCondition = (MaxSizeCondition) condition; + assertThat(maxSizeCondition.value.getBytes(), equalTo(ByteSizeUnit.GB.toBytes(45))); } else { fail("unexpected condition " + condition); } @@ -87,4 +112,33 @@ public void testParsingWithIndexSettings() throws Exception { assertThat(request.getCreateIndexRequest().aliases().size(), equalTo(1)); assertThat(request.getCreateIndexRequest().settings().getAsInt("number_of_shards", 0), equalTo(10)); } + + public void testSerialize() throws Exception { + RolloverRequest originalRequest = new RolloverRequest("alias-index", "new-index-name"); + originalRequest.addMaxIndexDocsCondition(randomNonNegativeLong()); + originalRequest.addMaxIndexAgeCondition(TimeValue.timeValueNanos(randomNonNegativeLong())); + originalRequest.addMaxIndexSizeCondition(new ByteSizeValue(randomNonNegativeLong())); + try (BytesStreamOutput out = new BytesStreamOutput()) { + originalRequest.writeTo(out); + BytesReference bytes = out.bytes(); + try (StreamInput in = new NamedWriteableAwareStreamInput(bytes.streamInput(), writeableRegistry)) { + RolloverRequest cloneRequest = new RolloverRequest(); + cloneRequest.readFrom(in); + assertThat(cloneRequest.getNewIndexName(), equalTo(originalRequest.getNewIndexName())); + assertThat(cloneRequest.getAlias(), equalTo(originalRequest.getAlias())); + + List originalConditions = originalRequest.getConditions().stream() + .map(Condition::toString) + .sorted() + .collect(Collectors.toList()); + + List cloneConditions = cloneRequest.getConditions().stream() + .map(Condition::toString) + .sorted() + .collect(Collectors.toList()); + + assertThat(originalConditions, equalTo(cloneConditions)); + } + } + } } diff --git a/core/src/test/java/org/elasticsearch/action/admin/indices/rollover/TransportRolloverActionTests.java b/core/src/test/java/org/elasticsearch/action/admin/indices/rollover/TransportRolloverActionTests.java index d33987c92adbd..dcb3a87df74f4 100644 --- a/core/src/test/java/org/elasticsearch/action/admin/indices/rollover/TransportRolloverActionTests.java +++ b/core/src/test/java/org/elasticsearch/action/admin/indices/rollover/TransportRolloverActionTests.java @@ -32,23 +32,24 @@ import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.common.UUIDs; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.unit.ByteSizeUnit; +import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.unit.TimeValue; import 
org.elasticsearch.common.util.set.Sets; import org.elasticsearch.index.shard.DocsStats; import org.elasticsearch.test.ESTestCase; +import org.mockito.ArgumentCaptor; -import java.util.HashSet; import java.util.List; import java.util.Locale; import java.util.Set; -import org.mockito.ArgumentCaptor; import static org.elasticsearch.action.admin.indices.rollover.TransportRolloverAction.evaluateConditions; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.hasSize; import static org.mockito.Matchers.any; -import static org.mockito.Mockito.verify; import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.verify; import static org.mockito.Mockito.when; @@ -59,7 +60,7 @@ public void testDocStatsSelectionFromPrimariesOnly() throws Exception { long docsInShards = 200; final Condition condition = createTestCondition(); - evaluateConditions(Sets.newHashSet(condition), createMetaData(), createIndecesStatResponse(docsInShards, docsInPrimaryShards)); + evaluateConditions(Sets.newHashSet(condition), createMetaData(), createIndicesStatResponse(docsInShards, docsInPrimaryShards)); final ArgumentCaptor argument = ArgumentCaptor.forClass(Condition.Stats.class); verify(condition).evaluate(argument.capture()); @@ -69,8 +70,11 @@ public void testDocStatsSelectionFromPrimariesOnly() throws Exception { public void testEvaluateConditions() throws Exception { MaxDocsCondition maxDocsCondition = new MaxDocsCondition(100L); MaxAgeCondition maxAgeCondition = new MaxAgeCondition(TimeValue.timeValueHours(2)); + MaxSizeCondition maxSizeCondition = new MaxSizeCondition(new ByteSizeValue(randomIntBetween(10, 100), ByteSizeUnit.MB)); + long matchMaxDocs = randomIntBetween(100, 1000); long notMatchMaxDocs = randomIntBetween(0, 99); + ByteSizeValue notMatchMaxSize = new ByteSizeValue(randomIntBetween(0, 9), ByteSizeUnit.MB); final Settings settings = Settings.builder() .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) .put(IndexMetaData.SETTING_INDEX_UUID, UUIDs.randomBase64UUID()) @@ -81,30 +85,56 @@ public void testEvaluateConditions() throws Exception { .creationDate(System.currentTimeMillis() - TimeValue.timeValueHours(3).getMillis()) .settings(settings) .build(); - final HashSet conditions = Sets.newHashSet(maxDocsCondition, maxAgeCondition); - Set results = evaluateConditions(conditions, new DocsStats(matchMaxDocs, 0L), metaData); - assertThat(results.size(), equalTo(2)); + final Set conditions = Sets.newHashSet(maxDocsCondition, maxAgeCondition, maxSizeCondition); + Set results = evaluateConditions(conditions, + new DocsStats(matchMaxDocs, 0L, ByteSizeUnit.MB.toBytes(120)), metaData); + assertThat(results.size(), equalTo(3)); for (Condition.Result result : results) { assertThat(result.matched, equalTo(true)); } - results = evaluateConditions(conditions, new DocsStats(notMatchMaxDocs, 0), metaData); - assertThat(results.size(), equalTo(2)); + + results = evaluateConditions(conditions, new DocsStats(notMatchMaxDocs, 0, notMatchMaxSize.getBytes()), metaData); + assertThat(results.size(), equalTo(3)); for (Condition.Result result : results) { if (result.condition instanceof MaxAgeCondition) { assertThat(result.matched, equalTo(true)); } else if (result.condition instanceof MaxDocsCondition) { assertThat(result.matched, equalTo(false)); + } else if (result.condition instanceof MaxSizeCondition) { + assertThat(result.matched, equalTo(false)); } else { fail("unknown condition result found " + result.condition); } } - results = evaluateConditions(conditions, null, 
metaData); - assertThat(results.size(), equalTo(2)); + } + + public void testEvaluateWithoutDocStats() throws Exception { + MaxDocsCondition maxDocsCondition = new MaxDocsCondition(randomNonNegativeLong()); + MaxAgeCondition maxAgeCondition = new MaxAgeCondition(TimeValue.timeValueHours(randomIntBetween(1, 3))); + MaxSizeCondition maxSizeCondition = new MaxSizeCondition(new ByteSizeValue(randomNonNegativeLong())); + + Set conditions = Sets.newHashSet(maxDocsCondition, maxAgeCondition, maxSizeCondition); + final Settings settings = Settings.builder() + .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) + .put(IndexMetaData.SETTING_INDEX_UUID, UUIDs.randomBase64UUID()) + .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, randomIntBetween(1, 1000)) + .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, randomInt(10)) + .build(); + + final IndexMetaData metaData = IndexMetaData.builder(randomAlphaOfLength(10)) + .creationDate(System.currentTimeMillis() - TimeValue.timeValueHours(randomIntBetween(5, 10)).getMillis()) + .settings(settings) + .build(); + Set results = evaluateConditions(conditions, null, metaData); + assertThat(results.size(), equalTo(3)); + for (Condition.Result result : results) { if (result.condition instanceof MaxAgeCondition) { assertThat(result.matched, equalTo(true)); } else if (result.condition instanceof MaxDocsCondition) { assertThat(result.matched, equalTo(false)); + } else if (result.condition instanceof MaxSizeCondition) { + assertThat(result.matched, equalTo(false)); } else { fail("unknown condition result found " + result.condition); } @@ -211,12 +241,12 @@ public void testCreateIndexRequest() throws Exception { assertThat(createIndexRequest.cause(), equalTo("rollover_index")); } - private IndicesStatsResponse createIndecesStatResponse(long totalDocs, long primaryDocs) { + private IndicesStatsResponse createIndicesStatResponse(long totalDocs, long primaryDocs) { final CommonStats primaryStats = mock(CommonStats.class); - when(primaryStats.getDocs()).thenReturn(new DocsStats(primaryDocs, 0)); + when(primaryStats.getDocs()).thenReturn(new DocsStats(primaryDocs, 0, between(1, 10000))); final CommonStats totalStats = mock(CommonStats.class); - when(totalStats.getDocs()).thenReturn(new DocsStats(totalDocs, 0)); + when(totalStats.getDocs()).thenReturn(new DocsStats(totalDocs, 0, between(1, 10000))); final IndicesStatsResponse response = mock(IndicesStatsResponse.class); when(response.getPrimaries()).thenReturn(primaryStats); diff --git a/core/src/test/java/org/elasticsearch/action/admin/indices/segments/IndicesSegmentsRequestTests.java b/core/src/test/java/org/elasticsearch/action/admin/indices/segments/IndicesSegmentsRequestTests.java index 4a2895ad7ee41..e6d8dea13d2fb 100644 --- a/core/src/test/java/org/elasticsearch/action/admin/indices/segments/IndicesSegmentsRequestTests.java +++ b/core/src/test/java/org/elasticsearch/action/admin/indices/segments/IndicesSegmentsRequestTests.java @@ -55,6 +55,7 @@ public void setupIndex() { client().prepareIndex("test", "type1", id).setSource("text", "sometext").get(); } client().admin().indices().prepareFlush("test").get(); + client().admin().indices().prepareRefresh().get(); } public void testBasic() { diff --git a/core/src/test/java/org/elasticsearch/action/admin/indices/shrink/TransportShrinkActionTests.java b/core/src/test/java/org/elasticsearch/action/admin/indices/shrink/TransportResizeActionTests.java similarity index 84% rename from 
core/src/test/java/org/elasticsearch/action/admin/indices/shrink/TransportShrinkActionTests.java rename to core/src/test/java/org/elasticsearch/action/admin/indices/shrink/TransportResizeActionTests.java index b24c8dca79a58..b03b043f03e14 100644 --- a/core/src/test/java/org/elasticsearch/action/admin/indices/shrink/TransportShrinkActionTests.java +++ b/core/src/test/java/org/elasticsearch/action/admin/indices/shrink/TransportResizeActionTests.java @@ -28,7 +28,6 @@ import org.elasticsearch.cluster.EmptyClusterInfoService; import org.elasticsearch.cluster.block.ClusterBlocks; import org.elasticsearch.cluster.metadata.IndexMetaData; -import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.node.DiscoveryNodes; @@ -49,7 +48,7 @@ import static java.util.Collections.emptyMap; -public class TransportShrinkActionTests extends ESTestCase { +public class TransportResizeActionTests extends ESTestCase { private ClusterState createClusterState(String name, int numShards, int numReplicas, Settings settings) { MetaData.Builder metaBuilder = MetaData.builder(); @@ -72,20 +71,20 @@ public void testErrorCondition() { Settings.builder().put("index.blocks.write", true).build()); assertTrue( expectThrows(IllegalStateException.class, () -> - TransportShrinkAction.prepareCreateIndexRequest(new ShrinkRequest("target", "source"), state, - (i) -> new DocsStats(Integer.MAX_VALUE, randomIntBetween(1, 1000)), new IndexNameExpressionResolver(Settings.EMPTY)) + TransportResizeAction.prepareCreateIndexRequest(new ResizeRequest("target", "source"), state, + (i) -> new DocsStats(Integer.MAX_VALUE, between(1, 1000), between(1, 100)), "source", "target") ).getMessage().startsWith("Can't merge index with more than [2147483519] docs - too many documents in shards ")); assertTrue( expectThrows(IllegalStateException.class, () -> { - ShrinkRequest req = new ShrinkRequest("target", "source"); - req.getShrinkIndexRequest().settings(Settings.builder().put("index.number_of_shards", 4)); + ResizeRequest req = new ResizeRequest("target", "source"); + req.getTargetIndexRequest().settings(Settings.builder().put("index.number_of_shards", 4)); ClusterState clusterState = createClusterState("source", 8, 1, Settings.builder().put("index.blocks.write", true).build()); - TransportShrinkAction.prepareCreateIndexRequest(req, clusterState, - (i) -> i == 2 || i == 3 ? new DocsStats(Integer.MAX_VALUE/2, randomIntBetween(1, 1000)) : null, - new IndexNameExpressionResolver(Settings.EMPTY)); + TransportResizeAction.prepareCreateIndexRequest(req, clusterState, + (i) -> i == 2 || i == 3 ? 
new DocsStats(Integer.MAX_VALUE / 2, between(1, 1000), between(1, 10000)) : null + , "source", "target"); } ).getMessage().startsWith("Can't merge index with more than [2147483519] docs - too many documents in shards ")); @@ -105,8 +104,8 @@ public void testErrorCondition() { routingTable.index("source").shardsWithState(ShardRoutingState.INITIALIZING)).routingTable(); clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build(); - TransportShrinkAction.prepareCreateIndexRequest(new ShrinkRequest("target", "source"), clusterState, - (i) -> new DocsStats(randomIntBetween(1, 1000), randomIntBetween(1, 1000)), new IndexNameExpressionResolver(Settings.EMPTY)); + TransportResizeAction.prepareCreateIndexRequest(new ResizeRequest("target", "source"), clusterState, + (i) -> new DocsStats(between(1, 1000), between(1, 1000), between(0, 10000)), "source", "target"); } public void testShrinkIndexSettings() { @@ -128,15 +127,14 @@ public void testShrinkIndexSettings() { routingTable.index(indexName).shardsWithState(ShardRoutingState.INITIALIZING)).routingTable(); clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build(); int numSourceShards = clusterState.metaData().index(indexName).getNumberOfShards(); - DocsStats stats = new DocsStats(randomIntBetween(0, (IndexWriter.MAX_DOCS) / numSourceShards), randomIntBetween(1, 1000)); - ShrinkRequest target = new ShrinkRequest("target", indexName); + DocsStats stats = new DocsStats(between(0, (IndexWriter.MAX_DOCS) / numSourceShards), between(1, 1000), between(1, 10000)); + ResizeRequest target = new ResizeRequest("target", indexName); final ActiveShardCount activeShardCount = randomBoolean() ? ActiveShardCount.ALL : ActiveShardCount.ONE; target.setWaitForActiveShards(activeShardCount); - CreateIndexClusterStateUpdateRequest request = TransportShrinkAction.prepareCreateIndexRequest( - target, clusterState, (i) -> stats, - new IndexNameExpressionResolver(Settings.EMPTY)); - assertNotNull(request.shrinkFrom()); - assertEquals(indexName, request.shrinkFrom().getName()); + CreateIndexClusterStateUpdateRequest request = TransportResizeAction.prepareCreateIndexRequest( + target, clusterState, (i) -> stats, indexName, "target"); + assertNotNull(request.recoverFrom()); + assertEquals(indexName, request.recoverFrom().getName()); assertEquals("1", request.settings().get("index.number_of_shards")); assertEquals("shrink_index", request.cause()); assertEquals(request.waitForActiveShards(), activeShardCount); diff --git a/core/src/test/java/org/elasticsearch/action/admin/indices/template/put/MetaDataIndexTemplateServiceTests.java b/core/src/test/java/org/elasticsearch/action/admin/indices/template/put/MetaDataIndexTemplateServiceTests.java index 48598ecb2ecdb..58012909b8f2e 100644 --- a/core/src/test/java/org/elasticsearch/action/admin/indices/template/put/MetaDataIndexTemplateServiceTests.java +++ b/core/src/test/java/org/elasticsearch/action/admin/indices/template/put/MetaDataIndexTemplateServiceTests.java @@ -54,10 +54,9 @@ public void testIndexTemplateInvalidNumberOfShards() { PutRequest request = new PutRequest("test", "test_shards"); request.patterns(Collections.singletonList("test_shards*")); - Map map = new HashMap<>(); - map.put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, "0"); - map.put("index.shard.check_on_startup", "blargh"); - request.settings(Settings.builder().put(map).build()); + request.settings(Settings.builder() + .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, "0") + .put("index.shard.check_on_startup", 
"blargh").build()); List throwables = putTemplate(xContentRegistry(), request); assertEquals(throwables.size(), 1); @@ -72,10 +71,7 @@ public void testIndexTemplateInvalidNumberOfShards() { public void testIndexTemplateValidationAccumulatesValidationErrors() { PutRequest request = new PutRequest("test", "putTemplate shards"); request.patterns(Collections.singletonList("_test_shards*")); - - Map map = new HashMap<>(); - map.put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, "0"); - request.settings(Settings.builder().put(map).build()); + request.settings(Settings.builder().put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, "0").build()); List throwables = putTemplate(xContentRegistry(), request); assertEquals(throwables.size(), 1); diff --git a/core/src/test/java/org/elasticsearch/action/admin/indices/template/put/PutIndexTemplateRequestTests.java b/core/src/test/java/org/elasticsearch/action/admin/indices/template/put/PutIndexTemplateRequestTests.java index 2137b33eb089d..fca6ca4fd84d9 100644 --- a/core/src/test/java/org/elasticsearch/action/admin/indices/template/put/PutIndexTemplateRequestTests.java +++ b/core/src/test/java/org/elasticsearch/action/admin/indices/template/put/PutIndexTemplateRequestTests.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.admin.indices.template.put; import org.elasticsearch.Version; +import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.io.stream.BytesStreamOutput; import org.elasticsearch.common.io.stream.StreamInput; @@ -32,6 +33,11 @@ import java.util.Base64; import java.util.Collections; +import static org.hamcrest.Matchers.containsString; +import static org.hamcrest.Matchers.hasSize; +import static org.hamcrest.Matchers.nullValue; +import static org.hamcrest.core.Is.is; + public class PutIndexTemplateRequestTests extends ESTestCase { // bwc for #21009 @@ -107,4 +113,21 @@ public void testPutIndexTemplateRequestSerializationXContentBwc() throws IOExcep assertEquals("template", request.patterns().get(0)); } } + + public void testValidateErrorMessage() throws Exception { + PutIndexTemplateRequest request = new PutIndexTemplateRequest(); + ActionRequestValidationException withoutNameAndPattern = request.validate(); + assertThat(withoutNameAndPattern.getMessage(), containsString("name is missing")); + assertThat(withoutNameAndPattern.getMessage(), containsString("index patterns are missing")); + + request.name("foo"); + ActionRequestValidationException withoutIndexPatterns = request.validate(); + assertThat(withoutIndexPatterns.validationErrors(), hasSize(1)); + assertThat(withoutIndexPatterns.getMessage(), containsString("index patterns are missing")); + + request.patterns(Collections.singletonList("test-*")); + ActionRequestValidationException noError = request.validate(); + assertThat(noError, is(nullValue())); + } + } diff --git a/core/src/test/java/org/elasticsearch/action/bulk/BulkProcessorTests.java b/core/src/test/java/org/elasticsearch/action/bulk/BulkProcessorTests.java new file mode 100644 index 0000000000000..4ff5b69ad378a --- /dev/null +++ b/core/src/test/java/org/elasticsearch/action/bulk/BulkProcessorTests.java @@ -0,0 +1,100 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. 
Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.bulk; + +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.index.IndexRequest; +import org.elasticsearch.common.unit.ByteSizeUnit; +import org.elasticsearch.common.unit.ByteSizeValue; +import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.util.concurrent.ThreadContext; +import org.elasticsearch.test.ESTestCase; +import org.elasticsearch.threadpool.TestThreadPool; +import org.elasticsearch.threadpool.ThreadPool; +import org.junit.After; +import org.junit.Before; + +import java.util.concurrent.CountDownLatch; +import java.util.function.BiConsumer; + +public class BulkProcessorTests extends ESTestCase { + + private ThreadPool threadPool; + + @Before + public void startThreadPool() { + threadPool = new TestThreadPool("BulkProcessorTests"); + } + + @After + public void stopThreadPool() throws InterruptedException { + terminate(threadPool); + } + + public void testBulkProcessorFlushPreservesContext() throws InterruptedException { + final CountDownLatch latch = new CountDownLatch(1); + final String headerKey = randomAlphaOfLengthBetween(1, 8); + final String transientKey = randomAlphaOfLengthBetween(1, 8); + final String headerValue = randomAlphaOfLengthBetween(1, 32); + final Object transientValue = new Object(); + + BiConsumer> consumer = (request, listener) -> { + ThreadContext threadContext = threadPool.getThreadContext(); + assertEquals(headerValue, threadContext.getHeader(headerKey)); + assertSame(transientValue, threadContext.getTransient(transientKey)); + latch.countDown(); + }; + + final int bulkSize = randomIntBetween(2, 32); + final TimeValue flushInterval = TimeValue.timeValueSeconds(1L); + final BulkProcessor bulkProcessor; + assertNull(threadPool.getThreadContext().getHeader(headerKey)); + assertNull(threadPool.getThreadContext().getTransient(transientKey)); + try (ThreadContext.StoredContext ignore = threadPool.getThreadContext().stashContext()) { + threadPool.getThreadContext().putHeader(headerKey, headerValue); + threadPool.getThreadContext().putTransient(transientKey, transientValue); + bulkProcessor = new BulkProcessor(consumer, BackoffPolicy.noBackoff(), new BulkProcessor.Listener() { + @Override + public void beforeBulk(long executionId, BulkRequest request) { + } + + @Override + public void afterBulk(long executionId, BulkRequest request, BulkResponse response) { + } + + @Override + public void afterBulk(long executionId, BulkRequest request, Throwable failure) { + } + }, 1, bulkSize, new ByteSizeValue(5, ByteSizeUnit.MB), flushInterval, threadPool, () -> {}); + } + assertNull(threadPool.getThreadContext().getHeader(headerKey)); + assertNull(threadPool.getThreadContext().getTransient(transientKey)); + + // add a single item which won't be over the size or number of items + bulkProcessor.add(new IndexRequest()); + + // wait for flush to execute + latch.await(); + + 
assertNull(threadPool.getThreadContext().getHeader(headerKey)); + assertNull(threadPool.getThreadContext().getTransient(transientKey)); + bulkProcessor.close(); + } +} diff --git a/core/src/test/java/org/elasticsearch/action/bulk/BulkWithUpdatesIT.java b/core/src/test/java/org/elasticsearch/action/bulk/BulkWithUpdatesIT.java index 2dfc0e5a031fe..6656faf1e194e 100644 --- a/core/src/test/java/org/elasticsearch/action/bulk/BulkWithUpdatesIT.java +++ b/core/src/test/java/org/elasticsearch/action/bulk/BulkWithUpdatesIT.java @@ -457,7 +457,7 @@ public void testBulkIndexingWhileInitializing() throws Exception { */ public void testBulkUpdateChildMissingParentRouting() throws Exception { assertAcked(prepareCreate("test") - .setSettings("index.version.created", Version.V_5_6_0.id) // allows for multiple types + .setSettings(Settings.builder().put("index.version.created", Version.V_5_6_0.id)) // allows for multiple types .addMapping("parent", "{\"parent\":{}}", XContentType.JSON) .addMapping("child", "{\"child\": {\"_parent\": {\"type\": \"parent\"}}}", XContentType.JSON)); ensureGreen(); diff --git a/core/src/test/java/org/elasticsearch/action/bulk/TransportBulkActionTests.java b/core/src/test/java/org/elasticsearch/action/bulk/TransportBulkActionTests.java index 50fb348834fe3..5141b9cd47187 100644 --- a/core/src/test/java/org/elasticsearch/action/bulk/TransportBulkActionTests.java +++ b/core/src/test/java/org/elasticsearch/action/bulk/TransportBulkActionTests.java @@ -49,7 +49,7 @@ public class TransportBulkActionTests extends ESTestCase { private TransportService transportService; private ClusterService clusterService; private ThreadPool threadPool; - + private TestTransportBulkAction bulkAction; class TestTransportBulkAction extends TransportBulkAction { @@ -132,4 +132,4 @@ public void testDeleteNonExistingDocExternalGteVersionCreatesIndex() throws Exce throw new AssertionError(exception); })); } -} \ No newline at end of file +} diff --git a/core/src/test/java/org/elasticsearch/action/bulk/TransportShardBulkActionTests.java b/core/src/test/java/org/elasticsearch/action/bulk/TransportShardBulkActionTests.java index b2e11d833391c..dd76564ca3282 100644 --- a/core/src/test/java/org/elasticsearch/action/bulk/TransportShardBulkActionTests.java +++ b/core/src/test/java/org/elasticsearch/action/bulk/TransportShardBulkActionTests.java @@ -20,6 +20,7 @@ package org.elasticsearch.action.bulk; import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.ElasticsearchStatusException; import org.elasticsearch.Version; import org.elasticsearch.action.DocWriteRequest; import org.elasticsearch.action.DocWriteResponse; @@ -30,14 +31,12 @@ import org.elasticsearch.action.index.IndexResponse; import org.elasticsearch.action.support.WriteRequest.RefreshPolicy; import org.elasticsearch.action.support.replication.ReplicationOperation; +import org.elasticsearch.action.support.replication.TransportWriteAction.WritePrimaryResult; import org.elasticsearch.action.update.UpdateHelper; import org.elasticsearch.action.update.UpdateRequest; import org.elasticsearch.action.update.UpdateResponse; import org.elasticsearch.client.Requests; import org.elasticsearch.cluster.metadata.IndexMetaData; -import org.elasticsearch.common.io.stream.StreamInput; -import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.io.stream.Streamable; import org.elasticsearch.common.lucene.uid.Versions; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentType; 
@@ -59,9 +58,10 @@ import java.util.function.LongSupplier; import static org.elasticsearch.action.bulk.TransportShardBulkAction.replicaItemExecutionMode; -import static org.junit.Assert.assertNotNull; import static org.hamcrest.CoreMatchers.equalTo; import static org.hamcrest.CoreMatchers.not; +import static org.hamcrest.CoreMatchers.notNullValue; +import static org.hamcrest.Matchers.arrayWithSize; import static org.hamcrest.Matchers.containsString; import static org.hamcrest.Matchers.instanceOf; import static org.mockito.Mockito.any; @@ -207,6 +207,56 @@ public void testExecuteBulkIndexRequest() throws Exception { closeShards(shard); } + public void testSkipBulkIndexRequestIfAborted() throws Exception { + IndexShard shard = newStartedShard(true); + + BulkItemRequest[] items = new BulkItemRequest[randomIntBetween(2, 5)]; + for (int i = 0; i < items.length; i++) { + DocWriteRequest writeRequest = new IndexRequest("index", "type", "id_" + i) + .source(Requests.INDEX_CONTENT_TYPE, "foo", "bar-" + i) + .opType(DocWriteRequest.OpType.INDEX); + items[i] = new BulkItemRequest(i, writeRequest); + } + BulkShardRequest bulkShardRequest = new BulkShardRequest(shardId, RefreshPolicy.NONE, items); + + // Preemptively abort one of the bulk items, but allow the others to proceed + BulkItemRequest rejectItem = randomFrom(items); + RestStatus rejectionStatus = randomFrom(RestStatus.BAD_REQUEST, RestStatus.CONFLICT, RestStatus.FORBIDDEN, RestStatus.LOCKED); + final ElasticsearchStatusException rejectionCause = new ElasticsearchStatusException("testing rejection", rejectionStatus); + rejectItem.abort("index", rejectionCause); + + UpdateHelper updateHelper = null; + WritePrimaryResult result = TransportShardBulkAction.performOnPrimary( + bulkShardRequest, shard, updateHelper, threadPool::absoluteTimeInMillis, new NoopMappingUpdatePerformer()); + + // since at least 1 item passed, the tran log location should exist, + assertThat(result.location, notNullValue()); + // and the response should exist and match the item count + assertThat(result.finalResponseIfSuccessful, notNullValue()); + assertThat(result.finalResponseIfSuccessful.getResponses(), arrayWithSize(items.length)); + + // check each response matches the input item, including the rejection + for (int i = 0; i < items.length; i++) { + BulkItemResponse response = result.finalResponseIfSuccessful.getResponses()[i]; + assertThat(response.getItemId(), equalTo(i)); + assertThat(response.getIndex(), equalTo("index")); + assertThat(response.getType(), equalTo("type")); + assertThat(response.getId(), equalTo("id_" + i)); + assertThat(response.getOpType(), equalTo(DocWriteRequest.OpType.INDEX)); + if (response.getItemId() == rejectItem.id()) { + assertTrue(response.isFailed()); + assertThat(response.getFailure().getCause(), equalTo(rejectionCause)); + assertThat(response.status(), equalTo(rejectionStatus)); + } else { + assertFalse(response.isFailed()); + } + } + + // Check that the non-rejected updates made it to the shard + assertDocCount(shard, items.length - 1); + closeShards(shard); + } + public void testExecuteBulkIndexRequestWithRejection() throws Exception { IndexMetaData metaData = indexMetaData(); IndexShard shard = newStartedShard(true); diff --git a/core/src/test/java/org/elasticsearch/action/delete/DeleteResponseTests.java b/core/src/test/java/org/elasticsearch/action/delete/DeleteResponseTests.java index 8f4e22e0fd221..8e40c5ea2aad6 100644 --- a/core/src/test/java/org/elasticsearch/action/delete/DeleteResponseTests.java +++ 
b/core/src/test/java/org/elasticsearch/action/delete/DeleteResponseTests.java @@ -26,7 +26,7 @@ import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.XContentType; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.test.ESTestCase; import org.elasticsearch.test.RandomObjects; @@ -114,8 +114,8 @@ public static Tuple randomDeleteResponse() { int shardId = randomIntBetween(0, 5); String type = randomAlphaOfLength(5); String id = randomAlphaOfLength(5); - long seqNo = randomFrom(SequenceNumbersService.UNASSIGNED_SEQ_NO, randomNonNegativeLong(), (long) randomIntBetween(0, 10000)); - long primaryTerm = seqNo == SequenceNumbersService.UNASSIGNED_SEQ_NO ? 0 : randomIntBetween(1, 10000); + long seqNo = randomFrom(SequenceNumbers.UNASSIGNED_SEQ_NO, randomNonNegativeLong(), (long) randomIntBetween(0, 10000)); + long primaryTerm = seqNo == SequenceNumbers.UNASSIGNED_SEQ_NO ? 0 : randomIntBetween(1, 10000); long version = randomBoolean() ? randomNonNegativeLong() : randomIntBetween(0, 10000); boolean found = randomBoolean(); boolean forcedRefresh = randomBoolean(); diff --git a/core/src/test/java/org/elasticsearch/action/index/IndexRequestTests.java b/core/src/test/java/org/elasticsearch/action/index/IndexRequestTests.java index 6816068dff453..6191184ef3f7a 100644 --- a/core/src/test/java/org/elasticsearch/action/index/IndexRequestTests.java +++ b/core/src/test/java/org/elasticsearch/action/index/IndexRequestTests.java @@ -26,10 +26,10 @@ import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.io.stream.BytesStreamOutput; import org.elasticsearch.common.io.stream.StreamInput; -import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.common.unit.ByteSizeValue; +import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.VersionType; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.rest.RestStatus; import org.elasticsearch.test.ESTestCase; @@ -37,7 +37,6 @@ import java.io.IOException; import java.io.UnsupportedEncodingException; import java.util.Arrays; -import java.util.Base64; import java.util.HashSet; import java.util.Set; @@ -135,7 +134,7 @@ public void testIndexResponse() { String id = randomAlphaOfLengthBetween(3, 10); long version = randomLong(); boolean created = randomBoolean(); - IndexResponse indexResponse = new IndexResponse(shardId, type, id, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, version, created); + IndexResponse indexResponse = new IndexResponse(shardId, type, id, SequenceNumbers.UNASSIGNED_SEQ_NO, 0, version, created); int total = randomIntBetween(1, 10); int successful = randomIntBetween(1, 10); ReplicationResponse.ShardInfo shardInfo = new ReplicationResponse.ShardInfo(total, successful); @@ -155,7 +154,7 @@ public void testIndexResponse() { assertEquals(forcedRefresh, indexResponse.forcedRefresh()); assertEquals("IndexResponse[index=" + shardId.getIndexName() + ",type=" + type + ",id="+ id + ",version=" + version + ",result=" + (created ? 
"created" : "updated") + - ",seqNo=" + SequenceNumbersService.UNASSIGNED_SEQ_NO + + ",seqNo=" + SequenceNumbers.UNASSIGNED_SEQ_NO + ",primaryTerm=" + 0 + ",shards={\"total\":" + total + ",\"successful\":" + successful + ",\"failed\":0}]", indexResponse.toString()); diff --git a/core/src/test/java/org/elasticsearch/action/index/IndexResponseTests.java b/core/src/test/java/org/elasticsearch/action/index/IndexResponseTests.java index be67834576e2b..926f272ed8339 100644 --- a/core/src/test/java/org/elasticsearch/action/index/IndexResponseTests.java +++ b/core/src/test/java/org/elasticsearch/action/index/IndexResponseTests.java @@ -27,7 +27,7 @@ import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.XContentType; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.test.ESTestCase; import org.elasticsearch.test.RandomObjects; @@ -127,8 +127,8 @@ public static Tuple randomIndexResponse() { int shardId = randomIntBetween(0, 5); String type = randomAlphaOfLength(5); String id = randomAlphaOfLength(5); - long seqNo = randomFrom(SequenceNumbersService.UNASSIGNED_SEQ_NO, randomNonNegativeLong(), (long) randomIntBetween(0, 10000)); - long primaryTerm = seqNo == SequenceNumbersService.UNASSIGNED_SEQ_NO ? 0 : randomIntBetween(1, 10000); + long seqNo = randomFrom(SequenceNumbers.UNASSIGNED_SEQ_NO, randomNonNegativeLong(), (long) randomIntBetween(0, 10000)); + long primaryTerm = seqNo == SequenceNumbers.UNASSIGNED_SEQ_NO ? 0 : randomIntBetween(1, 10000); long version = randomBoolean() ? randomNonNegativeLong() : randomIntBetween(0, 10000); boolean created = randomBoolean(); boolean forcedRefresh = randomBoolean(); diff --git a/core/src/test/java/org/elasticsearch/action/ingest/SimulatePipelineRequestTests.java b/core/src/test/java/org/elasticsearch/action/ingest/SimulatePipelineRequestTests.java index ecd0256b11068..5cd82be8cb04c 100644 --- a/core/src/test/java/org/elasticsearch/action/ingest/SimulatePipelineRequestTests.java +++ b/core/src/test/java/org/elasticsearch/action/ingest/SimulatePipelineRequestTests.java @@ -49,8 +49,7 @@ public void testSerialization() throws IOException { BytesStreamOutput out = new BytesStreamOutput(); request.writeTo(out); StreamInput streamInput = out.bytes().streamInput(); - SimulatePipelineRequest otherRequest = new SimulatePipelineRequest(); - otherRequest.readFrom(streamInput); + SimulatePipelineRequest otherRequest = new SimulatePipelineRequest(streamInput); assertThat(otherRequest.getId(), equalTo(request.getId())); assertThat(otherRequest.isVerbose(), equalTo(request.isVerbose())); @@ -65,8 +64,7 @@ public void testSerializationWithXContent() throws IOException { request.writeTo(output); StreamInput in = StreamInput.wrap(output.bytes().toBytesRef().bytes); - SimulatePipelineRequest serialized = new SimulatePipelineRequest(); - serialized.readFrom(in); + SimulatePipelineRequest serialized = new SimulatePipelineRequest(in); assertEquals(XContentType.JSON, serialized.getXContentType()); assertEquals("{}", serialized.getSource().utf8ToString()); } @@ -77,8 +75,7 @@ public void testSerializationWithXContentBwc() throws IOException { Version.V_5_1_1, Version.V_5_1_2, Version.V_5_2_0); try (StreamInput in = StreamInput.wrap(data)) { in.setVersion(version); - SimulatePipelineRequest request = new SimulatePipelineRequest(); - request.readFrom(in); 
+ SimulatePipelineRequest request = new SimulatePipelineRequest(in); assertEquals(XContentType.JSON, request.getXContentType()); assertEquals("{}", request.getSource().utf8ToString()); diff --git a/core/src/test/java/org/elasticsearch/action/search/AbstractSearchAsyncActionTests.java b/core/src/test/java/org/elasticsearch/action/search/AbstractSearchAsyncActionTests.java index ec78f1892f90f..8f413eb436421 100644 --- a/core/src/test/java/org/elasticsearch/action/search/AbstractSearchAsyncActionTests.java +++ b/core/src/test/java/org/elasticsearch/action/search/AbstractSearchAsyncActionTests.java @@ -60,11 +60,12 @@ private AbstractSearchAsyncAction createAction( System::nanoTime); } + final SearchRequest request = new SearchRequest(); return new AbstractSearchAsyncAction("test", null, null, null, Collections.singletonMap("foo", new AliasFilter(new MatchAllQueryBuilder())), Collections.singletonMap("foo", 2.0f), null, - new SearchRequest(), null, new GroupShardsIterator<>(Collections.singletonList( + request, null, new GroupShardsIterator<>(Collections.singletonList( new SearchShardIterator(null, null, Collections.emptyList(), null))), timeProvider, 0, null, - new InitialSearchPhase.ArraySearchPhaseResults<>(10)) { + new InitialSearchPhase.ArraySearchPhaseResults<>(10), request.getMaxConcurrentShardRequests()) { @Override protected SearchPhase getNextPhase(final SearchPhaseResults results, final SearchPhaseContext context) { return null; diff --git a/core/src/test/java/org/elasticsearch/action/search/CanMatchPreFilterSearchPhaseTests.java b/core/src/test/java/org/elasticsearch/action/search/CanMatchPreFilterSearchPhaseTests.java index 87cebc957c6c0..9e0b4f7fee9ba 100644 --- a/core/src/test/java/org/elasticsearch/action/search/CanMatchPreFilterSearchPhaseTests.java +++ b/core/src/test/java/org/elasticsearch/action/search/CanMatchPreFilterSearchPhaseTests.java @@ -24,9 +24,12 @@ import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.routing.GroupShardsIterator; +import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.common.Strings; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.util.concurrent.EsExecutors; +import org.elasticsearch.search.SearchPhaseResult; +import org.elasticsearch.search.SearchShardTarget; import org.elasticsearch.search.internal.AliasFilter; import org.elasticsearch.search.internal.ShardSearchTransportRequest; import org.elasticsearch.test.ESTestCase; @@ -38,11 +41,12 @@ import java.util.Map; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.CountDownLatch; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; import java.util.concurrent.atomic.AtomicReference; public class CanMatchPreFilterSearchPhaseTests extends ESTestCase { - public void testFilterShards() throws InterruptedException { final TransportSearchAction.SearchTimeProvider timeProvider = new TransportSearchAction.SearchTimeProvider(0, System.nanoTime(), @@ -170,4 +174,84 @@ public void run() throws IOException { assertEquals(shard1, !result.get().get(0).skip()); assertFalse(result.get().get(1).skip()); // never skip the failure } + + /* + * In cases that a query coordinating node held all the shards for a query, the can match phase would recurse and end in stack overflow + * when subjected to max concurrent search requests. This test is a test for that situation. 
+ */ + public void testLotsOfShards() throws InterruptedException { + final TransportSearchAction.SearchTimeProvider timeProvider = + new TransportSearchAction.SearchTimeProvider(0, System.nanoTime(), System::nanoTime); + + final Map lookup = new ConcurrentHashMap<>(); + final DiscoveryNode primaryNode = new DiscoveryNode("node_1", buildNewFakeTransportAddress(), Version.CURRENT); + final DiscoveryNode replicaNode = new DiscoveryNode("node_2", buildNewFakeTransportAddress(), Version.CURRENT); + lookup.put("node1", new SearchAsyncActionTests.MockConnection(primaryNode)); + lookup.put("node2", new SearchAsyncActionTests.MockConnection(replicaNode)); + + + final SearchTransportService searchTransportService = + new SearchTransportService(Settings.builder().put("search.remote.connect", false).build(), null, null) { + @Override + public void sendCanMatch( + Transport.Connection connection, + ShardSearchTransportRequest request, + SearchTask task, + ActionListener listener) { + listener.onResponse(new CanMatchResponse(randomBoolean())); + } + }; + + final CountDownLatch latch = new CountDownLatch(1); + final OriginalIndices originalIndices = new OriginalIndices(new String[]{"idx"}, IndicesOptions.strictExpandOpenAndForbidClosed()); + final GroupShardsIterator shardsIter = + SearchAsyncActionTests.getShardsIter("idx", originalIndices, 4096, randomBoolean(), primaryNode, replicaNode); + final ExecutorService executor = Executors.newFixedThreadPool(randomIntBetween(1, Runtime.getRuntime().availableProcessors())); + final CanMatchPreFilterSearchPhase canMatchPhase = new CanMatchPreFilterSearchPhase( + logger, + searchTransportService, + (clusterAlias, node) -> lookup.get(node), + Collections.singletonMap("_na_", new AliasFilter(null, Strings.EMPTY_ARRAY)), + Collections.emptyMap(), + EsExecutors.newDirectExecutorService(), + new SearchRequest(), + null, + shardsIter, + timeProvider, + 0, + null, + (iter) -> new InitialSearchPhase("test", null, iter, logger, randomIntBetween(1, 32), executor) { + @Override + void onPhaseDone() { + latch.countDown(); + } + + @Override + void onShardFailure(final int shardIndex, final SearchShardTarget shardTarget, final Exception ex) { + + } + + @Override + void onShardSuccess(final SearchPhaseResult result) { + + } + + @Override + protected void executePhaseOnShard( + final SearchShardIterator shardIt, + final ShardRouting shard, + final SearchActionListener listener) { + if (randomBoolean()) { + listener.onResponse(new SearchPhaseResult() {}); + } else { + listener.onFailure(new Exception("failure")); + } + } + }); + + canMatchPhase.start(); + latch.await(); + executor.shutdown(); + } + } diff --git a/core/src/test/java/org/elasticsearch/action/search/ExpandSearchPhaseTests.java b/core/src/test/java/org/elasticsearch/action/search/ExpandSearchPhaseTests.java index 81a6359997d7a..0951380fcf4aa 100644 --- a/core/src/test/java/org/elasticsearch/action/search/ExpandSearchPhaseTests.java +++ b/core/src/test/java/org/elasticsearch/action/search/ExpandSearchPhaseTests.java @@ -20,6 +20,7 @@ package org.elasticsearch.action.search; import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.common.document.DocumentField; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.text.Text; @@ -100,7 +101,8 @@ void sendExecuteMultiSearch(MultiSearchRequest request, SearchTask task, ActionL mSearchResponses.add(new MultiSearchResponse.Item(response, null)); } - 
listener.onResponse(new MultiSearchResponse(mSearchResponses.toArray(new MultiSearchResponse.Item[0]))); + listener.onResponse( + new MultiSearchResponse(mSearchResponses.toArray(new MultiSearchResponse.Item[0]), randomIntBetween(1, 10000))); } }; @@ -152,10 +154,11 @@ void sendExecuteMultiSearch(MultiSearchRequest request, SearchTask task, ActionL InternalSearchResponse internalSearchResponse = new InternalSearchResponse(collapsedHits, null, null, null, false, null, 1); SearchResponse response = mockSearchPhaseContext.buildSearchResponse(internalSearchResponse, null); - listener.onResponse(new MultiSearchResponse(new MultiSearchResponse.Item[]{ - new MultiSearchResponse.Item(null, new RuntimeException("boom")), - new MultiSearchResponse.Item(response, null) - })); + listener.onResponse(new MultiSearchResponse( + new MultiSearchResponse.Item[]{ + new MultiSearchResponse.Item(null, new RuntimeException("boom")), + new MultiSearchResponse.Item(response, null) + }, randomIntBetween(1, 10000))); } }; @@ -242,4 +245,43 @@ public void run() throws IOException { assertNotNull(reference.get()); assertEquals(1, mockSearchPhaseContext.phasesExecuted.get()); } + + public void testExpandRequestOptions() throws IOException { + MockSearchPhaseContext mockSearchPhaseContext = new MockSearchPhaseContext(1); + mockSearchPhaseContext.searchTransport = new SearchTransportService( + Settings.builder().put("search.remote.connect", false).build(), null, null) { + + @Override + void sendExecuteMultiSearch(MultiSearchRequest request, SearchTask task, ActionListener listener) { + final QueryBuilder postFilter = QueryBuilders.existsQuery("foo"); + assertTrue(request.requests().stream().allMatch((r) -> "foo".equals(r.preference()))); + assertTrue(request.requests().stream().allMatch((r) -> "baz".equals(r.routing()))); + assertTrue(request.requests().stream().allMatch((r) -> postFilter.equals(r.source().postFilter()))); + } + }; + mockSearchPhaseContext.getRequest().source(new SearchSourceBuilder() + .collapse( + new CollapseBuilder("someField") + .setInnerHits(new InnerHitBuilder().setName("foobarbaz")) + ) + .postFilter(QueryBuilders.existsQuery("foo"))) + .preference("foobar") + .routing("baz"); + + SearchHits hits = new SearchHits(new SearchHit[0], 1, 1.0f); + InternalSearchResponse internalSearchResponse = new InternalSearchResponse(hits, null, null, null, false, null, 1); + AtomicReference reference = new AtomicReference<>(); + ExpandSearchPhase phase = new ExpandSearchPhase(mockSearchPhaseContext, internalSearchResponse, r -> + new SearchPhase("test") { + @Override + public void run() throws IOException { + reference.set(mockSearchPhaseContext.buildSearchResponse(r, null)); + } + } + ); + phase.run(); + mockSearchPhaseContext.assertNoFailure(); + assertNotNull(reference.get()); + assertEquals(1, mockSearchPhaseContext.phasesExecuted.get()); + } } diff --git a/core/src/test/java/org/elasticsearch/action/search/MultiSearchActionTookTests.java b/core/src/test/java/org/elasticsearch/action/search/MultiSearchActionTookTests.java new file mode 100644 index 0000000000000..73743230d1a14 --- /dev/null +++ b/core/src/test/java/org/elasticsearch/action/search/MultiSearchActionTookTests.java @@ -0,0 +1,199 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. 
Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.search; + +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.IndicesRequest; +import org.elasticsearch.action.support.ActionFilters; +import org.elasticsearch.action.support.TransportAction; +import org.elasticsearch.cluster.ClusterName; +import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; +import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.common.Randomness; +import org.elasticsearch.common.UUIDs; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.AtomicArray; +import org.elasticsearch.tasks.TaskManager; +import org.elasticsearch.test.ESTestCase; +import org.elasticsearch.threadpool.TestThreadPool; +import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.transport.TransportService; +import org.junit.After; +import org.junit.AfterClass; +import org.junit.Before; +import org.junit.BeforeClass; + +import java.util.Arrays; +import java.util.Collections; +import java.util.HashSet; +import java.util.IdentityHashMap; +import java.util.List; +import java.util.Queue; +import java.util.Set; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.concurrent.atomic.AtomicLong; +import java.util.function.LongSupplier; + +import static org.elasticsearch.test.ClusterServiceUtils.createClusterService; +import static org.hamcrest.CoreMatchers.equalTo; +import static org.hamcrest.Matchers.greaterThanOrEqualTo; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + +/** + * MultiSearch took time tests + */ +public class MultiSearchActionTookTests extends ESTestCase { + + private ThreadPool threadPool; + private ClusterService clusterService; + + @BeforeClass + public static void beforeClass() { + } + + @AfterClass + public static void afterClass() { + } + + @Before + public void setUp() throws Exception { + super.setUp(); + threadPool = new TestThreadPool("MultiSearchActionTookTests"); + clusterService = createClusterService(threadPool); + } + + @After + public void tearDown() throws Exception { + clusterService.close(); + ThreadPool.terminate(threadPool, 30, TimeUnit.SECONDS); + super.tearDown(); + } + + // test unit conversion using a controller clock + public void testTookWithControlledClock() throws Exception { + runTestTook(true); + } + + // test using System#nanoTime + public void testTookWithRealClock() throws Exception { + runTestTook(false); + } + + private void runTestTook(boolean controlledClock) throws Exception { + MultiSearchRequest multiSearchRequest = new MultiSearchRequest().add(new SearchRequest()); + AtomicLong expected = new AtomicLong(); + + TransportMultiSearchAction action = 
createTransportMultiSearchAction(controlledClock, expected); + + action.doExecute(multiSearchRequest, new ActionListener() { + @Override + public void onResponse(MultiSearchResponse multiSearchResponse) { + if (controlledClock) { + assertThat(TimeUnit.MILLISECONDS.convert(expected.get(), TimeUnit.NANOSECONDS), + equalTo(multiSearchResponse.getTook().getMillis())); + } else { + assertThat(multiSearchResponse.getTook().getMillis(), + greaterThanOrEqualTo(TimeUnit.MILLISECONDS.convert(expected.get(), TimeUnit.NANOSECONDS))); + } + } + + @Override + public void onFailure(Exception e) { + throw new RuntimeException(e); + } + }); + } + + private TransportMultiSearchAction createTransportMultiSearchAction(boolean controlledClock, AtomicLong expected) { + Settings settings = Settings.builder().put("node.name", TransportMultiSearchActionTests.class.getSimpleName()).build(); + TaskManager taskManager = mock(TaskManager.class); + TransportService transportService = new TransportService(Settings.EMPTY, null, null, TransportService.NOOP_TRANSPORT_INTERCEPTOR, + boundAddress -> DiscoveryNode.createLocal(settings, boundAddress.publishAddress(), UUIDs.randomBase64UUID()), null) { + @Override + public TaskManager getTaskManager() { + return taskManager; + } + }; + ActionFilters actionFilters = new ActionFilters(new HashSet<>()); + ClusterService clusterService = mock(ClusterService.class); + when(clusterService.state()).thenReturn(ClusterState.builder(new ClusterName("test")).build()); + IndexNameExpressionResolver resolver = new Resolver(Settings.EMPTY); + + final int availableProcessors = Runtime.getRuntime().availableProcessors(); + AtomicInteger counter = new AtomicInteger(); + final List threadPoolNames = Arrays.asList(ThreadPool.Names.GENERIC, ThreadPool.Names.SAME); + Randomness.shuffle(threadPoolNames); + final ExecutorService commonExecutor = threadPool.executor(threadPoolNames.get(0)); + final Set requests = Collections.newSetFromMap(Collections.synchronizedMap(new IdentityHashMap<>())); + + TransportAction searchAction = new TransportAction(Settings.EMPTY, + "action", threadPool, actionFilters, resolver, taskManager) { + @Override + protected void doExecute(SearchRequest request, ActionListener listener) { + requests.add(request); + commonExecutor.execute(() -> { + counter.decrementAndGet(); + listener.onResponse(new SearchResponse()); + }); + } + }; + + if (controlledClock) { + return new TransportMultiSearchAction(threadPool, actionFilters, transportService, clusterService, searchAction, resolver, + availableProcessors, expected::get) { + @Override + void executeSearch(final Queue requests, final AtomicArray responses, + final AtomicInteger responseCounter, final ActionListener listener, long startTimeInNanos) { + expected.set(1000000); + super.executeSearch(requests, responses, responseCounter, listener, startTimeInNanos); + } + }; + } else { + return new TransportMultiSearchAction(threadPool, actionFilters, transportService, clusterService, searchAction, resolver, + availableProcessors, System::nanoTime) { + + @Override + void executeSearch(final Queue requests, final AtomicArray responses, + final AtomicInteger responseCounter, final ActionListener listener, long startTimeInNanos) { + long elapsed = spinForAtLeastNMilliseconds(randomIntBetween(0, 10)); + expected.set(elapsed); + super.executeSearch(requests, responses, responseCounter, listener, startTimeInNanos); + } + }; + } + } + + static class Resolver extends IndexNameExpressionResolver { + + Resolver(Settings settings) { + 
super(settings); + } + + @Override + public String[] concreteIndexNames(ClusterState state, IndicesRequest request) { + return request.indices(); + } + } +} diff --git a/core/src/test/java/org/elasticsearch/action/search/MultiSearchRequestTests.java b/core/src/test/java/org/elasticsearch/action/search/MultiSearchRequestTests.java index 3a162f302bc3b..e6de1d859d867 100644 --- a/core/src/test/java/org/elasticsearch/action/search/MultiSearchRequestTests.java +++ b/core/src/test/java/org/elasticsearch/action/search/MultiSearchRequestTests.java @@ -146,13 +146,16 @@ public void testSimpleAdd4() throws Exception { } public void testResponseErrorToXContent() throws IOException { + long tookInMillis = randomIntBetween(1, 1000); MultiSearchResponse response = new MultiSearchResponse( - new MultiSearchResponse.Item[]{ - new MultiSearchResponse.Item(null, new IllegalStateException("foobar")), - new MultiSearchResponse.Item(null, new IllegalStateException("baaaaaazzzz")) - }); - - assertEquals("{\"responses\":[" + new MultiSearchResponse.Item[] { + new MultiSearchResponse.Item(null, new IllegalStateException("foobar")), + new MultiSearchResponse.Item(null, new IllegalStateException("baaaaaazzzz")) + }, tookInMillis); + + assertEquals("{\"took\":" + + tookInMillis + + ",\"responses\":[" + "{" + "\"error\":{\"root_cause\":[{\"type\":\"illegal_state_exception\",\"reason\":\"foobar\"}]," + "\"type\":\"illegal_state_exception\",\"reason\":\"foobar\"},\"status\":500" diff --git a/core/src/test/java/org/elasticsearch/action/search/SearchAsyncActionTests.java b/core/src/test/java/org/elasticsearch/action/search/SearchAsyncActionTests.java index 3ee681383cd27..8a9c98395d76f 100644 --- a/core/src/test/java/org/elasticsearch/action/search/SearchAsyncActionTests.java +++ b/core/src/test/java/org/elasticsearch/action/search/SearchAsyncActionTests.java @@ -50,6 +50,8 @@ import java.util.Set; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.CountDownLatch; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicReference; @@ -110,7 +112,8 @@ public void onFailure(Exception e) { new TransportSearchAction.SearchTimeProvider(0, 0, () -> 0), 0, null, - new InitialSearchPhase.ArraySearchPhaseResults<>(shardsIter.size())) { + new InitialSearchPhase.ArraySearchPhaseResults<>(shardsIter.size()), + request.getMaxConcurrentShardRequests()) { @Override protected void executePhaseOnShard(SearchShardIterator shardIt, ShardRouting shard, @@ -199,7 +202,8 @@ public void onFailure(Exception e) { new TransportSearchAction.SearchTimeProvider(0, 0, () -> 0), 0, null, - new InitialSearchPhase.ArraySearchPhaseResults<>(shardsIter.size())) { + new InitialSearchPhase.ArraySearchPhaseResults<>(shardsIter.size()), + request.getMaxConcurrentShardRequests()) { @Override protected void executePhaseOnShard(SearchShardIterator shardIt, ShardRouting shard, @@ -283,6 +287,7 @@ public void sendFreeContext(Transport.Connection connection, long contextId, Ori lookup.put(primaryNode.getId(), new MockConnection(primaryNode)); lookup.put(replicaNode.getId(), new MockConnection(replicaNode)); Map aliasFilters = Collections.singletonMap("_na_", new AliasFilter(null, Strings.EMPTY_ARRAY)); + final ExecutorService executor = Executors.newFixedThreadPool(randomIntBetween(1, Runtime.getRuntime().availableProcessors())); AbstractSearchAsyncAction asyncAction = new AbstractSearchAsyncAction( "test", @@ -293,14 
+298,15 @@ public void sendFreeContext(Transport.Connection connection, long contextId, Ori return lookup.get(node); }, aliasFilters, Collections.emptyMap(), - null, + executor, request, responseListener, shardsIter, new TransportSearchAction.SearchTimeProvider(0, 0, () -> 0), 0, null, - new InitialSearchPhase.ArraySearchPhaseResults<>(shardsIter.size())) { + new InitialSearchPhase.ArraySearchPhaseResults<>(shardsIter.size()), + request.getMaxConcurrentShardRequests()) { TestSearchResponse response = new TestSearchResponse(); @Override @@ -346,6 +352,7 @@ public void run() throws IOException { } else { assertTrue(nodeToContextMap.get(replicaNode).toString(), nodeToContextMap.get(replicaNode).isEmpty()); } + executor.shutdown(); } static GroupShardsIterator getShardsIter(String index, OriginalIndices originalIndices, int numShards, diff --git a/core/src/test/java/org/elasticsearch/action/search/SearchPhaseControllerTests.java b/core/src/test/java/org/elasticsearch/action/search/SearchPhaseControllerTests.java index e6d1e20147b90..7501a7a90be70 100644 --- a/core/src/test/java/org/elasticsearch/action/search/SearchPhaseControllerTests.java +++ b/core/src/test/java/org/elasticsearch/action/search/SearchPhaseControllerTests.java @@ -72,7 +72,7 @@ public void setup() { public void testSort() throws Exception { List suggestions = new ArrayList<>(); for (int i = 0; i < randomIntBetween(1, 5); i++) { - suggestions.add(new CompletionSuggestion(randomAlphaOfLength(randomIntBetween(1, 5)), randomIntBetween(1, 20))); + suggestions.add(new CompletionSuggestion(randomAlphaOfLength(randomIntBetween(1, 5)), randomIntBetween(1, 20), false)); } int nShards = randomIntBetween(1, 20); int queryResultSize = randomBoolean() ? 0 : randomIntBetween(1, nShards * 2); @@ -139,7 +139,7 @@ public void testMerge() throws IOException { for (int i = 0; i < randomIntBetween(1, 5); i++) { int size = randomIntBetween(1, 20); maxSuggestSize += size; - suggestions.add(new CompletionSuggestion(randomAlphaOfLength(randomIntBetween(1, 5)), size)); + suggestions.add(new CompletionSuggestion(randomAlphaOfLength(randomIntBetween(1, 5)), size, false)); } int nShards = randomIntBetween(1, 20); int queryResultSize = randomBoolean() ? 
0 : randomIntBetween(1, nShards * 2); @@ -202,7 +202,7 @@ private AtomicArray generateQueryResults(int nShards, List shardSuggestion = new ArrayList<>(); for (CompletionSuggestion completionSuggestion : suggestions) { CompletionSuggestion suggestion = new CompletionSuggestion( - completionSuggestion.getName(), completionSuggestion.getSize()); + completionSuggestion.getName(), completionSuggestion.getSize(), false); final CompletionSuggestion.Entry completionEntry = new CompletionSuggestion.Entry(new Text(""), 0, 5); suggestion.addTerm(completionEntry); int optionSize = randomIntBetween(1, suggestion.getSize()); diff --git a/core/src/test/java/org/elasticsearch/action/search/SearchResponseTests.java b/core/src/test/java/org/elasticsearch/action/search/SearchResponseTests.java index 02c4964af3cc5..999c348b57580 100644 --- a/core/src/test/java/org/elasticsearch/action/search/SearchResponseTests.java +++ b/core/src/test/java/org/elasticsearch/action/search/SearchResponseTests.java @@ -175,7 +175,7 @@ public void testFromXContentWithFailures() throws IOException { ShardSearchFailure parsedFailure = parsed.getShardFailures()[i]; ShardSearchFailure originalFailure = failures[i]; assertEquals(originalFailure.index(), parsedFailure.index()); - assertEquals(originalFailure.shard().getNodeId(), parsedFailure.shard().getNodeId()); + assertEquals(originalFailure.shard(), parsedFailure.shard()); assertEquals(originalFailure.shardId(), parsedFailure.shardId()); String originalMsg = originalFailure.getCause().getMessage(); assertEquals(parsedFailure.getCause().getMessage(), "Elasticsearch exception [type=parsing_exception, reason=" + diff --git a/core/src/test/java/org/elasticsearch/action/search/SearchScrollRequestTests.java b/core/src/test/java/org/elasticsearch/action/search/SearchScrollRequestTests.java index 6ec9f95f489de..f40819ec08958 100644 --- a/core/src/test/java/org/elasticsearch/action/search/SearchScrollRequestTests.java +++ b/core/src/test/java/org/elasticsearch/action/search/SearchScrollRequestTests.java @@ -46,8 +46,7 @@ public void testSerialization() throws Exception { try (BytesStreamOutput output = new BytesStreamOutput()) { searchScrollRequest.writeTo(output); try (StreamInput in = output.bytes().streamInput()) { - SearchScrollRequest deserializedRequest = new SearchScrollRequest(); - deserializedRequest.readFrom(in); + SearchScrollRequest deserializedRequest = new SearchScrollRequest(in); assertEquals(deserializedRequest, searchScrollRequest); assertEquals(deserializedRequest.hashCode(), searchScrollRequest.hashCode()); assertNotSame(deserializedRequest, searchScrollRequest); @@ -61,8 +60,7 @@ public void testInternalScrollSearchRequestSerialization() throws IOException { try (BytesStreamOutput output = new BytesStreamOutput()) { internalScrollSearchRequest.writeTo(output); try (StreamInput in = output.bytes().streamInput()) { - InternalScrollSearchRequest deserializedRequest = new InternalScrollSearchRequest(); - deserializedRequest.readFrom(in); + InternalScrollSearchRequest deserializedRequest = new InternalScrollSearchRequest(in); assertEquals(deserializedRequest.id(), internalScrollSearchRequest.id()); assertEquals(deserializedRequest.scroll(), internalScrollSearchRequest.scroll()); assertNotSame(deserializedRequest, internalScrollSearchRequest); diff --git a/core/src/test/java/org/elasticsearch/action/search/ShardSearchFailureTests.java b/core/src/test/java/org/elasticsearch/action/search/ShardSearchFailureTests.java index 9a8c0b1feb1d3..13625a2bc612f 100644 --- 
a/core/src/test/java/org/elasticsearch/action/search/ShardSearchFailureTests.java +++ b/core/src/test/java/org/elasticsearch/action/search/ShardSearchFailureTests.java @@ -20,6 +20,7 @@ package org.elasticsearch.action.search; import org.elasticsearch.action.OriginalIndices; +import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.xcontent.ToXContent; @@ -40,12 +41,14 @@ public class ShardSearchFailureTests extends ESTestCase { public static ShardSearchFailure createTestItem() { String randomMessage = randomAlphaOfLengthBetween(3, 20); Exception ex = new ParsingException(0, 0, randomMessage , new IllegalArgumentException("some bad argument")); - String nodeId = randomAlphaOfLengthBetween(5, 10); - String indexName = randomAlphaOfLengthBetween(5, 10); - String indexUuid = randomAlphaOfLengthBetween(5, 10); - int shardId = randomInt(); - return new ShardSearchFailure(ex, - new SearchShardTarget(nodeId, new ShardId(new Index(indexName, indexUuid), shardId), null, null)); + SearchShardTarget searchShardTarget = null; + if (randomBoolean()) { + String nodeId = randomAlphaOfLengthBetween(5, 10); + String indexName = randomAlphaOfLengthBetween(5, 10); + searchShardTarget = new SearchShardTarget(nodeId, + new ShardId(new Index(indexName, IndexMetaData.INDEX_UUID_NA_VALUE), randomInt()), null, null); + } + return new ShardSearchFailure(ex, searchShardTarget); } public void testFromXContent() throws IOException { @@ -80,10 +83,10 @@ private void doFromXContentTestWithRandomFields(boolean addRandomFields) throws assertNull(parser.nextToken()); } assertEquals(response.index(), parsed.index()); - assertEquals(response.shard().getNodeId(), parsed.shard().getNodeId()); + assertEquals(response.shard(), parsed.shard()); assertEquals(response.shardId(), parsed.shardId()); - /** + /* * we cannot compare the cause, because it will be wrapped in an outer * ElasticSearchException best effort: try to check that the original * message appears somewhere in the rendered xContent diff --git a/core/src/test/java/org/elasticsearch/action/search/TransportMultiSearchActionTests.java b/core/src/test/java/org/elasticsearch/action/search/TransportMultiSearchActionTests.java index e811da82c47a8..4410507eef92e 100644 --- a/core/src/test/java/org/elasticsearch/action/search/TransportMultiSearchActionTests.java +++ b/core/src/test/java/org/elasticsearch/action/search/TransportMultiSearchActionTests.java @@ -102,8 +102,10 @@ protected void doExecute(SearchRequest request, ActionListener l }); } }; - TransportMultiSearchAction action = - new TransportMultiSearchAction(threadPool, actionFilters, transportService, clusterService, searchAction, resolver, 10); + + TransportMultiSearchAction action = + new TransportMultiSearchAction(threadPool, actionFilters, transportService, clusterService, searchAction, resolver, 10, + System::nanoTime); // Execute the multi search api and fail if we find an error after executing: try { diff --git a/core/src/test/java/org/elasticsearch/action/search/TransportSearchIT.java b/core/src/test/java/org/elasticsearch/action/search/TransportSearchIT.java index b3c695e881d68..ffc41cab847b9 100644 --- a/core/src/test/java/org/elasticsearch/action/search/TransportSearchIT.java +++ b/core/src/test/java/org/elasticsearch/action/search/TransportSearchIT.java @@ -20,6 +20,7 @@ package org.elasticsearch.action.search; import org.elasticsearch.cluster.metadata.IndexMetaData; 
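Several hunks above add a `tookInMillis` argument to `MultiSearchResponse` and thread a relative-time `LongSupplier` (for example `System::nanoTime`) through `TransportMultiSearchAction`; the new `MultiSearchActionTookTests` exercises it with both a controlled and a real clock. A rough sketch of how such a took value can be derived from a relative nanosecond clock, offered as an illustration only and not as the production implementation:

-------------------------------------------------
import java.util.concurrent.TimeUnit;
import java.util.function.LongSupplier;

final class TookTimeSketch {

    // Elapsed milliseconds between a recorded start (in relative nanoseconds)
    // and "now" as reported by the supplied clock.
    static long tookInMillis(LongSupplier relativeTimeNanos, long startTimeNanos) {
        return TimeUnit.NANOSECONDS.toMillis(relativeTimeNanos.getAsLong() - startTimeNanos);
    }

    public static void main(String[] args) {
        LongSupplier controlledClock = () -> 1_000_000L; // fixed reading: 1ms after the recorded start
        System.out.println(tookInMillis(controlledClock, 0L)); // prints 1
    }
}
-------------------------------------------------

With a controlled clock the expected milliseconds are exact, which is why the controlled-clock test above asserts equality while the `System::nanoTime` variant only asserts a lower bound.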
+import org.elasticsearch.common.settings.Settings; import org.elasticsearch.test.ESIntegTestCase; import java.util.Collections; @@ -34,9 +35,9 @@ public void testShardCountLimit() throws Exception { final int numPrimaries1 = randomIntBetween(2, 10); final int numPrimaries2 = randomIntBetween(1, 10); assertAcked(prepareCreate("test1") - .setSettings(IndexMetaData.SETTING_NUMBER_OF_SHARDS, numPrimaries1)); + .setSettings(Settings.builder().put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, numPrimaries1))); assertAcked(prepareCreate("test2") - .setSettings(IndexMetaData.SETTING_NUMBER_OF_SHARDS, numPrimaries2)); + .setSettings(Settings.builder().put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, numPrimaries2))); // no exception client().prepareSearch("test1").get(); diff --git a/core/src/test/java/org/elasticsearch/action/support/AutoCreateIndexTests.java b/core/src/test/java/org/elasticsearch/action/support/AutoCreateIndexTests.java index 159be84de0791..d531e824eb29d 100644 --- a/core/src/test/java/org/elasticsearch/action/support/AutoCreateIndexTests.java +++ b/core/src/test/java/org/elasticsearch/action/support/AutoCreateIndexTests.java @@ -102,22 +102,12 @@ public void testDefaultAutoCreation() { public void testExistingIndex() { Settings settings = Settings.builder().put(AutoCreateIndex.AUTO_CREATE_INDEX_SETTING.getKey(), randomFrom(true, false, - randomAlphaOfLengthBetween(7, 10))).build(); + randomAlphaOfLengthBetween(7, 10)).toString()).build(); AutoCreateIndex autoCreateIndex = newAutoCreateIndex(settings); assertThat(autoCreateIndex.shouldAutoCreate(randomFrom("index1", "index2", "index3"), buildClusterState("index1", "index2", "index3")), equalTo(false)); } - public void testDynamicMappingDisabled() { - Settings settings = Settings.builder().put(AutoCreateIndex.AUTO_CREATE_INDEX_SETTING.getKey(), randomFrom(true, - randomAlphaOfLengthBetween(1, 10))) - .put(MapperService.INDEX_MAPPER_DYNAMIC_SETTING.getKey(), false).build(); - AutoCreateIndex autoCreateIndex = newAutoCreateIndex(settings); - IndexNotFoundException e = expectThrows(IndexNotFoundException.class, () -> - autoCreateIndex.shouldAutoCreate(randomAlphaOfLengthBetween(1, 10), buildClusterState())); - assertEquals("no such index and [index.mapper.dynamic] is [false]", e.getMessage()); - } - public void testAutoCreationPatternEnabled() { Settings settings = Settings.builder().put(AutoCreateIndex.AUTO_CREATE_INDEX_SETTING.getKey(), randomFrom("+index*", "index*")) .build(); diff --git a/core/src/test/java/org/elasticsearch/action/support/replication/ReplicationOperationTests.java b/core/src/test/java/org/elasticsearch/action/support/replication/ReplicationOperationTests.java index e590e12895591..07dd1ae9ed1ac 100644 --- a/core/src/test/java/org/elasticsearch/action/support/replication/ReplicationOperationTests.java +++ b/core/src/test/java/org/elasticsearch/action/support/replication/ReplicationOperationTests.java @@ -131,6 +131,7 @@ public void testReplication() throws Exception { assertThat(primary.knownLocalCheckpoints.remove(primaryShard.allocationId().getId()), equalTo(primary.localCheckpoint)); assertThat(primary.knownLocalCheckpoints, equalTo(replicasProxy.generatedLocalCheckpoints)); + assertThat(primary.knownGlobalCheckpoints, equalTo(replicasProxy.generatedGlobalCheckpoints)); } public void testDemotedPrimary() throws Exception { @@ -380,6 +381,7 @@ static class TestPrimary implements ReplicationOperation.Primary clusterStateSupplier; final Map knownLocalCheckpoints = new HashMap<>(); + final Map knownGlobalCheckpoints = new 
HashMap<>(); TestPrimary(ShardRouting routing, Supplier clusterStateSupplier) { this.routing = routing; @@ -434,6 +436,11 @@ public void updateLocalCheckpointForShard(String allocationId, long checkpoint) knownLocalCheckpoints.put(allocationId, checkpoint); } + @Override + public void updateGlobalCheckpointForShard(String allocationId, long globalCheckpoint) { + knownGlobalCheckpoints.put(allocationId, globalCheckpoint); + } + @Override public long localCheckpoint() { return localCheckpoint; @@ -455,15 +462,23 @@ public ReplicationGroup getReplicationGroup() { static class ReplicaResponse implements ReplicationOperation.ReplicaResponse { final long localCheckpoint; + final long globalCheckpoint; - ReplicaResponse(long localCheckpoint) { + ReplicaResponse(long localCheckpoint, long globalCheckpoint) { this.localCheckpoint = localCheckpoint; + this.globalCheckpoint = globalCheckpoint; } @Override public long localCheckpoint() { return localCheckpoint; } + + @Override + public long globalCheckpoint() { + return globalCheckpoint; + } + } static class TestReplicaProxy implements ReplicationOperation.Replicas { @@ -474,6 +489,8 @@ static class TestReplicaProxy implements ReplicationOperation.Replicas final Map generatedLocalCheckpoints = ConcurrentCollections.newConcurrentMap(); + final Map generatedGlobalCheckpoints = ConcurrentCollections.newConcurrentMap(); + final Set markedAsStaleCopies = ConcurrentCollections.newConcurrentSet(); final long primaryTerm; @@ -497,11 +514,12 @@ public void performOn( if (opFailures.containsKey(replica)) { listener.onFailure(opFailures.get(replica)); } else { - final long checkpoint = random().nextLong(); + final long generatedLocalCheckpoint = random().nextLong(); + final long generatedGlobalCheckpoint = random().nextLong(); final String allocationId = replica.allocationId().getId(); - Long existing = generatedLocalCheckpoints.put(allocationId, checkpoint); - assertNull(existing); - listener.onResponse(new ReplicaResponse(checkpoint)); + assertNull(generatedLocalCheckpoints.put(allocationId, generatedLocalCheckpoint)); + assertNull(generatedGlobalCheckpoints.put(allocationId, generatedGlobalCheckpoint)); + listener.onResponse(new ReplicaResponse(generatedLocalCheckpoint, generatedGlobalCheckpoint)); } } diff --git a/core/src/test/java/org/elasticsearch/action/support/replication/TransportReplicationActionTests.java b/core/src/test/java/org/elasticsearch/action/support/replication/TransportReplicationActionTests.java index 1c8aa7079997e..1127a5ced580d 100644 --- a/core/src/test/java/org/elasticsearch/action/support/replication/TransportReplicationActionTests.java +++ b/core/src/test/java/org/elasticsearch/action/support/replication/TransportReplicationActionTests.java @@ -639,7 +639,8 @@ public void testReplicaProxy() throws InterruptedException, ExecutionException { CapturingTransport.CapturedRequest[] captures = transport.getCapturedRequestsAndClear(); assertThat(captures, arrayWithSize(1)); if (randomBoolean()) { - final TransportReplicationAction.ReplicaResponse response = new TransportReplicationAction.ReplicaResponse(randomLong()); + final TransportReplicationAction.ReplicaResponse response = + new TransportReplicationAction.ReplicaResponse(randomLong(), randomLong()); transport.handleResponse(captures[0].requestId, response); assertTrue(listener.isDone()); assertThat(listener.get(), equalTo(response)); diff --git a/core/src/test/java/org/elasticsearch/action/support/replication/TransportWriteActionTests.java 
b/core/src/test/java/org/elasticsearch/action/support/replication/TransportWriteActionTests.java index b1a1562073881..b3db10f920973 100644 --- a/core/src/test/java/org/elasticsearch/action/support/replication/TransportWriteActionTests.java +++ b/core/src/test/java/org/elasticsearch/action/support/replication/TransportWriteActionTests.java @@ -289,7 +289,8 @@ public void testReplicaProxy() throws InterruptedException, ExecutionException { CapturingTransport.CapturedRequest[] captures = transport.getCapturedRequestsAndClear(); assertThat(captures, arrayWithSize(1)); if (randomBoolean()) { - final TransportReplicationAction.ReplicaResponse response = new TransportReplicationAction.ReplicaResponse(randomLong()); + final TransportReplicationAction.ReplicaResponse response = + new TransportReplicationAction.ReplicaResponse(randomLong(), randomLong()); transport.handleResponse(captures[0].requestId, response); assertTrue(listener.isDone()); assertThat(listener.get(), equalTo(response)); diff --git a/core/src/test/java/org/elasticsearch/action/termvectors/AbstractTermVectorsTestCase.java b/core/src/test/java/org/elasticsearch/action/termvectors/AbstractTermVectorsTestCase.java index 15a2f9e74a461..bd76557f9a86f 100644 --- a/core/src/test/java/org/elasticsearch/action/termvectors/AbstractTermVectorsTestCase.java +++ b/core/src/test/java/org/elasticsearch/action/termvectors/AbstractTermVectorsTestCase.java @@ -210,7 +210,7 @@ protected void createIndexBasedOnFieldSettings(String index, String alias, TestF Settings.Builder settings = Settings.builder() .put(indexSettings()) .put("index.analysis.analyzer.tv_test.tokenizer", "standard") - .putArray("index.analysis.analyzer.tv_test.filter", "lowercase"); + .putList("index.analysis.analyzer.tv_test.filter", "lowercase"); assertAcked(prepareCreate(index).addMapping("type1", mappingBuilder).setSettings(settings).addAlias(new Alias(alias))); } diff --git a/core/src/test/java/org/elasticsearch/action/termvectors/GetTermVectorsIT.java b/core/src/test/java/org/elasticsearch/action/termvectors/GetTermVectorsIT.java index bbd7d5501783c..520c881aa7e62 100644 --- a/core/src/test/java/org/elasticsearch/action/termvectors/GetTermVectorsIT.java +++ b/core/src/test/java/org/elasticsearch/action/termvectors/GetTermVectorsIT.java @@ -189,7 +189,7 @@ public void testSimpleTermVectors() throws IOException { .setSettings(Settings.builder() .put(indexSettings()) .put("index.analysis.analyzer.tv_test.tokenizer", "whitespace") - .putArray("index.analysis.analyzer.tv_test.filter", "lowercase"))); + .putList("index.analysis.analyzer.tv_test.filter", "lowercase"))); for (int i = 0; i < 10; i++) { client().prepareIndex("test", "type1", Integer.toString(i)) .setSource(jsonBuilder().startObject().field("field", "the quick brown fox jumps over the lazy dog") @@ -261,7 +261,7 @@ public void testRandomSingleTermVectors() throws IOException { assertAcked(prepareCreate("test").addMapping("type1", mapping) .setSettings(Settings.builder() .put("index.analysis.analyzer.tv_test.tokenizer", "whitespace") - .putArray("index.analysis.analyzer.tv_test.filter", "lowercase"))); + .putList("index.analysis.analyzer.tv_test.filter", "lowercase"))); for (int i = 0; i < 10; i++) { client().prepareIndex("test", "type1", Integer.toString(i)) .setSource(jsonBuilder().startObject().field("field", "the quick brown fox jumps over the lazy dog") @@ -395,7 +395,7 @@ public void testSimpleTermVectorsWithGenerate() throws IOException { .setSettings(Settings.builder() .put(indexSettings()) 
.put("index.analysis.analyzer.tv_test.tokenizer", "whitespace") - .putArray("index.analysis.analyzer.tv_test.filter", "lowercase"))); + .putList("index.analysis.analyzer.tv_test.filter", "lowercase"))); ensureGreen(); diff --git a/core/src/test/java/org/elasticsearch/action/termvectors/GetTermVectorsTests.java b/core/src/test/java/org/elasticsearch/action/termvectors/GetTermVectorsTests.java index e4d55da9f92b1..5e81949402055 100644 --- a/core/src/test/java/org/elasticsearch/action/termvectors/GetTermVectorsTests.java +++ b/core/src/test/java/org/elasticsearch/action/termvectors/GetTermVectorsTests.java @@ -152,7 +152,7 @@ public void testRandomPayloadWithDelimitedPayloadTokenFilter() throws IOExceptio .field("analyzer", "payload_test").endObject().endObject().endObject().endObject(); Settings setting = Settings.builder() .put("index.analysis.analyzer.payload_test.tokenizer", "whitespace") - .putArray("index.analysis.analyzer.payload_test.filter", "my_delimited_payload_filter") + .putList("index.analysis.analyzer.payload_test.filter", "my_delimited_payload_filter") .put("index.analysis.filter.my_delimited_payload_filter.delimiter", delimiter) .put("index.analysis.filter.my_delimited_payload_filter.encoding", encodingString) .put("index.analysis.filter.my_delimited_payload_filter.type", "mock_payload_filter").build(); diff --git a/core/src/test/java/org/elasticsearch/action/termvectors/TermVectorsUnitTests.java b/core/src/test/java/org/elasticsearch/action/termvectors/TermVectorsUnitTests.java index 2018218cc5456..d110d12e4fc39 100644 --- a/core/src/test/java/org/elasticsearch/action/termvectors/TermVectorsUnitTests.java +++ b/core/src/test/java/org/elasticsearch/action/termvectors/TermVectorsUnitTests.java @@ -47,9 +47,9 @@ import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.common.xcontent.json.JsonXContent; -import org.elasticsearch.index.mapper.AllFieldMapper; import org.elasticsearch.index.mapper.FieldMapper; import org.elasticsearch.index.mapper.MapperParsingException; +import org.elasticsearch.index.mapper.TextFieldMapper; import org.elasticsearch.index.mapper.TypeParsers; import org.elasticsearch.rest.action.document.RestTermVectorsAction; import org.elasticsearch.test.ESTestCase; @@ -303,7 +303,7 @@ public void testFieldTypeToTermVectorString() throws Exception { ft.setStoreTermVectorPositions(true); String ftOpts = FieldMapper.termVectorOptionsToString(ft); assertThat("with_positions_payloads", equalTo(ftOpts)); - AllFieldMapper.Builder builder = new AllFieldMapper.Builder(null); + TextFieldMapper.Builder builder = new TextFieldMapper.Builder(null); boolean exceptiontrown = false; try { TypeParsers.parseTermVector("", ftOpts, builder); diff --git a/core/src/test/java/org/elasticsearch/bootstrap/BootstrapChecksTests.java b/core/src/test/java/org/elasticsearch/bootstrap/BootstrapChecksTests.java index 77276b8787fc2..a70d96a302c84 100644 --- a/core/src/test/java/org/elasticsearch/bootstrap/BootstrapChecksTests.java +++ b/core/src/test/java/org/elasticsearch/bootstrap/BootstrapChecksTests.java @@ -21,6 +21,7 @@ import org.apache.logging.log4j.Logger; import org.apache.lucene.util.Constants; +import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.transport.BoundTransportAddress; import org.elasticsearch.common.transport.TransportAddress; @@ -52,6 +53,8 @@ public class BootstrapChecksTests extends ESTestCase { + private 
static final BootstrapContext defaultContext = new BootstrapContext(Settings.EMPTY, MetaData.EMPTY_META_DATA); + public void testNonProductionMode() throws NodeValidationException { // nothing should happen since we are in non-production mode final List transportAddresses = new ArrayList<>(); @@ -64,18 +67,18 @@ public void testNonProductionMode() throws NodeValidationException { BoundTransportAddress boundTransportAddress = mock(BoundTransportAddress.class); when(boundTransportAddress.boundAddresses()).thenReturn(transportAddresses.toArray(new TransportAddress[0])); when(boundTransportAddress.publishAddress()).thenReturn(publishAddress); - BootstrapChecks.check(Settings.EMPTY, boundTransportAddress, Collections.emptyList()); + BootstrapChecks.check(defaultContext, boundTransportAddress, Collections.emptyList()); } public void testNoLogMessageInNonProductionMode() throws NodeValidationException { final Logger logger = mock(Logger.class); - BootstrapChecks.check(false, Collections.emptyList(), logger); + BootstrapChecks.check(defaultContext, false, Collections.emptyList(), logger); verifyNoMoreInteractions(logger); } public void testLogMessageInProductionMode() throws NodeValidationException { final Logger logger = mock(Logger.class); - BootstrapChecks.check(true, Collections.emptyList(), logger); + BootstrapChecks.check(defaultContext, true, Collections.emptyList(), logger); verify(logger).info("bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks"); verifyNoMoreInteractions(logger); } @@ -124,32 +127,12 @@ public void testEnforceLimitsWhenPublishingToNonLocalAddress() { public void testExceptionAggregation() { final List checks = Arrays.asList( - new BootstrapCheck() { - @Override - public boolean check() { - return true; - } - - @Override - public String errorMessage() { - return "first"; - } - }, - new BootstrapCheck() { - @Override - public boolean check() { - return true; - } - - @Override - public String errorMessage() { - return "second"; - } - } - ); + context -> BootstrapCheck.BootstrapCheckResult.failure("first"), + context -> BootstrapCheck.BootstrapCheckResult.failure("second")); final NodeValidationException e = - expectThrows(NodeValidationException.class, () -> BootstrapChecks.check(true, checks, "testExceptionAggregation")); + expectThrows(NodeValidationException.class, + () -> BootstrapChecks.check(defaultContext, true, checks, "testExceptionAggregation")); assertThat(e, hasToString(allOf(containsString("bootstrap checks failed"), containsString("first"), containsString("second")))); final Throwable[] suppressed = e.getSuppressed(); assertThat(suppressed.length, equalTo(2)); @@ -180,7 +163,7 @@ long getMaxHeapSize() { final NodeValidationException e = expectThrows( NodeValidationException.class, - () -> BootstrapChecks.check(true, Collections.singletonList(check), "testHeapSizeCheck")); + () -> BootstrapChecks.check(defaultContext, true, Collections.singletonList(check), "testHeapSizeCheck")); assertThat( e.getMessage(), containsString("initial heap size [" + initialHeapSize.get() + "] " + @@ -188,7 +171,7 @@ long getMaxHeapSize() { initialHeapSize.set(maxHeapSize.get()); - BootstrapChecks.check(true, Collections.singletonList(check), "testHeapSizeCheck"); + BootstrapChecks.check(defaultContext, true, Collections.singletonList(check), "testHeapSizeCheck"); // nothing should happen if the initial heap size or the max // heap size is not available @@ -197,7 +180,7 @@ long getMaxHeapSize() { } else { maxHeapSize.set(0); } - 
BootstrapChecks.check(true, Collections.singletonList(check), "testHeapSizeCheck"); + BootstrapChecks.check(defaultContext, true, Collections.singletonList(check), "testHeapSizeCheck"); } public void testFileDescriptorLimits() throws NodeValidationException { @@ -223,17 +206,17 @@ long getMaxFileDescriptorCount() { final NodeValidationException e = expectThrows(NodeValidationException.class, - () -> BootstrapChecks.check(true, Collections.singletonList(check), "testFileDescriptorLimits")); + () -> BootstrapChecks.check(defaultContext, true, Collections.singletonList(check), "testFileDescriptorLimits")); assertThat(e.getMessage(), containsString("max file descriptors")); maxFileDescriptorCount.set(randomIntBetween(limit + 1, Integer.MAX_VALUE)); - BootstrapChecks.check(true, Collections.singletonList(check), "testFileDescriptorLimits"); + BootstrapChecks.check(defaultContext, true, Collections.singletonList(check), "testFileDescriptorLimits"); // nothing should happen if current file descriptor count is // not available maxFileDescriptorCount.set(-1); - BootstrapChecks.check(true, Collections.singletonList(check), "testFileDescriptorLimits"); + BootstrapChecks.check(defaultContext, true, Collections.singletonList(check), "testFileDescriptorLimits"); } public void testFileDescriptorLimitsThrowsOnInvalidLimit() { @@ -266,17 +249,19 @@ class MlockallCheckTestCase { testCases.add(new MlockallCheckTestCase(false, false, false)); for (final MlockallCheckTestCase testCase : testCases) { - final BootstrapChecks.MlockallCheck check = new BootstrapChecks.MlockallCheck(testCase.mlockallSet) { + final BootstrapChecks.MlockallCheck check = new BootstrapChecks.MlockallCheck() { @Override boolean isMemoryLocked() { return testCase.isMemoryLocked; } }; - + BootstrapContext bootstrapContext = new BootstrapContext( + Settings.builder().put("bootstrap.memory_lock", testCase.mlockallSet).build(), null); if (testCase.shouldFail) { final NodeValidationException e = expectThrows( NodeValidationException.class, () -> BootstrapChecks.check( + bootstrapContext, true, Collections.singletonList(check), "testFileDescriptorLimitsThrowsOnInvalidLimit")); @@ -285,7 +270,8 @@ boolean isMemoryLocked() { containsString("memory locking requested for elasticsearch process but memory is not locked")); } else { // nothing should happen - BootstrapChecks.check(true, Collections.singletonList(check), "testFileDescriptorLimitsThrowsOnInvalidLimit"); + BootstrapChecks.check(bootstrapContext, true, Collections.singletonList(check), + "testFileDescriptorLimitsThrowsOnInvalidLimit"); } } } @@ -302,17 +288,17 @@ long getMaxNumberOfThreads() { final NodeValidationException e = expectThrows( NodeValidationException.class, - () -> BootstrapChecks.check(true, Collections.singletonList(check), "testMaxNumberOfThreadsCheck")); + () -> BootstrapChecks.check(defaultContext, true, Collections.singletonList(check), "testMaxNumberOfThreadsCheck")); assertThat(e.getMessage(), containsString("max number of threads")); maxNumberOfThreads.set(randomIntBetween(limit + 1, Integer.MAX_VALUE)); - BootstrapChecks.check(true, Collections.singletonList(check), "testMaxNumberOfThreadsCheck"); + BootstrapChecks.check(defaultContext, true, Collections.singletonList(check), "testMaxNumberOfThreadsCheck"); // nothing should happen if current max number of threads is // not available maxNumberOfThreads.set(-1); - BootstrapChecks.check(true, Collections.singletonList(check), "testMaxNumberOfThreadsCheck"); + BootstrapChecks.check(defaultContext, true, 
Collections.singletonList(check), "testMaxNumberOfThreadsCheck"); } public void testMaxSizeVirtualMemory() throws NodeValidationException { @@ -332,16 +318,16 @@ long getRlimInfinity() { final NodeValidationException e = expectThrows( NodeValidationException.class, - () -> BootstrapChecks.check(true, Collections.singletonList(check), "testMaxSizeVirtualMemory")); + () -> BootstrapChecks.check(defaultContext, true, Collections.singletonList(check), "testMaxSizeVirtualMemory")); assertThat(e.getMessage(), containsString("max size virtual memory")); maxSizeVirtualMemory.set(rlimInfinity); - BootstrapChecks.check(true, Collections.singletonList(check), "testMaxSizeVirtualMemory"); + BootstrapChecks.check(defaultContext, true, Collections.singletonList(check), "testMaxSizeVirtualMemory"); // nothing should happen if max size virtual memory is not available maxSizeVirtualMemory.set(Long.MIN_VALUE); - BootstrapChecks.check(true, Collections.singletonList(check), "testMaxSizeVirtualMemory"); + BootstrapChecks.check(defaultContext, true, Collections.singletonList(check), "testMaxSizeVirtualMemory"); } public void testMaxFileSizeCheck() throws NodeValidationException { @@ -361,16 +347,16 @@ long getRlimInfinity() { final NodeValidationException e = expectThrows( NodeValidationException.class, - () -> BootstrapChecks.check(true, Collections.singletonList(check), "testMaxFileSize")); + () -> BootstrapChecks.check(defaultContext, true, Collections.singletonList(check), "testMaxFileSize")); assertThat(e.getMessage(), containsString("max file size")); maxFileSize.set(rlimInfinity); - BootstrapChecks.check(true, Collections.singletonList(check), "testMaxFileSize"); + BootstrapChecks.check(defaultContext, true, Collections.singletonList(check), "testMaxFileSize"); // nothing should happen if max file size is not available maxFileSize.set(Long.MIN_VALUE); - BootstrapChecks.check(true, Collections.singletonList(check), "testMaxFileSize"); + BootstrapChecks.check(defaultContext, true, Collections.singletonList(check), "testMaxFileSize"); } public void testMaxMapCountCheck() throws NodeValidationException { @@ -385,17 +371,17 @@ long getMaxMapCount() { final NodeValidationException e = expectThrows( NodeValidationException.class, - () -> BootstrapChecks.check(true, Collections.singletonList(check), "testMaxMapCountCheck")); + () -> BootstrapChecks.check(defaultContext, true, Collections.singletonList(check), "testMaxMapCountCheck")); assertThat(e.getMessage(), containsString("max virtual memory areas vm.max_map_count")); maxMapCount.set(randomIntBetween(limit + 1, Integer.MAX_VALUE)); - BootstrapChecks.check(true, Collections.singletonList(check), "testMaxMapCountCheck"); + BootstrapChecks.check(defaultContext, true, Collections.singletonList(check), "testMaxMapCountCheck"); // nothing should happen if current vm.max_map_count is not // available maxMapCount.set(-1); - BootstrapChecks.check(true, Collections.singletonList(check), "testMaxMapCountCheck"); + BootstrapChecks.check(defaultContext, true, Collections.singletonList(check), "testMaxMapCountCheck"); } public void testClientJvmCheck() throws NodeValidationException { @@ -409,14 +395,14 @@ String getVmName() { final NodeValidationException e = expectThrows( NodeValidationException.class, - () -> BootstrapChecks.check(true, Collections.singletonList(check), "testClientJvmCheck")); + () -> BootstrapChecks.check(defaultContext, true, Collections.singletonList(check), "testClientJvmCheck")); assertThat( e.getMessage(), containsString("JVM is using the client 
VM [Java HotSpot(TM) 32-Bit Client VM] " + "but should be using a server VM for the best performance")); vmName.set("Java HotSpot(TM) 32-Bit Server VM"); - BootstrapChecks.check(true, Collections.singletonList(check), "testClientJvmCheck"); + BootstrapChecks.check(defaultContext, true, Collections.singletonList(check), "testClientJvmCheck"); } public void testUseSerialGCCheck() throws NodeValidationException { @@ -430,19 +416,22 @@ String getUseSerialGC() { final NodeValidationException e = expectThrows( NodeValidationException.class, - () -> BootstrapChecks.check(true, Collections.singletonList(check), "testUseSerialGCCheck")); + () -> BootstrapChecks.check(defaultContext, true, Collections.singletonList(check), "testUseSerialGCCheck")); assertThat( e.getMessage(), containsString("JVM is using the serial collector but should not be for the best performance; " + "" + "either it's the default for the VM [" + JvmInfo.jvmInfo().getVmName() +"] or -XX:+UseSerialGC was explicitly specified")); useSerialGC.set("false"); - BootstrapChecks.check(true, Collections.singletonList(check), "testUseSerialGCCheck"); + BootstrapChecks.check(defaultContext, true, Collections.singletonList(check), "testUseSerialGCCheck"); } public void testSystemCallFilterCheck() throws NodeValidationException { final AtomicBoolean isSystemCallFilterInstalled = new AtomicBoolean(); - final BootstrapChecks.SystemCallFilterCheck systemCallFilterEnabledCheck = new BootstrapChecks.SystemCallFilterCheck(true) { + BootstrapContext context = randomBoolean() ? new BootstrapContext(Settings.builder().put("bootstrap.system_call_filter", true) + .build(), null) : defaultContext; + + final BootstrapChecks.SystemCallFilterCheck systemCallFilterEnabledCheck = new BootstrapChecks.SystemCallFilterCheck() { @Override boolean isSystemCallFilterInstalled() { return isSystemCallFilterInstalled.get(); @@ -451,25 +440,26 @@ boolean isSystemCallFilterInstalled() { final NodeValidationException e = expectThrows( NodeValidationException.class, - () -> BootstrapChecks.check(true, Collections.singletonList(systemCallFilterEnabledCheck), "testSystemCallFilterCheck")); + () -> BootstrapChecks.check(context, true, Collections.singletonList(systemCallFilterEnabledCheck), + "testSystemCallFilterCheck")); assertThat( e.getMessage(), containsString("system call filters failed to install; " + "check the logs and fix your configuration or disable system call filters at your own risk")); isSystemCallFilterInstalled.set(true); - BootstrapChecks.check(true, Collections.singletonList(systemCallFilterEnabledCheck), "testSystemCallFilterCheck"); - - final BootstrapChecks.SystemCallFilterCheck systemCallFilterNotEnabledCheck = new BootstrapChecks.SystemCallFilterCheck(false) { + BootstrapChecks.check(context, true, Collections.singletonList(systemCallFilterEnabledCheck), "testSystemCallFilterCheck"); + BootstrapContext context_1 = new BootstrapContext(Settings.builder().put("bootstrap.system_call_filter", false).build(), null); + final BootstrapChecks.SystemCallFilterCheck systemCallFilterNotEnabledCheck = new BootstrapChecks.SystemCallFilterCheck() { @Override boolean isSystemCallFilterInstalled() { return isSystemCallFilterInstalled.get(); } }; isSystemCallFilterInstalled.set(false); - BootstrapChecks.check(true, Collections.singletonList(systemCallFilterNotEnabledCheck), "testSystemCallFilterCheck"); + BootstrapChecks.check(context_1, true, Collections.singletonList(systemCallFilterNotEnabledCheck), "testSystemCallFilterCheck"); 
isSystemCallFilterInstalled.set(true); - BootstrapChecks.check(true, Collections.singletonList(systemCallFilterNotEnabledCheck), "testSystemCallFilterCheck"); + BootstrapChecks.check(context_1, true, Collections.singletonList(systemCallFilterNotEnabledCheck), "testSystemCallFilterCheck"); } public void testMightForkCheck() throws NodeValidationException { @@ -487,7 +477,7 @@ boolean mightFork() { } @Override - public String errorMessage() { + String message(BootstrapContext context) { return "error"; } }; @@ -573,13 +563,13 @@ private void runMightForkTest( } else { enableMightFork.run(); } - BootstrapChecks.check(true, Collections.singletonList(check), methodName); + BootstrapChecks.check(defaultContext, true, Collections.singletonList(check), methodName); // if system call filter is enabled, but we will not fork, nothing should // happen isSystemCallFilterInstalled.set(true); disableMightFork.run(); - BootstrapChecks.check(true, Collections.singletonList(check), methodName); + BootstrapChecks.check(defaultContext, true, Collections.singletonList(check), methodName); // if system call filter is enabled, and we might fork, the check should be enforced, regardless of bootstrap checks being enabled // or not @@ -588,7 +578,7 @@ private void runMightForkTest( final NodeValidationException e = expectThrows( NodeValidationException.class, - () -> BootstrapChecks.check(randomBoolean(), Collections.singletonList(check), methodName)); + () -> BootstrapChecks.check(defaultContext, randomBoolean(), Collections.singletonList(check), methodName)); consumer.accept(e); } @@ -613,7 +603,7 @@ String javaVersion() { final NodeValidationException e = expectThrows( NodeValidationException.class, () -> { - BootstrapChecks.check(true, checks, "testEarlyAccessCheck"); + BootstrapChecks.check(defaultContext, true, checks, "testEarlyAccessCheck"); }); assertThat( e.getMessage(), @@ -624,7 +614,7 @@ String javaVersion() { // if not on an early-access build, nothing should happen javaVersion.set(randomFrom("1.8.0_152", "9")); - BootstrapChecks.check(true, checks, "testEarlyAccessCheck"); + BootstrapChecks.check(defaultContext, true, checks, "testEarlyAccessCheck"); } @@ -660,7 +650,7 @@ boolean isJava8() { final NodeValidationException e = expectThrows( NodeValidationException.class, - () -> BootstrapChecks.check(true, Collections.singletonList(g1GCCheck), "testG1GCCheck")); + () -> BootstrapChecks.check(defaultContext, true, Collections.singletonList(g1GCCheck), "testG1GCCheck")); assertThat( e.getMessage(), containsString( @@ -668,12 +658,12 @@ boolean isJava8() { // if G1GC is disabled, nothing should happen isG1GCEnabled.set(false); - BootstrapChecks.check(true, Collections.singletonList(g1GCCheck), "testG1GCCheck"); + BootstrapChecks.check(defaultContext, true, Collections.singletonList(g1GCCheck), "testG1GCCheck"); // if on or after update 40, nothing should happen independent of whether or not G1GC is enabled isG1GCEnabled.set(randomBoolean()); jvmVersion.set(String.format(Locale.ROOT, "25.%d-b%d", randomIntBetween(40, 112), randomIntBetween(1, 128))); - BootstrapChecks.check(true, Collections.singletonList(g1GCCheck), "testG1GCCheck"); + BootstrapChecks.check(defaultContext, true, Collections.singletonList(g1GCCheck), "testG1GCCheck"); final BootstrapChecks.G1GCCheck nonOracleCheck = new BootstrapChecks.G1GCCheck() { @@ -685,7 +675,7 @@ String jvmVendor() { }; // if not on an Oracle JVM, nothing should happen - BootstrapChecks.check(true, Collections.singletonList(nonOracleCheck), "testG1GCCheck"); + 
BootstrapChecks.check(defaultContext, true, Collections.singletonList(nonOracleCheck), "testG1GCCheck"); final BootstrapChecks.G1GCCheck nonJava8Check = new BootstrapChecks.G1GCCheck() { @@ -697,19 +687,14 @@ boolean isJava8() { }; // if not Java 8, nothing should happen - BootstrapChecks.check(true, Collections.singletonList(nonJava8Check), "testG1GCCheck"); + BootstrapChecks.check(defaultContext, true, Collections.singletonList(nonJava8Check), "testG1GCCheck"); } public void testAlwaysEnforcedChecks() { final BootstrapCheck check = new BootstrapCheck() { @Override - public boolean check() { - return true; - } - - @Override - public String errorMessage() { - return "error"; + public BootstrapCheckResult check(BootstrapContext context) { + return BootstrapCheckResult.failure("error"); } @Override @@ -720,7 +705,7 @@ public boolean alwaysEnforce() { final NodeValidationException alwaysEnforced = expectThrows( NodeValidationException.class, - () -> BootstrapChecks.check(randomBoolean(), Collections.singletonList(check), "testAlwaysEnforcedChecks")); + () -> BootstrapChecks.check(defaultContext, randomBoolean(), Collections.singletonList(check), "testAlwaysEnforcedChecks")); assertThat(alwaysEnforced, hasToString(containsString("error"))); } diff --git a/core/src/test/java/org/elasticsearch/bootstrap/BootstrapTests.java b/core/src/test/java/org/elasticsearch/bootstrap/BootstrapTests.java new file mode 100644 index 0000000000000..6c390a5a8a7d7 --- /dev/null +++ b/core/src/test/java/org/elasticsearch/bootstrap/BootstrapTests.java @@ -0,0 +1,69 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ +package org.elasticsearch.bootstrap; + +import org.apache.lucene.util.IOUtils; +import org.elasticsearch.common.settings.KeyStoreCommandTestCase; +import org.elasticsearch.common.settings.KeyStoreWrapper; +import org.elasticsearch.common.settings.SecureSettings; +import org.elasticsearch.common.settings.SecureString; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.env.Environment; +import org.elasticsearch.test.ESTestCase; +import org.junit.After; +import org.junit.Before; + +import java.io.IOException; +import java.nio.file.FileSystem; +import java.nio.file.Files; +import java.nio.file.Path; +import java.util.ArrayList; +import java.util.List; + +public class BootstrapTests extends ESTestCase { + Environment env; + List fileSystems = new ArrayList<>(); + + @After + public void closeMockFileSystems() throws IOException { + IOUtils.close(fileSystems); + } + + @Before + public void setupEnv() throws IOException { + env = KeyStoreCommandTestCase.setupEnv(true, fileSystems); + } + + public void testLoadSecureSettings() throws Exception { + final Path configPath = env.configFile(); + final SecureString seed; + try (KeyStoreWrapper keyStoreWrapper = KeyStoreWrapper.create(new char[0])) { + seed = KeyStoreWrapper.SEED_SETTING.get(Settings.builder().setSecureSettings(keyStoreWrapper).build()); + assertNotNull(seed); + assertTrue(seed.length() > 0); + keyStoreWrapper.save(configPath); + } + assertTrue(Files.exists(configPath.resolve("elasticsearch.keystore"))); + try (SecureSettings secureSettings = Bootstrap.loadSecureSettings(env)) { + SecureString seedAfterLoad = KeyStoreWrapper.SEED_SETTING.get(Settings.builder().setSecureSettings(secureSettings).build()); + assertEquals(seedAfterLoad.toString(), seed.toString()); + assertTrue(Files.exists(configPath.resolve("elasticsearch.keystore"))); + } + } +} diff --git a/core/src/test/java/org/elasticsearch/bootstrap/ElasticsearchCliTests.java b/core/src/test/java/org/elasticsearch/bootstrap/ElasticsearchCliTests.java index 137d7593edca9..bcc70773146c6 100644 --- a/core/src/test/java/org/elasticsearch/bootstrap/ElasticsearchCliTests.java +++ b/core/src/test/java/org/elasticsearch/bootstrap/ElasticsearchCliTests.java @@ -22,6 +22,7 @@ import org.elasticsearch.Build; import org.elasticsearch.Version; import org.elasticsearch.cli.ExitCodes; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.monitor.jvm.JvmInfo; import java.nio.file.Path; @@ -150,9 +151,9 @@ public void testElasticsearchSettings() throws Exception { true, output -> {}, (foreground, pidFile, quiet, env) -> { - Map settings = env.settings().getAsMap(); - assertThat(settings, hasEntry("foo", "bar")); - assertThat(settings, hasEntry("baz", "qux")); + Settings settings = env.settings(); + assertEquals("bar", settings.get("foo")); + assertEquals("qux", settings.get("baz")); }, "-Efoo=bar", "-E", "baz=qux"); } diff --git a/core/src/test/java/org/elasticsearch/bootstrap/ElasticsearchUncaughtExceptionHandlerTests.java b/core/src/test/java/org/elasticsearch/bootstrap/ElasticsearchUncaughtExceptionHandlerTests.java index 6e40153b467d9..e2bf07b7d0bb4 100644 --- a/core/src/test/java/org/elasticsearch/bootstrap/ElasticsearchUncaughtExceptionHandlerTests.java +++ b/core/src/test/java/org/elasticsearch/bootstrap/ElasticsearchUncaughtExceptionHandlerTests.java @@ -19,7 +19,6 @@ package org.elasticsearch.bootstrap; -import org.apache.lucene.index.MergePolicy; import org.elasticsearch.test.ESTestCase; import org.junit.Before; @@ -131,7 +130,6 @@ void 
onNonFatalUncaught(String threadName, Throwable t) { } public void testIsFatalCause() { - assertFatal(new MergePolicy.MergeException(new OutOfMemoryError(), null)); assertFatal(new OutOfMemoryError()); assertFatal(new StackOverflowError()); assertFatal(new InternalError()); diff --git a/core/src/test/java/org/elasticsearch/bwcompat/RecoveryWithUnsupportedIndicesIT.java b/core/src/test/java/org/elasticsearch/bwcompat/RecoveryWithUnsupportedIndicesIT.java index 429266c45892c..5247a224423ec 100644 --- a/core/src/test/java/org/elasticsearch/bwcompat/RecoveryWithUnsupportedIndicesIT.java +++ b/core/src/test/java/org/elasticsearch/bwcompat/RecoveryWithUnsupportedIndicesIT.java @@ -18,11 +18,69 @@ */ package org.elasticsearch.bwcompat; +import java.io.IOException; +import java.io.InputStream; +import java.nio.file.DirectoryStream; +import java.nio.file.Files; +import java.nio.file.Path; +import java.util.ArrayList; +import java.util.List; + +import org.apache.lucene.util.LuceneTestCase; +import org.apache.lucene.util.TestUtil; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.env.Environment; +import org.elasticsearch.env.NodeEnvironment; +import org.elasticsearch.test.ESIntegTestCase; import static org.hamcrest.Matchers.containsString; -public class RecoveryWithUnsupportedIndicesIT extends StaticIndexBackwardCompatibilityIT { +@LuceneTestCase.SuppressCodecs("*") +@ESIntegTestCase.ClusterScope(scope = ESIntegTestCase.Scope.TEST, numDataNodes = 0, minNumDataNodes = 0, maxNumDataNodes = 0) +public class RecoveryWithUnsupportedIndicesIT extends ESIntegTestCase { + + /** + * Return settings that could be used to start a node that has the given zipped home directory. + */ + protected Settings prepareBackwardsDataDir(Path backwardsIndex) throws IOException { + Path indexDir = createTempDir(); + Path dataDir = indexDir.resolve("data"); + try (InputStream stream = Files.newInputStream(backwardsIndex)) { + TestUtil.unzip(stream, indexDir); + } + assertTrue(Files.exists(dataDir)); + + // list clusters in the datapath, ignoring anything from extrasfs + final Path[] list; + try (DirectoryStream stream = Files.newDirectoryStream(dataDir)) { + List dirs = new ArrayList<>(); + for (Path p : stream) { + if (!p.getFileName().toString().startsWith("extra")) { + dirs.add(p); + } + } + list = dirs.toArray(new Path[0]); + } + + if (list.length != 1) { + StringBuilder builder = new StringBuilder("Backwards index must contain exactly one cluster\n"); + for (Path line : list) { + builder.append(line.toString()).append('\n'); + } + throw new IllegalStateException(builder.toString()); + } + Path src = list[0].resolve(NodeEnvironment.NODES_FOLDER); + Path dest = dataDir.resolve(NodeEnvironment.NODES_FOLDER); + assertTrue(Files.exists(src)); + Files.move(src, dest); + assertFalse(Files.exists(src)); + assertTrue(Files.exists(dest)); + Settings.Builder builder = Settings.builder() + .put(Environment.PATH_DATA_SETTING.getKey(), dataDir.toAbsolutePath()); + + return builder.build(); + } + public void testUpgradeStartClusterOn_0_20_6() throws Exception { String indexName = "unsupported-0.20.6"; @@ -32,7 +90,7 @@ public void testUpgradeStartClusterOn_0_20_6() throws Exception { internalCluster().startNode(nodeSettings); fail(); } catch (Exception ex) { - assertThat(ex.getMessage(), containsString(" was created before v2.0.0.beta1 and wasn't upgraded")); + assertThat(ex.getCause().getCause().getMessage(), containsString(" was created before v2.0.0.beta1 and wasn't upgraded")); } } } diff --git 
a/core/src/test/java/org/elasticsearch/bwcompat/StaticIndexBackwardCompatibilityIT.java b/core/src/test/java/org/elasticsearch/bwcompat/StaticIndexBackwardCompatibilityIT.java deleted file mode 100644 index 3884d3475e12a..0000000000000 --- a/core/src/test/java/org/elasticsearch/bwcompat/StaticIndexBackwardCompatibilityIT.java +++ /dev/null @@ -1,55 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.bwcompat; - -import org.apache.lucene.util.LuceneTestCase; -import org.elasticsearch.action.admin.indices.get.GetIndexResponse; -import org.elasticsearch.action.search.SearchResponse; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.test.ESIntegTestCase; - -import static org.hamcrest.Matchers.greaterThanOrEqualTo; - -/** - * These tests are against static indexes, built from versions of ES that cannot be upgraded without - * a full cluster restart (ie no wire format compatibility). - */ -@LuceneTestCase.SuppressCodecs("*") -@ESIntegTestCase.ClusterScope(scope = ESIntegTestCase.Scope.TEST, numDataNodes = 0, minNumDataNodes = 0, maxNumDataNodes = 0) -public class StaticIndexBackwardCompatibilityIT extends ESIntegTestCase { - - public void loadIndex(String index, Object... 
settings) throws Exception { - logger.info("Checking static index {}", index); - Settings nodeSettings = prepareBackwardsDataDir(getDataPath(index + ".zip"), settings); - internalCluster().startNode(nodeSettings); - ensureGreen(index); - assertIndexSanity(index); - } - - private void assertIndexSanity(String index) { - GetIndexResponse getIndexResponse = client().admin().indices().prepareGetIndex().get(); - assertEquals(1, getIndexResponse.indices().length); - assertEquals(index, getIndexResponse.indices()[0]); - ensureYellow(index); - SearchResponse test = client().prepareSearch(index).get(); - assertThat(test.getHits().getTotalHits(), greaterThanOrEqualTo(1L)); - } - -} diff --git a/core/src/test/java/org/elasticsearch/client/AbstractClientHeadersTestCase.java b/core/src/test/java/org/elasticsearch/client/AbstractClientHeadersTestCase.java index 8d9c3ee2d6f44..8c1b22f7fb171 100644 --- a/core/src/test/java/org/elasticsearch/client/AbstractClientHeadersTestCase.java +++ b/core/src/test/java/org/elasticsearch/client/AbstractClientHeadersTestCase.java @@ -43,6 +43,8 @@ import java.util.HashMap; import java.util.Map; +import java.util.function.Function; +import java.util.stream.Collectors; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.notNullValue; @@ -144,7 +146,10 @@ protected static void assertHeaders(Map headers, Map headers = new HashMap<>(); + Settings asSettings = HEADER_SETTINGS.getAsSettings(ThreadContext.PREFIX); + assertHeaders(pool.getThreadContext().getHeaders(), + asSettings.keySet().stream().collect(Collectors.toMap(Function.identity(), k -> asSettings.get(k)))); } public static class InternalException extends Exception { @@ -161,9 +166,11 @@ protected static class AssertingActionListener implements ActionListener { private final String action; private final Map expectedHeaders; private final ThreadPool pool; + private static final Settings THREAD_HEADER_SETTINGS = HEADER_SETTINGS.getAsSettings(ThreadContext.PREFIX); public AssertingActionListener(String action, ThreadPool pool) { - this(action, (Map)HEADER_SETTINGS.getAsSettings(ThreadContext.PREFIX).getAsStructuredMap(), pool); + this(action, THREAD_HEADER_SETTINGS.keySet().stream() + .collect(Collectors.toMap(Function.identity(), k -> THREAD_HEADER_SETTINGS.get(k))), pool); } public AssertingActionListener(String action, Map expectedHeaders, ThreadPool pool) { diff --git a/core/src/test/java/org/elasticsearch/client/transport/FailAndRetryMockTransport.java b/core/src/test/java/org/elasticsearch/client/transport/FailAndRetryMockTransport.java index dbe858982090c..9be0d55d77e6a 100644 --- a/core/src/test/java/org/elasticsearch/client/transport/FailAndRetryMockTransport.java +++ b/core/src/test/java/org/elasticsearch/client/transport/FailAndRetryMockTransport.java @@ -40,7 +40,7 @@ import org.elasticsearch.transport.TransportRequestOptions; import org.elasticsearch.transport.TransportResponse; import org.elasticsearch.transport.TransportResponseHandler; -import org.elasticsearch.transport.TransportServiceAdapter; +import org.elasticsearch.transport.TransportService; import org.elasticsearch.transport.TransportStats; import java.io.IOException; @@ -60,7 +60,7 @@ abstract class FailAndRetryMockTransport imp private boolean connectMode = true; - private TransportServiceAdapter transportServiceAdapter; + private TransportService transportService; private final AtomicInteger connectTransportExceptions = new AtomicInteger(); private final AtomicInteger failures = new AtomicInteger(); @@ -90,12 
+90,12 @@ public void sendRequest(long requestId, String action, TransportRequest request, //we make sure that nodes get added to the connected ones when calling addTransportAddress, by returning proper nodes info if (connectMode) { if (TransportLivenessAction.NAME.equals(action)) { - TransportResponseHandler transportResponseHandler = transportServiceAdapter.onResponseReceived(requestId); + TransportResponseHandler transportResponseHandler = transportService.onResponseReceived(requestId); transportResponseHandler.handleResponse(new LivenessResponse(ClusterName.CLUSTER_NAME_SETTING. getDefault(Settings.EMPTY), node)); } else if (ClusterStateAction.NAME.equals(action)) { - TransportResponseHandler transportResponseHandler = transportServiceAdapter.onResponseReceived(requestId); + TransportResponseHandler transportResponseHandler = transportService.onResponseReceived(requestId); ClusterState clusterState = getMockClusterState(node); transportResponseHandler.handleResponse(new ClusterStateResponse(clusterName, clusterState, 0L)); } else { @@ -116,7 +116,7 @@ public void sendRequest(long requestId, String action, TransportRequest request, //throw whatever exception that is not a subclass of ConnectTransportException throw new IllegalStateException(); } else { - TransportResponseHandler transportResponseHandler = transportServiceAdapter.onResponseReceived(requestId); + TransportResponseHandler transportResponseHandler = transportService.onResponseReceived(requestId); if (random.nextBoolean()) { successes.incrementAndGet(); transportResponseHandler.handleResponse(newResponse()); @@ -163,8 +163,8 @@ public Set triedNodes() { } @Override - public void transportServiceAdapter(TransportServiceAdapter transportServiceAdapter) { - this.transportServiceAdapter = transportServiceAdapter; + public void setTransportService(TransportService transportServiceAdapter) { + this.transportService = transportServiceAdapter; } @Override diff --git a/core/src/test/java/org/elasticsearch/client/transport/TransportClientNodesServiceTests.java b/core/src/test/java/org/elasticsearch/client/transport/TransportClientNodesServiceTests.java index ad24da029e7e0..b120c7a3e7dd3 100644 --- a/core/src/test/java/org/elasticsearch/client/transport/TransportClientNodesServiceTests.java +++ b/core/src/test/java/org/elasticsearch/client/transport/TransportClientNodesServiceTests.java @@ -31,6 +31,7 @@ import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.common.UUIDs; +import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.transport.TransportAddress; import org.elasticsearch.common.unit.TimeValue; @@ -61,7 +62,6 @@ import java.util.Map; import java.util.concurrent.CopyOnWriteArrayList; import java.util.concurrent.CountDownLatch; -import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicReference; @@ -91,8 +91,11 @@ private static class TestIteration implements Closeable { // map for each address of the nodes a cluster state request should respond with final Map nodeMap; + TestIteration() { + this(Settings.EMPTY); + } - TestIteration(Object... 
extraSettings) { + TestIteration(Settings extraSettings) { Settings settings = Settings.builder().put(extraSettings).put("cluster.name", "test").build(); ClusterName clusterName = ClusterName.CLUSTER_NAME_SETTING.get(settings); List listNodes = new ArrayList<>(); @@ -319,7 +322,7 @@ public void testRemoveAddressSimple() { } private void checkRemoveAddress(boolean sniff) { - Object[] extraSettings = {TransportClient.CLIENT_TRANSPORT_SNIFF.getKey(), sniff}; + Settings extraSettings = Settings.builder().put(TransportClient.CLIENT_TRANSPORT_SNIFF.getKey(), sniff).build(); try(TestIteration iteration = new TestIteration(extraSettings)) { final TransportClientNodesService service = iteration.transportClientNodesService; assertEquals(iteration.listNodesCount + iteration.sniffNodesCount, service.connectedNodes().size()); @@ -341,7 +344,7 @@ public void testSniffNodesSamplerClosesConnections() throws Exception { Settings remoteSettings = Settings.builder().put(Node.NODE_NAME_SETTING.getKey(), "remote").build(); try (MockTransportService remoteService = createNewService(remoteSettings, Version.CURRENT, threadPool, null)) { final MockHandler handler = new MockHandler(remoteService); - remoteService.registerRequestHandler(ClusterStateAction.NAME, ClusterStateRequest::new, ThreadPool.Names.SAME, handler); + remoteService.registerRequestHandler(ClusterStateAction.NAME, ThreadPool.Names.SAME, ClusterStateRequest::new, handler); remoteService.start(); remoteService.acceptIncomingRequests(); diff --git a/core/src/test/java/org/elasticsearch/cluster/ClusterInfoServiceIT.java b/core/src/test/java/org/elasticsearch/cluster/ClusterInfoServiceIT.java index b5e7a1201f969..d7d232a08d43c 100644 --- a/core/src/test/java/org/elasticsearch/cluster/ClusterInfoServiceIT.java +++ b/core/src/test/java/org/elasticsearch/cluster/ClusterInfoServiceIT.java @@ -23,7 +23,6 @@ import org.elasticsearch.ElasticsearchException; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.ActionRequest; -import org.elasticsearch.action.ActionResponse; import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsAction; import org.elasticsearch.action.admin.indices.stats.IndicesStatsAction; import org.elasticsearch.action.support.ActionFilter; @@ -35,7 +34,6 @@ import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.Strings; import org.elasticsearch.common.collect.ImmutableOpenMap; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.env.NodeEnvironment; @@ -48,9 +46,6 @@ import org.elasticsearch.test.ESIntegTestCase; import org.elasticsearch.test.InternalTestCluster; import org.elasticsearch.test.transport.MockTransportService; -import org.elasticsearch.transport.ConnectionProfile; -import org.elasticsearch.transport.Transport; -import org.elasticsearch.transport.TransportException; import org.elasticsearch.transport.TransportRequest; import org.elasticsearch.transport.TransportRequestOptions; import org.elasticsearch.transport.TransportService; @@ -80,16 +75,22 @@ public class ClusterInfoServiceIT extends ESIntegTestCase { public static class TestPlugin extends Plugin implements ActionPlugin { + + private final BlockingActionFilter blockingActionFilter; + + public TestPlugin(Settings settings) { + blockingActionFilter = new BlockingActionFilter(settings); + } + @Override - public List> getActionFilters() { - return singletonList(BlockingActionFilter.class); 
+ public List getActionFilters() { + return singletonList(blockingActionFilter); } } public static class BlockingActionFilter extends org.elasticsearch.action.support.ActionFilter.Simple { private Set blockedActions = emptySet(); - @Inject public BlockingActionFilter(Settings settings) { super(settings); } @@ -178,7 +179,7 @@ public void testClusterInfoServiceInformationClearOnError() throws InterruptedEx internalCluster().startNodes(2, // manually control publishing Settings.builder().put(InternalClusterInfoService.INTERNAL_CLUSTER_INFO_UPDATE_INTERVAL_SETTING.getKey(), "60m").build()); - prepareCreate("test").setSettings(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1).get(); + prepareCreate("test").setSettings(Settings.builder().put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1)).get(); ensureGreen("test"); InternalTestCluster internalTestCluster = internalCluster(); InternalClusterInfoService infoService = (InternalClusterInfoService) internalTestCluster.getInstance(ClusterInfoService.class, internalTestCluster.getMasterName()); diff --git a/core/src/test/java/org/elasticsearch/cluster/ClusterModuleTests.java b/core/src/test/java/org/elasticsearch/cluster/ClusterModuleTests.java index 81acd138d26fb..6fd3d66c8f81b 100644 --- a/core/src/test/java/org/elasticsearch/cluster/ClusterModuleTests.java +++ b/core/src/test/java/org/elasticsearch/cluster/ClusterModuleTests.java @@ -34,6 +34,7 @@ import org.elasticsearch.cluster.routing.allocation.decider.NodeVersionAllocationDecider; import org.elasticsearch.cluster.routing.allocation.decider.RebalanceOnlyWhenActiveAllocationDecider; import org.elasticsearch.cluster.routing.allocation.decider.ReplicaAfterPrimaryActiveAllocationDecider; +import org.elasticsearch.cluster.routing.allocation.decider.ResizeAllocationDecider; import org.elasticsearch.cluster.routing.allocation.decider.SameShardAllocationDecider; import org.elasticsearch.cluster.routing.allocation.decider.ShardsLimitAllocationDecider; import org.elasticsearch.cluster.routing.allocation.decider.SnapshotInProgressAllocationDecider; @@ -174,6 +175,7 @@ public void testShardsAllocatorFactoryNull() { public void testAllocationDeciderOrder() { List> expectedDeciders = Arrays.asList( MaxRetryAllocationDecider.class, + ResizeAllocationDecider.class, ReplicaAfterPrimaryActiveAllocationDecider.class, RebalanceOnlyWhenActiveAllocationDecider.class, ClusterRebalanceAllocationDecider.class, diff --git a/core/src/test/java/org/elasticsearch/cluster/DiskUsageTests.java b/core/src/test/java/org/elasticsearch/cluster/DiskUsageTests.java index 942d7a222ecfb..09ff06919e986 100644 --- a/core/src/test/java/org/elasticsearch/cluster/DiskUsageTests.java +++ b/core/src/test/java/org/elasticsearch/cluster/DiskUsageTests.java @@ -152,11 +152,11 @@ public void testFillDiskUsage() { }; List nodeStats = Arrays.asList( new NodeStats(new DiscoveryNode("node_1", buildNewFakeTransportAddress(), emptyMap(), emptySet(), Version.CURRENT), 0, - null,null,null,null,null,new FsInfo(0, null, node1FSInfo), null,null,null,null,null, null), + null,null,null,null,null,new FsInfo(0, null, node1FSInfo), null,null,null,null,null, null, null), new NodeStats(new DiscoveryNode("node_2", buildNewFakeTransportAddress(), emptyMap(), emptySet(), Version.CURRENT), 0, - null,null,null,null,null, new FsInfo(0, null, node2FSInfo), null,null,null,null,null, null), + null,null,null,null,null, new FsInfo(0, null, node2FSInfo), null,null,null,null,null, null, null), new NodeStats(new DiscoveryNode("node_3", buildNewFakeTransportAddress(), emptyMap(), 
emptySet(), Version.CURRENT), 0, - null,null,null,null,null, new FsInfo(0, null, node3FSInfo), null,null,null,null,null, null) + null,null,null,null,null, new FsInfo(0, null, node3FSInfo), null,null,null,null,null, null, null) ); InternalClusterInfoService.fillDiskUsagePerNode(logger, nodeStats, newLeastAvaiableUsages, newMostAvaiableUsages); DiskUsage leastNode_1 = newLeastAvaiableUsages.get("node_1"); @@ -193,11 +193,11 @@ public void testFillDiskUsageSomeInvalidValues() { }; List nodeStats = Arrays.asList( new NodeStats(new DiscoveryNode("node_1", buildNewFakeTransportAddress(), emptyMap(), emptySet(), Version.CURRENT), 0, - null,null,null,null,null,new FsInfo(0, null, node1FSInfo), null,null,null,null,null, null), + null,null,null,null,null,new FsInfo(0, null, node1FSInfo), null,null,null,null,null, null, null), new NodeStats(new DiscoveryNode("node_2", buildNewFakeTransportAddress(), emptyMap(), emptySet(), Version.CURRENT), 0, - null,null,null,null,null, new FsInfo(0, null, node2FSInfo), null,null,null,null,null, null), + null,null,null,null,null, new FsInfo(0, null, node2FSInfo), null,null,null,null,null, null, null), new NodeStats(new DiscoveryNode("node_3", buildNewFakeTransportAddress(), emptyMap(), emptySet(), Version.CURRENT), 0, - null,null,null,null,null, new FsInfo(0, null, node3FSInfo), null,null,null,null,null, null) + null,null,null,null,null, new FsInfo(0, null, node3FSInfo), null,null,null,null,null, null, null) ); InternalClusterInfoService.fillDiskUsagePerNode(logger, nodeStats, newLeastAvailableUsages, newMostAvailableUsages); DiskUsage leastNode_1 = newLeastAvailableUsages.get("node_1"); diff --git a/core/src/test/java/org/elasticsearch/cluster/NoMasterNodeIT.java b/core/src/test/java/org/elasticsearch/cluster/NoMasterNodeIT.java index 35d46879639ae..cf41ede128753 100644 --- a/core/src/test/java/org/elasticsearch/cluster/NoMasterNodeIT.java +++ b/core/src/test/java/org/elasticsearch/cluster/NoMasterNodeIT.java @@ -184,8 +184,9 @@ public void testNoMasterActionsWriteMasterBlock() throws Exception { internalCluster().startNode(settings); // start a second node, create an index, and then shut it down so we have no master block internalCluster().startNode(settings); - prepareCreate("test1").setSettings(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1).get(); - prepareCreate("test2").setSettings(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 2, IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0).get(); + prepareCreate("test1").setSettings(Settings.builder().put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)).get(); + prepareCreate("test2").setSettings( + Settings.builder().put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 2).put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)).get(); client().admin().cluster().prepareHealth("_all").setWaitForGreenStatus().get(); client().prepareIndex("test1", "type1", "1").setSource("field", "value1").get(); client().prepareIndex("test2", "type1", "1").setSource("field", "value1").get(); diff --git a/core/src/test/java/org/elasticsearch/cluster/NodeConnectionsServiceTests.java b/core/src/test/java/org/elasticsearch/cluster/NodeConnectionsServiceTests.java index 2e7a857cc7bc9..51908a45380f0 100644 --- a/core/src/test/java/org/elasticsearch/cluster/NodeConnectionsServiceTests.java +++ b/core/src/test/java/org/elasticsearch/cluster/NodeConnectionsServiceTests.java @@ -40,7 +40,6 @@ import org.elasticsearch.transport.TransportRequest; import org.elasticsearch.transport.TransportRequestOptions; import org.elasticsearch.transport.TransportService; -import 
org.elasticsearch.transport.TransportServiceAdapter; import org.elasticsearch.transport.TransportStats; import org.junit.After; import org.junit.Before; @@ -176,7 +175,7 @@ final class MockTransport implements Transport { volatile boolean randomConnectionExceptions = false; @Override - public void transportServiceAdapter(TransportServiceAdapter service) { + public void setTransportService(TransportService service) { } @Override diff --git a/core/src/test/java/org/elasticsearch/cluster/SimpleClusterStateIT.java b/core/src/test/java/org/elasticsearch/cluster/SimpleClusterStateIT.java index 3a4e414275b0a..8b246ecc2d3de 100644 --- a/core/src/test/java/org/elasticsearch/cluster/SimpleClusterStateIT.java +++ b/core/src/test/java/org/elasticsearch/cluster/SimpleClusterStateIT.java @@ -35,6 +35,7 @@ import org.elasticsearch.common.io.stream.NamedWriteableRegistry; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -209,9 +210,10 @@ public void testLargeClusterStatePublishing() throws Exception { int numberOfShards = scaledRandomIntBetween(1, cluster().numDataNodes()); // if the create index is ack'ed, then all nodes have successfully processed the cluster state assertAcked(client().admin().indices().prepareCreate("test") - .setSettings(IndexMetaData.SETTING_NUMBER_OF_SHARDS, numberOfShards, - IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0, - MapperService.INDEX_MAPPING_TOTAL_FIELDS_LIMIT_SETTING.getKey(), Long.MAX_VALUE) + .setSettings(Settings.builder() + .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, numberOfShards) + .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0) + .put(MapperService.INDEX_MAPPING_TOTAL_FIELDS_LIMIT_SETTING.getKey(), Long.MAX_VALUE)) .addMapping("type", mapping) .setTimeout("60s").get()); ensureGreen(); // wait for green state, so its both green, and there are no more pending events diff --git a/core/src/test/java/org/elasticsearch/cluster/allocation/ClusterRerouteIT.java b/core/src/test/java/org/elasticsearch/cluster/allocation/ClusterRerouteIT.java index b6b6b3024b413..0522f3f15f817 100644 --- a/core/src/test/java/org/elasticsearch/cluster/allocation/ClusterRerouteIT.java +++ b/core/src/test/java/org/elasticsearch/cluster/allocation/ClusterRerouteIT.java @@ -19,20 +19,24 @@ package org.elasticsearch.cluster.allocation; +import org.apache.logging.log4j.Level; import org.apache.logging.log4j.Logger; import org.apache.lucene.util.IOUtils; import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse; import org.elasticsearch.action.admin.cluster.reroute.ClusterRerouteResponse; +import org.elasticsearch.action.admin.cluster.reroute.TransportClusterRerouteAction; import org.elasticsearch.action.support.ActiveShardCount; import org.elasticsearch.action.support.WriteRequest.RefreshPolicy; import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.health.ClusterHealthStatus; +import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.cluster.routing.ShardRoutingState; import org.elasticsearch.cluster.routing.allocation.RerouteExplanation; import org.elasticsearch.cluster.routing.allocation.RoutingExplanations; import 
org.elasticsearch.cluster.routing.allocation.command.AllocateEmptyPrimaryAllocationCommand; +import org.elasticsearch.cluster.routing.allocation.command.AllocationCommand; import org.elasticsearch.cluster.routing.allocation.command.MoveAllocationCommand; import org.elasticsearch.cluster.routing.allocation.decider.Decision; import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider; @@ -50,6 +54,7 @@ import org.elasticsearch.test.ESIntegTestCase.ClusterScope; import org.elasticsearch.test.ESIntegTestCase.Scope; import org.elasticsearch.test.InternalTestCluster; +import org.elasticsearch.test.MockLogAppender; import java.nio.file.Path; import java.util.Arrays; @@ -63,6 +68,7 @@ import static org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider.CLUSTER_ROUTING_ALLOCATION_ENABLE_SETTING; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertBlocked; +import static org.hamcrest.Matchers.containsString; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.hasSize; @@ -304,6 +310,84 @@ public void testRerouteExplain() { assertThat(explanation.decisions().type(), equalTo(Decision.Type.YES)); } + public void testMessageLogging() throws Exception{ + final Settings settings = Settings.builder() + .put(EnableAllocationDecider.CLUSTER_ROUTING_ALLOCATION_ENABLE_SETTING.getKey(), Allocation.NONE.name()) + .put(EnableAllocationDecider.CLUSTER_ROUTING_REBALANCE_ENABLE_SETTING.getKey(), EnableAllocationDecider.Rebalance.NONE.name()) + .build(); + + final String nodeName1 = internalCluster().startNode(settings); + assertThat(cluster().size(), equalTo(1)); + ClusterHealthResponse healthResponse = client().admin().cluster().prepareHealth().setWaitForNodes("1") + .execute().actionGet(); + assertThat(healthResponse.isTimedOut(), equalTo(false)); + + final String nodeName2 = internalCluster().startNode(settings); + assertThat(cluster().size(), equalTo(2)); + healthResponse = client().admin().cluster().prepareHealth().setWaitForNodes("2").execute().actionGet(); + assertThat(healthResponse.isTimedOut(), equalTo(false)); + + final String indexName = "test_index"; + client().admin().indices().prepareCreate(indexName).setWaitForActiveShards(ActiveShardCount.NONE) + .setSettings(Settings.builder() + .put(IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.getKey(), 2) + .put(IndexMetaData.INDEX_NUMBER_OF_REPLICAS_SETTING.getKey(), 1)) + .execute().actionGet(); + + Logger actionLogger = Loggers.getLogger(TransportClusterRerouteAction.class); + + MockLogAppender dryRunMockLog = new MockLogAppender(); + dryRunMockLog.start(); + dryRunMockLog.addExpectation( + new MockLogAppender.UnseenEventExpectation("no completed message logged on dry run", + TransportClusterRerouteAction.class.getName(), Level.INFO, "allocated an empty primary*") + ); + Loggers.addAppender(actionLogger, dryRunMockLog); + + AllocationCommand dryRunAllocation = new AllocateEmptyPrimaryAllocationCommand(indexName, 0, nodeName1, true); + ClusterRerouteResponse dryRunResponse = client().admin().cluster().prepareReroute() + .setExplain(randomBoolean()) + .setDryRun(true) + .add(dryRunAllocation) + .execute().actionGet(); + + // during a dry run, messages exist but are not logged or exposed + assertThat(dryRunResponse.getExplanations().getYesDecisionMessages(), hasSize(1)); + assertThat(dryRunResponse.getExplanations().getYesDecisionMessages().get(0), containsString("allocated an empty 
primary")); + + dryRunMockLog.assertAllExpectationsMatched(); + dryRunMockLog.stop(); + Loggers.removeAppender(actionLogger, dryRunMockLog); + + MockLogAppender allocateMockLog = new MockLogAppender(); + allocateMockLog.start(); + allocateMockLog.addExpectation( + new MockLogAppender.SeenEventExpectation("message for first allocate empty primary", + TransportClusterRerouteAction.class.getName(), Level.INFO, "allocated an empty primary*" + nodeName1 + "*") + ); + allocateMockLog.addExpectation( + new MockLogAppender.UnseenEventExpectation("no message for second allocate empty primary", + TransportClusterRerouteAction.class.getName(), Level.INFO, "allocated an empty primary*" + nodeName2 + "*") + ); + Loggers.addAppender(actionLogger, allocateMockLog); + + AllocationCommand yesDecisionAllocation = new AllocateEmptyPrimaryAllocationCommand(indexName, 0, nodeName1, true); + AllocationCommand noDecisionAllocation = new AllocateEmptyPrimaryAllocationCommand("noexist", 1, nodeName2, true); + ClusterRerouteResponse response = client().admin().cluster().prepareReroute() + .setExplain(true) // so we get a NO decision back rather than an exception + .add(yesDecisionAllocation) + .add(noDecisionAllocation) + .execute().actionGet(); + + assertThat(response.getExplanations().getYesDecisionMessages(), hasSize(1)); + assertThat(response.getExplanations().getYesDecisionMessages().get(0), containsString("allocated an empty primary")); + assertThat(response.getExplanations().getYesDecisionMessages().get(0), containsString(nodeName1)); + + allocateMockLog.assertAllExpectationsMatched(); + allocateMockLog.stop(); + Loggers.removeAppender(actionLogger, allocateMockLog); + } + public void testClusterRerouteWithBlocks() throws Exception { List nodesIds = internalCluster().startNodes(2); diff --git a/core/src/test/java/org/elasticsearch/cluster/allocation/FilteringAllocationIT.java b/core/src/test/java/org/elasticsearch/cluster/allocation/FilteringAllocationIT.java index 49c63362f58c7..91a41495a461a 100644 --- a/core/src/test/java/org/elasticsearch/cluster/allocation/FilteringAllocationIT.java +++ b/core/src/test/java/org/elasticsearch/cluster/allocation/FilteringAllocationIT.java @@ -149,12 +149,12 @@ public void testDisablingAllocationFiltering() throws Exception { public void testInvalidIPFilterClusterSettings() { String ipKey = randomFrom("_ip", "_host_ip", "_publish_ip"); - Setting filterSetting = randomFrom(FilterAllocationDecider.CLUSTER_ROUTING_REQUIRE_GROUP_SETTING, + Setting filterSetting = randomFrom(FilterAllocationDecider.CLUSTER_ROUTING_REQUIRE_GROUP_SETTING, FilterAllocationDecider.CLUSTER_ROUTING_INCLUDE_GROUP_SETTING, FilterAllocationDecider.CLUSTER_ROUTING_EXCLUDE_GROUP_SETTING); IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> client().admin().cluster().prepareUpdateSettings() .setTransientSettings(Settings.builder().put(filterSetting.getKey() + ipKey, "192.168.1.1.")) .execute().actionGet()); - assertEquals("invalid IP address [192.168.1.1.] for [" + ipKey + "]", e.getMessage()); + assertEquals("invalid IP address [192.168.1.1.] 
for [" + filterSetting.getKey() + ipKey + "]", e.getMessage()); } } diff --git a/core/src/test/java/org/elasticsearch/cluster/metadata/IndexCreationTaskTests.java b/core/src/test/java/org/elasticsearch/cluster/metadata/IndexCreationTaskTests.java new file mode 100644 index 0000000000000..f44d0b7c4036e --- /dev/null +++ b/core/src/test/java/org/elasticsearch/cluster/metadata/IndexCreationTaskTests.java @@ -0,0 +1,449 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.cluster.metadata; + +import org.apache.logging.log4j.Logger; +import org.apache.lucene.search.Sort; +import org.elasticsearch.Version; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.admin.indices.alias.Alias; +import org.elasticsearch.action.admin.indices.create.CreateIndexClusterStateUpdateRequest; +import org.elasticsearch.action.admin.indices.shrink.ResizeType; +import org.elasticsearch.action.support.ActiveShardCount; +import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.block.ClusterBlock; +import org.elasticsearch.cluster.block.ClusterBlockLevel; +import org.elasticsearch.cluster.block.ClusterBlocks; +import org.elasticsearch.cluster.node.DiscoveryNodes; +import org.elasticsearch.cluster.routing.IndexRoutingTable; +import org.elasticsearch.cluster.routing.RoutingTable; +import org.elasticsearch.cluster.routing.ShardRoutingState; +import org.elasticsearch.cluster.routing.TestShardRouting; +import org.elasticsearch.cluster.routing.allocation.AllocationService; +import org.elasticsearch.common.collect.ImmutableOpenMap; +import org.elasticsearch.common.compress.CompressedXContent; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.set.Sets; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; +import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.index.Index; +import org.elasticsearch.index.IndexService; +import org.elasticsearch.index.mapper.DocumentMapper; +import org.elasticsearch.index.mapper.MapperService; +import org.elasticsearch.index.mapper.ParentFieldMapper; +import org.elasticsearch.index.mapper.RoutingFieldMapper; +import org.elasticsearch.index.shard.IndexEventListener; +import org.elasticsearch.indices.IndicesService; +import org.elasticsearch.test.ESTestCase; +import org.hamcrest.Matchers; +import org.mockito.ArgumentCaptor; + +import java.io.IOException; +import java.util.Map; +import java.util.HashSet; +import java.util.Set; +import java.util.Collections; +import java.util.Arrays; +import java.util.function.Supplier; + +import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS; +import static org.elasticsearch.test.hamcrest.CollectionAssertions.hasAllKeys; +import static 
org.elasticsearch.test.hamcrest.CollectionAssertions.hasKey;
+import static org.hamcrest.Matchers.containsString;
+import static org.hamcrest.Matchers.equalTo;
+import static org.hamcrest.Matchers.not;
+import static org.mockito.Matchers.anyBoolean;
+import static org.mockito.Matchers.anyObject;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+import static org.mockito.Mockito.doThrow;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.anyMap;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.eq;
+
+public class IndexCreationTaskTests extends ESTestCase {
+
+    private final IndicesService indicesService = mock(IndicesService.class);
+    private final AliasValidator aliasValidator = mock(AliasValidator.class);
+    private final NamedXContentRegistry xContentRegistry = mock(NamedXContentRegistry.class);
+    private final CreateIndexClusterStateUpdateRequest request = mock(CreateIndexClusterStateUpdateRequest.class);
+    private final Logger logger = mock(Logger.class);
+    private final AllocationService allocationService = mock(AllocationService.class);
+    private final MetaDataCreateIndexService.IndexValidator validator = mock(MetaDataCreateIndexService.IndexValidator.class);
+    private final ActionListener listener = mock(ActionListener.class);
+    private final ClusterState state = mock(ClusterState.class);
+    private final Settings.Builder clusterStateSettings = Settings.builder();
+    private final MapperService mapper = mock(MapperService.class);
+
+    private final ImmutableOpenMap.Builder tplBuilder = ImmutableOpenMap.builder();
+    private final ImmutableOpenMap.Builder customBuilder = ImmutableOpenMap.builder();
+    private final ImmutableOpenMap.Builder idxBuilder = ImmutableOpenMap.builder();
+
+    private final Settings.Builder reqSettings = Settings.builder();
+    private final Set reqBlocks = Sets.newHashSet();
+    private final MetaData.Builder currentStateMetaDataBuilder = MetaData.builder();
+    private final ClusterBlocks currentStateBlocks = mock(ClusterBlocks.class);
+    private final RoutingTable.Builder routingTableBuilder = RoutingTable.builder();
+    private final DocumentMapper docMapper = mock(DocumentMapper.class);
+
+    private ActiveShardCount waitForActiveShardsNum = ActiveShardCount.DEFAULT;
+
+    public void setUp() throws Exception {
+        super.setUp();
+        setupIndicesService();
+        setupClusterState();
+    }
+
+    public void testMatchTemplates() throws Exception {
+        tplBuilder.put("template_1", createTemplateMetadata("template_1", "te*"));
+        tplBuilder.put("template_2", createTemplateMetadata("template_2", "tes*"));
+        tplBuilder.put("template_3", createTemplateMetadata("template_3", "zzz*"));
+
+        final ClusterState result = executeTask();
+
+        assertThat(result.metaData().index("test").getAliases(), hasAllKeys("alias_from_template_1", "alias_from_template_2"));
+        assertThat(result.metaData().index("test").getAliases(), not(hasKey("alias_from_template_3")));
+    }
+
+    public void testApplyDataFromTemplate() throws Exception {
+        addMatchingTemplate(builder -> builder
+            .putAlias(AliasMetaData.builder("alias1"))
+            .putMapping("mapping1", createMapping())
+            .putCustom("custom1", createCustom())
+            .settings(Settings.builder().put("key1", "value1"))
+        );
+
+        final ClusterState result = executeTask();
+
+        assertThat(result.metaData().index("test").getAliases(), hasKey("alias1"));
+        assertThat(result.metaData().index("test").getCustoms(), hasKey("custom1"));
+        assertThat(result.metaData().index("test").getSettings().get("key1"), equalTo("value1"));
+        assertThat(getMappingsFromResponse(), Matchers.hasKey("mapping1"));
+    }
+
+    public void testApplyDataFromRequest() throws Exception {
+        setupRequestAlias(new Alias("alias1"));
+        setupRequestMapping("mapping1", createMapping());
+        setupRequestCustom("custom1", createCustom());
+        reqSettings.put("key1", "value1");
+
+        final ClusterState result = executeTask();
+
+        assertThat(result.metaData().index("test").getAliases(), hasKey("alias1"));
+        assertThat(result.metaData().index("test").getCustoms(), hasKey("custom1"));
+        assertThat(result.metaData().index("test").getSettings().get("key1"), equalTo("value1"));
+        assertThat(getMappingsFromResponse(), Matchers.hasKey("mapping1"));
+    }
+
+    public void testRequestDataHavePriorityOverTemplateData() throws Exception {
+        final IndexMetaData.Custom tplCustom = createCustom();
+        final IndexMetaData.Custom reqCustom = createCustom();
+        final IndexMetaData.Custom mergedCustom = createCustom();
+        when(reqCustom.mergeWith(tplCustom)).thenReturn(mergedCustom);
+
+        final CompressedXContent tplMapping = createMapping("text");
+        final CompressedXContent reqMapping = createMapping("keyword");
+
+        addMatchingTemplate(builder -> builder
+            .putAlias(AliasMetaData.builder("alias1").searchRouting("fromTpl").build())
+            .putMapping("mapping1", tplMapping)
+            .putCustom("custom1", tplCustom)
+            .settings(Settings.builder().put("key1", "tplValue"))
+        );
+
+        setupRequestAlias(new Alias("alias1").searchRouting("fromReq"));
+        setupRequestMapping("mapping1", reqMapping);
+        setupRequestCustom("custom1", reqCustom);
+        reqSettings.put("key1", "reqValue");
+
+        final ClusterState result = executeTask();
+
+        assertThat(result.metaData().index("test").getCustoms().get("custom1"), equalTo(mergedCustom));
+        assertThat(result.metaData().index("test").getAliases().get("alias1").getSearchRouting(), equalTo("fromReq"));
+        assertThat(result.metaData().index("test").getSettings().get("key1"), equalTo("reqValue"));
+        assertThat(getMappingsFromResponse().get("mapping1").toString(), equalTo("{type={properties={field={type=keyword}}}}"));
+    }
+
+    public void testDefaultSettings() throws Exception {
+        final ClusterState result = executeTask();
+
+        assertThat(result.getMetaData().index("test").getSettings().get(SETTING_NUMBER_OF_SHARDS), equalTo("5"));
+    }
+
+    public void testSettingsFromClusterState() throws Exception {
+        clusterStateSettings.put(SETTING_NUMBER_OF_SHARDS, 15);
+
+        final ClusterState result = executeTask();
+
+        assertThat(result.getMetaData().index("test").getSettings().get(SETTING_NUMBER_OF_SHARDS), equalTo("15"));
+    }
+
+    public void testTemplateOrder() throws Exception {
+        addMatchingTemplate(builder -> builder
+            .order(1)
+            .settings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 10))
+            .putAlias(AliasMetaData.builder("alias1").searchRouting("1").build())
+        );
+        addMatchingTemplate(builder -> builder
+            .order(2)
+            .settings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 11))
+            .putAlias(AliasMetaData.builder("alias1").searchRouting("2").build())
+        );
+        addMatchingTemplate(builder -> builder
+            .order(3)
+            .settings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 12))
+            .putAlias(AliasMetaData.builder("alias1").searchRouting("3").build())
+        );
+        final ClusterState result = executeTask();
+
+        assertThat(result.getMetaData().index("test").getSettings().get(SETTING_NUMBER_OF_SHARDS), equalTo("12"));
+        assertThat(result.metaData().index("test").getAliases().get("alias1").getSearchRouting(), equalTo("3"));
+    }
+
+    public void testTemplateOrder2() throws
Exception { + addMatchingTemplate(builder -> builder + .order(3) + .settings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 12)) + .putAlias(AliasMetaData.builder("alias1").searchRouting("3").build()) + ); + addMatchingTemplate(builder -> builder + .order(2) + .settings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 11)) + .putAlias(AliasMetaData.builder("alias1").searchRouting("2").build()) + ); + addMatchingTemplate(builder -> builder + .order(1) + .settings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 10)) + .putAlias(AliasMetaData.builder("alias1").searchRouting("1").build()) + ); + final ClusterState result = executeTask(); + + assertThat(result.getMetaData().index("test").getSettings().get(SETTING_NUMBER_OF_SHARDS), equalTo("12")); + assertThat(result.metaData().index("test").getAliases().get("alias1").getSearchRouting(), equalTo("3")); + } + + public void testRequestStateOpen() throws Exception { + when(request.state()).thenReturn(IndexMetaData.State.OPEN); + + executeTask(); + + verify(allocationService, times(1)).reroute(anyObject(), anyObject()); + } + + @SuppressWarnings("unchecked") + public void testIndexRemovalOnFailure() throws Exception { + doThrow(new RuntimeException("oops")).when(mapper).merge(anyMap(), anyObject(), anyBoolean()); + + expectThrows(RuntimeException.class, this::executeTask); + + verify(indicesService, times(1)).removeIndex(anyObject(), anyObject(), anyObject()); + } + + public void testShrinkIndexIgnoresTemplates() throws Exception { + final Index source = new Index("source_idx", "aaa111bbb222"); + + when(request.recoverFrom()).thenReturn(source); + when(request.resizeType()).thenReturn(ResizeType.SHRINK); + currentStateMetaDataBuilder.put(createIndexMetaDataBuilder("source_idx", "aaa111bbb222", 2, 2)); + + routingTableBuilder.add(createIndexRoutingTableWithStartedShards(source)); + + when(currentStateBlocks.indexBlocked(eq(ClusterBlockLevel.WRITE), eq("source_idx"))).thenReturn(true); + reqSettings.put(SETTING_NUMBER_OF_SHARDS, 1); + + addMatchingTemplate(builder -> builder + .putAlias(AliasMetaData.builder("alias1").searchRouting("fromTpl").build()) + .putMapping("mapping1", createMapping()) + .putCustom("custom1", createCustom()) + .settings(Settings.builder().put("key1", "tplValue")) + ); + + final ClusterState result = executeTask(); + + assertThat(result.metaData().index("test").getAliases(), not(hasKey("alias1"))); + assertThat(result.metaData().index("test").getCustoms(), not(hasKey("custom1"))); + assertThat(result.metaData().index("test").getSettings().keySet(), not(Matchers.contains("key1"))); + assertThat(getMappingsFromResponse(), not(Matchers.hasKey("mapping1"))); + } + + public void testValidateWaitForActiveShardsFailure() throws Exception { + waitForActiveShardsNum = ActiveShardCount.from(1000); + + IllegalArgumentException e = expectThrows(IllegalArgumentException.class, this::executeTask); + + assertThat(e.getMessage(), containsString("invalid wait_for_active_shards")); + } + + private IndexRoutingTable createIndexRoutingTableWithStartedShards(Index index) { + final IndexRoutingTable idxRoutingTable = mock(IndexRoutingTable.class); + + when(idxRoutingTable.getIndex()).thenReturn(index); + when(idxRoutingTable.shardsWithState(eq(ShardRoutingState.STARTED))).thenReturn(Arrays.asList( + TestShardRouting.newShardRouting(index.getName(), 0, "1", randomBoolean(), ShardRoutingState.INITIALIZING).moveToStarted(), + TestShardRouting.newShardRouting(index.getName(), 0, "1", randomBoolean(), ShardRoutingState.INITIALIZING).moveToStarted() + 
+ )); + + return idxRoutingTable; + } + + private IndexMetaData.Builder createIndexMetaDataBuilder(String name, String uuid, int numShards, int numReplicas) { + return IndexMetaData + .builder(name) + .settings(Settings.builder() + .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) + .put(IndexMetaData.SETTING_INDEX_UUID, uuid)) + .putMapping(new MappingMetaData(docMapper)) + .numberOfShards(numShards) + .numberOfReplicas(numReplicas); + } + + private IndexMetaData.Custom createCustom() { + return mock(IndexMetaData.Custom.class); + } + + private interface MetaDataBuilderConfigurator { + void configure(IndexTemplateMetaData.Builder builder) throws IOException; + } + + private void addMatchingTemplate(MetaDataBuilderConfigurator configurator) throws IOException { + final IndexTemplateMetaData.Builder builder = metaDataBuilder("template1", "te*"); + configurator.configure(builder); + + tplBuilder.put("template" + builder.hashCode(), builder.build()); + } + + @SuppressWarnings("unchecked") + private Map> getMappingsFromResponse() { + final ArgumentCaptor argument = ArgumentCaptor.forClass(Map.class); + verify(mapper).merge(argument.capture(), anyObject(), anyBoolean()); + return argument.getValue(); + } + + private void setupRequestAlias(Alias alias) { + when(request.aliases()).thenReturn(new HashSet<>(Collections.singletonList(alias))); + } + + private void setupRequestMapping(String mappingKey, CompressedXContent mapping) throws IOException { + when(request.mappings()).thenReturn(Collections.singletonMap(mappingKey, mapping.string())); + } + + private void setupRequestCustom(String customKey, IndexMetaData.Custom custom) throws IOException { + when(request.customs()).thenReturn(Collections.singletonMap(customKey, custom)); + } + + private CompressedXContent createMapping() throws IOException { + return createMapping("text"); + } + + private CompressedXContent createMapping(String fieldType) throws IOException { + final String mapping = XContentFactory.jsonBuilder() + .startObject() + .startObject("type") + .startObject("properties") + .startObject("field") + .field("type", fieldType) + .endObject() + .endObject() + .endObject() + .endObject().string(); + + return new CompressedXContent(mapping); + } + + private IndexTemplateMetaData.Builder metaDataBuilder(String name, String pattern) { + return IndexTemplateMetaData + .builder(name) + .patterns(Collections.singletonList(pattern)); + } + + private IndexTemplateMetaData createTemplateMetadata(String name, String pattern) { + return IndexTemplateMetaData + .builder(name) + .patterns(Collections.singletonList(pattern)) + .putAlias(AliasMetaData.builder("alias_from_" + name).build()) + .build(); + } + + @SuppressWarnings("unchecked") + private ClusterState executeTask() throws Exception { + setupState(); + setupRequest(); + final MetaDataCreateIndexService.IndexCreationTask task = new MetaDataCreateIndexService.IndexCreationTask( + logger, allocationService, request, listener, indicesService, aliasValidator, xContentRegistry, clusterStateSettings.build(), + validator + ); + return task.execute(state); + } + + private void setupState() { + final ImmutableOpenMap.Builder stateCustomsBuilder = ImmutableOpenMap.builder(); + + currentStateMetaDataBuilder + .customs(customBuilder.build()) + .templates(tplBuilder.build()) + .indices(idxBuilder.build()); + + when(state.metaData()).thenReturn(currentStateMetaDataBuilder.build()); + + final ImmutableOpenMap.Builder> blockIdxBuilder = ImmutableOpenMap.builder(); + + 
when(currentStateBlocks.indices()).thenReturn(blockIdxBuilder.build()); + + when(state.blocks()).thenReturn(currentStateBlocks); + when(state.customs()).thenReturn(stateCustomsBuilder.build()); + when(state.routingTable()).thenReturn(routingTableBuilder.build()); + } + + private void setupRequest() { + when(request.settings()).thenReturn(reqSettings.build()); + when(request.index()).thenReturn("test"); + when(request.waitForActiveShards()).thenReturn(waitForActiveShardsNum); + when(request.blocks()).thenReturn(reqBlocks); + } + + private void setupClusterState() { + final DiscoveryNodes nodes = mock(DiscoveryNodes.class); + when(nodes.getSmallestNonClientNodeVersion()).thenReturn(Version.CURRENT); + + when(state.nodes()).thenReturn(nodes); + } + + @SuppressWarnings("unchecked") + private void setupIndicesService() throws Exception { + final RoutingFieldMapper routingMapper = mock(RoutingFieldMapper.class); + when(routingMapper.required()).thenReturn(false); + + when(docMapper.routingFieldMapper()).thenReturn(routingMapper); + when(docMapper.parentFieldMapper()).thenReturn(mock(ParentFieldMapper.class)); + + when(mapper.docMappers(anyBoolean())).thenReturn(Collections.singletonList(docMapper)); + + final Index index = new Index("target", "tgt1234"); + final Supplier supplier = mock(Supplier.class); + final IndexService service = mock(IndexService.class); + when(service.index()).thenReturn(index); + when(service.mapperService()).thenReturn(mapper); + when(service.getIndexSortSupplier()).thenReturn(supplier); + when(service.getIndexEventListener()).thenReturn(mock(IndexEventListener.class)); + + when(indicesService.createIndex(anyObject(), anyObject())).thenReturn(service); + } +} diff --git a/core/src/test/java/org/elasticsearch/cluster/metadata/IndexMetaDataTests.java b/core/src/test/java/org/elasticsearch/cluster/metadata/IndexMetaDataTests.java index fa56c756fcc35..e83d1fa706cfd 100644 --- a/core/src/test/java/org/elasticsearch/cluster/metadata/IndexMetaDataTests.java +++ b/core/src/test/java/org/elasticsearch/cluster/metadata/IndexMetaDataTests.java @@ -30,6 +30,7 @@ import org.elasticsearch.test.ESTestCase; import java.io.IOException; +import java.util.Collections; import java.util.Set; import static org.hamcrest.Matchers.is; @@ -84,21 +85,12 @@ public void testIndexMetaDataSerialization() throws IOException { } public void testGetRoutingFactor() { - int numberOfReplicas = randomIntBetween(0, 10); - IndexMetaData metaData = IndexMetaData.builder("foo") - .settings(Settings.builder() - .put("index.version.created", 1) - .put("index.number_of_shards", 32) - .put("index.number_of_replicas", numberOfReplicas) - .build()) - .creationDate(randomLong()) - .build(); Integer numShard = randomFrom(1, 2, 4, 8, 16); - int routingFactor = IndexMetaData.getRoutingFactor(metaData, numShard); - assertEquals(routingFactor * numShard, metaData.getNumberOfShards()); + int routingFactor = IndexMetaData.getRoutingFactor(32, numShard); + assertEquals(routingFactor * numShard, 32); - Integer brokenNumShards = randomFrom(3, 5, 9, 12, 29, 42, 64); - expectThrows(IllegalArgumentException.class, () -> IndexMetaData.getRoutingFactor(metaData, brokenNumShards)); + Integer brokenNumShards = randomFrom(3, 5, 9, 12, 29, 42); + expectThrows(IllegalArgumentException.class, () -> IndexMetaData.getRoutingFactor(32, brokenNumShards)); } public void testSelectShrinkShards() { @@ -125,6 +117,64 @@ public void testSelectShrinkShards() { expectThrows(IllegalArgumentException.class, () -> IndexMetaData.selectShrinkShards(8, 
metaData, 8)).getMessage()); } + public void testSelectResizeShards() { + IndexMetaData split = IndexMetaData.builder("foo") + .settings(Settings.builder() + .put("index.version.created", 1) + .put("index.number_of_shards", 2) + .put("index.number_of_replicas", 0) + .build()) + .creationDate(randomLong()) + .build(); + + IndexMetaData shrink = IndexMetaData.builder("foo") + .settings(Settings.builder() + .put("index.version.created", 1) + .put("index.number_of_shards", 32) + .put("index.number_of_replicas", 0) + .build()) + .creationDate(randomLong()) + .build(); + int numTargetShards = randomFrom(4, 6, 8, 12); + int shard = randomIntBetween(0, numTargetShards-1); + assertEquals(Collections.singleton(IndexMetaData.selectSplitShard(shard, split, numTargetShards)), + IndexMetaData.selectRecoverFromShards(shard, split, numTargetShards)); + + numTargetShards = randomFrom(1, 2, 4, 8, 16); + shard = randomIntBetween(0, numTargetShards-1); + assertEquals(IndexMetaData.selectShrinkShards(shard, shrink, numTargetShards), + IndexMetaData.selectRecoverFromShards(shard, shrink, numTargetShards)); + + assertEquals("can't select recover from shards if both indices have the same number of shards", + expectThrows(IllegalArgumentException.class, () -> IndexMetaData.selectRecoverFromShards(0, shrink, 32)).getMessage()); + } + + public void testSelectSplitShard() { + IndexMetaData metaData = IndexMetaData.builder("foo") + .settings(Settings.builder() + .put("index.version.created", 1) + .put("index.number_of_shards", 2) + .put("index.number_of_replicas", 0) + .build()) + .creationDate(randomLong()) + .setRoutingNumShards(4) + .build(); + ShardId shardId = IndexMetaData.selectSplitShard(0, metaData, 4); + assertEquals(0, shardId.getId()); + shardId = IndexMetaData.selectSplitShard(1, metaData, 4); + assertEquals(0, shardId.getId()); + shardId = IndexMetaData.selectSplitShard(2, metaData, 4); + assertEquals(1, shardId.getId()); + shardId = IndexMetaData.selectSplitShard(3, metaData, 4); + assertEquals(1, shardId.getId()); + + assertEquals("the number of target shards (0) must be greater than the shard id: 0", + expectThrows(IllegalArgumentException.class, () -> IndexMetaData.selectSplitShard(0, metaData, 0)).getMessage()); + + assertEquals("the number of source shards [2] must be a must be a factor of [3]", + expectThrows(IllegalArgumentException.class, () -> IndexMetaData.selectSplitShard(0, metaData, 3)).getMessage()); + } + public void testIndexFormat() { Settings defaultSettings = Settings.builder() .put("index.version.created", 1) @@ -156,4 +206,26 @@ public void testIndexFormat() { assertThat(metaData.getSettings().getAsInt(IndexMetaData.INDEX_FORMAT_SETTING.getKey(), 0), is(0)); } } + + public void testNumberOfRoutingShards() { + Settings build = Settings.builder().put("index.number_of_shards", 5).put("index.number_of_routing_shards", 10).build(); + assertEquals(10, IndexMetaData.INDEX_NUMBER_OF_ROUTING_SHARDS_SETTING.get(build).intValue()); + + build = Settings.builder().put("index.number_of_shards", 5).put("index.number_of_routing_shards", 5).build(); + assertEquals(5, IndexMetaData.INDEX_NUMBER_OF_ROUTING_SHARDS_SETTING.get(build).intValue()); + + int numShards = randomIntBetween(1, 10); + build = Settings.builder().put("index.number_of_shards", numShards).build(); + assertEquals(numShards, IndexMetaData.INDEX_NUMBER_OF_ROUTING_SHARDS_SETTING.get(build).intValue()); + + Settings lessThanSettings = Settings.builder().put("index.number_of_shards", 8).put("index.number_of_routing_shards", 4).build(); + 
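+        // Expect rejection: index.number_of_routing_shards (4) must not be smaller than index.number_of_shards (8).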
IllegalArgumentException iae = expectThrows(IllegalArgumentException.class, + () -> IndexMetaData.INDEX_NUMBER_OF_ROUTING_SHARDS_SETTING.get(lessThanSettings)); + assertEquals("index.number_of_routing_shards [4] must be >= index.number_of_shards [8]", iae.getMessage()); + + Settings notAFactorySettings = Settings.builder().put("index.number_of_shards", 2).put("index.number_of_routing_shards", 3).build(); + iae = expectThrows(IllegalArgumentException.class, + () -> IndexMetaData.INDEX_NUMBER_OF_ROUTING_SHARDS_SETTING.get(notAFactorySettings)); + assertEquals("the number of source shards [2] must be a must be a factor of [3]", iae.getMessage()); + } } diff --git a/core/src/test/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolverTests.java b/core/src/test/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolverTests.java index 5e04714552248..0530bd617af63 100644 --- a/core/src/test/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolverTests.java +++ b/core/src/test/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolverTests.java @@ -30,6 +30,7 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.IndexNotFoundException; import org.elasticsearch.indices.IndexClosedException; +import org.elasticsearch.indices.InvalidIndexNameException; import org.elasticsearch.test.ESTestCase; import java.util.Arrays; @@ -641,7 +642,7 @@ public void testConcreteIndicesWildcardAndAliases() { // when ignoreAliases option is set, concreteIndexNames resolves the provided expressions // only against the defined indices IndicesOptions ignoreAliasesOptions = IndicesOptions.fromOptions(false, false, true, false, true, false, true); - + String[] indexNamesIndexWildcard = indexNameExpressionResolver.concreteIndexNames(state, ignoreAliasesOptions, "foo*"); assertEquals(1, indexNamesIndexWildcard.length); @@ -1126,4 +1127,14 @@ public void testIndicesAliasesRequestIgnoresAliases() { assertEquals("test-index", indices[0]); } } + + public void testInvalidIndex() { + MetaData.Builder mdBuilder = MetaData.builder().put(indexBuilder("test")); + ClusterState state = ClusterState.builder(new ClusterName("_name")).metaData(mdBuilder).build(); + IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen()); + + InvalidIndexNameException iine = expectThrows(InvalidIndexNameException.class, + () -> indexNameExpressionResolver.concreteIndexNames(context, "_foo")); + assertEquals("Invalid index name [_foo], must not start with '_'.", iine.getMessage()); + } } diff --git a/core/src/test/java/org/elasticsearch/cluster/metadata/IndexTemplateMetaDataTests.java b/core/src/test/java/org/elasticsearch/cluster/metadata/IndexTemplateMetaDataTests.java index bfc6f5d78d24b..8f247abcf3387 100644 --- a/core/src/test/java/org/elasticsearch/cluster/metadata/IndexTemplateMetaDataTests.java +++ b/core/src/test/java/org/elasticsearch/cluster/metadata/IndexTemplateMetaDataTests.java @@ -20,9 +20,17 @@ import org.elasticsearch.Version; import org.elasticsearch.common.bytes.BytesArray; +import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.BytesStreamOutput; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; +import 
org.elasticsearch.common.xcontent.XContentHelper; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.common.xcontent.json.JsonXContent; import org.elasticsearch.test.ESTestCase; import java.io.IOException; @@ -30,7 +38,9 @@ import java.util.Base64; import java.util.Collections; +import static java.util.Collections.singletonMap; import static org.elasticsearch.cluster.metadata.AliasMetaData.newAliasMetaDataBuilder; +import static org.hamcrest.CoreMatchers.equalTo; public class IndexTemplateMetaDataTests extends ESTestCase { @@ -78,4 +88,36 @@ public void testIndexTemplateMetaData510() throws IOException { } } + public void testIndexTemplateMetaDataXContentRoundTrip() throws Exception { + ToXContent.Params params = new ToXContent.MapParams(singletonMap("reduce_mappings", "true")); + + String template = "{\"index_patterns\" : [ \".test-*\" ],\"order\" : 1000," + + "\"settings\" : {\"number_of_shards\" : 1,\"number_of_replicas\" : 0}," + + "\"mappings\" : {\"doc\" :" + + "{\"properties\":{\"" + + randomAlphaOfLength(10) + "\":{\"type\":\"text\"},\"" + + randomAlphaOfLength(10) + "\":{\"type\":\"keyword\"}}" + + "}}}"; + + BytesReference templateBytes = new BytesArray(template); + final IndexTemplateMetaData indexTemplateMetaData; + try (XContentParser parser = XContentHelper.createParser(NamedXContentRegistry.EMPTY, templateBytes, XContentType.JSON)) { + indexTemplateMetaData = IndexTemplateMetaData.Builder.fromXContent(parser, "test"); + } + + final BytesReference templateBytesRoundTrip; + try (XContentBuilder builder = XContentBuilder.builder(JsonXContent.jsonXContent)) { + builder.startObject(); + IndexTemplateMetaData.Builder.toXContent(indexTemplateMetaData, builder, params); + builder.endObject(); + templateBytesRoundTrip = builder.bytes(); + } + + final IndexTemplateMetaData indexTemplateMetaDataRoundTrip; + try (XContentParser parser = XContentHelper.createParser(NamedXContentRegistry.EMPTY, templateBytesRoundTrip, XContentType.JSON)) { + indexTemplateMetaDataRoundTrip = IndexTemplateMetaData.Builder.fromXContent(parser, "test"); + } + assertThat(indexTemplateMetaData, equalTo(indexTemplateMetaDataRoundTrip)); + } + } diff --git a/core/src/test/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexServiceTests.java b/core/src/test/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexServiceTests.java index 387b66d031b37..39e4a18440931 100644 --- a/core/src/test/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexServiceTests.java +++ b/core/src/test/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexServiceTests.java @@ -20,6 +20,7 @@ package org.elasticsearch.cluster.metadata; import org.elasticsearch.Version; +import org.elasticsearch.action.admin.indices.shrink.ResizeType; import org.elasticsearch.cluster.ClusterName; import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.EmptyClusterInfoService; @@ -33,6 +34,7 @@ import org.elasticsearch.cluster.routing.allocation.decider.AllocationDeciders; import org.elasticsearch.cluster.routing.allocation.decider.MaxRetryAllocationDecider; import org.elasticsearch.common.Strings; +import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.IndexNotFoundException; import org.elasticsearch.ResourceAlreadyExistsException; @@ -43,6 +45,7 @@ import java.util.Arrays; import java.util.Collections; +import java.util.Comparator; import 
java.util.HashSet; import java.util.List; @@ -75,6 +78,12 @@ public static boolean isShrinkable(int source, int target) { return target * x == source; } + public static boolean isSplitable(int source, int target) { + int x = target / source; + assert source < target : source + " >= " + target; + return source * x == target; + } + public void testValidateShrinkIndex() { int numShards = randomIntBetween(2, 42); ClusterState state = createClusterState("source", numShards, randomIntBetween(0, 10), @@ -90,29 +99,28 @@ public void testValidateShrinkIndex() { MetaDataCreateIndexService.validateShrinkIndex(state, "no such index", Collections.emptySet(), "target", Settings.EMPTY) ).getMessage()); + Settings targetSettings = Settings.builder().put("index.number_of_shards", 1).build(); assertEquals("can't shrink an index with only one shard", expectThrows(IllegalArgumentException.class, () -> MetaDataCreateIndexService.validateShrinkIndex(createClusterState("source", - 1, 0, Settings.builder().put("index.blocks.write", true).build()), "source", Collections.emptySet(), - "target", Settings.EMPTY) - ).getMessage()); + 1, 0, Settings.builder().put("index.blocks.write", true).build()), "source", + Collections.emptySet(), "target", targetSettings)).getMessage()); - assertEquals("the number of target shards must be less that the number of source shards", + assertEquals("the number of target shards [10] must be less that the number of source shards [5]", expectThrows(IllegalArgumentException.class, () -> MetaDataCreateIndexService.validateShrinkIndex(createClusterState("source", - 5, 0, Settings.builder().put("index.blocks.write", true).build()), "source", Collections.emptySet(), - "target", Settings.builder().put("index.number_of_shards", 10).build()) - ).getMessage()); + 5, 0, Settings.builder().put("index.blocks.write", true).build()), "source", + Collections.emptySet(), "target", Settings.builder().put("index.number_of_shards", 10).build())).getMessage()); - assertEquals("index source must be read-only to shrink index. use \"index.blocks.write=true\"", + assertEquals("index source must be read-only to resize index. 
use \"index.blocks.write=true\"", expectThrows(IllegalStateException.class, () -> MetaDataCreateIndexService.validateShrinkIndex( createClusterState("source", randomIntBetween(2, 100), randomIntBetween(0, 10), Settings.EMPTY) - , "source", Collections.emptySet(), "target", Settings.EMPTY) + , "source", Collections.emptySet(), "target", targetSettings) ).getMessage()); assertEquals("index source must have all shards allocated on the same node to shrink index", expectThrows(IllegalStateException.class, () -> - MetaDataCreateIndexService.validateShrinkIndex(state, "source", Collections.emptySet(), "target", Settings.EMPTY) + MetaDataCreateIndexService.validateShrinkIndex(state, "source", Collections.emptySet(), "target", targetSettings) ).getMessage()); assertEquals("the number of source shards [8] must be a must be a multiple of [3]", @@ -122,10 +130,10 @@ public void testValidateShrinkIndex() { Settings.builder().put("index.number_of_shards", 3).build()) ).getMessage()); - assertEquals("mappings are not allowed when shrinking indices, all mappings are copied from the source index", + assertEquals("mappings are not allowed when resizing indices, all mappings are copied from the source index", expectThrows(IllegalArgumentException.class, () -> { MetaDataCreateIndexService.validateShrinkIndex(state, "source", Collections.singleton("foo"), - "target", Settings.EMPTY); + "target", targetSettings); } ).getMessage()); @@ -151,11 +159,78 @@ public void testValidateShrinkIndex() { Settings.builder().put("index.number_of_shards", targetShards).build()); } - public void testShrinkIndexSettings() { + public void testValidateSplitIndex() { + int numShards = randomIntBetween(1, 42); + Settings targetSettings = Settings.builder().put("index.number_of_shards", numShards * 2).build(); + ClusterState state = createClusterState("source", numShards, randomIntBetween(0, 10), + Settings.builder().put("index.blocks.write", true).build()); + + assertEquals("index [source] already exists", + expectThrows(ResourceAlreadyExistsException.class, () -> + MetaDataCreateIndexService.validateSplitIndex(state, "target", Collections.emptySet(), "source", targetSettings) + ).getMessage()); + + assertEquals("no such index", + expectThrows(IndexNotFoundException.class, () -> + MetaDataCreateIndexService.validateSplitIndex(state, "no such index", Collections.emptySet(), "target", targetSettings) + ).getMessage()); + + assertEquals("the number of source shards [10] must be less that the number of target shards [5]", + expectThrows(IllegalArgumentException.class, () -> MetaDataCreateIndexService.validateSplitIndex(createClusterState("source", + 10, 0, Settings.builder().put("index.blocks.write", true).build()), "source", Collections.emptySet(), + "target", Settings.builder().put("index.number_of_shards", 5).build()) + ).getMessage()); + + + assertEquals("index source must be read-only to resize index. 
use \"index.blocks.write=true\"", + expectThrows(IllegalStateException.class, () -> + MetaDataCreateIndexService.validateSplitIndex( + createClusterState("source", randomIntBetween(2, 100), randomIntBetween(0, 10), Settings.EMPTY) + , "source", Collections.emptySet(), "target", targetSettings) + ).getMessage()); + + + assertEquals("the number of source shards [3] must be a must be a factor of [4]", + expectThrows(IllegalArgumentException.class, () -> + MetaDataCreateIndexService.validateSplitIndex(createClusterState("source", 3, randomIntBetween(0, 10), + Settings.builder().put("index.blocks.write", true).build()), "source", Collections.emptySet(), "target", + Settings.builder().put("index.number_of_shards", 4).build()) + ).getMessage()); + + assertEquals("mappings are not allowed when resizing indices, all mappings are copied from the source index", + expectThrows(IllegalArgumentException.class, () -> { + MetaDataCreateIndexService.validateSplitIndex(state, "source", Collections.singleton("foo"), + "target", targetSettings); + } + ).getMessage()); + + + ClusterState clusterState = ClusterState.builder(createClusterState("source", numShards, 0, + Settings.builder().put("index.blocks.write", true).build())).nodes(DiscoveryNodes.builder().add(newNode("node1"))) + .build(); + AllocationService service = new AllocationService(Settings.builder().build(), new AllocationDeciders(Settings.EMPTY, + Collections.singleton(new MaxRetryAllocationDecider(Settings.EMPTY))), + new TestGatewayAllocator(), new BalancedShardsAllocator(Settings.EMPTY), EmptyClusterInfoService.INSTANCE); + + RoutingTable routingTable = service.reroute(clusterState, "reroute").routingTable(); + clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build(); + // now we start the shard + routingTable = service.applyStartedShards(clusterState, + routingTable.index("source").shardsWithState(ShardRoutingState.INITIALIZING)).routingTable(); + clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build(); + int targetShards; + do { + targetShards = randomIntBetween(numShards+1, 100); + } while (isSplitable(numShards, targetShards) == false); + MetaDataCreateIndexService.validateSplitIndex(clusterState, "source", Collections.emptySet(), "target", + Settings.builder().put("index.number_of_shards", targetShards).build()); + } + + public void testResizeIndexSettings() { String indexName = randomAlphaOfLength(10); List versions = Arrays.asList(VersionUtils.randomVersion(random()), VersionUtils.randomVersion(random()), VersionUtils.randomVersion(random())); - versions.sort((l, r) -> Long.compare(l.id, r.id)); + versions.sort(Comparator.comparingLong(l -> l.id)); Version version = versions.get(0); Version minCompat = versions.get(1); Version upgraded = versions.get(2); @@ -166,7 +241,7 @@ public void testShrinkIndexSettings() { .put("index.similarity.default.type", "BM25") .put("index.version.created", version) .put("index.version.upgraded", upgraded) - .put("index.version.minimum_compatible", minCompat.luceneVersion) + .put("index.version.minimum_compatible", minCompat.luceneVersion.toString()) .put("index.analysis.analyzer.my_analyzer.tokenizer", "keyword") .build())).nodes(DiscoveryNodes.builder().add(newNode("node1"))) .build(); @@ -182,8 +257,9 @@ public void testShrinkIndexSettings() { clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build(); Settings.Builder builder = Settings.builder(); - MetaDataCreateIndexService.prepareShrinkIndexSettings( - clusterState, 
Collections.emptySet(), builder, clusterState.metaData().index(indexName).getIndex(), "target"); + builder.put("index.number_of_shards", 1); + MetaDataCreateIndexService.prepareResizeIndexSettings(clusterState, Collections.emptySet(), builder, + clusterState.metaData().index(indexName).getIndex(), "target", ResizeType.SHRINK); assertEquals("similarity settings must be copied", "BM25", builder.build().get("index.similarity.default.type")); assertEquals("analysis settings must be copied", "keyword", builder.build().get("index.analysis.analyzer.my_analyzer.tokenizer")); @@ -212,6 +288,7 @@ public void testValidateIndexName() throws Exception { validateIndexName("..", "must not be '.' or '..'"); + validateIndexName("foo:bar", "must not contain ':'"); } private void validateIndexName(String indexName, String errorMessage) { diff --git a/core/src/test/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeServiceTests.java b/core/src/test/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeServiceTests.java index 0ed185dfea909..0fa6831fb06ee 100644 --- a/core/src/test/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeServiceTests.java +++ b/core/src/test/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeServiceTests.java @@ -87,22 +87,26 @@ public void testFailUpgrade() { MetaDataIndexUpgradeService service = new MetaDataIndexUpgradeService(Settings.EMPTY, xContentRegistry(), new MapperRegistry(Collections.emptyMap(), Collections.emptyMap()), IndexScopedSettings.DEFAULT_SCOPED_SETTINGS, Collections.emptyList()); + Version minCompat = Version.CURRENT.minimumIndexCompatibilityVersion(); + Version indexUpgraded = VersionUtils.randomVersionBetween(random(), minCompat, VersionUtils.getPreviousVersion(Version.CURRENT)); + Version indexCreated = Version.fromString((minCompat.major - 1) + "." + randomInt(5) + "." + randomInt(5)); final IndexMetaData metaData = newIndexMeta("foo", Settings.builder() - .put(IndexMetaData.SETTING_VERSION_UPGRADED, Version.V_5_0_0_beta1) - .put(IndexMetaData.SETTING_VERSION_CREATED, Version.fromString("2.4.0")) + .put(IndexMetaData.SETTING_VERSION_UPGRADED, indexUpgraded) + .put(IndexMetaData.SETTING_VERSION_CREATED, indexCreated) .build()); - // norelease : having a hardcoded version message requires modifying this test when creating new major version. fix this... String message = expectThrows(IllegalStateException.class, () -> service.upgradeIndexMetaData(metaData, Version.CURRENT.minimumIndexCompatibilityVersion())).getMessage(); - assertEquals(message, "The index [[foo/BOOM]] was created with version [2.4.0] " + - "but the minimum compatible version is [6.0.0-beta1]." + - " It should be re-indexed in Elasticsearch 6.x before upgrading to " + Version.CURRENT.toString() + "."); + assertEquals(message, "The index [[foo/BOOM]] was created with version [" + indexCreated + "] " + + "but the minimum compatible version is [" + minCompat + "]." 
+ + " It should be re-indexed in Elasticsearch " + minCompat.major + ".x before upgrading to " + Version.CURRENT.toString() + "."); + indexCreated = VersionUtils.randomVersionBetween(random(), minCompat, Version.CURRENT); + indexUpgraded = VersionUtils.randomVersionBetween(random(), indexCreated, Version.CURRENT); IndexMetaData goodMeta = newIndexMeta("foo", Settings.builder() - .put(IndexMetaData.SETTING_VERSION_UPGRADED, Version.V_5_0_0_beta1) - .put(IndexMetaData.SETTING_VERSION_CREATED, Version.fromString("5.1.0")) + .put(IndexMetaData.SETTING_VERSION_UPGRADED, indexUpgraded) + .put(IndexMetaData.SETTING_VERSION_CREATED, indexCreated) .build()); - service.upgradeIndexMetaData(goodMeta, Version.V_6_0_0_beta1.minimumIndexCompatibilityVersion()); + service.upgradeIndexMetaData(goodMeta, Version.CURRENT.minimumIndexCompatibilityVersion()); } public void testPluginUpgrade() { diff --git a/core/src/test/java/org/elasticsearch/cluster/metadata/MetaDataMappingServiceTests.java b/core/src/test/java/org/elasticsearch/cluster/metadata/MetaDataMappingServiceTests.java index 3cce782a898d8..7385387305f0b 100644 --- a/core/src/test/java/org/elasticsearch/cluster/metadata/MetaDataMappingServiceTests.java +++ b/core/src/test/java/org/elasticsearch/cluster/metadata/MetaDataMappingServiceTests.java @@ -63,7 +63,7 @@ public void testAddChildTypePointingToAlreadyExistingType() throws Exception { // Tests _parent meta field logic, because part of the validation is in MetaDataMappingService public void testAddExtraChildTypePointingToAlreadyParentExistingType() throws Exception { IndexService indexService = createIndex("test", client().admin().indices().prepareCreate("test") - .setSettings("index.version.created", Version.V_5_6_0.id) + .setSettings(Settings.builder().put("index.version.created", Version.V_5_6_0.id)) .addMapping("parent") .addMapping("child1", "_parent", "type=parent") ); diff --git a/core/src/test/java/org/elasticsearch/cluster/metadata/MetaDataTests.java b/core/src/test/java/org/elasticsearch/cluster/metadata/MetaDataTests.java index 6cc6a1cb54c45..dd7683c1de213 100644 --- a/core/src/test/java/org/elasticsearch/cluster/metadata/MetaDataTests.java +++ b/core/src/test/java/org/elasticsearch/cluster/metadata/MetaDataTests.java @@ -26,7 +26,6 @@ import org.elasticsearch.common.io.stream.BytesStreamOutput; import org.elasticsearch.common.io.stream.NamedWriteableAwareStreamInput; import org.elasticsearch.common.io.stream.NamedWriteableRegistry; -import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -36,9 +35,14 @@ import org.elasticsearch.test.ESTestCase; import java.io.IOException; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Map; +import java.util.Set; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.is; +import static org.hamcrest.Matchers.startsWith; public class MetaDataTests extends ESTestCase { @@ -52,7 +56,42 @@ public void testIndexAndAliasWithSameName() { MetaData.builder().put(builder).build(); fail("exception should have been thrown"); } catch (IllegalStateException e) { - assertThat(e.getMessage(), equalTo("index and alias names need to be unique, but alias [index] and index [index] have the same name")); + assertThat(e.getMessage(), equalTo("index and alias names need to be unique, but the following duplicates were found [index (alias of [index])]")); + } + } + + 
public void testAliasCollidingWithAnExistingIndex() { + int indexCount = randomIntBetween(10, 100); + Set indices = new HashSet<>(indexCount); + for (int i = 0; i < indexCount; i++) { + indices.add(randomAlphaOfLength(10)); + } + Map> aliasToIndices = new HashMap<>(); + for (String alias: randomSubsetOf(randomIntBetween(1, 10), indices)) { + aliasToIndices.put(alias, new HashSet<>(randomSubsetOf(randomIntBetween(1, 3), indices))); + } + int properAliases = randomIntBetween(0, 3); + for (int i = 0; i < properAliases; i++) { + aliasToIndices.put(randomAlphaOfLength(5), new HashSet<>(randomSubsetOf(randomIntBetween(1, 3), indices))); + } + MetaData.Builder metaDataBuilder = MetaData.builder(); + for (String index : indices) { + IndexMetaData.Builder indexBuilder = IndexMetaData.builder(index) + .settings(Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT)) + .numberOfShards(1) + .numberOfReplicas(0); + aliasToIndices.forEach((key, value) -> { + if (value.contains(index)) { + indexBuilder.putAlias(AliasMetaData.builder(key).build()); + } + }); + metaDataBuilder.put(indexBuilder); + } + try { + metaDataBuilder.build(); + fail("exception should have been thrown"); + } catch (IllegalStateException e) { + assertThat(e.getMessage(), startsWith("index and alias names need to be unique")); } } diff --git a/core/src/test/java/org/elasticsearch/cluster/metadata/TemplateUpgradeServiceTests.java b/core/src/test/java/org/elasticsearch/cluster/metadata/TemplateUpgradeServiceTests.java index 0b32ae2eb99f2..e1763fa6a5d60 100644 --- a/core/src/test/java/org/elasticsearch/cluster/metadata/TemplateUpgradeServiceTests.java +++ b/core/src/test/java/org/elasticsearch/cluster/metadata/TemplateUpgradeServiceTests.java @@ -341,51 +341,6 @@ public void testClusterStateUpdate() { private static final int NODE_TEST_ITERS = 100; - public void testOnlyOneNodeRunsTemplateUpdates() { - TemplateUpgradeService service = new TemplateUpgradeService(Settings.EMPTY, null, clusterService, null, Collections.emptyList()); - for (int i = 0; i < NODE_TEST_ITERS; i++) { - int nodesCount = randomIntBetween(1, 10); - int clientNodesCount = randomIntBetween(0, 4); - DiscoveryNodes nodes = randomNodes(nodesCount, clientNodesCount); - int updaterNode = -1; - for (int j = 0; j < nodesCount; j++) { - DiscoveryNodes localNodes = DiscoveryNodes.builder(nodes).localNodeId(nodes.resolveNode("node_" + j).getId()).build(); - if (service.shouldLocalNodeUpdateTemplates(localNodes)) { - assertThat("Expected only one node to update template, found " + updaterNode + " and " + j, updaterNode, lessThan(0)); - updaterNode = j; - } - } - assertThat("Expected one node to update template", updaterNode, greaterThanOrEqualTo(0)); - } - } - - public void testIfMasterHasTheHighestVersionItShouldRunsTemplateUpdates() { - for (int i = 0; i < NODE_TEST_ITERS; i++) { - int nodesCount = randomIntBetween(1, 10); - int clientNodesCount = randomIntBetween(0, 4); - DiscoveryNodes nodes = randomNodes(nodesCount, clientNodesCount); - DiscoveryNodes.Builder builder = DiscoveryNodes.builder(nodes).localNodeId(nodes.resolveNode("_master").getId()); - nodes = builder.build(); - TemplateUpgradeService service = new TemplateUpgradeService(Settings.EMPTY, null, clusterService, null, - Collections.emptyList()); - assertThat(service.shouldLocalNodeUpdateTemplates(nodes), - equalTo(nodes.getLargestNonClientNodeVersion().equals(nodes.getMasterNode().getVersion()))); - } - } - - public void testClientNodeDontRunTemplateUpdates() { - for (int i = 0; i < 
NODE_TEST_ITERS; i++) { - int nodesCount = randomIntBetween(1, 10); - int clientNodesCount = randomIntBetween(1, 4); - DiscoveryNodes nodes = randomNodes(nodesCount, clientNodesCount); - int testClient = randomIntBetween(0, clientNodesCount - 1); - DiscoveryNodes.Builder builder = DiscoveryNodes.builder(nodes).localNodeId(nodes.resolveNode("client_" + testClient).getId()); - TemplateUpgradeService service = new TemplateUpgradeService(Settings.EMPTY, null, clusterService, null, - Collections.emptyList()); - assertThat(service.shouldLocalNodeUpdateTemplates(builder.build()), equalTo(false)); - } - } - private DiscoveryNodes randomNodes(int dataAndMasterNodes, int clientNodes) { DiscoveryNodes.Builder builder = DiscoveryNodes.builder(); String masterNodeId = null; diff --git a/core/src/test/java/org/elasticsearch/cluster/node/DiscoveryNodeFiltersTests.java b/core/src/test/java/org/elasticsearch/cluster/node/DiscoveryNodeFiltersTests.java index b2dba18181022..d6e6d1691a042 100644 --- a/core/src/test/java/org/elasticsearch/cluster/node/DiscoveryNodeFiltersTests.java +++ b/core/src/test/java/org/elasticsearch/cluster/node/DiscoveryNodeFiltersTests.java @@ -20,6 +20,7 @@ package org.elasticsearch.cluster.node; import org.elasticsearch.Version; +import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.transport.TransportAddress; import org.elasticsearch.test.ESTestCase; @@ -59,7 +60,7 @@ public void testNameMatch() { Settings settings = Settings.builder() .put("xxx.name", "name1") .build(); - DiscoveryNodeFilters filters = DiscoveryNodeFilters.buildFromSettings(OR, "xxx.", settings); + DiscoveryNodeFilters filters = buildFromSettings(OR, "xxx.", settings); DiscoveryNode node = new DiscoveryNode("name1", "id1", buildNewFakeTransportAddress(), emptyMap(), emptySet(), Version.CURRENT); @@ -73,7 +74,7 @@ public void testIdMatch() { Settings settings = Settings.builder() .put("xxx._id", "id1") .build(); - DiscoveryNodeFilters filters = DiscoveryNodeFilters.buildFromSettings(OR, "xxx.", settings); + DiscoveryNodeFilters filters = buildFromSettings(OR, "xxx.", settings); DiscoveryNode node = new DiscoveryNode("name1", "id1", buildNewFakeTransportAddress(), emptyMap(), emptySet(), Version.CURRENT); @@ -88,7 +89,7 @@ public void testIdOrNameMatch() { .put("xxx._id", "id1,blah") .put("xxx.name", "blah,name2") .build()); - DiscoveryNodeFilters filters = DiscoveryNodeFilters.buildFromSettings(OR, "xxx.", settings); + DiscoveryNodeFilters filters = buildFromSettings(OR, "xxx.", settings); final Version version = Version.CURRENT; DiscoveryNode node = new DiscoveryNode("name1", "id1", buildNewFakeTransportAddress(), emptyMap(), emptySet(), version); @@ -106,7 +107,7 @@ public void testTagAndGroupMatch() { .put("xxx.tag", "A") .put("xxx.group", "B") .build()); - DiscoveryNodeFilters filters = DiscoveryNodeFilters.buildFromSettings(AND, "xxx.", settings); + DiscoveryNodeFilters filters = buildFromSettings(AND, "xxx.", settings); Map attributes = new HashMap<>(); attributes.put("tag", "A"); @@ -139,7 +140,7 @@ public void testStarMatch() { Settings settings = Settings.builder() .put("xxx.name", "*") .build(); - DiscoveryNodeFilters filters = DiscoveryNodeFilters.buildFromSettings(OR, "xxx.", settings); + DiscoveryNodeFilters filters = buildFromSettings(OR, "xxx.", settings); DiscoveryNode node = new DiscoveryNode("name1", "id1", buildNewFakeTransportAddress(), emptyMap(), emptySet(), Version.CURRENT); @@ -151,7 +152,7 @@ public void 
testIpBindFilteringMatchingAnd() { .put("xxx.tag", "A") .put("xxx." + randomFrom("_ip", "_host_ip", "_publish_ip"), "192.1.1.54") .build()); - DiscoveryNodeFilters filters = DiscoveryNodeFilters.buildFromSettings(AND, "xxx.", settings); + DiscoveryNodeFilters filters = buildFromSettings(AND, "xxx.", settings); DiscoveryNode node = new DiscoveryNode("", "", "", "", "192.1.1.54", localAddress, singletonMap("tag", "A"), emptySet(), null); assertThat(filters.match(node), equalTo(true)); @@ -162,7 +163,7 @@ public void testIpBindFilteringNotMatching() { .put("xxx.tag", "B") .put("xxx." + randomFrom("_ip", "_host_ip", "_publish_ip"), "192.1.1.54") .build()); - DiscoveryNodeFilters filters = DiscoveryNodeFilters.buildFromSettings(AND, "xxx.", settings); + DiscoveryNodeFilters filters = buildFromSettings(AND, "xxx.", settings); DiscoveryNode node = new DiscoveryNode("", "", "", "", "192.1.1.54", localAddress, singletonMap("tag", "A"), emptySet(), null); assertThat(filters.match(node), equalTo(false)); @@ -173,7 +174,7 @@ public void testIpBindFilteringNotMatchingAnd() { .put("xxx.tag", "A") .put("xxx." + randomFrom("_ip", "_host_ip", "_publish_ip"), "8.8.8.8") .build()); - DiscoveryNodeFilters filters = DiscoveryNodeFilters.buildFromSettings(AND, "xxx.", settings); + DiscoveryNodeFilters filters = buildFromSettings(AND, "xxx.", settings); DiscoveryNode node = new DiscoveryNode("", "", "", "", "192.1.1.54", localAddress, singletonMap("tag", "A"), emptySet(), null); assertThat(filters.match(node), equalTo(false)); @@ -184,7 +185,7 @@ public void testIpBindFilteringMatchingOr() { .put("xxx." + randomFrom("_ip", "_host_ip", "_publish_ip"), "192.1.1.54") .put("xxx.tag", "A") .build()); - DiscoveryNodeFilters filters = DiscoveryNodeFilters.buildFromSettings(OR, "xxx.", settings); + DiscoveryNodeFilters filters = buildFromSettings(OR, "xxx.", settings); DiscoveryNode node = new DiscoveryNode("", "", "", "", "192.1.1.54", localAddress, singletonMap("tag", "A"), emptySet(), null); assertThat(filters.match(node), equalTo(true)); @@ -195,7 +196,7 @@ public void testIpBindFilteringNotMatchingOr() { .put("xxx.tag", "A") .put("xxx." 
+ randomFrom("_ip", "_host_ip", "_publish_ip"), "8.8.8.8") .build()); - DiscoveryNodeFilters filters = DiscoveryNodeFilters.buildFromSettings(OR, "xxx.", settings); + DiscoveryNodeFilters filters = buildFromSettings(OR, "xxx.", settings); DiscoveryNode node = new DiscoveryNode("", "", "", "", "192.1.1.54", localAddress, singletonMap("tag", "A"), emptySet(), null); assertThat(filters.match(node), equalTo(true)); @@ -206,7 +207,7 @@ public void testIpPublishFilteringMatchingAnd() { .put("xxx.tag", "A") .put("xxx._publish_ip", "192.1.1.54") .build()); - DiscoveryNodeFilters filters = DiscoveryNodeFilters.buildFromSettings(AND, "xxx.", settings); + DiscoveryNodeFilters filters = buildFromSettings(AND, "xxx.", settings); DiscoveryNode node = new DiscoveryNode("", "", "", "", "192.1.1.54", localAddress, singletonMap("tag", "A"), emptySet(), null); assertThat(filters.match(node), equalTo(true)); @@ -217,7 +218,7 @@ public void testIpPublishFilteringNotMatchingAnd() { .put("xxx.tag", "A") .put("xxx._publish_ip", "8.8.8.8") .build()); - DiscoveryNodeFilters filters = DiscoveryNodeFilters.buildFromSettings(AND, "xxx.", settings); + DiscoveryNodeFilters filters = buildFromSettings(AND, "xxx.", settings); DiscoveryNode node = new DiscoveryNode("", "", "", "", "192.1.1.54", localAddress, singletonMap("tag", "A"), emptySet(), null); assertThat(filters.match(node), equalTo(false)); @@ -228,7 +229,7 @@ public void testIpPublishFilteringMatchingOr() { .put("xxx._publish_ip", "192.1.1.54") .put("xxx.tag", "A") .build()); - DiscoveryNodeFilters filters = DiscoveryNodeFilters.buildFromSettings(OR, "xxx.", settings); + DiscoveryNodeFilters filters = buildFromSettings(OR, "xxx.", settings); DiscoveryNode node = new DiscoveryNode("", "", "", "", "192.1.1.54", localAddress, singletonMap("tag", "A"), emptySet(), null); assertThat(filters.match(node), equalTo(true)); @@ -239,7 +240,7 @@ public void testIpPublishFilteringNotMatchingOr() { .put("xxx.tag", "A") .put("xxx._publish_ip", "8.8.8.8") .build()); - DiscoveryNodeFilters filters = DiscoveryNodeFilters.buildFromSettings(OR, "xxx.", settings); + DiscoveryNodeFilters filters = buildFromSettings(OR, "xxx.", settings); DiscoveryNode node = new DiscoveryNode("", "", "", "", "192.1.1.54", localAddress, singletonMap("tag", "A"), emptySet(), null); assertThat(filters.match(node), equalTo(true)); @@ -250,7 +251,7 @@ public void testIpPublishFilteringMatchingWildcard() { Settings settings = shuffleSettings(Settings.builder() .put("xxx._publish_ip", matches ? "192.1.*" : "192.2.*") .build()); - DiscoveryNodeFilters filters = DiscoveryNodeFilters.buildFromSettings(OR, "xxx.", settings); + DiscoveryNodeFilters filters = buildFromSettings(OR, "xxx.", settings); DiscoveryNode node = new DiscoveryNode("", "", "", "", "192.1.1.54", localAddress, emptyMap(), emptySet(), null); assertThat(filters.match(node), equalTo(matches)); @@ -263,17 +264,22 @@ public void testCommaSeparatedValuesTrimmed() { .put("xxx." 
+ randomFrom("_ip", "_host_ip", "_publish_ip"), "192.1.1.1, 192.1.1.54") .put("xxx.tag", "A, B") .build()); - DiscoveryNodeFilters filters = DiscoveryNodeFilters.buildFromSettings(OR, "xxx.", settings); + DiscoveryNodeFilters filters = buildFromSettings(OR, "xxx.", settings); assertTrue(filters.match(node)); } private Settings shuffleSettings(Settings source) { Settings.Builder settings = Settings.builder(); - List keys = new ArrayList<>(source.getAsMap().keySet()); + List keys = new ArrayList<>(source.keySet()); Collections.shuffle(keys, random()); for (String o : keys) { - settings.put(o, source.getAsMap().get(o)); + settings.put(o, source.get(o)); } return settings.build(); } + + public static DiscoveryNodeFilters buildFromSettings(DiscoveryNodeFilters.OpType opType, String prefix, Settings settings) { + Setting.AffixSetting setting = Setting.prefixKeySetting(prefix, key -> Setting.simpleString(key)); + return DiscoveryNodeFilters.buildFromKeyValue(opType, setting.getAsMap(settings)); + } } diff --git a/core/src/test/java/org/elasticsearch/cluster/routing/AllocationIdTests.java b/core/src/test/java/org/elasticsearch/cluster/routing/AllocationIdTests.java index f6b5fc1bd3fce..949d4f350080c 100644 --- a/core/src/test/java/org/elasticsearch/cluster/routing/AllocationIdTests.java +++ b/core/src/test/java/org/elasticsearch/cluster/routing/AllocationIdTests.java @@ -109,20 +109,6 @@ public void testMoveToUnassigned() { assertThat(shard.allocationId(), nullValue()); } - public void testReinitializing() { - logger.info("-- build started shard"); - ShardRouting shard = ShardRouting.newUnassigned(new ShardId("test","_na_", 0), true, StoreRecoverySource.EXISTING_STORE_INSTANCE, new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, null)); - shard = shard.initialize("node1", null, -1); - shard = shard.moveToStarted(); - AllocationId allocationId = shard.allocationId(); - - logger.info("-- reinitializing shard"); - shard = shard.reinitializePrimaryShard(); - assertThat(shard.allocationId().getId(), notNullValue()); - assertThat(shard.allocationId().getRelocationId(), nullValue()); - assertThat(shard.allocationId().getId(), equalTo(allocationId.getId())); - } - public void testSerialization() throws IOException { AllocationId allocationId = AllocationId.newInitializing(); if (randomBoolean()) { diff --git a/core/src/test/java/org/elasticsearch/cluster/routing/OperationRoutingTests.java b/core/src/test/java/org/elasticsearch/cluster/routing/OperationRoutingTests.java index be7ebd4a4c298..1f8de1ca02fd7 100644 --- a/core/src/test/java/org/elasticsearch/cluster/routing/OperationRoutingTests.java +++ b/core/src/test/java/org/elasticsearch/cluster/routing/OperationRoutingTests.java @@ -23,13 +23,13 @@ import org.elasticsearch.action.support.replication.ClusterStateCreationUtils; import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.metadata.IndexMetaData; -import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.cluster.service.ClusterService; -import org.elasticsearch.common.Nullable; import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.index.Index; import org.elasticsearch.index.shard.ShardId; +import org.elasticsearch.node.ResponseCollectorService; import org.elasticsearch.test.ClusterServiceUtils; import org.elasticsearch.test.ESTestCase; import org.elasticsearch.threadpool.TestThreadPool; @@ -52,6 +52,7 @@ public class 
OperationRoutingTests extends ESTestCase{ + public void testGenerateShardId() { int[][] possibleValues = new int[][] { {8,4,2}, {20, 10, 2}, {36, 12, 3}, {15,5,1} @@ -68,6 +69,7 @@ public void testGenerateShardId() { .numberOfReplicas(1) .setRoutingNumShards(shardSplits[0]).build(); int shrunkShard = OperationRouting.generateShardId(shrunk, term, null); + Set shardIds = IndexMetaData.selectShrinkShards(shrunkShard, metaData, shrunk.getNumberOfShards()); assertEquals(1, shardIds.stream().filter((sid) -> sid.id() == shard).count()); @@ -79,6 +81,36 @@ public void testGenerateShardId() { } } + public void testGenerateShardIdSplit() { + int[][] possibleValues = new int[][] { + {2,4,8}, {2, 10, 20}, {3, 12, 36}, {1,5,15} + }; + for (int i = 0; i < 10; i++) { + int[] shardSplits = randomFrom(possibleValues); + assertEquals(shardSplits[0], (shardSplits[0] * shardSplits[1]) / shardSplits[1]); + assertEquals(shardSplits[1], (shardSplits[1] * shardSplits[2]) / shardSplits[2]); + IndexMetaData metaData = IndexMetaData.builder("test").settings(settings(Version.CURRENT)).numberOfShards(shardSplits[0]) + .numberOfReplicas(1).setRoutingNumShards(shardSplits[2]).build(); + String term = randomAlphaOfLength(10); + final int shard = OperationRouting.generateShardId(metaData, term, null); + IndexMetaData split = IndexMetaData.builder("test").settings(settings(Version.CURRENT)).numberOfShards(shardSplits[1]) + .numberOfReplicas(1) + .setRoutingNumShards(shardSplits[2]).build(); + int shrunkShard = OperationRouting.generateShardId(split, term, null); + + ShardId shardId = IndexMetaData.selectSplitShard(shrunkShard, metaData, split.getNumberOfShards()); + assertNotNull(shardId); + assertEquals(shard, shardId.getId()); + + split = IndexMetaData.builder("test").settings(settings(Version.CURRENT)).numberOfShards(shardSplits[2]).numberOfReplicas(1) + .setRoutingNumShards(shardSplits[2]).build(); + shrunkShard = OperationRouting.generateShardId(split, term, null); + shardId = IndexMetaData.selectSplitShard(shrunkShard, metaData, split.getNumberOfShards()); + assertNotNull(shardId); + assertEquals(shard, shardId.getId()); + } + } + public void testPartitionedIndex() { // make sure the same routing value always has each _id fall within the configured partition size for (int shards = 1; shards < 5; shards++) { @@ -371,7 +403,7 @@ public void testPreferNodes() throws InterruptedException, IOException { terminate(threadPool); } } - + public void testFairSessionIdPreferences() throws InterruptedException, IOException { // Ensure that a user session is re-routed back to same nodes for // subsequent searches and that the nodes are selected fairly i.e. 
@@ -422,13 +454,13 @@ public void testFairSessionIdPreferences() throws InterruptedException, IOExcept assertThat("Search should use more than one of the nodes", selectedNodes.size(), greaterThan(1)); } } - + // Regression test for the routing logic - implements same hashing logic private ShardIterator duelGetShards(ClusterState clusterState, ShardId shardId, String sessionId) { final IndexShardRoutingTable indexShard = clusterState.getRoutingTable().shardRoutingTable(shardId.getIndexName(), shardId.getId()); int routingHash = Murmur3HashFunction.hash(sessionId); routingHash = 31 * routingHash + indexShard.shardId.hashCode(); - return indexShard.activeInitializingShardsIt(routingHash); + return indexShard.activeInitializingShardsIt(routingHash); } public void testThatOnlyNodesSupportNodeIds() throws InterruptedException, IOException { @@ -490,4 +522,90 @@ public void testThatOnlyNodesSupportNodeIds() throws InterruptedException, IOExc } } + public void testAdaptiveReplicaSelection() throws Exception { + final int numIndices = 1; + final int numShards = 1; + final int numReplicas = 2; + final String[] indexNames = new String[numIndices]; + for (int i = 0; i < numIndices; i++) { + indexNames[i] = "test" + i; + } + ClusterState state = ClusterStateCreationUtils.stateWithAssignedPrimariesAndReplicas(indexNames, numShards, numReplicas); + final int numRepeatedSearches = 4; + OperationRouting opRouting = new OperationRouting(Settings.EMPTY, + new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS)); + opRouting.setUseAdaptiveReplicaSelection(true); + List searchedShards = new ArrayList<>(numShards); + Set selectedNodes = new HashSet<>(numShards); + TestThreadPool threadPool = new TestThreadPool("testThatOnlyNodesSupportNodeIds"); + ClusterService clusterService = ClusterServiceUtils.createClusterService(threadPool); + ResponseCollectorService collector = new ResponseCollectorService(Settings.EMPTY, clusterService); + Map outstandingRequests = new HashMap<>(); + GroupShardsIterator groupIterator = opRouting.searchShards(state, + indexNames, null, null, collector, outstandingRequests); + + assertThat("One group per index shard", groupIterator.size(), equalTo(numIndices * numShards)); + + // Test that the shards use a round-robin pattern when there are no stats + assertThat(groupIterator.get(0).size(), equalTo(numReplicas + 1)); + ShardRouting firstChoice = groupIterator.get(0).nextOrNull(); + assertNotNull(firstChoice); + searchedShards.add(firstChoice); + selectedNodes.add(firstChoice.currentNodeId()); + + groupIterator = opRouting.searchShards(state, indexNames, null, null, collector, outstandingRequests); + + assertThat(groupIterator.size(), equalTo(numIndices * numShards)); + ShardRouting secondChoice = groupIterator.get(0).nextOrNull(); + assertNotNull(secondChoice); + searchedShards.add(secondChoice); + selectedNodes.add(secondChoice.currentNodeId()); + + groupIterator = opRouting.searchShards(state, indexNames, null, null, collector, outstandingRequests); + + assertThat(groupIterator.size(), equalTo(numIndices * numShards)); + ShardRouting thirdChoice = groupIterator.get(0).nextOrNull(); + assertNotNull(thirdChoice); + searchedShards.add(thirdChoice); + selectedNodes.add(thirdChoice.currentNodeId()); + + // All three shards should have been separate, because there are no stats yet so they're all ranked equally. 
+ assertThat(searchedShards.size(), equalTo(3)); + + // Now let's start adding node metrics, since that will affect which node is chosen + collector.addNodeStatistics("node_0", 2, TimeValue.timeValueMillis(200).nanos(), TimeValue.timeValueMillis(150).nanos()); + collector.addNodeStatistics("node_1", 1, TimeValue.timeValueMillis(100).nanos(), TimeValue.timeValueMillis(50).nanos()); + collector.addNodeStatistics("node_2", 1, TimeValue.timeValueMillis(200).nanos(), TimeValue.timeValueMillis(200).nanos()); + outstandingRequests.put("node_0", 1L); + outstandingRequests.put("node_1", 1L); + outstandingRequests.put("node_2", 1L); + + groupIterator = opRouting.searchShards(state, indexNames, null, null, collector, outstandingRequests); + ShardRouting shardChoice = groupIterator.get(0).nextOrNull(); + // node 1 should be the lowest ranked node to start + assertThat(shardChoice.currentNodeId(), equalTo("node_1")); + + // node 1 starts getting more loaded... + collector.addNodeStatistics("node_1", 2, TimeValue.timeValueMillis(200).nanos(), TimeValue.timeValueMillis(150).nanos()); + groupIterator = opRouting.searchShards(state, indexNames, null, null, collector, outstandingRequests); + shardChoice = groupIterator.get(0).nextOrNull(); + assertThat(shardChoice.currentNodeId(), equalTo("node_1")); + + // and more loaded... + collector.addNodeStatistics("node_1", 3, TimeValue.timeValueMillis(250).nanos(), TimeValue.timeValueMillis(200).nanos()); + groupIterator = opRouting.searchShards(state, indexNames, null, null, collector, outstandingRequests); + shardChoice = groupIterator.get(0).nextOrNull(); + assertThat(shardChoice.currentNodeId(), equalTo("node_1")); + + // and even more + collector.addNodeStatistics("node_1", 4, TimeValue.timeValueMillis(300).nanos(), TimeValue.timeValueMillis(250).nanos()); + groupIterator = opRouting.searchShards(state, indexNames, null, null, collector, outstandingRequests); + shardChoice = groupIterator.get(0).nextOrNull(); + // finally, node 2 is chosen instead + assertThat(shardChoice.currentNodeId(), equalTo("node_2")); + + IOUtils.close(clusterService); + terminate(threadPool); + } + } diff --git a/core/src/test/java/org/elasticsearch/cluster/routing/UnassignedInfoTests.java b/core/src/test/java/org/elasticsearch/cluster/routing/UnassignedInfoTests.java index 73ff7544ae2c4..b7adc66a55705 100644 --- a/core/src/test/java/org/elasticsearch/cluster/routing/UnassignedInfoTests.java +++ b/core/src/test/java/org/elasticsearch/cluster/routing/UnassignedInfoTests.java @@ -69,7 +69,8 @@ public void testReasonOrdinalOrder() { UnassignedInfo.Reason.REINITIALIZED, UnassignedInfo.Reason.REALLOCATED_REPLICA, UnassignedInfo.Reason.PRIMARY_FAILED, - UnassignedInfo.Reason.FORCED_EMPTY_PRIMARY}; + UnassignedInfo.Reason.FORCED_EMPTY_PRIMARY, + UnassignedInfo.Reason.MANUAL_ALLOCATION,}; for (int i = 0; i < order.length; i++) { assertThat(order[i].ordinal(), equalTo(i)); } } diff --git a/core/src/test/java/org/elasticsearch/cluster/routing/allocation/BalancedSingleShardTests.java b/core/src/test/java/org/elasticsearch/cluster/routing/allocation/BalancedSingleShardTests.java index a63447e845b18..405f459e99a39 100644 --- a/core/src/test/java/org/elasticsearch/cluster/routing/allocation/BalancedSingleShardTests.java +++ b/core/src/test/java/org/elasticsearch/cluster/routing/allocation/BalancedSingleShardTests.java @@ -368,8 +368,7 @@ public Decision canRebalance(ShardRouting shardRouting, RoutingAllocation alloca private RoutingAllocation newRoutingAllocation(AllocationDeciders deciders,
ClusterState state) { RoutingAllocation allocation = new RoutingAllocation( - deciders, new RoutingNodes(state, false), state, ClusterInfo.EMPTY, System.nanoTime(), false - ); + deciders, new RoutingNodes(state, false), state, ClusterInfo.EMPTY, System.nanoTime()); allocation.debugDecision(true); return allocation; } diff --git a/core/src/test/java/org/elasticsearch/cluster/routing/allocation/MaxRetryAllocationDeciderTests.java b/core/src/test/java/org/elasticsearch/cluster/routing/allocation/MaxRetryAllocationDeciderTests.java index 31e2330a600e8..b4ecb6d873d46 100644 --- a/core/src/test/java/org/elasticsearch/cluster/routing/allocation/MaxRetryAllocationDeciderTests.java +++ b/core/src/test/java/org/elasticsearch/cluster/routing/allocation/MaxRetryAllocationDeciderTests.java @@ -113,7 +113,7 @@ public void testSingleRetryOnIgnore() { assertEquals(routingTable.index("idx").shard(0).shards().get(0).state(), UNASSIGNED); assertEquals(routingTable.index("idx").shard(0).shards().get(0).unassignedInfo().getMessage(), "boom"); - // manual reroute should retry once + // manual resetting of retry count newState = strategy.reroute(clusterState, new AllocationCommands(), false, true).getClusterState(); assertThat(newState, not(equalTo(clusterState))); clusterState = newState; @@ -121,11 +121,12 @@ public void testSingleRetryOnIgnore() { clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build(); assertEquals(routingTable.index("idx").shards().size(), 1); - assertEquals(routingTable.index("idx").shard(0).shards().get(0).unassignedInfo().getNumFailedAllocations(), retries); - assertEquals(routingTable.index("idx").shard(0).shards().get(0).state(), INITIALIZING); + assertEquals(0, routingTable.index("idx").shard(0).shards().get(0).unassignedInfo().getNumFailedAllocations()); + assertEquals(INITIALIZING, routingTable.index("idx").shard(0).shards().get(0).state()); assertEquals(routingTable.index("idx").shard(0).shards().get(0).unassignedInfo().getMessage(), "boom"); - // now we go and check that we are actually stick to unassigned on the next failure ie. 
no retry + // again fail it N-1 times + for (int i = 0; i < retries-1; i++) { failedShards = Collections.singletonList( new FailedShard(routingTable.index("idx").shard(0).shards().get(0), "boom", new UnsupportedOperationException())); @@ -135,10 +136,23 @@ public void testSingleRetryOnIgnore() { clusterState = newState; routingTable = newState.routingTable(); assertEquals(routingTable.index("idx").shards().size(), 1); - assertEquals(routingTable.index("idx").shard(0).shards().get(0).unassignedInfo().getNumFailedAllocations(), retries+1); - assertEquals(routingTable.index("idx").shard(0).shards().get(0).state(), UNASSIGNED); + assertEquals(i + 1, routingTable.index("idx").shard(0).shards().get(0).unassignedInfo().getNumFailedAllocations()); + assertEquals(INITIALIZING, routingTable.index("idx").shard(0).shards().get(0).state()); assertEquals(routingTable.index("idx").shard(0).shards().get(0).unassignedInfo().getMessage(), "boom"); + } + // now we go and check that we actually stick to unassigned on the next failure + failedShards = Collections.singletonList( + new FailedShard(routingTable.index("idx").shard(0).shards().get(0), "boom", + new UnsupportedOperationException())); + newState = strategy.applyFailedShards(clusterState, failedShards); + assertThat(newState, not(equalTo(clusterState))); + clusterState = newState; + routingTable = newState.routingTable(); + assertEquals(routingTable.index("idx").shards().size(), 1); + assertEquals(retries, routingTable.index("idx").shard(0).shards().get(0).unassignedInfo().getNumFailedAllocations()); + assertEquals(UNASSIGNED, routingTable.index("idx").shard(0).shards().get(0).state()); + assertEquals("boom", routingTable.index("idx").shard(0).shards().get(0).unassignedInfo().getMessage()); } public void testFailedAllocation() { @@ -161,7 +175,7 @@ public void testFailedAllocation() { assertEquals(unassignedPrimary.unassignedInfo().getMessage(), "boom" + i); // MaxRetryAllocationDecider#canForceAllocatePrimary should return YES decisions because canAllocate returns YES here assertEquals(Decision.YES, new MaxRetryAllocationDecider(Settings.EMPTY).canForceAllocatePrimary( - unassignedPrimary, null, new RoutingAllocation(null, null, clusterState, null, 0, false))); + unassignedPrimary, null, new RoutingAllocation(null, null, clusterState, null, 0))); } // now we go and check that we are actually stick to unassigned on the next failure { @@ -179,7 +193,7 @@ public void testFailedAllocation() { assertEquals(unassignedPrimary.unassignedInfo().getMessage(), "boom"); // MaxRetryAllocationDecider#canForceAllocatePrimary should return a NO decision because canAllocate returns NO here assertEquals(Decision.NO, new MaxRetryAllocationDecider(Settings.EMPTY).canForceAllocatePrimary( - unassignedPrimary, null, new RoutingAllocation(null, null, clusterState, null, 0, false))); + unassignedPrimary, null, new RoutingAllocation(null, null, clusterState, null, 0))); } // change the settings and ensure we can do another round of allocation for that index.
@@ -201,7 +215,7 @@ public void testFailedAllocation() { assertEquals(unassignedPrimary.unassignedInfo().getMessage(), "boom"); // bumped up the max retry count, so canForceAllocatePrimary should return a YES decision assertEquals(Decision.YES, new MaxRetryAllocationDecider(Settings.EMPTY).canForceAllocatePrimary( - routingTable.index("idx").shard(0).shards().get(0), null, new RoutingAllocation(null, null, clusterState, null, 0, false))); + routingTable.index("idx").shard(0).shards().get(0), null, new RoutingAllocation(null, null, clusterState, null, 0))); // now we start the shard clusterState = strategy.applyStartedShards(clusterState, Collections.singletonList( @@ -228,7 +242,7 @@ public void testFailedAllocation() { assertEquals(unassignedPrimary.unassignedInfo().getMessage(), "ZOOOMG"); // Counter reset, so MaxRetryAllocationDecider#canForceAllocatePrimary should return a YES decision assertEquals(Decision.YES, new MaxRetryAllocationDecider(Settings.EMPTY).canForceAllocatePrimary( - unassignedPrimary, null, new RoutingAllocation(null, null, clusterState, null, 0, false))); + unassignedPrimary, null, new RoutingAllocation(null, null, clusterState, null, 0))); } } diff --git a/core/src/test/java/org/elasticsearch/cluster/routing/allocation/ResizeAllocationDeciderTests.java b/core/src/test/java/org/elasticsearch/cluster/routing/allocation/ResizeAllocationDeciderTests.java new file mode 100644 index 0000000000000..2022ecb945ba0 --- /dev/null +++ b/core/src/test/java/org/elasticsearch/cluster/routing/allocation/ResizeAllocationDeciderTests.java @@ -0,0 +1,288 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ +package org.elasticsearch.cluster.routing.allocation; + +import org.elasticsearch.Version; +import org.elasticsearch.action.admin.indices.shrink.ResizeAction; +import org.elasticsearch.cluster.ClusterName; +import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.ESAllocationTestCase; +import org.elasticsearch.cluster.EmptyClusterInfoService; +import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.cluster.metadata.MetaData; +import org.elasticsearch.cluster.node.DiscoveryNodes; +import org.elasticsearch.cluster.routing.RecoverySource; +import org.elasticsearch.cluster.routing.RoutingTable; +import org.elasticsearch.cluster.routing.ShardRouting; +import org.elasticsearch.cluster.routing.ShardRoutingState; +import org.elasticsearch.cluster.routing.TestShardRouting; +import org.elasticsearch.cluster.routing.allocation.allocator.BalancedShardsAllocator; +import org.elasticsearch.cluster.routing.allocation.decider.AllocationDeciders; +import org.elasticsearch.cluster.routing.allocation.decider.Decision; +import org.elasticsearch.cluster.routing.allocation.decider.ResizeAllocationDecider; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.Index; +import org.elasticsearch.index.shard.ShardId; +import org.elasticsearch.test.VersionUtils; +import org.elasticsearch.test.gateway.TestGatewayAllocator; + +import java.util.Arrays; +import java.util.Collections; + +import static org.elasticsearch.cluster.routing.ShardRoutingState.INITIALIZING; +import static org.elasticsearch.cluster.routing.ShardRoutingState.STARTED; +import static org.elasticsearch.cluster.routing.ShardRoutingState.UNASSIGNED; + + +public class ResizeAllocationDeciderTests extends ESAllocationTestCase { + + private AllocationService strategy; + + @Override + public void setUp() throws Exception { + super.setUp(); + strategy = new AllocationService(Settings.builder().build(), new AllocationDeciders(Settings.EMPTY, + Collections.singleton(new ResizeAllocationDecider(Settings.EMPTY))), + new TestGatewayAllocator(), new BalancedShardsAllocator(Settings.EMPTY), EmptyClusterInfoService.INSTANCE); + } + + private ClusterState createInitialClusterState(boolean startShards) { + return createInitialClusterState(startShards, Version.CURRENT); + } + + private ClusterState createInitialClusterState(boolean startShards, Version nodeVersion) { + MetaData.Builder metaBuilder = MetaData.builder(); + metaBuilder.put(IndexMetaData.builder("source").settings(settings(Version.CURRENT)) + .numberOfShards(2).numberOfReplicas(0).setRoutingNumShards(16)); + MetaData metaData = metaBuilder.build(); + RoutingTable.Builder routingTableBuilder = RoutingTable.builder(); + routingTableBuilder.addAsNew(metaData.index("source")); + + RoutingTable routingTable = routingTableBuilder.build(); + ClusterState clusterState = ClusterState.builder(ClusterName.CLUSTER_NAME_SETTING.getDefault(Settings.EMPTY)) + .metaData(metaData).routingTable(routingTable).build(); + clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().add(newNode("node1", nodeVersion)).add(newNode + ("node2", nodeVersion))) + .build(); + RoutingTable prevRoutingTable = routingTable; + routingTable = strategy.reroute(clusterState, "reroute", false).routingTable(); + clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build(); + + assertEquals(prevRoutingTable.index("source").shards().size(), 2); + 
assertEquals(prevRoutingTable.index("source").shard(0).shards().get(0).state(), UNASSIGNED); + assertEquals(prevRoutingTable.index("source").shard(1).shards().get(0).state(), UNASSIGNED); + + + assertEquals(routingTable.index("source").shards().size(), 2); + + assertEquals(routingTable.index("source").shard(0).shards().get(0).state(), INITIALIZING); + assertEquals(routingTable.index("source").shard(1).shards().get(0).state(), INITIALIZING); + + + if (startShards) { + clusterState = strategy.applyStartedShards(clusterState, + Arrays.asList(routingTable.index("source").shard(0).shards().get(0), + routingTable.index("source").shard(1).shards().get(0))); + routingTable = clusterState.routingTable(); + assertEquals(routingTable.index("source").shards().size(), 2); + assertEquals(routingTable.index("source").shard(0).shards().get(0).state(), STARTED); + assertEquals(routingTable.index("source").shard(1).shards().get(0).state(), STARTED); + + } + return clusterState; + } + + public void testNonResizeRouting() { + ClusterState clusterState = createInitialClusterState(true); + ResizeAllocationDecider resizeAllocationDecider = new ResizeAllocationDecider(Settings.EMPTY); + RoutingAllocation routingAllocation = new RoutingAllocation(null, null, clusterState, null, 0); + ShardRouting shardRouting = TestShardRouting.newShardRouting("non-resize", 0, null, true, ShardRoutingState.UNASSIGNED); + assertEquals(Decision.ALWAYS, resizeAllocationDecider.canAllocate(shardRouting, routingAllocation)); + assertEquals(Decision.ALWAYS, resizeAllocationDecider.canAllocate(shardRouting, clusterState.getRoutingNodes().node("node1"), + routingAllocation)); + } + + public void testShrink() { // we don't handle shrink yet + ClusterState clusterState = createInitialClusterState(true); + MetaData.Builder metaBuilder = MetaData.builder(clusterState.metaData()); + metaBuilder.put(IndexMetaData.builder("target").settings(settings(Version.CURRENT) + .put(IndexMetaData.INDEX_RESIZE_SOURCE_NAME.getKey(), "source") + .put(IndexMetaData.INDEX_RESIZE_SOURCE_UUID_KEY, IndexMetaData.INDEX_UUID_NA_VALUE)) + .numberOfShards(1).numberOfReplicas(0)); + MetaData metaData = metaBuilder.build(); + RoutingTable.Builder routingTableBuilder = RoutingTable.builder(clusterState.routingTable()); + routingTableBuilder.addAsNew(metaData.index("target")); + + clusterState = ClusterState.builder(clusterState) + .routingTable(routingTableBuilder.build()) + .metaData(metaData).build(); + Index idx = clusterState.metaData().index("target").getIndex(); + + ResizeAllocationDecider resizeAllocationDecider = new ResizeAllocationDecider(Settings.EMPTY); + RoutingAllocation routingAllocation = new RoutingAllocation(null, null, clusterState, null, 0); + ShardRouting shardRouting = TestShardRouting.newShardRouting(new ShardId(idx, 0), null, true, RecoverySource + .LocalShardsRecoverySource.INSTANCE, ShardRoutingState.UNASSIGNED); + assertEquals(Decision.ALWAYS, resizeAllocationDecider.canAllocate(shardRouting, routingAllocation)); + assertEquals(Decision.ALWAYS, resizeAllocationDecider.canAllocate(shardRouting, clusterState.getRoutingNodes().node("node1"), + routingAllocation)); + assertEquals(Decision.ALWAYS, resizeAllocationDecider.canAllocate(shardRouting, clusterState.getRoutingNodes().node("node2"), + routingAllocation)); + } + + public void testSourceNotActive() { + ClusterState clusterState = createInitialClusterState(false); + MetaData.Builder metaBuilder = MetaData.builder(clusterState.metaData()); + 
metaBuilder.put(IndexMetaData.builder("target").settings(settings(Version.CURRENT) + .put(IndexMetaData.INDEX_RESIZE_SOURCE_NAME.getKey(), "source") + .put(IndexMetaData.INDEX_RESIZE_SOURCE_UUID_KEY, IndexMetaData.INDEX_UUID_NA_VALUE)) + .numberOfShards(4).numberOfReplicas(0)); + MetaData metaData = metaBuilder.build(); + RoutingTable.Builder routingTableBuilder = RoutingTable.builder(clusterState.routingTable()); + routingTableBuilder.addAsNew(metaData.index("target")); + + clusterState = ClusterState.builder(clusterState) + .routingTable(routingTableBuilder.build()) + .metaData(metaData).build(); + Index idx = clusterState.metaData().index("target").getIndex(); + + + ResizeAllocationDecider resizeAllocationDecider = new ResizeAllocationDecider(Settings.EMPTY); + RoutingAllocation routingAllocation = new RoutingAllocation(null, clusterState.getRoutingNodes(), clusterState, null, 0); + int shardId = randomIntBetween(0, 3); + int sourceShardId = IndexMetaData.selectSplitShard(shardId, clusterState.metaData().index("source"), 4).id(); + ShardRouting shardRouting = TestShardRouting.newShardRouting(new ShardId(idx, shardId), null, true, RecoverySource + .LocalShardsRecoverySource.INSTANCE, ShardRoutingState.UNASSIGNED); + assertEquals(Decision.NO, resizeAllocationDecider.canAllocate(shardRouting, routingAllocation)); + assertEquals(Decision.NO, resizeAllocationDecider.canAllocate(shardRouting, clusterState.getRoutingNodes().node("node1"), + routingAllocation)); + assertEquals(Decision.NO, resizeAllocationDecider.canAllocate(shardRouting, clusterState.getRoutingNodes().node("node2"), + routingAllocation)); + + routingAllocation.debugDecision(true); + assertEquals("source primary shard [[source][" + sourceShardId + "]] is not active", + resizeAllocationDecider.canAllocate(shardRouting, routingAllocation).getExplanation()); + assertEquals("source primary shard [[source][" + sourceShardId + "]] is not active", + resizeAllocationDecider.canAllocate(shardRouting, clusterState.getRoutingNodes().node("node0"), + routingAllocation).getExplanation()); + assertEquals("source primary shard [[source][" + sourceShardId + "]] is not active", + resizeAllocationDecider.canAllocate(shardRouting, clusterState.getRoutingNodes().node("node1"), + routingAllocation).getExplanation()); + } + + public void testSourcePrimaryActive() { + ClusterState clusterState = createInitialClusterState(true); + MetaData.Builder metaBuilder = MetaData.builder(clusterState.metaData()); + metaBuilder.put(IndexMetaData.builder("target").settings(settings(Version.CURRENT) + .put(IndexMetaData.INDEX_RESIZE_SOURCE_NAME.getKey(), "source") + .put(IndexMetaData.INDEX_RESIZE_SOURCE_UUID_KEY, IndexMetaData.INDEX_UUID_NA_VALUE)) + .numberOfShards(4).numberOfReplicas(0)); + MetaData metaData = metaBuilder.build(); + RoutingTable.Builder routingTableBuilder = RoutingTable.builder(clusterState.routingTable()); + routingTableBuilder.addAsNew(metaData.index("target")); + + clusterState = ClusterState.builder(clusterState) + .routingTable(routingTableBuilder.build()) + .metaData(metaData).build(); + Index idx = clusterState.metaData().index("target").getIndex(); + + + ResizeAllocationDecider resizeAllocationDecider = new ResizeAllocationDecider(Settings.EMPTY); + RoutingAllocation routingAllocation = new RoutingAllocation(null, clusterState.getRoutingNodes(), clusterState, null, 0); + int shardId = randomIntBetween(0, 3); + int sourceShardId = IndexMetaData.selectSplitShard(shardId, clusterState.metaData().index("source"), 4).id(); + ShardRouting 
shardRouting = TestShardRouting.newShardRouting(new ShardId(idx, shardId), null, true, RecoverySource + .LocalShardsRecoverySource.INSTANCE, ShardRoutingState.UNASSIGNED); + assertEquals(Decision.YES, resizeAllocationDecider.canAllocate(shardRouting, routingAllocation)); + + String allowedNode = clusterState.getRoutingTable().index("source").shard(sourceShardId).primaryShard().currentNodeId(); + + if ("node1".equals(allowedNode)) { + assertEquals(Decision.YES, resizeAllocationDecider.canAllocate(shardRouting, clusterState.getRoutingNodes().node("node1"), + routingAllocation)); + assertEquals(Decision.NO, resizeAllocationDecider.canAllocate(shardRouting, clusterState.getRoutingNodes().node("node2"), + routingAllocation)); + } else { + assertEquals(Decision.NO, resizeAllocationDecider.canAllocate(shardRouting, clusterState.getRoutingNodes().node("node1"), + routingAllocation)); + assertEquals(Decision.YES, resizeAllocationDecider.canAllocate(shardRouting, clusterState.getRoutingNodes().node("node2"), + routingAllocation)); + } + + routingAllocation.debugDecision(true); + assertEquals("source primary is active", resizeAllocationDecider.canAllocate(shardRouting, routingAllocation).getExplanation()); + + if ("node1".equals(allowedNode)) { + assertEquals("source primary is allocated on this node", + resizeAllocationDecider.canAllocate(shardRouting, clusterState.getRoutingNodes().node("node1"), + routingAllocation).getExplanation()); + assertEquals("source primary is allocated on another node", + resizeAllocationDecider.canAllocate(shardRouting, clusterState.getRoutingNodes().node("node2"), + routingAllocation).getExplanation()); + } else { + assertEquals("source primary is allocated on another node", + resizeAllocationDecider.canAllocate(shardRouting, clusterState.getRoutingNodes().node("node1"), + routingAllocation).getExplanation()); + assertEquals("source primary is allocated on this node", + resizeAllocationDecider.canAllocate(shardRouting, clusterState.getRoutingNodes().node("node2"), + routingAllocation).getExplanation()); + } + } + + public void testAllocateOnOldNode() { + Version version = VersionUtils.randomVersionBetween(random(), Version.V_5_0_0, + VersionUtils.getPreviousVersion(ResizeAction.COMPATIBILITY_VERSION)); + ClusterState clusterState = createInitialClusterState(true, version); + MetaData.Builder metaBuilder = MetaData.builder(clusterState.metaData()); + metaBuilder.put(IndexMetaData.builder("target").settings(settings(Version.CURRENT) + .put(IndexMetaData.INDEX_RESIZE_SOURCE_NAME.getKey(), "source") + .put(IndexMetaData.INDEX_RESIZE_SOURCE_UUID_KEY, IndexMetaData.INDEX_UUID_NA_VALUE)) + .numberOfShards(4).numberOfReplicas(0)); + MetaData metaData = metaBuilder.build(); + RoutingTable.Builder routingTableBuilder = RoutingTable.builder(clusterState.routingTable()); + routingTableBuilder.addAsNew(metaData.index("target")); + + clusterState = ClusterState.builder(clusterState) + .routingTable(routingTableBuilder.build()) + .metaData(metaData).build(); + Index idx = clusterState.metaData().index("target").getIndex(); + + + ResizeAllocationDecider resizeAllocationDecider = new ResizeAllocationDecider(Settings.EMPTY); + RoutingAllocation routingAllocation = new RoutingAllocation(null, clusterState.getRoutingNodes(), clusterState, null, 0); + int shardId = randomIntBetween(0, 3); + int sourceShardId = IndexMetaData.selectSplitShard(shardId, clusterState.metaData().index("source"), 4).id(); + ShardRouting shardRouting = TestShardRouting.newShardRouting(new ShardId(idx, shardId), 
null, true, RecoverySource + .LocalShardsRecoverySource.INSTANCE, ShardRoutingState.UNASSIGNED); + assertEquals(Decision.YES, resizeAllocationDecider.canAllocate(shardRouting, routingAllocation)); + + assertEquals(Decision.NO, resizeAllocationDecider.canAllocate(shardRouting, clusterState.getRoutingNodes().node("node1"), + routingAllocation)); + assertEquals(Decision.NO, resizeAllocationDecider.canAllocate(shardRouting, clusterState.getRoutingNodes().node("node2"), + routingAllocation)); + + routingAllocation.debugDecision(true); + assertEquals("source primary is active", resizeAllocationDecider.canAllocate(shardRouting, routingAllocation).getExplanation()); + assertEquals("node [node1] is too old to split a shard", + resizeAllocationDecider.canAllocate(shardRouting, clusterState.getRoutingNodes().node("node1"), + routingAllocation).getExplanation()); + assertEquals("node [node2] is too old to split a shard", + resizeAllocationDecider.canAllocate(shardRouting, clusterState.getRoutingNodes().node("node2"), + routingAllocation).getExplanation()); + } +} diff --git a/core/src/test/java/org/elasticsearch/cluster/routing/allocation/SameShardRoutingTests.java b/core/src/test/java/org/elasticsearch/cluster/routing/allocation/SameShardRoutingTests.java index 73332fcdce9d0..4b74cee867138 100644 --- a/core/src/test/java/org/elasticsearch/cluster/routing/allocation/SameShardRoutingTests.java +++ b/core/src/test/java/org/elasticsearch/cluster/routing/allocation/SameShardRoutingTests.java @@ -106,8 +106,7 @@ public void testForceAllocatePrimaryOnSameNodeNotAllowed() { ShardRouting primaryShard = clusterState.routingTable().index(index).shard(0).primaryShard(); RoutingNode routingNode = clusterState.getRoutingNodes().node(primaryShard.currentNodeId()); RoutingAllocation routingAllocation = new RoutingAllocation(new AllocationDeciders(Settings.EMPTY, Collections.emptyList()), - new RoutingNodes(clusterState, false), clusterState, ClusterInfo.EMPTY, System.nanoTime(), false - ); + new RoutingNodes(clusterState, false), clusterState, ClusterInfo.EMPTY, System.nanoTime()); // can't force allocate same shard copy to the same node ShardRouting newPrimary = TestShardRouting.newShardRouting(primaryShard.shardId(), null, true, ShardRoutingState.UNASSIGNED); diff --git a/core/src/test/java/org/elasticsearch/cluster/routing/allocation/ShardStateIT.java b/core/src/test/java/org/elasticsearch/cluster/routing/allocation/ShardStateIT.java index de82e72f9dc2f..aa77d7b4bf92f 100644 --- a/core/src/test/java/org/elasticsearch/cluster/routing/allocation/ShardStateIT.java +++ b/core/src/test/java/org/elasticsearch/cluster/routing/allocation/ShardStateIT.java @@ -21,6 +21,7 @@ import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.health.ClusterHealthStatus; import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.IndexService; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.indices.IndicesService; @@ -32,7 +33,8 @@ public class ShardStateIT extends ESIntegTestCase { public void testPrimaryFailureIncreasesTerm() throws Exception { internalCluster().ensureAtLeastNumDataNodes(2); - prepareCreate("test").setSettings(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 2, IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1).get(); + prepareCreate("test").setSettings(Settings.builder().put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 2) + .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1)).get(); ensureGreen(); 
assertPrimaryTerms(1, 1); diff --git a/core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/DiskThresholdDeciderTests.java b/core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/DiskThresholdDeciderTests.java index 2c7df5fee20de..58d19fb61cf05 100644 --- a/core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/DiskThresholdDeciderTests.java +++ b/core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/DiskThresholdDeciderTests.java @@ -841,7 +841,7 @@ public void testCanRemainWithShardRelocatingAway() { ); ClusterState clusterState = ClusterState.builder(baseClusterState).routingTable(builder.build()).build(); RoutingAllocation routingAllocation = new RoutingAllocation(null, new RoutingNodes(clusterState), clusterState, clusterInfo, - System.nanoTime(), false); + System.nanoTime()); routingAllocation.debugDecision(true); Decision decision = diskThresholdDecider.canRemain(firstRouting, firstRoutingNode, routingAllocation); assertThat(decision.type(), equalTo(Decision.Type.NO)); @@ -867,8 +867,7 @@ public void testCanRemainWithShardRelocatingAway() { ) ); clusterState = ClusterState.builder(baseClusterState).routingTable(builder.build()).build(); - routingAllocation = new RoutingAllocation(null, new RoutingNodes(clusterState), clusterState, clusterInfo, System.nanoTime(), - false); + routingAllocation = new RoutingAllocation(null, new RoutingNodes(clusterState), clusterState, clusterInfo, System.nanoTime()); routingAllocation.debugDecision(true); decision = diskThresholdDecider.canRemain(firstRouting, firstRoutingNode, routingAllocation); assertThat(decision.type(), equalTo(Decision.Type.YES)); @@ -976,7 +975,7 @@ public void testForSingleDataNode() { ); ClusterState clusterState = ClusterState.builder(baseClusterState).routingTable(builder.build()).build(); RoutingAllocation routingAllocation = new RoutingAllocation(null, new RoutingNodes(clusterState), clusterState, clusterInfo, - System.nanoTime(), false); + System.nanoTime()); routingAllocation.debugDecision(true); Decision decision = diskThresholdDecider.canRemain(firstRouting, firstRoutingNode, routingAllocation); @@ -1036,8 +1035,7 @@ Settings.EMPTY, new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLU ); clusterState = ClusterState.builder(updateClusterState).routingTable(builder.build()).build(); - routingAllocation = new RoutingAllocation(null, new RoutingNodes(clusterState), clusterState, clusterInfo, System.nanoTime(), - false); + routingAllocation = new RoutingAllocation(null, new RoutingNodes(clusterState), clusterState, clusterInfo, System.nanoTime()); routingAllocation.debugDecision(true); decision = diskThresholdDecider.canRemain(firstRouting, firstRoutingNode, routingAllocation); assertThat(decision.type(), equalTo(Decision.Type.YES)); diff --git a/core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/DiskThresholdDeciderUnitTests.java b/core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/DiskThresholdDeciderUnitTests.java index 7379ee78d03bd..3676ca8bd6e85 100644 --- a/core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/DiskThresholdDeciderUnitTests.java +++ b/core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/DiskThresholdDeciderUnitTests.java @@ -98,7 +98,7 @@ public void testCanAllocateUsesMaxAvailableSpace() { ImmutableOpenMap.Builder shardSizes = ImmutableOpenMap.builder(); shardSizes.put("[test][0][p]", 10L); // 10 bytes final ClusterInfo clusterInfo = new 
ClusterInfo(leastAvailableUsages.build(), mostAvailableUsage.build(), shardSizes.build(), ImmutableOpenMap.of()); - RoutingAllocation allocation = new RoutingAllocation(new AllocationDeciders(Settings.EMPTY, Collections.singleton(decider)), clusterState.getRoutingNodes(), clusterState, clusterInfo, System.nanoTime(), false); + RoutingAllocation allocation = new RoutingAllocation(new AllocationDeciders(Settings.EMPTY, Collections.singleton(decider)), clusterState.getRoutingNodes(), clusterState, clusterInfo, System.nanoTime()); allocation.debugDecision(true); Decision decision = decider.canAllocate(test_0, new RoutingNode("node_0", node_0), allocation); assertEquals(mostAvailableUsage.toString(), Decision.Type.YES, decision.type()); @@ -172,7 +172,7 @@ public void testCanRemainUsesLeastAvailableSpace() { shardSizes.put("[test][2][p]", 10L); final ClusterInfo clusterInfo = new ClusterInfo(leastAvailableUsages.build(), mostAvailableUsage.build(), shardSizes.build(), shardRoutingMap.build()); - RoutingAllocation allocation = new RoutingAllocation(new AllocationDeciders(Settings.EMPTY, Collections.singleton(decider)), clusterState.getRoutingNodes(), clusterState, clusterInfo, System.nanoTime(), false); + RoutingAllocation allocation = new RoutingAllocation(new AllocationDeciders(Settings.EMPTY, Collections.singleton(decider)), clusterState.getRoutingNodes(), clusterState, clusterInfo, System.nanoTime()); allocation.debugDecision(true); Decision decision = decider.canRemain(test_0, new RoutingNode("node_0", node_0), allocation); assertEquals(Decision.Type.YES, decision.type()); @@ -224,7 +224,7 @@ public void testShardSizeAndRelocatingSize() { routingTableBuilder.addAsNew(metaData.index("other")); ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.CLUSTER_NAME_SETTING.getDefault(Settings.EMPTY)) .metaData(metaData).routingTable(routingTableBuilder.build()).build(); - RoutingAllocation allocation = new RoutingAllocation(null, null, clusterState, info, 0, false); + RoutingAllocation allocation = new RoutingAllocation(null, null, clusterState, info, 0); final Index index = new Index("test", "1234"); ShardRouting test_0 = ShardRouting.newUnassigned(new ShardId(index, 0), false, PeerRecoverySource.INSTANCE, new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, "foo")); @@ -305,7 +305,7 @@ public void testSizeShrinkIndex() { clusterState = allocationService.applyStartedShards(clusterState, clusterState.getRoutingTable().index("test").shardsWithState(ShardRoutingState.UNASSIGNED)); - RoutingAllocation allocation = new RoutingAllocation(null, clusterState.getRoutingNodes(), clusterState, info, 0, false); + RoutingAllocation allocation = new RoutingAllocation(null, clusterState.getRoutingNodes(), clusterState, info, 0); final Index index = new Index("test", "1234"); ShardRouting test_0 = ShardRouting.newUnassigned(new ShardId(index, 0), true, diff --git a/core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/FilterAllocationDeciderTests.java b/core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/FilterAllocationDeciderTests.java index 711e8af13db2a..c4105771229bc 100644 --- a/core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/FilterAllocationDeciderTests.java +++ b/core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/FilterAllocationDeciderTests.java @@ -73,7 +73,7 @@ public void testFilterInitialRecovery() { // after failing the shard we are unassigned since the node is blacklisted and we 
can't initialize on the other node RoutingAllocation allocation = new RoutingAllocation(allocationDeciders, state.getRoutingNodes(), state, - null, 0, false); + null, 0); allocation.debugDecision(true); Decision.Single decision = (Decision.Single) filterAllocationDecider.canAllocate( routingTable.index("idx").shard(0).primaryShard(), @@ -124,7 +124,7 @@ public void testFilterInitialRecovery() { assertEquals(routingTable.index("idx").shard(0).primaryShard().currentNodeId(), "node1"); allocation = new RoutingAllocation(allocationDeciders, state.getRoutingNodes(), state, - null, 0, false); + null, 0); allocation.debugDecision(true); decision = (Decision.Single) filterAllocationDecider.canAllocate( routingTable.index("idx").shard(0).shards().get(0), @@ -183,7 +183,7 @@ private ClusterState createInitialClusterState(AllocationService service, Settin public void testInvalidIPFilter() { String ipKey = randomFrom("_ip", "_host_ip", "_publish_ip"); - Setting filterSetting = randomFrom(IndexMetaData.INDEX_ROUTING_REQUIRE_GROUP_SETTING, + Setting filterSetting = randomFrom(IndexMetaData.INDEX_ROUTING_REQUIRE_GROUP_SETTING, IndexMetaData.INDEX_ROUTING_INCLUDE_GROUP_SETTING, IndexMetaData.INDEX_ROUTING_EXCLUDE_GROUP_SETTING); String invalidIP = randomFrom("192..168.1.1", "192.300.1.1"); IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> { @@ -191,12 +191,12 @@ public void testInvalidIPFilter() { indexScopedSettings.updateDynamicSettings(Settings.builder().put(filterSetting.getKey() + ipKey, invalidIP).build(), Settings.builder().put(Settings.EMPTY), Settings.builder(), "test ip validation"); }); - assertEquals("invalid IP address [" + invalidIP + "] for [" + ipKey + "]", e.getMessage()); + assertEquals("invalid IP address [" + invalidIP + "] for [" + filterSetting.getKey() + ipKey + "]", e.getMessage()); } public void testWildcardIPFilter() { String ipKey = randomFrom("_ip", "_host_ip", "_publish_ip"); - Setting filterSetting = randomFrom(IndexMetaData.INDEX_ROUTING_REQUIRE_GROUP_SETTING, + Setting filterSetting = randomFrom(IndexMetaData.INDEX_ROUTING_REQUIRE_GROUP_SETTING, IndexMetaData.INDEX_ROUTING_INCLUDE_GROUP_SETTING, IndexMetaData.INDEX_ROUTING_EXCLUDE_GROUP_SETTING); String wildcardIP = randomFrom("192.168.*", "192.*.1.1"); IndexScopedSettings indexScopedSettings = new IndexScopedSettings(Settings.EMPTY, IndexScopedSettings.BUILT_IN_INDEX_SETTINGS); diff --git a/core/src/test/java/org/elasticsearch/cluster/settings/ClusterSettingsIT.java b/core/src/test/java/org/elasticsearch/cluster/settings/ClusterSettingsIT.java index 61e31666f3492..cdcaf4a1b9c20 100644 --- a/core/src/test/java/org/elasticsearch/cluster/settings/ClusterSettingsIT.java +++ b/core/src/test/java/org/elasticsearch/cluster/settings/ClusterSettingsIT.java @@ -25,7 +25,6 @@ import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider; import org.elasticsearch.common.logging.ESLoggerFactory; -import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.ByteSizeUnit; import org.elasticsearch.discovery.Discovery; @@ -33,10 +32,10 @@ import org.elasticsearch.discovery.zen.ZenDiscovery; import org.elasticsearch.indices.recovery.RecoverySettings; import org.elasticsearch.test.ESIntegTestCase; -import org.elasticsearch.test.ESIntegTestCase.ClusterScope; import org.junit.After; -import static org.elasticsearch.test.ESIntegTestCase.Scope.TEST; +import 
java.util.Arrays; + import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertBlocked; import static org.hamcrest.Matchers.containsString; @@ -63,7 +62,7 @@ public void testClusterNonExistingSettingsUpdate() { .get(); fail("bogus value"); } catch (IllegalArgumentException ex) { - assertEquals(ex.getMessage(), "transient setting [no_idea_what_you_are_talking_about], not dynamically updateable"); + assertEquals("transient setting [no_idea_what_you_are_talking_about], not recognized", ex.getMessage()); } } @@ -81,7 +80,7 @@ public void testDeleteIsAppliedFirst() { .get(); assertAcked(response); - assertEquals(response.getTransientSettings().getAsMap().get(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.getKey()), "1s"); + assertEquals(response.getTransientSettings().get(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.getKey()), "1s"); assertTrue(DiscoverySettings.PUBLISH_DIFF_ENABLE_SETTING.get(Settings.EMPTY)); assertFalse(response.getTransientSettings().getAsBoolean(DiscoverySettings.PUBLISH_DIFF_ENABLE_SETTING.getKey(), null)); @@ -89,7 +88,7 @@ public void testDeleteIsAppliedFirst() { .prepareUpdateSettings() .setTransientSettings(Settings.builder().putNull((randomBoolean() ? "discovery.zen.*" : "*")).put(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.getKey(), "2s")) .get(); - assertEquals(response.getTransientSettings().getAsMap().get(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.getKey()), "2s"); + assertEquals(response.getTransientSettings().get(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.getKey()), "2s"); assertNull(response.getTransientSettings().getAsBoolean(DiscoverySettings.PUBLISH_DIFF_ENABLE_SETTING.getKey(), null)); } @@ -105,7 +104,7 @@ public void testResetClusterSetting() { .get(); assertAcked(response); - assertThat(response.getTransientSettings().getAsMap().get(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.getKey()), equalTo("1s")); + assertThat(response.getTransientSettings().get(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.getKey()), equalTo("1s")); assertThat(discoverySettings.getPublishTimeout().seconds(), equalTo(1L)); assertThat(discoverySettings.getPublishDiff(), equalTo(DiscoverySettings.PUBLISH_DIFF_ENABLE_SETTING.get(Settings.EMPTY))); @@ -116,7 +115,7 @@ public void testResetClusterSetting() { .get(); assertAcked(response); - assertNull(response.getTransientSettings().getAsMap().get(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.getKey())); + assertNull(response.getTransientSettings().get(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.getKey())); assertThat(discoverySettings.getPublishTimeout(), equalTo(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.get(Settings.EMPTY))); assertThat(discoverySettings.getPublishDiff(), equalTo(DiscoverySettings.PUBLISH_DIFF_ENABLE_SETTING.get(Settings.EMPTY))); @@ -128,7 +127,7 @@ public void testResetClusterSetting() { .get(); assertAcked(response); - assertThat(response.getTransientSettings().getAsMap().get(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.getKey()), equalTo("1s")); + assertThat(response.getTransientSettings().get(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.getKey()), equalTo("1s")); assertThat(discoverySettings.getPublishTimeout().seconds(), equalTo(1L)); assertFalse(discoverySettings.getPublishDiff()); response = client().admin().cluster() @@ -136,8 +135,8 @@ public void testResetClusterSetting() { .setTransientSettings(Settings.builder().putNull((randomBoolean() ? 
"discovery.zen.*" : "*"))) .get(); - assertNull(response.getTransientSettings().getAsMap().get(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.getKey())); - assertNull(response.getTransientSettings().getAsMap().get(DiscoverySettings.PUBLISH_DIFF_ENABLE_SETTING.getKey())); + assertNull(response.getTransientSettings().get(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.getKey())); + assertNull(response.getTransientSettings().get(DiscoverySettings.PUBLISH_DIFF_ENABLE_SETTING.getKey())); assertThat(discoverySettings.getPublishTimeout(), equalTo(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.get(Settings.EMPTY))); assertThat(discoverySettings.getPublishDiff(), equalTo(DiscoverySettings.PUBLISH_DIFF_ENABLE_SETTING.get(Settings.EMPTY))); @@ -148,7 +147,7 @@ public void testResetClusterSetting() { .get(); assertAcked(response); - assertThat(response.getPersistentSettings().getAsMap().get(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.getKey()), equalTo("1s")); + assertThat(response.getPersistentSettings().get(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.getKey()), equalTo("1s")); assertThat(discoverySettings.getPublishTimeout().seconds(), equalTo(1L)); assertThat(discoverySettings.getPublishDiff(), equalTo(DiscoverySettings.PUBLISH_DIFF_ENABLE_SETTING.get(Settings.EMPTY))); @@ -159,7 +158,7 @@ public void testResetClusterSetting() { .get(); assertAcked(response); - assertNull(response.getPersistentSettings().getAsMap().get(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.getKey())); + assertNull(response.getPersistentSettings().get(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.getKey())); assertThat(discoverySettings.getPublishTimeout(), equalTo(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.get(Settings.EMPTY))); assertThat(discoverySettings.getPublishDiff(), equalTo(DiscoverySettings.PUBLISH_DIFF_ENABLE_SETTING.get(Settings.EMPTY))); @@ -172,7 +171,7 @@ public void testResetClusterSetting() { .get(); assertAcked(response); - assertThat(response.getPersistentSettings().getAsMap().get(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.getKey()), equalTo("1s")); + assertThat(response.getPersistentSettings().get(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.getKey()), equalTo("1s")); assertThat(discoverySettings.getPublishTimeout().seconds(), equalTo(1L)); assertFalse(discoverySettings.getPublishDiff()); response = client().admin().cluster() @@ -180,8 +179,8 @@ public void testResetClusterSetting() { .setPersistentSettings(Settings.builder().putNull((randomBoolean() ? 
"discovery.zen.*" : "*"))) .get(); - assertNull(response.getPersistentSettings().getAsMap().get(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.getKey())); - assertNull(response.getPersistentSettings().getAsMap().get(DiscoverySettings.PUBLISH_DIFF_ENABLE_SETTING.getKey())); + assertNull(response.getPersistentSettings().get(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.getKey())); + assertNull(response.getPersistentSettings().get(DiscoverySettings.PUBLISH_DIFF_ENABLE_SETTING.getKey())); assertThat(discoverySettings.getPublishTimeout(), equalTo(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.get(Settings.EMPTY))); assertThat(discoverySettings.getPublishDiff(), equalTo(DiscoverySettings.PUBLISH_DIFF_ENABLE_SETTING.get(Settings.EMPTY))); } @@ -245,11 +244,11 @@ public void testClusterSettingsUpdateResponse() { public void testCanUpdateTracerSettings() { ClusterUpdateSettingsResponse clusterUpdateSettingsResponse = client().admin().cluster() .prepareUpdateSettings() - .setTransientSettings(Settings.builder().putArray("transport.tracer.include", "internal:index/shard/recovery/*", + .setTransientSettings(Settings.builder().putList("transport.tracer.include", "internal:index/shard/recovery/*", "internal:gateway/local*")) .get(); - assertArrayEquals(clusterUpdateSettingsResponse.getTransientSettings().getAsArray("transport.tracer.include"), new String[] {"internal:index/shard/recovery/*", - "internal:gateway/local*"}); + assertEquals(clusterUpdateSettingsResponse.getTransientSettings().getAsList("transport.tracer.include"), + Arrays.asList("internal:index/shard/recovery/*", "internal:gateway/local*")); } public void testUpdateDiscoveryPublishTimeout() { @@ -264,7 +263,7 @@ public void testUpdateDiscoveryPublishTimeout() { .get(); assertAcked(response); - assertThat(response.getTransientSettings().getAsMap().get(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.getKey()), equalTo("1s")); + assertThat(response.getTransientSettings().get(DiscoverySettings.PUBLISH_TIMEOUT_SETTING.getKey()), equalTo("1s")); assertThat(discoverySettings.getPublishTimeout().seconds(), equalTo(1L)); try { diff --git a/core/src/test/java/org/elasticsearch/cluster/structure/RoutingIteratorTests.java b/core/src/test/java/org/elasticsearch/cluster/structure/RoutingIteratorTests.java index 172bcd6bd558b..6fd11aa91dce6 100644 --- a/core/src/test/java/org/elasticsearch/cluster/structure/RoutingIteratorTests.java +++ b/core/src/test/java/org/elasticsearch/cluster/structure/RoutingIteratorTests.java @@ -50,6 +50,7 @@ import static java.util.Collections.unmodifiableMap; import static org.elasticsearch.cluster.routing.ShardRoutingState.INITIALIZING; import static org.hamcrest.Matchers.anyOf; +import static org.hamcrest.Matchers.containsString; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.not; import static org.hamcrest.Matchers.notNullValue; @@ -415,10 +416,6 @@ public void testShardsAndPreferNodeRouting() { } public void testReplicaShardPreferenceIters() throws Exception { - AllocationService strategy = createAllocationService(Settings.builder() - .put("cluster.routing.allocation.node_concurrent_recoveries", 10) - .build()); - OperationRouting operationRouting = new OperationRouting(Settings.EMPTY, new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS)); @@ -430,69 +427,22 @@ public void testReplicaShardPreferenceIters() throws Exception { .addAsNew(metaData.index("test")) .build(); - ClusterState clusterState = 
ClusterState.builder(ClusterName.CLUSTER_NAME_SETTING.getDefault(Settings.EMPTY)).metaData(metaData).routingTable(routingTable).build(); - - clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder() - .add(newNode("node1")) - .add(newNode("node2")) - .add(newNode("node3")) - .localNodeId("node1") - ).build(); - clusterState = strategy.reroute(clusterState, "reroute"); - - clusterState = strategy.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING)); - - // When replicas haven't initialized, it comes back with the primary first, then initializing replicas - GroupShardsIterator shardIterators = operationRouting.searchShards(clusterState, new String[]{"test"}, null, "_replica_first"); - assertThat(shardIterators.size(), equalTo(2)); // two potential shards - ShardIterator iter = shardIterators.iterator().next(); - assertThat(iter.size(), equalTo(3)); // three potential candidates for the shard - ShardRouting routing = iter.nextOrNull(); - assertNotNull(routing); - assertThat(routing.shardId().id(), anyOf(equalTo(0), equalTo(1))); - assertTrue(routing.primary()); // replicas haven't initialized yet, so primary is first - assertTrue(routing.started()); - routing = iter.nextOrNull(); - assertThat(routing.shardId().id(), anyOf(equalTo(0), equalTo(1))); - assertFalse(routing.primary()); - assertTrue(routing.initializing()); - routing = iter.nextOrNull(); - assertThat(routing.shardId().id(), anyOf(equalTo(0), equalTo(1))); - assertFalse(routing.primary()); - assertTrue(routing.initializing()); - - clusterState = strategy.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING)); - - clusterState = strategy.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING)); - - - shardIterators = operationRouting.searchShards(clusterState, new String[]{"test"}, null, "_replica"); - assertThat(shardIterators.size(), equalTo(2)); // two potential shards - iter = shardIterators.iterator().next(); - assertThat(iter.size(), equalTo(2)); // two potential replicas for the shard - routing = iter.nextOrNull(); - assertNotNull(routing); - assertThat(routing.shardId().id(), anyOf(equalTo(0), equalTo(1))); - assertFalse(routing.primary()); - routing = iter.nextOrNull(); - assertThat(routing.shardId().id(), anyOf(equalTo(0), equalTo(1))); - assertFalse(routing.primary()); - - shardIterators = operationRouting.searchShards(clusterState, new String[]{"test"}, null, "_replica_first"); - assertThat(shardIterators.size(), equalTo(2)); // two potential shards - iter = shardIterators.iterator().next(); - assertThat(iter.size(), equalTo(3)); // three potential candidates for the shard - routing = iter.nextOrNull(); - assertNotNull(routing); - assertThat(routing.shardId().id(), anyOf(equalTo(0), equalTo(1))); - assertFalse(routing.primary()); - routing = iter.nextOrNull(); - assertThat(routing.shardId().id(), anyOf(equalTo(0), equalTo(1))); - assertFalse(routing.primary()); - // finally the primary - routing = iter.nextOrNull(); - assertThat(routing.shardId().id(), anyOf(equalTo(0), equalTo(1))); - assertTrue(routing.primary()); + final ClusterState clusterState = ClusterState + .builder(ClusterName.CLUSTER_NAME_SETTING.getDefault(Settings.EMPTY)) + .metaData(metaData) + .routingTable(routingTable) + .nodes(DiscoveryNodes.builder() + .add(newNode("node1")) + .add(newNode("node2")) + .add(newNode("node3")) + .localNodeId("node1")) + .build(); + + String[] removedPreferences = {"_primary", 
"_primary_first", "_replica", "_replica_first"}; + for (String pref : removedPreferences) { + expectThrows(IllegalArgumentException.class, + () -> operationRouting.searchShards(clusterState, new String[]{"test"}, null, pref)); + } } } diff --git a/core/src/test/java/org/elasticsearch/common/NumbersTests.java b/core/src/test/java/org/elasticsearch/common/NumbersTests.java index e5563993ad5fd..46378ccc9e9fb 100644 --- a/core/src/test/java/org/elasticsearch/common/NumbersTests.java +++ b/core/src/test/java/org/elasticsearch/common/NumbersTests.java @@ -27,6 +27,21 @@ public class NumbersTests extends ESTestCase { + public void testToLong() { + assertEquals(3L, Numbers.toLong("3", false)); + assertEquals(3L, Numbers.toLong("3.1", true)); + assertEquals(9223372036854775807L, Numbers.toLong("9223372036854775807.00", false)); + assertEquals(-9223372036854775808L, Numbers.toLong("-9223372036854775808.00", false)); + + IllegalArgumentException e = expectThrows(IllegalArgumentException.class, + () -> Numbers.toLong("9223372036854775808", false)); + assertEquals("Value [9223372036854775808] is out of range for a long", e.getMessage()); + + e = expectThrows(IllegalArgumentException.class, + () -> Numbers.toLong("-9223372036854775809", false)); + assertEquals("Value [-9223372036854775809] is out of range for a long", e.getMessage()); + } + public void testToLongExact() { assertEquals(3L, Numbers.toLongExact(Long.valueOf(3L))); assertEquals(3L, Numbers.toLongExact(Integer.valueOf(3))); diff --git a/core/src/test/java/org/elasticsearch/common/blobstore/FsBlobStoreTests.java b/core/src/test/java/org/elasticsearch/common/blobstore/FsBlobStoreTests.java index 7d4ac1acc0798..8b9021cae9370 100644 --- a/core/src/test/java/org/elasticsearch/common/blobstore/FsBlobStoreTests.java +++ b/core/src/test/java/org/elasticsearch/common/blobstore/FsBlobStoreTests.java @@ -20,12 +20,14 @@ import org.apache.lucene.util.LuceneTestCase; import org.elasticsearch.common.blobstore.fs.FsBlobStore; +import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.ByteSizeUnit; import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.repositories.ESBlobStoreTestCase; import java.io.IOException; +import java.nio.file.Files; import java.nio.file.Path; @LuceneTestCase.SuppressFileSystems("ExtrasFS") @@ -35,4 +37,39 @@ protected BlobStore newBlobStore() throws IOException { Settings settings = randomBoolean() ? Settings.EMPTY : Settings.builder().put("buffer_size", new ByteSizeValue(randomIntBetween(1, 100), ByteSizeUnit.KB)).build(); return new FsBlobStore(settings, tempDir); } + + public void testReadOnly() throws Exception { + Settings settings = Settings.builder().put("readonly", true).build(); + Path tempDir = createTempDir(); + Path path = tempDir.resolve("bar"); + + try (FsBlobStore store = new FsBlobStore(settings, path)) { + assertFalse(Files.exists(path)); + BlobPath blobPath = BlobPath.cleanPath().add("foo"); + store.blobContainer(blobPath); + Path storePath = store.path(); + for (String d : blobPath) { + storePath = storePath.resolve(d); + } + assertFalse(Files.exists(storePath)); + } + + settings = randomBoolean() ? 
Settings.EMPTY : Settings.builder().put("readonly", false).build(); + try (FsBlobStore store = new FsBlobStore(settings, path)) { + assertTrue(Files.exists(path)); + BlobPath blobPath = BlobPath.cleanPath().add("foo"); + BlobContainer container = store.blobContainer(blobPath); + Path storePath = store.path(); + for (String d : blobPath) { + storePath = storePath.resolve(d); + } + assertTrue(Files.exists(storePath)); + assertTrue(Files.isDirectory(storePath)); + + byte[] data = randomBytes(randomIntBetween(10, scaledRandomIntBetween(1024, 1 << 16))); + writeBlob(container, "test", new BytesArray(data)); + assertArrayEquals(readBlobFully(container, "test", data.length), data); + assertTrue(container.blobExists("test")); + } + } } diff --git a/core/src/test/java/org/elasticsearch/common/cache/CacheTests.java b/core/src/test/java/org/elasticsearch/common/cache/CacheTests.java index 7dbaba02897c6..5675a7b524bd3 100644 --- a/core/src/test/java/org/elasticsearch/common/cache/CacheTests.java +++ b/core/src/test/java/org/elasticsearch/common/cache/CacheTests.java @@ -319,6 +319,29 @@ protected long now() { } } + public void testComputeIfAbsentAfterExpiration() throws ExecutionException { + AtomicLong now = new AtomicLong(); + Cache cache = new Cache() { + @Override + protected long now() { + return now.get(); + } + }; + cache.setExpireAfterAccessNanos(1); + now.set(0); + for (int i = 0; i < numberOfEntries; i++) { + cache.put(i, Integer.toString(i) + "-first"); + } + now.set(2); + for (int i = 0; i < numberOfEntries; i++) { + cache.computeIfAbsent(i, k -> Integer.toString(k) + "-second"); + } + for (int i = 0; i < numberOfEntries; i++) { + assertEquals(i + "-second", cache.get(i)); + } + assertEquals(numberOfEntries, cache.stats().getEvictions()); + } + // randomly promote some entries, step the clock forward, then check that the promoted entries remain and the // non-promoted entries were removed public void testPromotion() { diff --git a/core/src/test/java/org/elasticsearch/common/io/FileSystemUtilsTests.java b/core/src/test/java/org/elasticsearch/common/io/FileSystemUtilsTests.java index e0a8a1c1e1c8b..7d4fc0ae0ed87 100644 --- a/core/src/test/java/org/elasticsearch/common/io/FileSystemUtilsTests.java +++ b/core/src/test/java/org/elasticsearch/common/io/FileSystemUtilsTests.java @@ -19,6 +19,7 @@ package org.elasticsearch.common.io; +import org.apache.lucene.util.Constants; import org.apache.lucene.util.LuceneTestCase.SuppressFileSystems; import org.elasticsearch.test.ESTestCase; import org.junit.Before; @@ -34,6 +35,8 @@ import java.nio.file.StandardOpenOption; import java.util.Arrays; +import static org.hamcrest.Matchers.equalTo; + /** * Unit tests for {@link org.elasticsearch.common.io.FileSystemUtils}. 
*/ @@ -137,4 +140,16 @@ public void testOpenFileURLStream() throws IOException { assertArrayEquals(expectedBytes, actualBytes); } } + + public void testIsDesktopServicesStoreFile() throws IOException { + final Path path = createTempDir(); + final Path desktopServicesStore = path.resolve(".DS_Store"); + Files.createFile(desktopServicesStore); + assertThat(FileSystemUtils.isDesktopServicesStore(desktopServicesStore), equalTo(Constants.MAC_OS_X)); + + Files.delete(desktopServicesStore); + Files.createDirectory(desktopServicesStore); + assertFalse(FileSystemUtils.isDesktopServicesStore(desktopServicesStore)); + } + } diff --git a/core/src/test/java/org/elasticsearch/common/io/stream/BytesStreamsTests.java b/core/src/test/java/org/elasticsearch/common/io/stream/BytesStreamsTests.java index 4585cecb81ae1..27656e9bc092d 100644 --- a/core/src/test/java/org/elasticsearch/common/io/stream/BytesStreamsTests.java +++ b/core/src/test/java/org/elasticsearch/common/io/stream/BytesStreamsTests.java @@ -562,28 +562,6 @@ public int hashCode() { } } - // we ignore this test for now since all existing callers of BytesStreamOutput happily - // call bytes() after close(). - @AwaitsFix(bugUrl = "https://github.com/elastic/elasticsearch/issues/12620") - public void testAccessAfterClose() throws Exception { - BytesStreamOutput out = new BytesStreamOutput(); - - // immediately close - out.close(); - - assertEquals(-1, out.size()); - assertEquals(-1, out.position()); - - // writing a single byte must fail - expectThrows(IllegalArgumentException.class, () -> out.writeByte((byte)0)); - - // writing in bulk must fail - expectThrows(IllegalArgumentException.class, () -> out.writeBytes(new byte[0], 0, 0)); - - // toByteArray() must fail - expectThrows(IllegalArgumentException.class, () -> BytesReference.toBytes(out.bytes())); - } - // create & fill byte[] with randomized data protected byte[] randomizedByteArrayWithSize(int size) { byte[] data = new byte[size]; diff --git a/core/src/test/java/org/elasticsearch/common/io/stream/StreamTests.java b/core/src/test/java/org/elasticsearch/common/io/stream/StreamTests.java index 9d885fe131c7a..d64dece7867aa 100644 --- a/core/src/test/java/org/elasticsearch/common/io/stream/StreamTests.java +++ b/core/src/test/java/org/elasticsearch/common/io/stream/StreamTests.java @@ -26,6 +26,7 @@ import org.elasticsearch.test.ESTestCase; import java.io.ByteArrayInputStream; +import java.io.EOFException; import java.io.IOException; import java.util.ArrayList; import java.util.Arrays; @@ -192,6 +193,22 @@ public void testInputStreamStreamInputDelegatesAvailable() throws IOException { assertEquals(streamInput.available(), length - bytesToRead); } + public void testReadArraySize() throws IOException { + BytesStreamOutput stream = new BytesStreamOutput(); + byte[] array = new byte[randomIntBetween(1, 10)]; + for (int i = 0; i < array.length; i++) { + array[i] = randomByte(); + } + stream.writeByteArray(array); + InputStreamStreamInput streamInput = new InputStreamStreamInput(StreamInput.wrap(BytesReference.toBytes(stream.bytes())), array + .length-1); + expectThrows(EOFException.class, streamInput::readByteArray); + streamInput = new InputStreamStreamInput(StreamInput.wrap(BytesReference.toBytes(stream.bytes())), BytesReference.toBytes(stream + .bytes()).length); + + assertArrayEquals(array, streamInput.readByteArray()); + } + public void testWritableArrays() throws IOException { final String[] strings = generateRandomStringArray(10, 10, false, true); diff --git 
a/core/src/test/java/org/elasticsearch/common/joda/JodaTests.java b/core/src/test/java/org/elasticsearch/common/joda/JodaTests.java new file mode 100644 index 0000000000000..e77ae2634cc52 --- /dev/null +++ b/core/src/test/java/org/elasticsearch/common/joda/JodaTests.java @@ -0,0 +1,53 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.common.joda; + +import org.elasticsearch.test.ESTestCase; +import org.joda.time.DateTime; +import org.joda.time.DateTimeZone; +import org.joda.time.format.DateTimeFormatter; + + +public class JodaTests extends ESTestCase { + + + public void testBasicTTimePattern() { + FormatDateTimeFormatter formatter1 = Joda.forPattern("basic_t_time"); + assertEquals(formatter1.format(), "basic_t_time"); + DateTimeFormatter parser1 = formatter1.parser(); + + assertEquals(parser1.getZone(), DateTimeZone.UTC); + + FormatDateTimeFormatter formatter2 = Joda.forPattern("basicTTime"); + assertEquals(formatter2.format(), "basicTTime"); + DateTimeFormatter parser2 = formatter2.parser(); + + assertEquals(parser2.getZone(), DateTimeZone.UTC); + + DateTime dt = new DateTime(2004, 6, 9, 10, 20, 30, 40, DateTimeZone.UTC); + assertEquals("T102030.040Z", parser1.print(dt)); + assertEquals("T102030.040Z", parser2.print(dt)); + + expectThrows(IllegalArgumentException.class, () -> Joda.forPattern("basic_t_Time")); + expectThrows(IllegalArgumentException.class, () -> Joda.forPattern("basic_T_Time")); + expectThrows(IllegalArgumentException.class, () -> Joda.forPattern("basic_T_time")); + } + +} diff --git a/core/src/test/java/org/elasticsearch/common/logging/DeprecationLoggerTests.java b/core/src/test/java/org/elasticsearch/common/logging/DeprecationLoggerTests.java index 3f2274321a249..fdb530749e105 100644 --- a/core/src/test/java/org/elasticsearch/common/logging/DeprecationLoggerTests.java +++ b/core/src/test/java/org/elasticsearch/common/logging/DeprecationLoggerTests.java @@ -23,11 +23,13 @@ import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.test.ESTestCase; import org.elasticsearch.test.hamcrest.RegexMatcher; +import org.hamcrest.core.IsSame; import java.io.IOException; import java.util.Collections; import java.util.HashSet; import java.util.List; +import java.util.Locale; import java.util.Map; import java.util.Set; import java.util.stream.IntStream; @@ -71,6 +73,54 @@ public void testAddsHeaderWithThreadContext() throws IOException { } } + public void testContainingNewline() throws IOException { + try (ThreadContext threadContext = new ThreadContext(Settings.EMPTY)) { + final Set threadContexts = Collections.singleton(threadContext); + + logger.deprecated(threadContexts, "this message contains a newline\n"); + + final Map> responseHeaders = threadContext.getResponseHeaders(); + + 
assertThat(responseHeaders.size(), equalTo(1)); + final List responses = responseHeaders.get("Warning"); + assertThat(responses, hasSize(1)); + assertThat(responses.get(0), warningValueMatcher); + assertThat(responses.get(0), containsString("\"this message contains a newline%0A\"")); + } + } + + public void testSurrogatePair() throws IOException { + try (ThreadContext threadContext = new ThreadContext(Settings.EMPTY)) { + final Set threadContexts = Collections.singleton(threadContext); + + logger.deprecated(threadContexts, "this message contains a surrogate pair 😱"); + + final Map> responseHeaders = threadContext.getResponseHeaders(); + + assertThat(responseHeaders.size(), equalTo(1)); + final List responses = responseHeaders.get("Warning"); + assertThat(responses, hasSize(1)); + assertThat(responses.get(0), warningValueMatcher); + + // convert UTF-16 to UTF-8 by hand to show the hard-coded constant below is correct + assertThat("😱", equalTo("\uD83D\uDE31")); + final int code = 0x10000 + ((0xD83D & 0x3FF) << 10) + (0xDE31 & 0x3FF); + @SuppressWarnings("PointlessBitwiseExpression") + final int[] points = new int[] { + (code >> 18) & 0x07 | 0xF0, + (code >> 12) & 0x3F | 0x80, + (code >> 6) & 0x3F | 0x80, + (code >> 0) & 0x3F | 0x80}; + final StringBuilder sb = new StringBuilder(); + // noinspection ForLoopReplaceableByForEach + for (int i = 0; i < points.length; i++) { + sb.append("%").append(Integer.toString(points[i], 16).toUpperCase(Locale.ROOT)); + } + assertThat(sb.toString(), equalTo("%F0%9F%98%B1")); + assertThat(responses.get(0), containsString("\"this message contains a surrogate pair %F0%9F%98%B1\"")); + } + } + public void testAddsCombinedHeaderWithThreadContext() throws IOException { try (ThreadContext threadContext = new ThreadContext(Settings.EMPTY)) { final Set threadContexts = Collections.singleton(threadContext); @@ -172,15 +222,28 @@ public void testWarningValueFromWarningHeader() throws InterruptedException { assertThat(DeprecationLogger.extractWarningValueFromWarningHeader(first), equalTo(s)); } - public void testEscape() { - assertThat(DeprecationLogger.escape("\\"), equalTo("\\\\")); - assertThat(DeprecationLogger.escape("\""), equalTo("\\\"")); - assertThat(DeprecationLogger.escape("\\\""), equalTo("\\\\\\\"")); - assertThat(DeprecationLogger.escape("\"foo\\bar\""),equalTo("\\\"foo\\\\bar\\\"")); + public void testEscapeBackslashesAndQuotes() { + assertThat(DeprecationLogger.escapeBackslashesAndQuotes("\\"), equalTo("\\\\")); + assertThat(DeprecationLogger.escapeBackslashesAndQuotes("\""), equalTo("\\\"")); + assertThat(DeprecationLogger.escapeBackslashesAndQuotes("\\\""), equalTo("\\\\\\\"")); + assertThat(DeprecationLogger.escapeBackslashesAndQuotes("\"foo\\bar\""),equalTo("\\\"foo\\\\bar\\\"")); // test that characters other than '\' and '"' are left unchanged - String chars = "\t !" + range(0x23, 0x5b) + range(0x5d, 0x73) + range(0x80, 0xff); + String chars = "\t !" 
+ range(0x23, 0x24) + range(0x26, 0x5b) + range(0x5d, 0x73) + range(0x80, 0xff); + final String s = new CodepointSetGenerator(chars.toCharArray()).ofCodePointsLength(random(), 16, 16); + assertThat(DeprecationLogger.escapeBackslashesAndQuotes(s), equalTo(s)); + } + + public void testEncode() { + assertThat(DeprecationLogger.encode("\n"), equalTo("%0A")); + assertThat(DeprecationLogger.encode("😱"), equalTo("%F0%9F%98%B1")); + assertThat(DeprecationLogger.encode("福島深雪"), equalTo("%E7%A6%8F%E5%B3%B6%E6%B7%B1%E9%9B%AA")); + assertThat(DeprecationLogger.encode("100%\n"), equalTo("100%25%0A")); + // test that valid characters are left unchanged + String chars = "\t !" + range(0x23, 0x24) + range(0x26, 0x5b) + range(0x5d, 0x73) + range(0x80, 0xff) + '\\' + '"'; final String s = new CodepointSetGenerator(chars.toCharArray()).ofCodePointsLength(random(), 16, 16); - assertThat(DeprecationLogger.escape(s), equalTo(s)); + assertThat(DeprecationLogger.encode(s), equalTo(s)); + // when no encoding is needed, the original string is returned (optimization) + assertThat(DeprecationLogger.encode(s), IsSame.sameInstance(s)); } private String range(int lowerInclusive, int upperInclusive) { diff --git a/core/src/test/java/org/elasticsearch/common/lucene/all/SimpleAllTests.java b/core/src/test/java/org/elasticsearch/common/lucene/all/SimpleAllTests.java deleted file mode 100644 index d067b813d10e9..0000000000000 --- a/core/src/test/java/org/elasticsearch/common/lucene/all/SimpleAllTests.java +++ /dev/null @@ -1,279 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
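The DeprecationLogger tests above hard-code `%F0%9F%98%B1` as the percent-encoded UTF-8 form of the surrogate pair. As a cross-check, the same value can be derived with plain JDK classes; the class name and `main` wrapper below are invented for illustration and are not part of this change:

-------------------------------------------------
import java.nio.charset.StandardCharsets;
import java.util.Locale;

public class WarningHeaderEncodingCheck {
    public static void main(String[] args) {
        // Percent-encode the UTF-8 bytes of the surrogate pair U+D83D U+DE31 (code point U+1F631)
        StringBuilder sb = new StringBuilder();
        for (byte b : "\uD83D\uDE31".getBytes(StandardCharsets.UTF_8)) {
            sb.append('%').append(String.format(Locale.ROOT, "%02X", b & 0xFF));
        }
        // Prints %F0%9F%98%B1, matching the constant asserted in testSurrogatePair and testEncode
        System.out.println(sb);
    }
}
-------------------------------------------------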
- */ - -package org.elasticsearch.common.lucene.all; - -import org.apache.lucene.document.StoredField; -import org.apache.lucene.document.FieldType; -import org.apache.lucene.document.Field; -import org.apache.lucene.document.Document; -import org.apache.lucene.index.DirectoryReader; -import org.apache.lucene.index.IndexReader; -import org.apache.lucene.index.IndexOptions; -import org.apache.lucene.index.IndexWriter; -import org.apache.lucene.index.IndexWriterConfig; -import org.apache.lucene.index.Term; -import org.apache.lucene.search.Explanation; -import org.apache.lucene.search.IndexSearcher; -import org.apache.lucene.search.MatchAllDocsQuery; -import org.apache.lucene.search.Query; -import org.apache.lucene.search.ScoreDoc; -import org.apache.lucene.search.TopDocs; -import org.apache.lucene.store.Directory; -import org.apache.lucene.store.RAMDirectory; -import org.elasticsearch.common.lucene.Lucene; -import org.elasticsearch.test.ESTestCase; - -import java.io.IOException; - -import static org.hamcrest.Matchers.equalTo; - -public class SimpleAllTests extends ESTestCase { - private FieldType getAllFieldType() { - FieldType ft = new FieldType(); - ft.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS); - ft.setTokenized(true); - ft.freeze(); - return ft; - } - - private void assertExplanationScore(IndexSearcher searcher, Query query, ScoreDoc scoreDoc) throws IOException { - final Explanation expl = searcher.explain(query, scoreDoc.doc); - assertEquals(scoreDoc.score, expl.getValue(), 0.00001f); - } - - public void testSimpleAllNoBoost() throws Exception { - FieldType allFt = getAllFieldType(); - Directory dir = new RAMDirectory(); - IndexWriter indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.STANDARD_ANALYZER)); - - Document doc = new Document(); - doc.add(new Field("_id", "1", StoredField.TYPE)); - doc.add(new AllField("_all", "something", 1.0f, allFt)); - doc.add(new AllField("_all", "else", 1.0f, allFt)); - - indexWriter.addDocument(doc); - - doc = new Document(); - doc.add(new Field("_id", "2", StoredField.TYPE)); - doc.add(new AllField("_all", "else", 1.0f, allFt)); - doc.add(new AllField("_all", "something", 1.0f, allFt)); - indexWriter.addDocument(doc); - - IndexReader reader = DirectoryReader.open(indexWriter); - IndexSearcher searcher = new IndexSearcher(reader); - - Query query = new AllTermQuery(new Term("_all", "else")); - TopDocs docs = searcher.search(query, 10); - assertThat(docs.totalHits, equalTo(2L)); - assertThat(docs.scoreDocs[0].doc, equalTo(0)); - assertExplanationScore(searcher, query, docs.scoreDocs[0]); - assertThat(docs.scoreDocs[1].doc, equalTo(1)); - assertExplanationScore(searcher, query, docs.scoreDocs[1]); - - query = new AllTermQuery(new Term("_all", "something")); - docs = searcher.search(query, 10); - assertThat(docs.totalHits, equalTo(2L)); - assertThat(docs.scoreDocs[0].doc, equalTo(0)); - assertExplanationScore(searcher, query, docs.scoreDocs[0]); - assertThat(docs.scoreDocs[1].doc, equalTo(1)); - assertExplanationScore(searcher, query, docs.scoreDocs[1]); - - indexWriter.close(); - } - - public void testSimpleAllWithBoost() throws Exception { - Directory dir = new RAMDirectory(); - IndexWriter indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.STANDARD_ANALYZER)); - - FieldType allFt = getAllFieldType(); - Document doc = new Document(); - doc.add(new Field("_id", "1", StoredField.TYPE)); - doc.add(new AllField("_all", "something", 1.0f, allFt)); - doc.add(new AllField("_all", "else", 1.0f, allFt)); - - 
indexWriter.addDocument(doc); - - doc = new Document(); - doc.add(new Field("_id", "2", StoredField.TYPE)); - doc.add(new AllField("_all", "else", 2.0f, allFt)); - doc.add(new AllField("_all", "something", 1.0f, allFt)); - - indexWriter.addDocument(doc); - - IndexReader reader = DirectoryReader.open(indexWriter); - IndexSearcher searcher = new IndexSearcher(reader); - - // this one is boosted. so the second doc is more relevant - Query query = new AllTermQuery(new Term("_all", "else")); - TopDocs docs = searcher.search(query, 10); - assertThat(docs.totalHits, equalTo(2L)); - assertThat(docs.scoreDocs[0].doc, equalTo(1)); - assertExplanationScore(searcher, query, docs.scoreDocs[0]); - assertThat(docs.scoreDocs[1].doc, equalTo(0)); - assertExplanationScore(searcher, query, docs.scoreDocs[1]); - - query = new AllTermQuery(new Term("_all", "something")); - docs = searcher.search(query, 10); - assertThat(docs.totalHits, equalTo(2L)); - assertThat(docs.scoreDocs[0].doc, equalTo(0)); - assertExplanationScore(searcher, query, docs.scoreDocs[0]); - assertThat(docs.scoreDocs[1].doc, equalTo(1)); - assertExplanationScore(searcher, query, docs.scoreDocs[1]); - - indexWriter.close(); - } - - public void testTermMissingFromOneSegment() throws Exception { - Directory dir = new RAMDirectory(); - IndexWriter indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.STANDARD_ANALYZER)); - - FieldType allFt = getAllFieldType(); - Document doc = new Document(); - doc.add(new Field("_id", "1", StoredField.TYPE)); - doc.add(new AllField("_all", "something", 2.0f, allFt)); - - indexWriter.addDocument(doc); - indexWriter.commit(); - - doc = new Document(); - doc.add(new Field("_id", "2", StoredField.TYPE)); - doc.add(new AllField("_all", "else", 1.0f, allFt)); - indexWriter.addDocument(doc); - - IndexReader reader = DirectoryReader.open(indexWriter); - assertEquals(2, reader.leaves().size()); - IndexSearcher searcher = new IndexSearcher(reader); - - // "something" only appears in the first segment: - Query query = new AllTermQuery(new Term("_all", "something")); - TopDocs docs = searcher.search(query, 10); - assertEquals(1, docs.totalHits); - - indexWriter.close(); - } - - public void testMultipleTokensAllNoBoost() throws Exception { - Directory dir = new RAMDirectory(); - IndexWriter indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.STANDARD_ANALYZER)); - - FieldType allFt = getAllFieldType(); - Document doc = new Document(); - doc.add(new Field("_id", "1", StoredField.TYPE)); - doc.add(new AllField("_all", "something moo", 1.0f, allFt)); - doc.add(new AllField("_all", "else koo", 1.0f, allFt)); - - indexWriter.addDocument(doc); - - doc = new Document(); - doc.add(new Field("_id", "2", StoredField.TYPE)); - doc.add(new AllField("_all", "else koo", 1.0f, allFt)); - doc.add(new AllField("_all", "something moo", 1.0f, allFt)); - - indexWriter.addDocument(doc); - - IndexReader reader = DirectoryReader.open(indexWriter); - IndexSearcher searcher = new IndexSearcher(reader); - - TopDocs docs = searcher.search(new AllTermQuery(new Term("_all", "else")), 10); - assertThat(docs.totalHits, equalTo(2L)); - assertThat(docs.scoreDocs[0].doc, equalTo(0)); - assertThat(docs.scoreDocs[1].doc, equalTo(1)); - - docs = searcher.search(new AllTermQuery(new Term("_all", "koo")), 10); - assertThat(docs.totalHits, equalTo(2L)); - assertThat(docs.scoreDocs[0].doc, equalTo(0)); - assertThat(docs.scoreDocs[1].doc, equalTo(1)); - - docs = searcher.search(new AllTermQuery(new Term("_all", "something")), 10); - 
assertThat(docs.totalHits, equalTo(2L)); - assertThat(docs.scoreDocs[0].doc, equalTo(0)); - assertThat(docs.scoreDocs[1].doc, equalTo(1)); - - docs = searcher.search(new AllTermQuery(new Term("_all", "moo")), 10); - assertThat(docs.totalHits, equalTo(2L)); - assertThat(docs.scoreDocs[0].doc, equalTo(0)); - assertThat(docs.scoreDocs[1].doc, equalTo(1)); - - indexWriter.close(); - } - - public void testMultipleTokensAllWithBoost() throws Exception { - Directory dir = new RAMDirectory(); - IndexWriter indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.STANDARD_ANALYZER)); - - FieldType allFt = getAllFieldType(); - Document doc = new Document(); - doc.add(new Field("_id", "1", StoredField.TYPE)); - doc.add(new AllField("_all", "something moo", 1.0f, allFt)); - doc.add(new AllField("_all", "else koo", 1.0f, allFt)); - - indexWriter.addDocument(doc); - - doc = new Document(); - doc.add(new Field("_id", "2", StoredField.TYPE)); - doc.add(new AllField("_all", "else koo", 2.0f, allFt)); - doc.add(new AllField("_all", "something moo", 1.0f, allFt)); - - indexWriter.addDocument(doc); - - IndexReader reader = DirectoryReader.open(indexWriter); - IndexSearcher searcher = new IndexSearcher(reader); - - TopDocs docs = searcher.search(new AllTermQuery(new Term("_all", "else")), 10); - assertThat(docs.totalHits, equalTo(2L)); - assertThat(docs.scoreDocs[0].doc, equalTo(1)); - assertThat(docs.scoreDocs[1].doc, equalTo(0)); - - docs = searcher.search(new AllTermQuery(new Term("_all", "koo")), 10); - assertThat(docs.totalHits, equalTo(2L)); - assertThat(docs.scoreDocs[0].doc, equalTo(1)); - assertThat(docs.scoreDocs[1].doc, equalTo(0)); - - docs = searcher.search(new AllTermQuery(new Term("_all", "something")), 10); - assertThat(docs.totalHits, equalTo(2L)); - assertThat(docs.scoreDocs[0].doc, equalTo(0)); - assertThat(docs.scoreDocs[1].doc, equalTo(1)); - - docs = searcher.search(new AllTermQuery(new Term("_all", "moo")), 10); - assertThat(docs.totalHits, equalTo(2L)); - assertThat(docs.scoreDocs[0].doc, equalTo(0)); - assertThat(docs.scoreDocs[1].doc, equalTo(1)); - - indexWriter.close(); - } - - public void testNoTokens() throws Exception { - Directory dir = new RAMDirectory(); - IndexWriter indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.KEYWORD_ANALYZER)); - - FieldType allFt = getAllFieldType(); - Document doc = new Document(); - doc.add(new Field("_id", "1", StoredField.TYPE)); - doc.add(new AllField("_all", "", 2.0f, allFt)); - indexWriter.addDocument(doc); - - IndexReader reader = DirectoryReader.open(indexWriter); - IndexSearcher searcher = new IndexSearcher(reader); - - TopDocs docs = searcher.search(new MatchAllDocsQuery(), 10); - assertThat(docs.totalHits, equalTo(1L)); - assertThat(docs.scoreDocs[0].doc, equalTo(0)); - } -} diff --git a/core/src/test/java/org/elasticsearch/common/network/InetAddressesTests.java b/core/src/test/java/org/elasticsearch/common/network/InetAddressesTests.java index 2aa284dd8439f..f323494b987e5 100644 --- a/core/src/test/java/org/elasticsearch/common/network/InetAddressesTests.java +++ b/core/src/test/java/org/elasticsearch/common/network/InetAddressesTests.java @@ -16,7 +16,9 @@ package org.elasticsearch.common.network; +import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.test.ESTestCase; +import org.hamcrest.Matchers; import java.net.InetAddress; import java.net.UnknownHostException; @@ -214,4 +216,34 @@ public void testToUriStringIPv6() { InetAddress ip = InetAddresses.forString(ipStr); assertEquals("[3ffe::1]", 
InetAddresses.toUriString(ip)); } + + public void testParseCidr() { + IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> InetAddresses.parseCidr("")); + assertThat(e.getMessage(), Matchers.containsString("Expected [ip/prefix] but was []")); + + e = expectThrows(IllegalArgumentException.class, () -> InetAddresses.parseCidr("192.168.1.42/33")); + assertThat(e.getMessage(), Matchers.containsString("Illegal prefix length")); + + e = expectThrows(IllegalArgumentException.class, () -> InetAddresses.parseCidr("::1/129")); + assertThat(e.getMessage(), Matchers.containsString("Illegal prefix length")); + + e = expectThrows(IllegalArgumentException.class, () -> InetAddresses.parseCidr("::ffff:0:0/96")); + assertThat(e.getMessage(), Matchers.containsString("CIDR notation is not allowed with IPv6-mapped IPv4 address")); + + Tuple cidr = InetAddresses.parseCidr("192.168.0.0/24"); + assertEquals(InetAddresses.forString("192.168.0.0"), cidr.v1()); + assertEquals(Integer.valueOf(24), cidr.v2()); + + cidr = InetAddresses.parseCidr("::fffe:0:0/95"); + assertEquals(InetAddresses.forString("::fffe:0:0"), cidr.v1()); + assertEquals(Integer.valueOf(95), cidr.v2()); + + cidr = InetAddresses.parseCidr("192.168.0.0/32"); + assertEquals(InetAddresses.forString("192.168.0.0"), cidr.v1()); + assertEquals(Integer.valueOf(32), cidr.v2()); + + cidr = InetAddresses.parseCidr("::fffe:0:0/128"); + assertEquals(InetAddresses.forString("::fffe:0:0"), cidr.v1()); + assertEquals(Integer.valueOf(128), cidr.v2()); + } } diff --git a/core/src/test/java/org/elasticsearch/common/settings/KeyStoreCommandTestCase.java b/core/src/test/java/org/elasticsearch/common/settings/KeyStoreCommandTestCase.java index 500b7b627b840..c1118b3bc6513 100644 --- a/core/src/test/java/org/elasticsearch/common/settings/KeyStoreCommandTestCase.java +++ b/core/src/test/java/org/elasticsearch/common/settings/KeyStoreCommandTestCase.java @@ -22,7 +22,6 @@ import java.io.IOException; import java.io.InputStream; import java.nio.file.FileSystem; -import java.nio.file.FileSystems; import java.nio.file.Files; import java.nio.file.Path; import java.util.ArrayList; @@ -35,6 +34,7 @@ import org.elasticsearch.cli.CommandTestCase; import org.elasticsearch.common.io.PathUtilsForTesting; import org.elasticsearch.env.Environment; +import org.elasticsearch.env.TestEnvironment; import org.junit.After; import org.junit.Before; @@ -58,7 +58,7 @@ public void setupEnv() throws IOException { env = setupEnv(true, fileSystems); // default to posix, but tests may call setupEnv(false) to overwrite } - static Environment setupEnv(boolean posix, List fileSystems) throws IOException { + public static Environment setupEnv(boolean posix, List fileSystems) throws IOException { final Configuration configuration; if (posix) { configuration = Configuration.unix().toBuilder().setAttributeViews("basic", "owner", "posix", "unix").build(); @@ -70,7 +70,7 @@ static Environment setupEnv(boolean posix, List fileSystems) throws PathUtilsForTesting.installMock(fs); // restored by restoreFileSystem in ESTestCase Path home = fs.getPath("/", "test-home"); Files.createDirectories(home.resolve("config")); - return new Environment(Settings.builder().put("path.home", home).build()); + return TestEnvironment.newEnvironment(Settings.builder().put("path.home", home).build()); } KeyStoreWrapper createKeystore(String password, String... 
settings) throws Exception { diff --git a/core/src/test/java/org/elasticsearch/common/settings/KeyStoreWrapperTests.java b/core/src/test/java/org/elasticsearch/common/settings/KeyStoreWrapperTests.java index 0c9cdad618a19..11d1e1f573587 100644 --- a/core/src/test/java/org/elasticsearch/common/settings/KeyStoreWrapperTests.java +++ b/core/src/test/java/org/elasticsearch/common/settings/KeyStoreWrapperTests.java @@ -69,8 +69,32 @@ public void testFileSettingExhaustiveBytes() throws Exception { } } - public void testKeystoreSeed() throws Exception { + public void testCreate() throws Exception { KeyStoreWrapper keystore = KeyStoreWrapper.create(new char[0]); assertTrue(keystore.getSettingNames().contains(KeyStoreWrapper.SEED_SETTING.getKey())); } + + public void testUpgradeNoop() throws Exception { + KeyStoreWrapper keystore = KeyStoreWrapper.create(new char[0]); + SecureString seed = keystore.getString(KeyStoreWrapper.SEED_SETTING.getKey()); + keystore.save(env.configFile()); + // upgrade does not overwrite seed + KeyStoreWrapper.upgrade(keystore, env.configFile()); + assertEquals(seed.toString(), keystore.getString(KeyStoreWrapper.SEED_SETTING.getKey()).toString()); + keystore = KeyStoreWrapper.load(env.configFile()); + keystore.decrypt(new char[0]); + assertEquals(seed.toString(), keystore.getString(KeyStoreWrapper.SEED_SETTING.getKey()).toString()); + } + + public void testUpgradeAddsSeed() throws Exception { + KeyStoreWrapper keystore = KeyStoreWrapper.create(new char[0]); + keystore.remove(KeyStoreWrapper.SEED_SETTING.getKey()); + keystore.save(env.configFile()); + KeyStoreWrapper.upgrade(keystore, env.configFile()); + SecureString seed = keystore.getString(KeyStoreWrapper.SEED_SETTING.getKey()); + assertNotNull(seed); + keystore = KeyStoreWrapper.load(env.configFile()); + keystore.decrypt(new char[0]); + assertEquals(seed.toString(), keystore.getString(KeyStoreWrapper.SEED_SETTING.getKey()).toString()); + } } diff --git a/core/src/test/java/org/elasticsearch/common/settings/ScopedSettingsTests.java b/core/src/test/java/org/elasticsearch/common/settings/ScopedSettingsTests.java index c00055d2897af..bd4ac25a8747b 100644 --- a/core/src/test/java/org/elasticsearch/common/settings/ScopedSettingsTests.java +++ b/core/src/test/java/org/elasticsearch/common/settings/ScopedSettingsTests.java @@ -43,6 +43,7 @@ import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicReference; import java.util.function.BiConsumer; +import java.util.function.Consumer; import java.util.function.Function; import static org.hamcrest.CoreMatchers.containsString; @@ -115,7 +116,7 @@ public void testResetSettingWithIPValidator() { IndexScopedSettings settings = new IndexScopedSettings(currentSettings, new HashSet<>(Arrays.asList(dynamicSetting, IndexMetaData.INDEX_ROUTING_REQUIRE_GROUP_SETTING))); - Settings s = IndexMetaData.INDEX_ROUTING_REQUIRE_GROUP_SETTING.get(currentSettings); + Map s = IndexMetaData.INDEX_ROUTING_REQUIRE_GROUP_SETTING.getAsMap(currentSettings); assertEquals(1, s.size()); assertEquals("192.168.0.1,127.0.0.1", s.get("_ip")); Settings.Builder builder = Settings.builder(); @@ -125,7 +126,7 @@ public void testResetSettingWithIPValidator() { settings.updateDynamicSettings(updates, Settings.builder().put(currentSettings), builder, "node"); currentSettings = builder.build(); - s = IndexMetaData.INDEX_ROUTING_REQUIRE_GROUP_SETTING.get(currentSettings); + s = IndexMetaData.INDEX_ROUTING_REQUIRE_GROUP_SETTING.getAsMap(currentSettings); assertEquals(0, s.size()); 
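The `getAsMap` call introduced just above reads a group/affix setting back as a map keyed by the suffix under the prefix. A minimal sketch of that usage, assuming the Elasticsearch core classes are on the classpath and using only names that appear elsewhere in this diff (`Setting.prefixKeySetting`, `AffixSetting#getAsMap`); the wrapper class is invented for illustration:

-------------------------------------------------
import java.util.Map;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Setting.Property;
import org.elasticsearch.common.settings.Settings;

class AffixAsMapSketch {
    // Keys under the "foo.bar." prefix come back with the prefix stripped,
    // e.g. {baz=2, foobar=3}, as testAffixAsMap below also asserts.
    static Map<String, String> readPrefixed() {
        Setting.AffixSetting<String> prefix = Setting.prefixKeySetting("foo.bar.",
                key -> Setting.simpleString(key, Property.NodeScope));
        Settings settings = Settings.builder()
                .put("foo.bar.baz", "2")
                .put("foo.bar.foobar", "3")
                .build();
        return prefix.getAsMap(settings);
    }
}
-------------------------------------------------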
assertEquals(1, dynamicSetting.get(currentSettings).intValue()); assertEquals(1, currentSettings.size()); @@ -178,8 +179,8 @@ public void testAddConsumerAffix() { service.applySettings(Settings.builder() .put("foo.test.bar", 2) .put("foo.test_1.bar", 7) - .putArray("foo.test_list.list", "16", "17") - .putArray("foo.test_list_1.list", "18", "19", "20") + .putList("foo.test_list.list", "16", "17") + .putList("foo.test_list_1.list", "18", "19", "20") .build()); assertEquals(2, intResults.get("test").intValue()); assertEquals(7, intResults.get("test_1").intValue()); @@ -194,7 +195,7 @@ public void testAddConsumerAffix() { service.applySettings(Settings.builder() .put("foo.test.bar", 2) .put("foo.test_1.bar", 8) - .putArray("foo.test_list.list", "16", "17") + .putList("foo.test_list.list", "16", "17") .putNull("foo.test_list_1.list") .build()); assertNull("test wasn't changed", intResults.get("test")); @@ -205,6 +206,66 @@ public void testAddConsumerAffix() { assertEquals(1, intResults.size()); } + public void testAddConsumerAffixMap() { + Setting.AffixSetting intSetting = Setting.affixKeySetting("foo.", "bar", + (k) -> Setting.intSetting(k, 1, Property.Dynamic, Property.NodeScope)); + Setting.AffixSetting> listSetting = Setting.affixKeySetting("foo.", "list", + (k) -> Setting.listSetting(k, Arrays.asList("1"), Integer::parseInt, Property.Dynamic, Property.NodeScope)); + AbstractScopedSettings service = new ClusterSettings(Settings.EMPTY,new HashSet<>(Arrays.asList(intSetting, listSetting))); + Map> listResults = new HashMap<>(); + Map intResults = new HashMap<>(); + + Consumer> intConsumer = (map) -> { + intResults.clear(); + intResults.putAll(map); + }; + Consumer>> listConsumer = (map) -> { + listResults.clear(); + listResults.putAll(map); + }; + boolean omitDefaults = randomBoolean(); + service.addAffixMapUpdateConsumer(listSetting, listConsumer, (s, k) -> {}, omitDefaults); + service.addAffixMapUpdateConsumer(intSetting, intConsumer, (s, k) -> {}, omitDefaults); + assertEquals(0, listResults.size()); + assertEquals(0, intResults.size()); + service.applySettings(Settings.builder() + .put("foo.test.bar", 2) + .put("foo.test_1.bar", 7) + .putList("foo.test_list.list", "16", "17") + .putList("foo.test_list_1.list", "18", "19", "20") + .build()); + assertEquals(2, intResults.get("test").intValue()); + assertEquals(7, intResults.get("test_1").intValue()); + assertEquals(Arrays.asList(16, 17), listResults.get("test_list")); + assertEquals(Arrays.asList(18, 19, 20), listResults.get("test_list_1")); + assertEquals(2, listResults.size()); + assertEquals(2, intResults.size()); + + listResults.clear(); + intResults.clear(); + + service.applySettings(Settings.builder() + .put("foo.test.bar", 2) + .put("foo.test_1.bar", 8) + .putList("foo.test_list.list", "16", "17") + .putNull("foo.test_list_1.list") + .build()); + assertNull("test wasn't changed", intResults.get("test")); + assertEquals(8, intResults.get("test_1").intValue()); + assertNull("test_list wasn't changed", listResults.get("test_list")); + if (omitDefaults) { + assertNull(listResults.get("test_list_1")); + assertFalse(listResults.containsKey("test_list_1")); + assertEquals(0, listResults.size()); + assertEquals(1, intResults.size()); + } else { + assertEquals(Arrays.asList(1), listResults.get("test_list_1")); // reset to default + assertEquals(1, listResults.size()); + assertEquals(1, intResults.size()); + } + + } + public void testApply() { Setting testSetting = Setting.intSetting("foo.bar", 1, Property.Dynamic, Property.NodeScope); Setting 
testSetting2 = Setting.intSetting("foo.bar.baz", 1, Property.Dynamic, Property.NodeScope); @@ -347,10 +408,10 @@ public void testValidator() { public void testGet() { ClusterSettings settings = new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); - - // group setting - complex matcher + // affix setting - complex matcher Setting setting = settings.get("cluster.routing.allocation.require.value"); - assertEquals(setting, FilterAllocationDecider.CLUSTER_ROUTING_REQUIRE_GROUP_SETTING); + assertEquals(setting, + FilterAllocationDecider.CLUSTER_ROUTING_REQUIRE_GROUP_SETTING.getConcreteSetting("cluster.routing.allocation.require.value")); setting = settings.get("cluster.routing.allocation.total_shards_per_node"); assertEquals(setting, ShardsLimitAllocationDecider.CLUSTER_TOTAL_SHARDS_PER_NODE_SETTING); @@ -407,35 +468,34 @@ public void testDiff() throws IOException { ClusterSettings settings = new ClusterSettings(Settings.EMPTY, new HashSet<>(Arrays.asList(fooBar, fooBarBaz, foorBarQuux, someGroup, someAffix))); Settings diff = settings.diff(Settings.builder().put("foo.bar", 5).build(), Settings.EMPTY); - assertEquals(4, diff.size()); // 4 since foo.bar.quux has 3 values essentially + assertEquals(2, diff.size()); assertThat(diff.getAsInt("foo.bar.baz", null), equalTo(1)); - assertArrayEquals(diff.getAsArray("foo.bar.quux", null), new String[] {"a", "b", "c"}); + assertEquals(diff.getAsList("foo.bar.quux", null), Arrays.asList("a", "b", "c")); diff = settings.diff( Settings.builder().put("foo.bar", 5).build(), - Settings.builder().put("foo.bar.baz", 17).putArray("foo.bar.quux", "d", "e", "f").build()); - assertEquals(4, diff.size()); // 4 since foo.bar.quux has 3 values essentially + Settings.builder().put("foo.bar.baz", 17).putList("foo.bar.quux", "d", "e", "f").build()); + assertEquals(2, diff.size()); assertThat(diff.getAsInt("foo.bar.baz", null), equalTo(17)); - assertArrayEquals(diff.getAsArray("foo.bar.quux", null), new String[] {"d", "e", "f"}); + assertEquals(diff.getAsList("foo.bar.quux", null), Arrays.asList("d", "e", "f")); diff = settings.diff( Settings.builder().put("some.group.foo", 5).build(), - Settings.builder().put("some.group.foobar", 17, "some.group.foo", 25).build()); - assertEquals(6, diff.size()); // 6 since foo.bar.quux has 3 values essentially + Settings.builder().put("some.group.foobar", 17).put("some.group.foo", 25).build()); + assertEquals(4, diff.size()); assertThat(diff.getAsInt("some.group.foobar", null), equalTo(17)); assertNull(diff.get("some.group.foo")); - assertArrayEquals(diff.getAsArray("foo.bar.quux", null), new String[] {"a", "b", "c"}); + assertEquals(diff.getAsList("foo.bar.quux", null), Arrays.asList("a", "b", "c")); assertThat(diff.getAsInt("foo.bar.baz", null), equalTo(1)); assertThat(diff.getAsInt("foo.bar", null), equalTo(1)); diff = settings.diff( Settings.builder().put("some.prefix.foo.somekey", 5).build(), - Settings.builder().put("some.prefix.foobar.somekey", 17, - "some.prefix.foo.somekey", 18).build()); - assertEquals(6, diff.size()); // 6 since foo.bar.quux has 3 values essentially + Settings.builder().put("some.prefix.foobar.somekey", 17).put("some.prefix.foo.somekey", 18).build()); + assertEquals(4, diff.size()); assertThat(diff.getAsInt("some.prefix.foobar.somekey", null), equalTo(17)); assertNull(diff.get("some.prefix.foo.somekey")); - assertArrayEquals(diff.getAsArray("foo.bar.quux", null), new String[] {"a", "b", "c"}); + assertEquals(diff.getAsList("foo.bar.quux", null), Arrays.asList("a", "b", "c")); 
assertThat(diff.getAsInt("foo.bar.baz", null), equalTo(1)); assertThat(diff.getAsInt("foo.bar", null), equalTo(1)); } @@ -453,48 +513,46 @@ public void testDiffWithAffixAndComplexMatcher() { Settings diff = settings.diff(Settings.builder().put("foo.bar", 5).build(), Settings.EMPTY); assertEquals(1, diff.size()); assertThat(diff.getAsInt("foo.bar.baz", null), equalTo(1)); - assertNull(diff.getAsArray("foo.bar.quux", null)); // affix settings don't know their concrete keys + assertNull(diff.getAsList("foo.bar.quux", null)); // affix settings don't know their concrete keys diff = settings.diff( Settings.builder().put("foo.bar", 5).build(), - Settings.builder().put("foo.bar.baz", 17).putArray("foo.bar.quux", "d", "e", "f").build()); - assertEquals(4, diff.size()); + Settings.builder().put("foo.bar.baz", 17).putList("foo.bar.quux", "d", "e", "f").build()); + assertEquals(2, diff.size()); assertThat(diff.getAsInt("foo.bar.baz", null), equalTo(17)); - assertArrayEquals(diff.getAsArray("foo.bar.quux", null), new String[] {"d", "e", "f"}); + assertEquals(diff.getAsList("foo.bar.quux", null), Arrays.asList("d", "e", "f")); diff = settings.diff( Settings.builder().put("some.group.foo", 5).build(), - Settings.builder().put("some.group.foobar", 17, "some.group.foo", 25).build()); + Settings.builder().put("some.group.foobar", 17).put("some.group.foo", 25).build()); assertEquals(3, diff.size()); assertThat(diff.getAsInt("some.group.foobar", null), equalTo(17)); assertNull(diff.get("some.group.foo")); - assertNull(diff.getAsArray("foo.bar.quux", null)); // affix settings don't know their concrete keys + assertNull(diff.getAsList("foo.bar.quux", null)); // affix settings don't know their concrete keys assertThat(diff.getAsInt("foo.bar.baz", null), equalTo(1)); assertThat(diff.getAsInt("foo.bar", null), equalTo(1)); diff = settings.diff( Settings.builder().put("some.prefix.foo.somekey", 5).build(), - Settings.builder().put("some.prefix.foobar.somekey", 17, - "some.prefix.foo.somekey", 18).build()); + Settings.builder().put("some.prefix.foobar.somekey", 17).put("some.prefix.foo.somekey", 18).build()); assertEquals(3, diff.size()); assertThat(diff.getAsInt("some.prefix.foobar.somekey", null), equalTo(17)); assertNull(diff.get("some.prefix.foo.somekey")); - assertNull(diff.getAsArray("foo.bar.quux", null)); // affix settings don't know their concrete keys + assertNull(diff.getAsList("foo.bar.quux", null)); // affix settings don't know their concrete keys assertThat(diff.getAsInt("foo.bar.baz", null), equalTo(1)); assertThat(diff.getAsInt("foo.bar", null), equalTo(1)); diff = settings.diff( Settings.builder().put("some.prefix.foo.somekey", 5).build(), - Settings.builder().put("some.prefix.foobar.somekey", 17, - "some.prefix.foo.somekey", 18) - .putArray("foo.bar.quux", "x", "y", "z") - .putArray("foo.baz.quux", "d", "e", "f") + Settings.builder().put("some.prefix.foobar.somekey", 17).put("some.prefix.foo.somekey", 18) + .putList("foo.bar.quux", "x", "y", "z") + .putList("foo.baz.quux", "d", "e", "f") .build()); - assertEquals(9, diff.size()); + assertEquals(5, diff.size()); assertThat(diff.getAsInt("some.prefix.foobar.somekey", null), equalTo(17)); assertNull(diff.get("some.prefix.foo.somekey")); - assertArrayEquals(diff.getAsArray("foo.bar.quux", null), new String[] {"x", "y", "z"}); - assertArrayEquals(diff.getAsArray("foo.baz.quux", null), new String[] {"d", "e", "f"}); + assertEquals(diff.getAsList("foo.bar.quux", null), Arrays.asList("x", "y", "z")); + assertEquals(diff.getAsList("foo.baz.quux", null), 
Arrays.asList("d", "e", "f")); assertThat(diff.getAsInt("foo.bar.baz", null), equalTo(1)); assertThat(diff.getAsInt("foo.bar", null), equalTo(1)); } @@ -504,7 +562,7 @@ public void testUpdateTracer() { AtomicReference> ref = new AtomicReference<>(); settings.addSettingsUpdateConsumer(TransportService.TRACE_LOG_INCLUDE_SETTING, ref::set); settings.applySettings(Settings.builder() - .putArray("transport.tracer.include", "internal:index/shard/recovery/*", "internal:gateway/local*").build()); + .putList("transport.tracer.include", "internal:index/shard/recovery/*", "internal:gateway/local*").build()); assertNotNull(ref.get().size()); assertEquals(ref.get().size(), 2); assertTrue(ref.get().contains("internal:index/shard/recovery/*")); @@ -540,15 +598,15 @@ public void testValidate() { settings.validate(Settings.builder().put("index.store.type", "boom")); settings.validate(Settings.builder().put("index.store.type", "boom").build()); IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> - settings.validate(Settings.builder().put("index.store.type", "boom", "i.am.not.a.setting", true))); + settings.validate(Settings.builder().put("index.store.type", "boom").put("i.am.not.a.setting", true))); assertEquals("unknown setting [i.am.not.a.setting]" + unknownMsgSuffix, e.getMessage()); e = expectThrows(IllegalArgumentException.class, () -> - settings.validate(Settings.builder().put("index.store.type", "boom", "i.am.not.a.setting", true).build())); + settings.validate(Settings.builder().put("index.store.type", "boom").put("i.am.not.a.setting", true).build())); assertEquals("unknown setting [i.am.not.a.setting]" + unknownMsgSuffix, e.getMessage()); e = expectThrows(IllegalArgumentException.class, () -> - settings.validate(Settings.builder().put("index.store.type", "boom", "index.number_of_replicas", true).build())); + settings.validate(Settings.builder().put("index.store.type", "boom").put("index.number_of_replicas", true).build())); assertEquals("Failed to parse value [true] for setting [index.number_of_replicas]", e.getMessage()); e = expectThrows(IllegalArgumentException.class, () -> diff --git a/core/src/test/java/org/elasticsearch/common/settings/SettingTests.java b/core/src/test/java/org/elasticsearch/common/settings/SettingTests.java index db73e363f4d21..65d51e126c9f6 100644 --- a/core/src/test/java/org/elasticsearch/common/settings/SettingTests.java +++ b/core/src/test/java/org/elasticsearch/common/settings/SettingTests.java @@ -337,7 +337,7 @@ public void testGroups() { Settings.EMPTY); fail("not accepted"); } catch (IllegalArgumentException ex) { - assertEquals(ex.getMessage(), "illegal value can't update [foo.bar.] from [{}] to [{1.value=1, 2.value=2}]"); + assertEquals(ex.getMessage(), "illegal value can't update [foo.bar.] 
from [{}] to [{\"1.value\":\"1\",\"2.value\":\"2\"}]"); } } @@ -359,6 +359,12 @@ public void set(Integer a, Integer b) { this.a = a; this.b = b; } + + public void validate(Integer a, Integer b) { + if (Integer.signum(a) != Integer.signum(b)) { + throw new IllegalArgumentException("boom"); + } + } } @@ -366,7 +372,37 @@ public void testComposite() { Composite c = new Composite(); Setting a = Setting.intSetting("foo.int.bar.a", 1, Property.Dynamic, Property.NodeScope); Setting b = Setting.intSetting("foo.int.bar.b", 1, Property.Dynamic, Property.NodeScope); - ClusterSettings.SettingUpdater> settingUpdater = Setting.compoundUpdater(c::set, a, b, logger); + ClusterSettings.SettingUpdater> settingUpdater = Setting.compoundUpdater(c::set, c::validate, a, b, logger); + assertFalse(settingUpdater.apply(Settings.EMPTY, Settings.EMPTY)); + assertNull(c.a); + assertNull(c.b); + + Settings build = Settings.builder().put("foo.int.bar.a", 2).build(); + assertTrue(settingUpdater.apply(build, Settings.EMPTY)); + assertEquals(2, c.a.intValue()); + assertEquals(1, c.b.intValue()); + + Integer aValue = c.a; + assertFalse(settingUpdater.apply(build, build)); + assertSame(aValue, c.a); + Settings previous = build; + build = Settings.builder().put("foo.int.bar.a", 2).put("foo.int.bar.b", 5).build(); + assertTrue(settingUpdater.apply(build, previous)); + assertEquals(2, c.a.intValue()); + assertEquals(5, c.b.intValue()); + + // reset to default + assertTrue(settingUpdater.apply(Settings.EMPTY, build)); + assertEquals(1, c.a.intValue()); + assertEquals(1, c.b.intValue()); + + } + + public void testCompositeValidator() { + Composite c = new Composite(); + Setting a = Setting.intSetting("foo.int.bar.a", 1, Property.Dynamic, Property.NodeScope); + Setting b = Setting.intSetting("foo.int.bar.b", 1, Property.Dynamic, Property.NodeScope); + ClusterSettings.SettingUpdater> settingUpdater = Setting.compoundUpdater(c::set, c::validate, a, b, logger); assertFalse(settingUpdater.apply(Settings.EMPTY, Settings.EMPTY)); assertNull(c.a); assertNull(c.b); @@ -385,6 +421,10 @@ public void testComposite() { assertEquals(2, c.a.intValue()); assertEquals(5, c.b.intValue()); + Settings invalid = Settings.builder().put("foo.int.bar.a", -2).put("foo.int.bar.b", 5).build(); + IllegalArgumentException exc = expectThrows(IllegalArgumentException.class, () -> settingUpdater.apply(invalid, previous)); + assertThat(exc.getMessage(), equalTo("boom")); + // reset to default assertTrue(settingUpdater.apply(Settings.EMPTY, build)); assertEquals(1, c.a.intValue()); @@ -401,7 +441,7 @@ public void testListSettings() { assertEquals("foo,bar", value.get(0)); List input = Arrays.asList("test", "test1, test2", "test", ",,,,"); - Settings.Builder builder = Settings.builder().putArray("foo.bar", input.toArray(new String[0])); + Settings.Builder builder = Settings.builder().putList("foo.bar", input.toArray(new String[0])); assertTrue(listSetting.exists(builder.build())); value = listSetting.get(builder.build()); assertEquals(input.size(), value.size()); @@ -424,11 +464,11 @@ public void testListSettings() { assertEquals(input.size(), ref.get().size()); assertArrayEquals(ref.get().toArray(new String[0]), input.toArray(new String[0])); - settingUpdater.apply(Settings.builder().putArray("foo.bar", "123").build(), builder.build()); + settingUpdater.apply(Settings.builder().putList("foo.bar", "123").build(), builder.build()); assertEquals(1, ref.get().size()); assertArrayEquals(ref.get().toArray(new String[0]), new String[] {"123"}); - 
settingUpdater.apply(Settings.builder().put("foo.bar", "1,2,3").build(), Settings.builder().putArray("foo.bar", "123").build()); + settingUpdater.apply(Settings.builder().put("foo.bar", "1,2,3").build(), Settings.builder().putList("foo.bar", "123").build()); assertEquals(3, ref.get().size()); assertArrayEquals(ref.get().toArray(new String[0]), new String[] {"1", "2", "3"}); @@ -452,17 +492,17 @@ public void testListSettings() { assertEquals(1, value.size()); assertEquals("foo,bar", value.get(0)); - value = settingWithFallback.get(Settings.builder().putArray("foo.bar", "1", "2").build()); + value = settingWithFallback.get(Settings.builder().putList("foo.bar", "1", "2").build()); assertEquals(2, value.size()); assertEquals("1", value.get(0)); assertEquals("2", value.get(1)); - value = settingWithFallback.get(Settings.builder().putArray("foo.baz", "3", "4").build()); + value = settingWithFallback.get(Settings.builder().putList("foo.baz", "3", "4").build()); assertEquals(2, value.size()); assertEquals("3", value.get(0)); assertEquals("4", value.get(1)); - value = settingWithFallback.get(Settings.builder().putArray("foo.baz", "3", "4").putArray("foo.bar", "1", "2").build()); + value = settingWithFallback.get(Settings.builder().putList("foo.baz", "3", "4").putList("foo.bar", "1", "2").build()); assertEquals(2, value.size()); assertEquals("3", value.get(0)); assertEquals("4", value.get(1)); @@ -472,13 +512,13 @@ public void testListSettingAcceptsNumberSyntax() { Setting> listSetting = Setting.listSetting("foo.bar", Arrays.asList("foo,bar"), (s) -> s.toString(), Property.Dynamic, Property.NodeScope); List input = Arrays.asList("test", "test1, test2", "test", ",,,,"); - Settings.Builder builder = Settings.builder().putArray("foo.bar", input.toArray(new String[0])); + Settings.Builder builder = Settings.builder().putList("foo.bar", input.toArray(new String[0])); // try to parse this really annoying format - for (String key : builder.internalMap().keySet()) { + for (String key : builder.keys()) { assertTrue("key: " + key + " doesn't match", listSetting.match(key)); } builder = Settings.builder().put("foo.bar", "1,2,3"); - for (String key : builder.internalMap().keySet()) { + for (String key : builder.keys()) { assertTrue("key: " + key + " doesn't match", listSetting.match(key)); } assertFalse(listSetting.match("foo_bar")); @@ -537,16 +577,35 @@ public void testAffixKeySetting() { assertFalse(listAffixSetting.match("foo")); } + public void testAffixAsMap() { + Setting.AffixSetting setting = Setting.prefixKeySetting("foo.bar.", key -> + Setting.simpleString(key, Property.NodeScope)); + Settings build = Settings.builder().put("foo.bar.baz", 2).put("foo.bar.foobar", 3).build(); + Map asMap = setting.getAsMap(build); + assertEquals(2, asMap.size()); + assertEquals("2", asMap.get("baz")); + assertEquals("3", asMap.get("foobar")); + + setting = Setting.prefixKeySetting("foo.bar.", key -> + Setting.simpleString(key, Property.NodeScope)); + build = Settings.builder().put("foo.bar.baz", 2).put("foo.bar.foobar", 3).put("foo.bar.baz.deep", 45).build(); + asMap = setting.getAsMap(build); + assertEquals(3, asMap.size()); + assertEquals("2", asMap.get("baz")); + assertEquals("3", asMap.get("foobar")); + assertEquals("45", asMap.get("baz.deep")); + } + public void testGetAllConcreteSettings() { Setting.AffixSetting> listAffixSetting = Setting.affixKeySetting("foo.", "bar", (key) -> Setting.listSetting(key, Collections.emptyList(), Function.identity(), Property.NodeScope)); Settings settings = Settings.builder() - 
.putArray("foo.1.bar", "1", "2") - .putArray("foo.2.bar", "3", "4", "5") - .putArray("foo.bar", "6") - .putArray("some.other", "6") - .putArray("foo.3.bar", "6") + .putList("foo.1.bar", "1", "2") + .putList("foo.2.bar", "3", "4", "5") + .putList("foo.bar", "6") + .putList("some.other", "6") + .putList("foo.3.bar", "6") .build(); Stream>> allConcreteSettings = listAffixSetting.getAllConcreteSettings(settings); Map> collect = allConcreteSettings.collect(Collectors.toMap(Setting::getKey, (s) -> s.get(settings))); diff --git a/core/src/test/java/org/elasticsearch/common/settings/SettingsFilterTests.java b/core/src/test/java/org/elasticsearch/common/settings/SettingsFilterTests.java index 93f745d290e5c..9e6d4be7095f0 100644 --- a/core/src/test/java/org/elasticsearch/common/settings/SettingsFilterTests.java +++ b/core/src/test/java/org/elasticsearch/common/settings/SettingsFilterTests.java @@ -47,7 +47,7 @@ public void testSettingsFiltering() throws IOException { .put("bar1", "bar1_test") .put("bar.2", "bar2_test") .build(), - Settings.builder() + Settings.builder() .put("foo1", "foo1_test") .build(), "foo", "bar*" @@ -108,7 +108,7 @@ private void testFiltering(Settings source, Settings filtered, String... pattern // Test using direct filtering Settings filteredSettings = settingsFilter.filter(source); - assertThat(filteredSettings.getAsMap().entrySet(), equalTo(filtered.getAsMap().entrySet())); + assertThat(filteredSettings, equalTo(filtered)); // Test using toXContent filtering RestRequest request = new FakeRestRequest(); @@ -119,6 +119,6 @@ private void testFiltering(Settings source, Settings filtered, String... pattern xContentBuilder.endObject(); String filteredSettingsString = xContentBuilder.string(); filteredSettings = Settings.builder().loadFromSource(filteredSettingsString, xContentBuilder.contentType()).build(); - assertThat(filteredSettings.getAsMap().entrySet(), equalTo(filtered.getAsMap().entrySet())); + assertThat(filteredSettings, equalTo(filtered)); } } diff --git a/core/src/test/java/org/elasticsearch/common/settings/SettingsModuleTests.java b/core/src/test/java/org/elasticsearch/common/settings/SettingsModuleTests.java index a1c2711e5acc2..6a2be8217a661 100644 --- a/core/src/test/java/org/elasticsearch/common/settings/SettingsModuleTests.java +++ b/core/src/test/java/org/elasticsearch/common/settings/SettingsModuleTests.java @@ -113,8 +113,8 @@ public void testRegisterSettingsFilter() { Setting.boolSetting("bar.baz", true, Property.NodeScope)), Arrays.asList("foo.*")); assertInstanceBinding(module, Settings.class, (s) -> s == settings); assertInstanceBinding(module, SettingsFilter.class, (s) -> s.filter(settings).size() == 1); - assertInstanceBinding(module, SettingsFilter.class, (s) -> s.filter(settings).getAsMap().containsKey("bar.baz")); - assertInstanceBinding(module, SettingsFilter.class, (s) -> s.filter(settings).getAsMap().get("bar.baz").equals("false")); + assertInstanceBinding(module, SettingsFilter.class, (s) -> s.filter(settings).keySet().contains("bar.baz")); + assertInstanceBinding(module, SettingsFilter.class, (s) -> s.filter(settings).get("bar.baz").equals("false")); } diff --git a/core/src/test/java/org/elasticsearch/common/settings/SettingsTests.java b/core/src/test/java/org/elasticsearch/common/settings/SettingsTests.java index e386c35229951..42cb0f1e3e7e3 100644 --- a/core/src/test/java/org/elasticsearch/common/settings/SettingsTests.java +++ b/core/src/test/java/org/elasticsearch/common/settings/SettingsTests.java @@ -19,26 +19,33 @@ package 
org.elasticsearch.common.settings; +import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.Version; -import org.elasticsearch.common.Booleans; +import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.BytesStreamOutput; import org.elasticsearch.common.io.stream.StreamInput; -import org.elasticsearch.common.logging.DeprecationLogger; -import org.elasticsearch.common.logging.ESLoggerFactory; -import org.elasticsearch.common.settings.loader.YamlSettingsLoader; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContent; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.test.ESTestCase; -import org.hamcrest.Matchers; +import org.elasticsearch.test.VersionUtils; +import org.hamcrest.CoreMatchers; +import java.io.ByteArrayInputStream; import java.io.IOException; -import java.util.ArrayList; +import java.nio.charset.StandardCharsets; +import java.nio.file.Files; +import java.nio.file.Path; +import java.util.Arrays; +import java.util.Collections; +import java.util.HashMap; import java.util.Iterator; -import java.util.List; import java.util.Map; import java.util.NoSuchElementException; import java.util.Set; -import static org.hamcrest.Matchers.allOf; -import static org.hamcrest.Matchers.arrayContaining; import static org.hamcrest.Matchers.contains; import static org.hamcrest.Matchers.containsInAnyOrder; import static org.hamcrest.Matchers.containsString; @@ -90,55 +97,6 @@ public void testReplacePropertiesPlaceholderIgnoresPrompt() { assertThat(settings.get("setting2"), is("${prompt.secret}")); } - public void testUnFlattenedSettings() { - Settings settings = Settings.builder() - .put("foo", "abc") - .put("bar", "def") - .put("baz.foo", "ghi") - .put("baz.bar", "jkl") - .putArray("baz.arr", "a", "b", "c") - .build(); - Map map = settings.getAsStructuredMap(); - assertThat(map.keySet(), Matchers.hasSize(3)); - assertThat(map, allOf( - Matchers.hasEntry("foo", "abc"), - Matchers.hasEntry("bar", "def"))); - - @SuppressWarnings("unchecked") Map bazMap = (Map) map.get("baz"); - assertThat(bazMap.keySet(), Matchers.hasSize(3)); - assertThat(bazMap, allOf( - Matchers.hasEntry("foo", "ghi"), - Matchers.hasEntry("bar", "jkl"))); - @SuppressWarnings("unchecked") List bazArr = (List) bazMap.get("arr"); - assertThat(bazArr, contains("a", "b", "c")); - - } - - public void testFallbackToFlattenedSettings() { - Settings settings = Settings.builder() - .put("foo", "abc") - .put("foo.bar", "def") - .put("foo.baz", "ghi").build(); - Map map = settings.getAsStructuredMap(); - assertThat(map.keySet(), Matchers.hasSize(3)); - assertThat(map, allOf( - Matchers.hasEntry("foo", "abc"), - Matchers.hasEntry("foo.bar", "def"), - Matchers.hasEntry("foo.baz", "ghi"))); - - settings = Settings.builder() - .put("foo.bar", "def") - .put("foo", "abc") - .put("foo.baz", "ghi") - .build(); - map = settings.getAsStructuredMap(); - assertThat(map.keySet(), Matchers.hasSize(3)); - assertThat(map, allOf( - Matchers.hasEntry("foo", "abc"), - Matchers.hasEntry("foo.bar", "def"), - Matchers.hasEntry("foo.baz", "ghi"))); - } - public void testGetAsSettings() { Settings settings = Settings.builder() .put("bar", "hello world") @@ -153,73 +111,6 @@ public void testGetAsSettings() { assertThat(fooSettings.get("baz"), equalTo("ghi")); } - @SuppressWarnings("deprecation") 
//#getAsBooleanLenientForPreEs6Indices is the test subject - public void testLenientBooleanForPreEs6Index() throws IOException { - // time to say goodbye? - // norelease: do what the assumption tells us - assumeTrue( - "It's time to implement #22298. Please delete this test and Settings#getAsBooleanLenientForPreEs6Indices().", - Version.CURRENT.minimumCompatibilityVersion().before(Version.V_6_0_0_alpha1)); - - - String falsy = randomFrom("false", "off", "no", "0"); - String truthy = randomFrom("true", "on", "yes", "1"); - - Settings settings = Settings.builder() - .put("foo", falsy) - .put("bar", truthy).build(); - - final DeprecationLogger deprecationLogger = new DeprecationLogger(ESLoggerFactory.getLogger("testLenientBooleanForPreEs6Index")); - - assertFalse(settings.getAsBooleanLenientForPreEs6Indices(Version.V_5_0_0, "foo", null, deprecationLogger)); - assertTrue(settings.getAsBooleanLenientForPreEs6Indices(Version.V_5_0_0, "bar", null, deprecationLogger)); - assertTrue(settings.getAsBooleanLenientForPreEs6Indices(Version.V_5_0_0, "baz", true, deprecationLogger)); - - List expectedDeprecationWarnings = new ArrayList<>(); - if (Booleans.isBoolean(falsy) == false) { - expectedDeprecationWarnings.add( - "The value [" + falsy + "] of setting [foo] is not coerced into boolean anymore. Please change this value to [false]."); - } - if (Booleans.isBoolean(truthy) == false) { - expectedDeprecationWarnings.add( - "The value [" + truthy + "] of setting [bar] is not coerced into boolean anymore. Please change this value to [true]."); - } - - if (expectedDeprecationWarnings.isEmpty() == false) { - assertWarnings(expectedDeprecationWarnings.toArray(new String[1])); - } - } - - @SuppressWarnings("deprecation") //#getAsBooleanLenientForPreEs6Indices is the test subject - public void testInvalidLenientBooleanForCurrentIndexVersion() { - String falsy = randomFrom("off", "no", "0"); - String truthy = randomFrom("on", "yes", "1"); - - Settings settings = Settings.builder() - .put("foo", falsy) - .put("bar", truthy).build(); - - final DeprecationLogger deprecationLogger = - new DeprecationLogger(ESLoggerFactory.getLogger("testInvalidLenientBooleanForCurrentIndexVersion")); - expectThrows(IllegalArgumentException.class, - () -> settings.getAsBooleanLenientForPreEs6Indices(Version.CURRENT, "foo", null, deprecationLogger)); - expectThrows(IllegalArgumentException.class, - () -> settings.getAsBooleanLenientForPreEs6Indices(Version.CURRENT, "bar", null, deprecationLogger)); - } - - @SuppressWarnings("deprecation") //#getAsBooleanLenientForPreEs6Indices is the test subject - public void testValidLenientBooleanForCurrentIndexVersion() { - Settings settings = Settings.builder() - .put("foo", "false") - .put("bar", "true").build(); - - final DeprecationLogger deprecationLogger = - new DeprecationLogger(ESLoggerFactory.getLogger("testValidLenientBooleanForCurrentIndexVersion")); - assertFalse(settings.getAsBooleanLenientForPreEs6Indices(Version.CURRENT, "foo", null, deprecationLogger)); - assertTrue(settings.getAsBooleanLenientForPreEs6Indices(Version.CURRENT, "bar", null, deprecationLogger)); - assertTrue(settings.getAsBooleanLenientForPreEs6Indices(Version.CURRENT, "baz", true, deprecationLogger)); - } - public void testMultLevelGetPrefix() { Settings settings = Settings.builder() .put("1.2.3", "hello world") @@ -273,105 +164,103 @@ public void testNames() { public void testThatArraysAreOverriddenCorrectly() throws IOException { // overriding a single value with an array Settings settings = Settings.builder() - 
.put(Settings.builder().putArray("value", "1").build()) - .put(Settings.builder().putArray("value", "2", "3").build()) + .put(Settings.builder().putList("value", "1").build()) + .put(Settings.builder().putList("value", "2", "3").build()) .build(); - assertThat(settings.getAsArray("value"), arrayContaining("2", "3")); + assertThat(settings.getAsList("value"), contains("2", "3")); settings = Settings.builder() .put(Settings.builder().put("value", "1").build()) - .put(Settings.builder().putArray("value", "2", "3").build()) + .put(Settings.builder().putList("value", "2", "3").build()) .build(); - assertThat(settings.getAsArray("value"), arrayContaining("2", "3")); - - settings = Settings.builder() - .put(new YamlSettingsLoader(false).load("value: 1")) - .put(new YamlSettingsLoader(false).load("value: [ 2, 3 ]")) - .build(); - assertThat(settings.getAsArray("value"), arrayContaining("2", "3")); + assertThat(settings.getAsList("value"), contains("2", "3")); + settings = Settings.builder().loadFromSource("value: 1", XContentType.YAML) + .loadFromSource("value: [ 2, 3 ]", XContentType.YAML) + .build(); + assertThat(settings.getAsList("value"), contains("2", "3")); settings = Settings.builder() .put(Settings.builder().put("value.with.deep.key", "1").build()) - .put(Settings.builder().putArray("value.with.deep.key", "2", "3").build()) + .put(Settings.builder().putList("value.with.deep.key", "2", "3").build()) .build(); - assertThat(settings.getAsArray("value.with.deep.key"), arrayContaining("2", "3")); + assertThat(settings.getAsList("value.with.deep.key"), contains("2", "3")); // overriding an array with a shorter array settings = Settings.builder() - .put(Settings.builder().putArray("value", "1", "2").build()) - .put(Settings.builder().putArray("value", "3").build()) + .put(Settings.builder().putList("value", "1", "2").build()) + .put(Settings.builder().putList("value", "3").build()) .build(); - assertThat(settings.getAsArray("value"), arrayContaining("3")); + assertThat(settings.getAsList("value"), contains("3")); settings = Settings.builder() - .put(Settings.builder().putArray("value", "1", "2", "3").build()) - .put(Settings.builder().putArray("value", "4", "5").build()) + .put(Settings.builder().putList("value", "1", "2", "3").build()) + .put(Settings.builder().putList("value", "4", "5").build()) .build(); - assertThat(settings.getAsArray("value"), arrayContaining("4", "5")); + assertThat(settings.getAsList("value"), contains("4", "5")); settings = Settings.builder() - .put(Settings.builder().putArray("value.deep.key", "1", "2", "3").build()) - .put(Settings.builder().putArray("value.deep.key", "4", "5").build()) + .put(Settings.builder().putList("value.deep.key", "1", "2", "3").build()) + .put(Settings.builder().putList("value.deep.key", "4", "5").build()) .build(); - assertThat(settings.getAsArray("value.deep.key"), arrayContaining("4", "5")); + assertThat(settings.getAsList("value.deep.key"), contains("4", "5")); // overriding an array with a longer array settings = Settings.builder() - .put(Settings.builder().putArray("value", "1", "2").build()) - .put(Settings.builder().putArray("value", "3", "4", "5").build()) + .put(Settings.builder().putList("value", "1", "2").build()) + .put(Settings.builder().putList("value", "3", "4", "5").build()) .build(); - assertThat(settings.getAsArray("value"), arrayContaining("3", "4", "5")); + assertThat(settings.getAsList("value"), contains("3", "4", "5")); settings = Settings.builder() - .put(Settings.builder().putArray("value.deep.key", "1", "2", 
"3").build()) - .put(Settings.builder().putArray("value.deep.key", "4", "5").build()) + .put(Settings.builder().putList("value.deep.key", "1", "2", "3").build()) + .put(Settings.builder().putList("value.deep.key", "4", "5").build()) .build(); - assertThat(settings.getAsArray("value.deep.key"), arrayContaining("4", "5")); + assertThat(settings.getAsList("value.deep.key"), contains("4", "5")); // overriding an array with a single value settings = Settings.builder() - .put(Settings.builder().putArray("value", "1", "2").build()) + .put(Settings.builder().putList("value", "1", "2").build()) .put(Settings.builder().put("value", "3").build()) .build(); - assertThat(settings.getAsArray("value"), arrayContaining("3")); + assertThat(settings.getAsList("value"), contains("3")); settings = Settings.builder() - .put(Settings.builder().putArray("value.deep.key", "1", "2").build()) + .put(Settings.builder().putList("value.deep.key", "1", "2").build()) .put(Settings.builder().put("value.deep.key", "3").build()) .build(); - assertThat(settings.getAsArray("value.deep.key"), arrayContaining("3")); + assertThat(settings.getAsList("value.deep.key"), contains("3")); // test that other arrays are not overridden settings = Settings.builder() - .put(Settings.builder().putArray("value", "1", "2", "3").putArray("a", "b", "c").build()) - .put(Settings.builder().putArray("value", "4", "5").putArray("d", "e", "f").build()) + .put(Settings.builder().putList("value", "1", "2", "3").putList("a", "b", "c").build()) + .put(Settings.builder().putList("value", "4", "5").putList("d", "e", "f").build()) .build(); - assertThat(settings.getAsArray("value"), arrayContaining("4", "5")); - assertThat(settings.getAsArray("a"), arrayContaining("b", "c")); - assertThat(settings.getAsArray("d"), arrayContaining("e", "f")); + assertThat(settings.getAsList("value"), contains("4", "5")); + assertThat(settings.getAsList("a"), contains("b", "c")); + assertThat(settings.getAsList("d"), contains("e", "f")); settings = Settings.builder() - .put(Settings.builder().putArray("value.deep.key", "1", "2", "3").putArray("a", "b", "c").build()) - .put(Settings.builder().putArray("value.deep.key", "4", "5").putArray("d", "e", "f").build()) + .put(Settings.builder().putList("value.deep.key", "1", "2", "3").putList("a", "b", "c").build()) + .put(Settings.builder().putList("value.deep.key", "4", "5").putList("d", "e", "f").build()) .build(); - assertThat(settings.getAsArray("value.deep.key"), arrayContaining("4", "5")); - assertThat(settings.getAsArray("a"), notNullValue()); - assertThat(settings.getAsArray("d"), notNullValue()); + assertThat(settings.getAsList("value.deep.key"), contains("4", "5")); + assertThat(settings.getAsList("a"), notNullValue()); + assertThat(settings.getAsList("d"), notNullValue()); // overriding a deeper structure with an array settings = Settings.builder() .put(Settings.builder().put("value.data", "1").build()) - .put(Settings.builder().putArray("value", "4", "5").build()) + .put(Settings.builder().putList("value", "4", "5").build()) .build(); - assertThat(settings.getAsArray("value"), arrayContaining("4", "5")); + assertThat(settings.getAsList("value"), contains("4", "5")); // overriding an array with a deeper structure settings = Settings.builder() - .put(Settings.builder().putArray("value", "4", "5").build()) + .put(Settings.builder().putList("value", "4", "5").build()) .put(Settings.builder().put("value.data", "1").build()) .build(); assertThat(settings.get("value.data"), is("1")); - assertThat(settings.get("value"), 
is(nullValue())); + assertThat(settings.get("value"), is("[4, 5]")); } public void testPrefixNormalization() { @@ -419,34 +308,27 @@ public void testFilteredMap() { builder.put("a.b.c.d", "ab3"); - Map fiteredMap = builder.build().filter((k) -> k.startsWith("a.b")).getAsMap(); - assertEquals(3, fiteredMap.size()); + Settings filteredSettings = builder.build().filter((k) -> k.startsWith("a.b")); + assertEquals(3, filteredSettings.size()); int numKeys = 0; - for (String k : fiteredMap.keySet()) { + for (String k : filteredSettings.keySet()) { numKeys++; assertTrue(k.startsWith("a.b")); } assertEquals(3, numKeys); - int numValues = 0; - - for (String v : fiteredMap.values()) { - numValues++; - assertTrue(v.startsWith("ab")); - } - assertEquals(3, numValues); - assertFalse(fiteredMap.containsKey("a.c")); - assertFalse(fiteredMap.containsKey("a")); - assertTrue(fiteredMap.containsKey("a.b")); - assertTrue(fiteredMap.containsKey("a.b.c")); - assertTrue(fiteredMap.containsKey("a.b.c.d")); + assertFalse(filteredSettings.keySet().contains("a.c")); + assertFalse(filteredSettings.keySet().contains("a")); + assertTrue(filteredSettings.keySet().contains("a.b")); + assertTrue(filteredSettings.keySet().contains("a.b.c")); + assertTrue(filteredSettings.keySet().contains("a.b.c.d")); expectThrows(UnsupportedOperationException.class, () -> - fiteredMap.remove("a.b")); - assertEquals("ab1", fiteredMap.get("a.b")); - assertEquals("ab2", fiteredMap.get("a.b.c")); - assertEquals("ab3", fiteredMap.get("a.b.c.d")); + filteredSettings.keySet().remove("a.b")); + assertEquals("ab1", filteredSettings.get("a.b")); + assertEquals("ab2", filteredSettings.get("a.b.c")); + assertEquals("ab3", filteredSettings.get("a.b.c.d")); - Iterator iterator = fiteredMap.keySet().iterator(); + Iterator iterator = filteredSettings.keySet().iterator(); for (int i = 0; i < 10; i++) { assertTrue(iterator.hasNext()); } @@ -472,7 +354,7 @@ public void testPrefixMap() { builder.put("a.c", "ac1"); builder.put("a.b.c.d", "ab3"); - Map prefixMap = builder.build().getByPrefix("a.").getAsMap(); + Settings prefixMap = builder.build().getByPrefix("a."); assertEquals(4, prefixMap.size()); int numKeys = 0; for (String k : prefixMap.keySet()) { @@ -481,20 +363,14 @@ public void testPrefixMap() { } assertEquals(4, numKeys); - int numValues = 0; - for (String v : prefixMap.values()) { - numValues++; - assertTrue(v, v.startsWith("ab") || v.startsWith("ac")); - } - assertEquals(4, numValues); - assertFalse(prefixMap.containsKey("a")); - assertTrue(prefixMap.containsKey("c")); - assertTrue(prefixMap.containsKey("b")); - assertTrue(prefixMap.containsKey("b.c")); - assertTrue(prefixMap.containsKey("b.c.d")); + assertFalse(prefixMap.keySet().contains("a")); + assertTrue(prefixMap.keySet().contains("c")); + assertTrue(prefixMap.keySet().contains("b")); + assertTrue(prefixMap.keySet().contains("b.c")); + assertTrue(prefixMap.keySet().contains("b.c.d")); expectThrows(UnsupportedOperationException.class, () -> - prefixMap.remove("a.b")); + prefixMap.keySet().remove("a.b")); assertEquals("ab1", prefixMap.get("b")); assertEquals("ab2", prefixMap.get("b.c")); assertEquals("ab3", prefixMap.get("b.c.d")); @@ -560,27 +436,24 @@ public void testEmptyFilterMap() { builder.put("a.c", "ac1"); builder.put("a.b.c.d", "ab3"); - Map fiteredMap = builder.build().filter((k) -> false).getAsMap(); - assertEquals(0, fiteredMap.size()); - for (String k : fiteredMap.keySet()) { + Settings filteredSettings = builder.build().filter((k) -> false); + assertEquals(0, 
filteredSettings.size()); + for (String k : filteredSettings.keySet()) { fail("no element"); } - for (String v : fiteredMap.values()) { - fail("no element"); - } - assertFalse(fiteredMap.containsKey("a.c")); - assertFalse(fiteredMap.containsKey("a")); - assertFalse(fiteredMap.containsKey("a.b")); - assertFalse(fiteredMap.containsKey("a.b.c")); - assertFalse(fiteredMap.containsKey("a.b.c.d")); + assertFalse(filteredSettings.keySet().contains("a.c")); + assertFalse(filteredSettings.keySet().contains("a")); + assertFalse(filteredSettings.keySet().contains("a.b")); + assertFalse(filteredSettings.keySet().contains("a.b.c")); + assertFalse(filteredSettings.keySet().contains("a.b.c.d")); expectThrows(UnsupportedOperationException.class, () -> - fiteredMap.remove("a.b")); - assertNull(fiteredMap.get("a.b")); - assertNull(fiteredMap.get("a.b.c")); - assertNull(fiteredMap.get("a.b.c.d")); + filteredSettings.keySet().remove("a.b")); + assertNull(filteredSettings.get("a.b")); + assertNull(filteredSettings.get("a.b.c")); + assertNull(filteredSettings.get("a.b.c.d")); - Iterator iterator = fiteredMap.keySet().iterator(); + Iterator iterator = filteredSettings.keySet().iterator(); for (int i = 0; i < 10; i++) { assertFalse(iterator.hasNext()); } @@ -602,13 +475,18 @@ public void testWriteSettingsToStream() throws IOException { secureSettings.setString("test.key2.bog", "somethingsecure"); Settings.Builder builder = Settings.builder(); builder.put("test.key1.baz", "blah1"); + builder.putNull("test.key3.bar"); + builder.putList("test.key4.foo", "1", "2"); builder.setSecureSettings(secureSettings); - assertEquals(5, builder.build().size()); + assertEquals(7, builder.build().size()); Settings.writeSettingsToStream(builder.build(), out); StreamInput in = StreamInput.wrap(out.bytes().toBytesRef().bytes); Settings settings = Settings.readSettingsFromStream(in); - assertEquals(1, settings.size()); + assertEquals(3, settings.size()); assertEquals("blah1", settings.get("test.key1.baz")); + assertNull(settings.get("test.key3.bar")); + assertTrue(settings.keySet().contains("test.key3.bar")); + assertEquals(Arrays.asList("1", "2"), settings.getAsList("test.key4.foo")); } public void testSecureSettingConflict() { @@ -619,14 +497,236 @@ public void testSecureSettingConflict() { } public void testGetAsArrayFailsOnDuplicates() { - final Settings settings = - Settings.builder() - .put("foobar.0", "bar") - .put("foobar.1", "baz") - .put("foobar", "foo") - .build(); - final IllegalStateException e = expectThrows(IllegalStateException.class, () -> settings.getAsArray("foobar")); - assertThat(e, hasToString(containsString("settings object contains values for [foobar=foo] and [foobar.0=bar]"))); + final IllegalStateException e = expectThrows(IllegalStateException.class, () -> Settings.builder() + .put("foobar.0", "bar") + .put("foobar.1", "baz") + .put("foobar", "foo") + .build()); + assertThat(e, hasToString(containsString("settings builder can't contain values for [foobar=foo] and [foobar.0=bar]"))); + } + + public void testToAndFromXContent() throws IOException { + Settings settings = Settings.builder() + .putList("foo.bar.baz", "1", "2", "3") + .put("foo.foobar", 2) + .put("rootfoo", "test") + .put("foo.baz", "1,2,3,4") + .putNull("foo.null.baz") + .build(); + final boolean flatSettings = randomBoolean(); + XContentBuilder builder = XContentBuilder.builder(XContentType.JSON.xContent()); + builder.startObject(); + settings.toXContent(builder, new ToXContent.MapParams(Collections.singletonMap("flat_settings", 
""+flatSettings))); + builder.endObject(); + XContentParser parser = createParser(builder); + Settings build = Settings.fromXContent(parser); + assertEquals(5, build.size()); + assertEquals(Arrays.asList("1", "2", "3"), build.getAsList("foo.bar.baz")); + assertEquals(2, build.getAsInt("foo.foobar", 0).intValue()); + assertEquals("test", build.get("rootfoo")); + assertEquals("1,2,3,4", build.get("foo.baz")); + assertNull(build.get("foo.null.baz")); + } + + public void testSimpleJsonSettings() throws Exception { + final String json = "/org/elasticsearch/common/settings/loader/test-settings.json"; + final Settings settings = Settings.builder() + .loadFromStream(json, getClass().getResourceAsStream(json), false) + .build(); + + assertThat(settings.get("test1.value1"), equalTo("value1")); + assertThat(settings.get("test1.test2.value2"), equalTo("value2")); + assertThat(settings.getAsInt("test1.test2.value3", -1), equalTo(2)); + + // check array + assertNull(settings.get("test1.test3.0")); + assertNull(settings.get("test1.test3.1")); + assertThat(settings.getAsList("test1.test3").size(), equalTo(2)); + assertThat(settings.getAsList("test1.test3").get(0), equalTo("test3-1")); + assertThat(settings.getAsList("test1.test3").get(1), equalTo("test3-2")); + } + + public void testDuplicateKeysThrowsException() { + assumeFalse("Test only makes sense if XContent parser doesn't have strict duplicate checks enabled", + XContent.isStrictDuplicateDetectionEnabled()); + final String json = "{\"foo\":\"bar\",\"foo\":\"baz\"}"; + final SettingsException e = expectThrows(SettingsException.class, + () -> Settings.builder().loadFromSource(json, XContentType.JSON).build()); + assertThat( + e.toString(), + CoreMatchers.containsString("duplicate settings key [foo] " + + "found at line number [1], " + + "column number [20], " + + "previous value [bar], " + + "current value [baz]")); + + String yaml = "foo: bar\nfoo: baz"; + SettingsException e1 = expectThrows(SettingsException.class, () -> { + Settings.builder().loadFromSource(yaml, XContentType.YAML); + }); + assertEquals(e1.getCause().getClass(), ElasticsearchParseException.class); + String msg = e1.getCause().getMessage(); + assertTrue( + msg, + msg.contains("duplicate settings key [foo] found at line number [2], column number [6], " + + "previous value [bar], current value [baz]")); + } + + public void testToXContent() throws IOException { + // this is just terrible but it's the existing behavior! 
+ Settings test = Settings.builder().putList("foo.bar", "1", "2", "3").put("foo.bar.baz", "test").build(); + XContentBuilder builder = XContentBuilder.builder(XContentType.JSON.xContent()); + builder.startObject(); + test.toXContent(builder, new ToXContent.MapParams(Collections.emptyMap())); + builder.endObject(); + assertEquals("{\"foo\":{\"bar.baz\":\"test\",\"bar\":[\"1\",\"2\",\"3\"]}}", builder.string()); + + test = Settings.builder().putList("foo.bar", "1", "2", "3").build(); + builder = XContentBuilder.builder(XContentType.JSON.xContent()); + builder.startObject(); + test.toXContent(builder, new ToXContent.MapParams(Collections.emptyMap())); + builder.endObject(); + assertEquals("{\"foo\":{\"bar\":[\"1\",\"2\",\"3\"]}}", builder.string()); + + builder = XContentBuilder.builder(XContentType.JSON.xContent()); + builder.startObject(); + test.toXContent(builder, new ToXContent.MapParams(Collections.singletonMap("flat_settings", "true"))); + builder.endObject(); + assertEquals("{\"foo.bar\":[\"1\",\"2\",\"3\"]}", builder.string()); + } + + public void testLoadEmptyStream() throws IOException { + Settings test = Settings.builder().loadFromStream(randomFrom("test.json", "test.yml"), new ByteArrayInputStream(new byte[0]), false) + .build(); + assertEquals(0, test.size()); + } + + public void testSimpleYamlSettings() throws Exception { + final String yaml = "/org/elasticsearch/common/settings/loader/test-settings.yml"; + final Settings settings = Settings.builder() + .loadFromStream(yaml, getClass().getResourceAsStream(yaml), false) + .build(); + + assertThat(settings.get("test1.value1"), equalTo("value1")); + assertThat(settings.get("test1.test2.value2"), equalTo("value2")); + assertThat(settings.getAsInt("test1.test2.value3", -1), equalTo(2)); + + // check array + assertNull(settings.get("test1.test3.0")); + assertNull(settings.get("test1.test3.1")); + assertThat(settings.getAsList("test1.test3").size(), equalTo(2)); + assertThat(settings.getAsList("test1.test3").get(0), equalTo("test3-1")); + assertThat(settings.getAsList("test1.test3").get(1), equalTo("test3-2")); + } + + public void testYamlLegacyList() throws IOException { + Settings settings = Settings.builder() + .loadFromStream("foo.yml", new ByteArrayInputStream("foo.bar.baz.0: 1\nfoo.bar.baz.1: 2".getBytes(StandardCharsets.UTF_8)), + false).build(); + assertThat(settings.getAsList("foo.bar.baz").size(), equalTo(2)); + assertThat(settings.getAsList("foo.bar.baz").get(0), equalTo("1")); + assertThat(settings.getAsList("foo.bar.baz").get(1), equalTo("2")); + } + + public void testIndentation() throws Exception { + String yaml = "/org/elasticsearch/common/settings/loader/indentation-settings.yml"; + ElasticsearchParseException e = expectThrows(ElasticsearchParseException.class, () -> { + Settings.builder().loadFromStream(yaml, getClass().getResourceAsStream(yaml), false); + }); + assertTrue(e.getMessage(), e.getMessage().contains("malformed")); + } + + public void testIndentationWithExplicitDocumentStart() throws Exception { + String yaml = "/org/elasticsearch/common/settings/loader/indentation-with-explicit-document-start-settings.yml"; + ElasticsearchParseException e = expectThrows(ElasticsearchParseException.class, () -> { + Settings.builder().loadFromStream(yaml, getClass().getResourceAsStream(yaml), false); + }); + assertTrue(e.getMessage(), e.getMessage().contains("malformed")); + } + + + public void testMissingValue() throws Exception { + Path tmp = createTempFile("test", ".yaml"); + Files.write(tmp, 
Collections.singletonList("foo: # missing value\n"), StandardCharsets.UTF_8); + ElasticsearchParseException e = expectThrows(ElasticsearchParseException.class, () -> { + Settings.builder().loadFromPath(tmp); + }); + assertTrue( + e.getMessage(), + e.getMessage().contains("null-valued setting found for key [foo] found at line number [1], column number [5]")); + } + + public void testReadLegacyFromStream() throws IOException { + BytesStreamOutput output = new BytesStreamOutput(); + output.setVersion(VersionUtils.getPreviousVersion(Version.V_6_1_0)); + output.writeVInt(5); + output.writeString("foo.bar.1"); + output.writeOptionalString("1"); + output.writeString("foo.bar.0"); + output.writeOptionalString("0"); + output.writeString("foo.bar.2"); + output.writeOptionalString("2"); + output.writeString("foo.bar.3"); + output.writeOptionalString("3"); + output.writeString("foo.bar.baz"); + output.writeOptionalString("baz"); + StreamInput in = StreamInput.wrap(BytesReference.toBytes(output.bytes())); + in.setVersion(VersionUtils.getPreviousVersion(Version.V_6_1_0)); + Settings settings = Settings.readSettingsFromStream(in); + assertEquals(2, settings.size()); + assertEquals(Arrays.asList("0", "1", "2", "3"), settings.getAsList("foo.bar")); + assertEquals("baz", settings.get("foo.bar.baz")); } + public void testWriteLegacyOutput() throws IOException { + BytesStreamOutput output = new BytesStreamOutput(); + output.setVersion(VersionUtils.getPreviousVersion(Version.V_6_1_0)); + Settings settings = Settings.builder().putList("foo.bar", "0", "1", "2", "3") + .put("foo.bar.baz", "baz").putNull("foo.null").build(); + Settings.writeSettingsToStream(settings, output); + StreamInput in = StreamInput.wrap(BytesReference.toBytes(output.bytes())); + assertEquals(6, in.readVInt()); + Map keyValues = new HashMap<>(); + for (int i = 0; i < 6; i++){ + keyValues.put(in.readString(), in.readOptionalString()); + } + assertEquals(keyValues.get("foo.bar.0"), "0"); + assertEquals(keyValues.get("foo.bar.1"), "1"); + assertEquals(keyValues.get("foo.bar.2"), "2"); + assertEquals(keyValues.get("foo.bar.3"), "3"); + assertEquals(keyValues.get("foo.bar.baz"), "baz"); + assertTrue(keyValues.containsKey("foo.null")); + assertNull(keyValues.get("foo.null")); + + in = StreamInput.wrap(BytesReference.toBytes(output.bytes())); + in.setVersion(output.getVersion()); + Settings readSettings = Settings.readSettingsFromStream(in); + assertEquals(3, readSettings.size()); + assertEquals(Arrays.asList("0", "1", "2", "3"), readSettings.getAsList("foo.bar")); + assertEquals(readSettings.get("foo.bar.baz"), "baz"); + assertTrue(readSettings.keySet().contains("foo.null")); + assertNull(readSettings.get("foo.null")); + } + + public void testReadWriteArray() throws IOException { + BytesStreamOutput output = new BytesStreamOutput(); + output.setVersion(randomFrom(Version.CURRENT, Version.V_6_1_0)); + Settings settings = Settings.builder().putList("foo.bar", "0", "1", "2", "3").put("foo.bar.baz", "baz").build(); + Settings.writeSettingsToStream(settings, output); + StreamInput in = StreamInput.wrap(BytesReference.toBytes(output.bytes())); + Settings build = Settings.readSettingsFromStream(in); + assertEquals(2, build.size()); + assertEquals(build.getAsList("foo.bar"), Arrays.asList("0", "1", "2", "3")); + assertEquals(build.get("foo.bar.baz"), "baz"); + } + + public void testCopy() { + Settings settings = Settings.builder().putList("foo.bar", "0", "1", "2", "3").put("foo.bar.baz", "baz").putNull("test").build(); + assertEquals(Arrays.asList("0", 
"1", "2", "3"), Settings.builder().copy("foo.bar", settings).build().getAsList("foo.bar")); + assertEquals("baz", Settings.builder().copy("foo.bar.baz", settings).build().get("foo.bar.baz")); + assertNull(Settings.builder().copy("foo.bar.baz", settings).build().get("test")); + assertTrue(Settings.builder().copy("test", settings).build().keySet().contains("test")); + IllegalArgumentException iae = expectThrows(IllegalArgumentException.class, () -> Settings.builder().copy("not_there", settings)); + assertEquals("source key not found in the source settings", iae.getMessage()); + } } diff --git a/core/src/test/java/org/elasticsearch/common/settings/loader/JsonSettingsLoaderTests.java b/core/src/test/java/org/elasticsearch/common/settings/loader/JsonSettingsLoaderTests.java deleted file mode 100644 index fc1300d94138e..0000000000000 --- a/core/src/test/java/org/elasticsearch/common/settings/loader/JsonSettingsLoaderTests.java +++ /dev/null @@ -1,75 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.common.settings.loader; - -import org.elasticsearch.ElasticsearchParseException; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.settings.SettingsException; -import org.elasticsearch.common.xcontent.XContent; -import org.elasticsearch.common.xcontent.XContentType; -import org.elasticsearch.test.ESTestCase; - -import static org.hamcrest.CoreMatchers.containsString; -import static org.hamcrest.Matchers.equalTo; - -public class JsonSettingsLoaderTests extends ESTestCase { - - public void testSimpleJsonSettings() throws Exception { - final String json = "/org/elasticsearch/common/settings/loader/test-settings.json"; - final Settings settings = Settings.builder() - .loadFromStream(json, getClass().getResourceAsStream(json)) - .build(); - - assertThat(settings.get("test1.value1"), equalTo("value1")); - assertThat(settings.get("test1.test2.value2"), equalTo("value2")); - assertThat(settings.getAsInt("test1.test2.value3", -1), equalTo(2)); - - // check array - assertThat(settings.get("test1.test3.0"), equalTo("test3-1")); - assertThat(settings.get("test1.test3.1"), equalTo("test3-2")); - assertThat(settings.getAsArray("test1.test3").length, equalTo(2)); - assertThat(settings.getAsArray("test1.test3")[0], equalTo("test3-1")); - assertThat(settings.getAsArray("test1.test3")[1], equalTo("test3-2")); - } - - public void testDuplicateKeysThrowsException() { - assumeFalse("Test only makes sense if XContent parser doesn't have strict duplicate checks enabled", - XContent.isStrictDuplicateDetectionEnabled()); - final String json = "{\"foo\":\"bar\",\"foo\":\"baz\"}"; - final SettingsException e = expectThrows(SettingsException.class, - () -> Settings.builder().loadFromSource(json, 
XContentType.JSON).build()); - assertEquals(e.getCause().getClass(), ElasticsearchParseException.class); - assertThat( - e.toString(), - containsString("duplicate settings key [foo] " + - "found at line number [1], " + - "column number [20], " + - "previous value [bar], " + - "current value [baz]")); - } - - public void testNullValuedSettingThrowsException() { - final String json = "{\"foo\":null}"; - final ElasticsearchParseException e = - expectThrows(ElasticsearchParseException.class, () -> new JsonSettingsLoader(false).load(json)); - assertThat(e.toString(), containsString("null-valued setting found for key [foo] found at line number [1], column number [8]")); - } - -} diff --git a/core/src/test/java/org/elasticsearch/common/settings/loader/YamlSettingsLoaderTests.java b/core/src/test/java/org/elasticsearch/common/settings/loader/YamlSettingsLoaderTests.java deleted file mode 100644 index e4b4de0ceb616..0000000000000 --- a/core/src/test/java/org/elasticsearch/common/settings/loader/YamlSettingsLoaderTests.java +++ /dev/null @@ -1,98 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.common.settings.loader; - -import org.elasticsearch.ElasticsearchParseException; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.settings.SettingsException; -import org.elasticsearch.common.xcontent.XContent; -import org.elasticsearch.common.xcontent.XContentType; -import org.elasticsearch.test.ESTestCase; - -import java.nio.charset.StandardCharsets; -import java.nio.file.Files; -import java.nio.file.Path; -import java.util.Collections; - -import static org.hamcrest.Matchers.equalTo; - -public class YamlSettingsLoaderTests extends ESTestCase { - - public void testSimpleYamlSettings() throws Exception { - final String yaml = "/org/elasticsearch/common/settings/loader/test-settings.yml"; - final Settings settings = Settings.builder() - .loadFromStream(yaml, getClass().getResourceAsStream(yaml)) - .build(); - - assertThat(settings.get("test1.value1"), equalTo("value1")); - assertThat(settings.get("test1.test2.value2"), equalTo("value2")); - assertThat(settings.getAsInt("test1.test2.value3", -1), equalTo(2)); - - // check array - assertThat(settings.get("test1.test3.0"), equalTo("test3-1")); - assertThat(settings.get("test1.test3.1"), equalTo("test3-2")); - assertThat(settings.getAsArray("test1.test3").length, equalTo(2)); - assertThat(settings.getAsArray("test1.test3")[0], equalTo("test3-1")); - assertThat(settings.getAsArray("test1.test3")[1], equalTo("test3-2")); - } - - public void testIndentation() throws Exception { - String yaml = "/org/elasticsearch/common/settings/loader/indentation-settings.yml"; - ElasticsearchParseException e = expectThrows(ElasticsearchParseException.class, () -> { - Settings.builder().loadFromStream(yaml, getClass().getResourceAsStream(yaml)); - }); - assertTrue(e.getMessage(), e.getMessage().contains("malformed")); - } - - public void testIndentationWithExplicitDocumentStart() throws Exception { - String yaml = "/org/elasticsearch/common/settings/loader/indentation-with-explicit-document-start-settings.yml"; - ElasticsearchParseException e = expectThrows(ElasticsearchParseException.class, () -> { - Settings.builder().loadFromStream(yaml, getClass().getResourceAsStream(yaml)); - }); - assertTrue(e.getMessage(), e.getMessage().contains("malformed")); - } - - public void testDuplicateKeysThrowsException() { - assumeFalse("Test only makes sense if XContent parser doesn't have strict duplicate checks enabled", - XContent.isStrictDuplicateDetectionEnabled()); - - String yaml = "foo: bar\nfoo: baz"; - SettingsException e = expectThrows(SettingsException.class, () -> { - Settings.builder().loadFromSource(yaml, XContentType.YAML); - }); - assertEquals(e.getCause().getClass(), ElasticsearchParseException.class); - String msg = e.getCause().getMessage(); - assertTrue( - msg, - msg.contains("duplicate settings key [foo] found at line number [2], column number [6], " + - "previous value [bar], current value [baz]")); - } - - public void testMissingValue() throws Exception { - Path tmp = createTempFile("test", ".yaml"); - Files.write(tmp, Collections.singletonList("foo: # missing value\n"), StandardCharsets.UTF_8); - ElasticsearchParseException e = expectThrows(ElasticsearchParseException.class, () -> { - Settings.builder().loadFromPath(tmp); - }); - assertTrue( - e.getMessage(), - e.getMessage().contains("null-valued setting found for key [foo] found at line number [1], column number [5]")); - } -} diff --git a/core/src/test/java/org/elasticsearch/common/unit/FuzzinessTests.java 
b/core/src/test/java/org/elasticsearch/common/unit/FuzzinessTests.java index b370250bf9dad..87a9441cb25dd 100644 --- a/core/src/test/java/org/elasticsearch/common/unit/FuzzinessTests.java +++ b/core/src/test/java/org/elasticsearch/common/unit/FuzzinessTests.java @@ -18,6 +18,7 @@ */ package org.elasticsearch.common.unit; +import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.io.stream.BytesStreamOutput; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -49,8 +50,8 @@ public void testParseFromXContent() throws IOException { assertThat(parser.nextToken(), equalTo(XContentParser.Token.START_OBJECT)); assertThat(parser.nextToken(), equalTo(XContentParser.Token.FIELD_NAME)); assertThat(parser.nextToken(), equalTo(XContentParser.Token.VALUE_NUMBER)); - Fuzziness parse = Fuzziness.parse(parser); - assertThat(parse.asFloat(), equalTo(floatValue)); + Fuzziness fuzziness = Fuzziness.parse(parser); + assertThat(fuzziness.asFloat(), equalTo(floatValue)); assertThat(parser.nextToken(), equalTo(XContentParser.Token.END_OBJECT)); } { @@ -67,21 +68,21 @@ public void testParseFromXContent() throws IOException { assertThat(parser.nextToken(), equalTo(XContentParser.Token.START_OBJECT)); assertThat(parser.nextToken(), equalTo(XContentParser.Token.FIELD_NAME)); assertThat(parser.nextToken(), anyOf(equalTo(XContentParser.Token.VALUE_NUMBER), equalTo(XContentParser.Token.VALUE_STRING))); - Fuzziness parse = Fuzziness.parse(parser); + Fuzziness fuzziness = Fuzziness.parse(parser); if (value.intValue() >= 1) { - assertThat(parse.asDistance(), equalTo(Math.min(2, value.intValue()))); + assertThat(fuzziness.asDistance(), equalTo(Math.min(2, value.intValue()))); } assertThat(parser.nextToken(), equalTo(XContentParser.Token.END_OBJECT)); if (intValue.equals(value)) { switch (intValue) { case 1: - assertThat(parse, sameInstance(Fuzziness.ONE)); + assertThat(fuzziness, sameInstance(Fuzziness.ONE)); break; case 2: - assertThat(parse, sameInstance(Fuzziness.TWO)); + assertThat(fuzziness, sameInstance(Fuzziness.TWO)); break; case 0: - assertThat(parse, sameInstance(Fuzziness.ZERO)); + assertThat(fuzziness, sameInstance(Fuzziness.ZERO)); break; default: break; @@ -90,19 +91,26 @@ public void testParseFromXContent() throws IOException { } { XContentBuilder json; - if (randomBoolean()) { + boolean isDefaultAutoFuzzinessTested = randomBoolean(); + if (isDefaultAutoFuzzinessTested) { json = Fuzziness.AUTO.toXContent(jsonBuilder().startObject(), null).endObject(); } else { + String auto = randomBoolean() ? "AUTO" : "auto"; + if (randomBoolean()) { + auto += ":" + randomIntBetween(1, 3) + "," + randomIntBetween(4, 10); + } json = jsonBuilder().startObject() - .field(Fuzziness.X_FIELD_NAME, randomBoolean() ? 
"AUTO" : "auto") - .endObject(); + .field(Fuzziness.X_FIELD_NAME, auto) + .endObject(); } XContentParser parser = createParser(json); assertThat(parser.nextToken(), equalTo(XContentParser.Token.START_OBJECT)); assertThat(parser.nextToken(), equalTo(XContentParser.Token.FIELD_NAME)); assertThat(parser.nextToken(), equalTo(XContentParser.Token.VALUE_STRING)); - Fuzziness parse = Fuzziness.parse(parser); - assertThat(parse, sameInstance(Fuzziness.AUTO)); + Fuzziness fuzziness = Fuzziness.parse(parser); + if (isDefaultAutoFuzzinessTested) { + assertThat(fuzziness, sameInstance(Fuzziness.AUTO)); + } assertThat(parser.nextToken(), equalTo(XContentParser.Token.END_OBJECT)); } } @@ -132,13 +140,30 @@ public void testSerialization() throws IOException { assertEquals(fuzziness, deserializedFuzziness); } - public void testSerializationAuto() throws IOException { + public void testSerializationDefaultAuto() throws IOException { Fuzziness fuzziness = Fuzziness.AUTO; Fuzziness deserializedFuzziness = doSerializeRoundtrip(fuzziness); assertEquals(fuzziness, deserializedFuzziness); assertEquals(fuzziness.asFloat(), deserializedFuzziness.asFloat(), 0f); } + public void testSerializationCustomAuto() throws IOException { + String auto = "AUTO:4,7"; + XContentBuilder json = jsonBuilder().startObject() + .field(Fuzziness.X_FIELD_NAME, auto) + .endObject(); + + XContentParser parser = createParser(json); + assertThat(parser.nextToken(), equalTo(XContentParser.Token.START_OBJECT)); + assertThat(parser.nextToken(), equalTo(XContentParser.Token.FIELD_NAME)); + assertThat(parser.nextToken(), equalTo(XContentParser.Token.VALUE_STRING)); + Fuzziness fuzziness = Fuzziness.parse(parser); + + Fuzziness deserializedFuzziness = doSerializeRoundtrip(fuzziness); + assertEquals(fuzziness, deserializedFuzziness); + assertEquals(fuzziness.asString(), deserializedFuzziness.asString()); + } + private static Fuzziness doSerializeRoundtrip(Fuzziness in) throws IOException { BytesStreamOutput output = new BytesStreamOutput(); in.writeTo(output); diff --git a/core/src/test/java/org/elasticsearch/common/util/LocaleUtilsTests.java b/core/src/test/java/org/elasticsearch/common/util/LocaleUtilsTests.java new file mode 100644 index 0000000000000..9675b225a16b6 --- /dev/null +++ b/core/src/test/java/org/elasticsearch/common/util/LocaleUtilsTests.java @@ -0,0 +1,67 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.common.util; + +import org.elasticsearch.test.ESTestCase; +import org.hamcrest.Matchers; + +import java.util.Locale; + +public class LocaleUtilsTests extends ESTestCase { + + public void testIllegalLang() { + IllegalArgumentException e = expectThrows(IllegalArgumentException.class, + () -> LocaleUtils.parse("yz")); + assertThat(e.getMessage(), Matchers.containsString("Unknown language: yz")); + + e = expectThrows(IllegalArgumentException.class, + () -> LocaleUtils.parse("yz-CA")); + assertThat(e.getMessage(), Matchers.containsString("Unknown language: yz")); + } + + public void testIllegalCountry() { + IllegalArgumentException e = expectThrows(IllegalArgumentException.class, + () -> LocaleUtils.parse("en-YZ")); + assertThat(e.getMessage(), Matchers.containsString("Unknown country: YZ")); + + e = expectThrows(IllegalArgumentException.class, + () -> LocaleUtils.parse("en-YZ-foobar")); + assertThat(e.getMessage(), Matchers.containsString("Unknown country: YZ")); + } + + public void testIllegalNumberOfParts() { + IllegalArgumentException e = expectThrows(IllegalArgumentException.class, + () -> LocaleUtils.parse("en-US-foo-bar")); + assertThat(e.getMessage(), Matchers.containsString("Locales can have at most 3 parts but got 4")); + } + + public void testUnderscores() { + Locale locale1 = LocaleUtils.parse("fr_FR"); + Locale locale2 = LocaleUtils.parse("fr-FR"); + assertEquals(locale2, locale1); + } + + public void testSimple() { + assertEquals(Locale.FRENCH, LocaleUtils.parse("fr")); + assertEquals(Locale.FRANCE, LocaleUtils.parse("fr-FR")); + assertEquals(Locale.ROOT, LocaleUtils.parse("root")); + assertEquals(Locale.ROOT, LocaleUtils.parse("")); + } +} diff --git a/core/src/test/java/org/elasticsearch/common/util/concurrent/QueueResizingEsThreadPoolExecutorTests.java b/core/src/test/java/org/elasticsearch/common/util/concurrent/QueueResizingEsThreadPoolExecutorTests.java index 5365e1bb90931..125cb572ea54d 100644 --- a/core/src/test/java/org/elasticsearch/common/util/concurrent/QueueResizingEsThreadPoolExecutorTests.java +++ b/core/src/test/java/org/elasticsearch/common/util/concurrent/QueueResizingEsThreadPoolExecutorTests.java @@ -198,26 +198,26 @@ public void testExecutionEWMACalculation() throws Exception { executor.prestartAllCoreThreads(); logger.info("--> executor: {}", executor); - assertThat((long)executor.getTaskExecutionEWMA(), equalTo(1000000L)); + assertThat((long)executor.getTaskExecutionEWMA(), equalTo(0L)); executeTask(executor, 1); assertBusy(() -> { - assertThat((long)executor.getTaskExecutionEWMA(), equalTo(700030L)); + assertThat((long)executor.getTaskExecutionEWMA(), equalTo(30L)); }); executeTask(executor, 1); assertBusy(() -> { - assertThat((long)executor.getTaskExecutionEWMA(), equalTo(490050L)); + assertThat((long)executor.getTaskExecutionEWMA(), equalTo(51L)); }); executeTask(executor, 1); assertBusy(() -> { - assertThat((long)executor.getTaskExecutionEWMA(), equalTo(343065L)); + assertThat((long)executor.getTaskExecutionEWMA(), equalTo(65L)); }); executeTask(executor, 1); assertBusy(() -> { - assertThat((long)executor.getTaskExecutionEWMA(), equalTo(240175L)); + assertThat((long)executor.getTaskExecutionEWMA(), equalTo(75L)); }); executeTask(executor, 1); assertBusy(() -> { - assertThat((long)executor.getTaskExecutionEWMA(), equalTo(168153L)); + assertThat((long)executor.getTaskExecutionEWMA(), equalTo(83L)); }); executor.shutdown(); diff --git a/core/src/test/java/org/elasticsearch/common/util/concurrent/ThreadContextTests.java 
b/core/src/test/java/org/elasticsearch/common/util/concurrent/ThreadContextTests.java index bee56c229c02a..e71efa46424b2 100644 --- a/core/src/test/java/org/elasticsearch/common/util/concurrent/ThreadContextTests.java +++ b/core/src/test/java/org/elasticsearch/common/util/concurrent/ThreadContextTests.java @@ -29,7 +29,6 @@ import java.util.List; import java.util.Map; import java.util.function.Supplier; - import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.hasItem; import static org.hamcrest.Matchers.hasSize; @@ -215,8 +214,8 @@ public void testResponseHeaders() { public void testCopyHeaders() { Settings build = Settings.builder().put("request.headers.default", "1").build(); ThreadContext threadContext = new ThreadContext(build); - threadContext.copyHeaders(Collections.emptyMap().entrySet()); - threadContext.copyHeaders(Collections.singletonMap("foo", "bar").entrySet()); + threadContext.copyHeaders(Collections.emptyMap().entrySet()); + threadContext.copyHeaders(Collections.singletonMap("foo", "bar").entrySet()); assertEquals("bar", threadContext.getHeader("foo")); } @@ -443,7 +442,7 @@ public void onAfter() { assertEquals("bar", threadContext.getHeader("foo")); assertEquals("bar_transient", threadContext.getTransient("foo")); assertNotNull(threadContext.getTransient("failure")); - assertEquals("exception from doRun", ((RuntimeException)threadContext.getTransient("failure")).getMessage()); + assertEquals("exception from doRun", ((RuntimeException) threadContext.getTransient("failure")).getMessage()); assertFalse(threadContext.isDefaultContext()); threadContext.putTransient("after", "after"); } @@ -604,7 +603,7 @@ protected void doRun() throws Exception { public void testMarkAsSystemContext() throws IOException { try (ThreadContext threadContext = new ThreadContext(Settings.EMPTY)) { assertFalse(threadContext.isSystemContext()); - try(ThreadContext.StoredContext context = threadContext.stashContext()){ + try (ThreadContext.StoredContext context = threadContext.stashContext()) { assertFalse(threadContext.isSystemContext()); threadContext.markAsSystemContext(); assertTrue(threadContext.isSystemContext()); @@ -613,6 +612,17 @@ public void testMarkAsSystemContext() throws IOException { } } + public void testPutHeaders() { + Settings build = Settings.builder().put("request.headers.default", "1").build(); + ThreadContext threadContext = new ThreadContext(build); + threadContext.putHeader(Collections.emptyMap()); + threadContext.putHeader(Collections.singletonMap("foo", "bar")); + assertEquals("bar", threadContext.getHeader("foo")); + IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> + threadContext.putHeader(Collections.singletonMap("foo", "boom"))); + assertEquals("value for key [foo] already present", e.getMessage()); + } + /** * Sometimes wraps a Runnable in an AbstractRunnable. */ diff --git a/core/src/test/java/org/elasticsearch/common/util/concurrent/TimedRunnableTests.java b/core/src/test/java/org/elasticsearch/common/util/concurrent/TimedRunnableTests.java new file mode 100644 index 0000000000000..b61f47e67a366 --- /dev/null +++ b/core/src/test/java/org/elasticsearch/common/util/concurrent/TimedRunnableTests.java @@ -0,0 +1,117 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. 
Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.common.util.concurrent; + +import org.elasticsearch.test.ESTestCase; + +import java.util.concurrent.RejectedExecutionException; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicReference; + +import static org.hamcrest.Matchers.equalTo; + +public final class TimedRunnableTests extends ESTestCase { + + public void testTimedRunnableDelegatesToAbstractRunnable() { + final boolean isForceExecution = randomBoolean(); + final AtomicBoolean onAfter = new AtomicBoolean(); + final AtomicReference onRejection = new AtomicReference<>(); + final AtomicReference onFailure = new AtomicReference<>(); + final AtomicBoolean doRun = new AtomicBoolean(); + + final AbstractRunnable runnable = new AbstractRunnable() { + @Override + public boolean isForceExecution() { + return isForceExecution; + } + + @Override + public void onAfter() { + onAfter.set(true); + } + + @Override + public void onRejection(final Exception e) { + onRejection.set(e); + } + + @Override + public void onFailure(final Exception e) { + onFailure.set(e); + } + + @Override + protected void doRun() throws Exception { + doRun.set(true); + } + }; + + final TimedRunnable timedRunnable = new TimedRunnable(runnable); + + assertThat(timedRunnable.isForceExecution(), equalTo(isForceExecution)); + + timedRunnable.onAfter(); + assertTrue(onAfter.get()); + + final Exception rejection = new RejectedExecutionException(); + timedRunnable.onRejection(rejection); + assertThat(onRejection.get(), equalTo(rejection)); + + final Exception failure = new Exception(); + timedRunnable.onFailure(failure); + assertThat(onFailure.get(), equalTo(failure)); + + timedRunnable.run(); + assertTrue(doRun.get()); + } + + public void testTimedRunnableDelegatesRunInFailureCase() { + final AtomicBoolean onAfter = new AtomicBoolean(); + final AtomicReference onFailure = new AtomicReference<>(); + final AtomicBoolean doRun = new AtomicBoolean(); + + final Exception exception = new Exception(); + + final AbstractRunnable runnable = new AbstractRunnable() { + @Override + public void onAfter() { + onAfter.set(true); + } + + @Override + public void onFailure(final Exception e) { + onFailure.set(e); + } + + @Override + protected void doRun() throws Exception { + doRun.set(true); + throw exception; + } + }; + + final TimedRunnable timedRunnable = new TimedRunnable(runnable); + timedRunnable.run(); + assertTrue(doRun.get()); + assertThat(onFailure.get(), equalTo(exception)); + assertTrue(onAfter.get()); + } + +} diff --git a/core/src/test/java/org/elasticsearch/common/xcontent/ConstructingObjectParserTests.java b/core/src/test/java/org/elasticsearch/common/xcontent/ConstructingObjectParserTests.java index bc0bd430a8892..7e5bdbd017449 100644 --- a/core/src/test/java/org/elasticsearch/common/xcontent/ConstructingObjectParserTests.java +++ b/core/src/test/java/org/elasticsearch/common/xcontent/ConstructingObjectParserTests.java 
@@ -224,7 +224,7 @@ class NoConstructorArgs { parser.apply(createParser(JsonXContent.jsonXContent, "{}"), null); fail("Expected AssertionError"); } catch (AssertionError e) { - assertEquals("[constructor_args_required] must configure at least on constructor argument. If it doesn't have any it should " + assertEquals("[constructor_args_required] must configure at least one constructor argument. If it doesn't have any it should " + "use ObjectParser instead of ConstructingObjectParser. This is a bug in the parser declaration.", e.getMessage()); } } diff --git a/core/src/test/java/org/elasticsearch/common/xcontent/ObjectParserTests.java b/core/src/test/java/org/elasticsearch/common/xcontent/ObjectParserTests.java index 0d879e4813116..baa2b3bcb36e6 100644 --- a/core/src/test/java/org/elasticsearch/common/xcontent/ObjectParserTests.java +++ b/core/src/test/java/org/elasticsearch/common/xcontent/ObjectParserTests.java @@ -34,6 +34,7 @@ import java.util.Collections; import java.util.List; +import static org.hamcrest.Matchers.containsString; import static org.hamcrest.Matchers.hasSize; public class ObjectParserTests extends ESTestCase { @@ -231,12 +232,8 @@ class TestStruct { TestStruct s = new TestStruct(); objectParser.declareField((i, c, x) -> c.test = i.text(), new ParseField("numeric_value"), ObjectParser.ValueType.FLOAT); - try { - objectParser.parse(parser, s, null); - fail("wrong type - must be number"); - } catch (IllegalArgumentException ex) { - assertEquals(ex.getMessage(), "[foo] numeric_value doesn't support values of type: VALUE_BOOLEAN"); - } + Exception e = expectThrows(ParsingException.class, () -> objectParser.parse(parser, s, null)); + assertThat(e.getMessage(), containsString("[foo] numeric_value doesn't support values of type: VALUE_BOOLEAN")); } public void testParseNested() throws IOException { diff --git a/core/src/test/java/org/elasticsearch/common/xcontent/XContentParserTests.java b/core/src/test/java/org/elasticsearch/common/xcontent/XContentParserTests.java index 3397a463ea823..8e3246d8b8a59 100644 --- a/core/src/test/java/org/elasticsearch/common/xcontent/XContentParserTests.java +++ b/core/src/test/java/org/elasticsearch/common/xcontent/XContentParserTests.java @@ -42,6 +42,48 @@ public class XContentParserTests extends ESTestCase { + public void testFloat() throws IOException { + final XContentType xContentType = randomFrom(XContentType.values()); + + final String field = randomAlphaOfLengthBetween(1, 5); + final Float value = randomFloat(); + + try (XContentBuilder builder = XContentBuilder.builder(xContentType.xContent())) { + builder.startObject(); + if (randomBoolean()) { + builder.field(field, value); + } else { + builder.field(field).value(value); + } + builder.endObject(); + + final Number number; + try (XContentParser parser = createParser(xContentType.xContent(), builder.bytes())) { + assertEquals(XContentParser.Token.START_OBJECT, parser.nextToken()); + assertEquals(XContentParser.Token.FIELD_NAME, parser.nextToken()); + assertEquals(field, parser.currentName()); + assertEquals(XContentParser.Token.VALUE_NUMBER, parser.nextToken()); + + number = parser.numberValue(); + + assertEquals(XContentParser.Token.END_OBJECT, parser.nextToken()); + assertNull(parser.nextToken()); + } + + assertEquals(value, number.floatValue(), 0.0f); + + if (xContentType == XContentType.CBOR) { + // CBOR parses back a float + assertTrue(number instanceof Float); + } else { + // JSON, YAML and SMILE parses back the float value as a double + // This will change for SMILE in Jackson 
2.9 where all binary based + // formats will return a float + assertTrue(number instanceof Double); + } + } + } + public void testReadList() throws IOException { assertThat(readList("{\"foo\": [\"bar\"]}"), contains("bar")); assertThat(readList("{\"foo\": [\"bar\",\"baz\"]}"), contains("bar", "baz")); diff --git a/core/src/test/java/org/elasticsearch/discovery/DiscoveryDisruptionIT.java b/core/src/test/java/org/elasticsearch/discovery/DiscoveryDisruptionIT.java index 5dbf5a2c97ddc..344c5567a8657 100644 --- a/core/src/test/java/org/elasticsearch/discovery/DiscoveryDisruptionIT.java +++ b/core/src/test/java/org/elasticsearch/discovery/DiscoveryDisruptionIT.java @@ -257,9 +257,7 @@ public void testElectMasterWithLatestVersion() throws Exception { isolatePreferredMaster.startDisrupting(); assertAcked(client(randomFrom(nonPreferredNodes)).admin().indices().prepareCreate("test").setSettings( - INDEX_NUMBER_OF_SHARDS_SETTING.getKey(), 1, - INDEX_NUMBER_OF_REPLICAS_SETTING.getKey(), 0 - )); + Settings.builder().put(INDEX_NUMBER_OF_SHARDS_SETTING.getKey(), 1).put(INDEX_NUMBER_OF_REPLICAS_SETTING.getKey(), 0))); internalCluster().clearDisruptionScheme(false); internalCluster().setDisruptionScheme(isolateAllNodes); diff --git a/core/src/test/java/org/elasticsearch/discovery/DiscoveryModuleTests.java b/core/src/test/java/org/elasticsearch/discovery/DiscoveryModuleTests.java index 39a9dbff959c6..8c2d84cd8c89d 100644 --- a/core/src/test/java/org/elasticsearch/discovery/DiscoveryModuleTests.java +++ b/core/src/test/java/org/elasticsearch/discovery/DiscoveryModuleTests.java @@ -20,6 +20,8 @@ import org.apache.lucene.util.IOUtils; import org.elasticsearch.Version; +import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.routing.allocation.AllocationService; import org.elasticsearch.cluster.service.ClusterApplier; import org.elasticsearch.cluster.service.MasterService; @@ -40,10 +42,12 @@ import java.io.IOException; import java.util.Arrays; +import java.util.Collection; import java.util.Collections; import java.util.List; import java.util.Map; import java.util.concurrent.atomic.AtomicBoolean; +import java.util.function.BiConsumer; import java.util.function.Supplier; import static org.mockito.Mockito.mock; @@ -160,7 +164,23 @@ public void testDuplicateHostsProvider() { public void testLazyConstructionHostsProvider() { DummyHostsProviderPlugin plugin = () -> Collections.singletonMap("custom", - () -> { throw new AssertionError("created hosts provider which was not selected"); }); + () -> { + throw new AssertionError("created hosts provider which was not selected"); + }); newModule(Settings.EMPTY, Collections.singletonList(plugin)); } + + public void testJoinValidator() { + BiConsumer consumer = (a, b) -> {}; + DiscoveryModule module = newModule(Settings.EMPTY, Collections.singletonList(new DiscoveryPlugin() { + @Override + public BiConsumer getJoinValidator() { + return consumer; + } + })); + ZenDiscovery discovery = (ZenDiscovery) module.getDiscovery(); + Collection> onJoinValidators = discovery.getOnJoinValidators(); + assertEquals(2, onJoinValidators.size()); + assertTrue(onJoinValidators.contains(consumer)); + } } diff --git a/core/src/test/java/org/elasticsearch/discovery/SnapshotDisruptionIT.java b/core/src/test/java/org/elasticsearch/discovery/SnapshotDisruptionIT.java new file mode 100644 index 0000000000000..3458cca0cf78e --- /dev/null +++ b/core/src/test/java/org/elasticsearch/discovery/SnapshotDisruptionIT.java @@ -0,0 
+1,173 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.discovery; + +import org.elasticsearch.action.ActionFuture; +import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse; +import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsResponse; +import org.elasticsearch.action.index.IndexRequestBuilder; +import org.elasticsearch.cluster.ClusterChangedEvent; +import org.elasticsearch.cluster.ClusterStateListener; +import org.elasticsearch.cluster.SnapshotsInProgress; +import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.unit.ByteSizeUnit; +import org.elasticsearch.snapshots.SnapshotInfo; +import org.elasticsearch.snapshots.SnapshotMissingException; +import org.elasticsearch.snapshots.SnapshotState; +import org.elasticsearch.test.ESIntegTestCase; +import org.elasticsearch.test.disruption.NetworkDisruption; +import org.elasticsearch.test.junit.annotations.TestLogging; + +import java.util.Collections; +import java.util.HashSet; +import java.util.List; +import java.util.Set; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.TimeUnit; + +import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked; +import static org.hamcrest.Matchers.instanceOf; + +/** + * Tests snapshot operations during disruptions. + */ +@ESIntegTestCase.ClusterScope(scope = ESIntegTestCase.Scope.TEST, numDataNodes = 0, transportClientRatio = 0, autoMinMasterNodes = false) +@TestLogging("org.elasticsearch.snapshot:TRACE") +public class SnapshotDisruptionIT extends AbstractDisruptionTestCase { + + public void testDisruptionOnSnapshotInitialization() throws Exception { + final Settings settings = Settings.builder() + .put(DEFAULT_SETTINGS) + .put(DiscoverySettings.COMMIT_TIMEOUT_SETTING.getKey(), "30s") // wait till cluster state is committed + .build(); + final String idxName = "test"; + configureCluster(settings, 4, null, 2); + final List allMasterEligibleNodes = internalCluster().startMasterOnlyNodes(3); + final String dataNode = internalCluster().startDataOnlyNode(); + ensureStableCluster(4); + + createRandomIndex(idxName); + + logger.info("--> creating repository"); + assertAcked(client().admin().cluster().preparePutRepository("test-repo") + .setType("fs").setSettings(Settings.builder() + .put("location", randomRepoPath()) + .put("compress", randomBoolean()) + .put("chunk_size", randomIntBetween(100, 1000), ByteSizeUnit.BYTES))); + + // Writing incompatible snapshot can cause this test to fail due to a race condition in repo initialization + // by the current master and the former master. 
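[Editor's aside, not part of the patch] The new SnapshotDisruptionIT below leans repeatedly on the ESTestCase assertBusy idiom to wait for an eventually-consistent cluster-state condition; as a minimal sketch, the condition shown here is illustrative rather than copied verbatim from the test:

// assertBusy retries the block until it stops throwing AssertionError or the
// timeout elapses, then rethrows the last failure.
assertBusy(() -> {
    ClusterState state = client().admin().cluster().prepareState().setLocal(true).get().getState();
    // illustrative condition: the in-progress snapshot entry has left the cluster state
    assertNull(state.custom(SnapshotsInProgress.TYPE));
}, 1, TimeUnit.MINUTES);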
It is not causing any issues in real life scenario, but + // might make this test to fail. We are going to complete initialization of the snapshot to prevent this failures. + logger.info("--> initializing the repository"); + assertEquals(SnapshotState.SUCCESS, client().admin().cluster().prepareCreateSnapshot("test-repo", "test-snap-1") + .setWaitForCompletion(true).setIncludeGlobalState(true).setIndices().get().getSnapshotInfo().state()); + + final String masterNode1 = internalCluster().getMasterName(); + Set otherNodes = new HashSet<>(); + otherNodes.addAll(allMasterEligibleNodes); + otherNodes.remove(masterNode1); + otherNodes.add(dataNode); + + NetworkDisruption networkDisruption = + new NetworkDisruption(new NetworkDisruption.TwoPartitions(Collections.singleton(masterNode1), otherNodes), + new NetworkDisruption.NetworkUnresponsive()); + internalCluster().setDisruptionScheme(networkDisruption); + + ClusterService clusterService = internalCluster().clusterService(masterNode1); + CountDownLatch disruptionStarted = new CountDownLatch(1); + clusterService.addListener(new ClusterStateListener() { + @Override + public void clusterChanged(ClusterChangedEvent event) { + SnapshotsInProgress snapshots = event.state().custom(SnapshotsInProgress.TYPE); + if (snapshots != null && snapshots.entries().size() > 0) { + if (snapshots.entries().get(0).state() == SnapshotsInProgress.State.INIT) { + // The snapshot started, we can start disruption so the INIT state will arrive to another master node + logger.info("--> starting disruption"); + networkDisruption.startDisrupting(); + clusterService.removeListener(this); + disruptionStarted.countDown(); + } + } + } + }); + + logger.info("--> starting snapshot"); + ActionFuture future = client(masterNode1).admin().cluster() + .prepareCreateSnapshot("test-repo", "test-snap-2").setWaitForCompletion(false).setIndices(idxName).execute(); + + logger.info("--> waiting for disruption to start"); + assertTrue(disruptionStarted.await(1, TimeUnit.MINUTES)); + + logger.info("--> wait until the snapshot is done"); + assertBusy(() -> { + SnapshotsInProgress snapshots = dataNodeClient().admin().cluster().prepareState().setLocal(true).get().getState() + .custom(SnapshotsInProgress.TYPE); + if (snapshots != null && snapshots.entries().size() > 0) { + logger.info("Current snapshot state [{}]", snapshots.entries().get(0).state()); + fail("Snapshot is still running"); + } else { + logger.info("Snapshot is no longer in the cluster state"); + } + }, 1, TimeUnit.MINUTES); + + logger.info("--> verify that snapshot was successful or no longer exist"); + assertBusy(() -> { + try { + GetSnapshotsResponse snapshotsStatusResponse = dataNodeClient().admin().cluster().prepareGetSnapshots("test-repo") + .setSnapshots("test-snap-2").get(); + SnapshotInfo snapshotInfo = snapshotsStatusResponse.getSnapshots().get(0); + assertEquals(SnapshotState.SUCCESS, snapshotInfo.state()); + assertEquals(snapshotInfo.totalShards(), snapshotInfo.successfulShards()); + assertEquals(0, snapshotInfo.failedShards()); + logger.info("--> done verifying"); + } catch (SnapshotMissingException exception) { + logger.info("--> snapshot doesn't exist"); + } + }, 1, TimeUnit.MINUTES); + + logger.info("--> stopping disrupting"); + networkDisruption.stopDisrupting(); + ensureStableCluster(4, masterNode1); + logger.info("--> done"); + + try { + future.get(); + } catch (Exception ex) { + logger.info("--> got exception from hanged master", ex); + Throwable cause = ex.getCause(); + assertThat(cause, 
instanceOf(MasterNotDiscoveredException.class)); + cause = cause.getCause(); + assertThat(cause, instanceOf(Discovery.FailedToCommitClusterStateException.class)); + } + } + + private void createRandomIndex(String idxName) throws ExecutionException, InterruptedException { + assertAcked(prepareCreate(idxName, 0, Settings.builder().put("number_of_shards", between(1, 20)) + .put("number_of_replicas", 0))); + logger.info("--> indexing some data"); + final int numdocs = randomIntBetween(10, 100); + IndexRequestBuilder[] builders = new IndexRequestBuilder[numdocs]; + for (int i = 0; i < builders.length; i++) { + builders[i] = client().prepareIndex(idxName, "type1", Integer.toString(i)).setSource("field1", "bar " + i); + } + indexRandom(true, builders); + } +} diff --git a/core/src/test/java/org/elasticsearch/discovery/ZenFaultDetectionTests.java b/core/src/test/java/org/elasticsearch/discovery/ZenFaultDetectionTests.java index 3186cdaefbf73..1a837b825d867 100644 --- a/core/src/test/java/org/elasticsearch/discovery/ZenFaultDetectionTests.java +++ b/core/src/test/java/org/elasticsearch/discovery/ZenFaultDetectionTests.java @@ -59,7 +59,6 @@ import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicReference; -import static java.util.Collections.singleton; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.greaterThanOrEqualTo; @@ -137,7 +136,7 @@ protected MockTransportService build(Settings settings, Version version) { Settings.builder() .put(settings) // trace zenfd actions but keep the default otherwise - .put(TransportService.TRACE_LOG_EXCLUDE_SETTING.getKey(), singleton(TransportLivenessAction.NAME)) + .putList(TransportService.TRACE_LOG_EXCLUDE_SETTING.getKey(), TransportLivenessAction.NAME) .build(), new MockTcpTransport(settings, threadPool, BigArrays.NON_RECYCLING_INSTANCE, circuitBreakerService, namedWriteableRegistry, new NetworkService(Collections.emptyList()), version), @@ -145,7 +144,7 @@ namedWriteableRegistry, new NetworkService(Collections.emptyList()), version), TransportService.NOOP_TRANSPORT_INTERCEPTOR, (boundAddress) -> new DiscoveryNode(Node.NODE_NAME_SETTING.get(settings), boundAddress.publishAddress(), - Node.NODE_ATTRIBUTES.get(settings).getAsMap(), DiscoveryNode.getRolesFromSettings(settings), version), + Node.NODE_ATTRIBUTES.getAsMap(settings), DiscoveryNode.getRolesFromSettings(settings), version), null); transportService.start(); transportService.acceptIncomingRequests(); diff --git a/core/src/test/java/org/elasticsearch/discovery/zen/PublishClusterStateActionTests.java b/core/src/test/java/org/elasticsearch/discovery/zen/PublishClusterStateActionTests.java index 2e29347204576..9693a1baadc79 100644 --- a/core/src/test/java/org/elasticsearch/discovery/zen/PublishClusterStateActionTests.java +++ b/core/src/test/java/org/elasticsearch/discovery/zen/PublishClusterStateActionTests.java @@ -179,7 +179,7 @@ public static MockNode createMockNode(String name, final Settings basSettings, @ ThreadPool threadPool, Logger logger, Map nodes) throws Exception { final Settings settings = Settings.builder() .put("name", name) - .put(TransportService.TRACE_LOG_INCLUDE_SETTING.getKey(), "", + .put(TransportService.TRACE_LOG_INCLUDE_SETTING.getKey(), "").put( TransportService.TRACE_LOG_EXCLUDE_SETTING.getKey(), "NOTHING") .put(basSettings) .build(); @@ -705,6 +705,73 @@ public void testTimeoutOrCommit() throws Exception { } } + private void assertPublishClusterStateStats(String description, MockNode node, long expectedFull, long 
expectedIncompatibleDiffs, + long expectedCompatibleDiffs) { + PublishClusterStateStats stats = node.action.stats(); + assertThat(description + ": full cluster states", stats.getFullClusterStateReceivedCount(), equalTo(expectedFull)); + assertThat(description + ": incompatible cluster state diffs", stats.getIncompatibleClusterStateDiffReceivedCount(), + equalTo(expectedIncompatibleDiffs)); + assertThat(description + ": compatible cluster state diffs", stats.getCompatibleClusterStateDiffReceivedCount(), + equalTo(expectedCompatibleDiffs)); + } + + public void testPublishClusterStateStats() throws Exception { + MockNode nodeA = createMockNode("nodeA").setAsMaster(); + MockNode nodeB = createMockNode("nodeB"); + + assertPublishClusterStateStats("nodeA: initial state", nodeA, 0, 0, 0); + assertPublishClusterStateStats("nodeB: initial state", nodeB, 0, 0, 0); + + // Initial cluster state + ClusterState clusterState = nodeA.clusterState; + + // cluster state update - add nodeB + DiscoveryNodes discoveryNodes = DiscoveryNodes.builder(clusterState.nodes()).add(nodeB.discoveryNode).build(); + ClusterState previousClusterState = clusterState; + clusterState = ClusterState.builder(clusterState).nodes(discoveryNodes).incrementVersion().build(); + publishStateAndWait(nodeA.action, clusterState, previousClusterState); + + // Sent as a full cluster state update + assertPublishClusterStateStats("nodeA: after full update", nodeA, 0, 0, 0); + assertPublishClusterStateStats("nodeB: after full update", nodeB, 1, 0, 0); + + // Increment cluster state version + previousClusterState = clusterState; + clusterState = ClusterState.builder(clusterState).incrementVersion().build(); + publishStateAndWait(nodeA.action, clusterState, previousClusterState); + + // Sent, successfully, as a cluster state diff + assertPublishClusterStateStats("nodeA: after successful diff update", nodeA, 0, 0, 0); + assertPublishClusterStateStats("nodeB: after successful diff update", nodeB, 1, 0, 1); + + // Increment cluster state version twice + previousClusterState = ClusterState.builder(clusterState).incrementVersion().build(); + clusterState = ClusterState.builder(previousClusterState).incrementVersion().build(); + publishStateAndWait(nodeA.action, clusterState, previousClusterState); + + // Sent, unsuccessfully, as a diff and then retried as a full update + assertPublishClusterStateStats("nodeA: after unsuccessful diff update", nodeA, 0, 0, 0); + assertPublishClusterStateStats("nodeB: after unsuccessful diff update", nodeB, 2, 1, 1); + + // node A steps down from being master + nodeA.resetMasterId(); + nodeB.resetMasterId(); + + // node B becomes the master and sends a version of the cluster state that goes back + discoveryNodes = DiscoveryNodes.builder(discoveryNodes) + .add(nodeA.discoveryNode) + .add(nodeB.discoveryNode) + .masterNodeId(nodeB.discoveryNode.getId()) + .localNodeId(nodeB.discoveryNode.getId()) + .build(); + previousClusterState = ClusterState.builder(new ClusterName("test")).nodes(discoveryNodes).build(); + clusterState = ClusterState.builder(clusterState).nodes(discoveryNodes).incrementVersion().build(); + publishStateAndWait(nodeB.action, clusterState, previousClusterState); + + // Sent, unsuccessfully, as a diff, and then retried as a full update + assertPublishClusterStateStats("nodeA: B became master", nodeA, 1, 1, 0); + assertPublishClusterStateStats("nodeB: B became master", nodeB, 2, 1, 1); + } private MetaData buildMetaDataForVersion(MetaData metaData, long version) { ImmutableOpenMap.Builder indices = 
ImmutableOpenMap.builder(metaData.indices()); diff --git a/core/src/test/java/org/elasticsearch/discovery/zen/UnicastZenPingTests.java b/core/src/test/java/org/elasticsearch/discovery/zen/UnicastZenPingTests.java index 0492bc82e5f73..3c7a49a176635 100644 --- a/core/src/test/java/org/elasticsearch/discovery/zen/UnicastZenPingTests.java +++ b/core/src/test/java/org/elasticsearch/discovery/zen/UnicastZenPingTests.java @@ -179,7 +179,7 @@ public void connectToNode(DiscoveryNode node, ConnectionProfile connectionProfil final ClusterState stateMismatch = ClusterState.builder(new ClusterName("mismatch")).version(randomNonNegativeLong()).build(); Settings hostsSettings = Settings.builder() - .putArray("discovery.zen.ping.unicast.hosts", + .putList("discovery.zen.ping.unicast.hosts", NetworkAddress.format(new InetSocketAddress(handleA.address.address().getAddress(), handleA.address.address().getPort())), NetworkAddress.format(new InetSocketAddress(handleB.address.address().getAddress(), handleB.address.address().getPort())), NetworkAddress.format(new InetSocketAddress(handleC.address.address().getAddress(), handleC.address.address().getPort())), @@ -305,7 +305,7 @@ public TransportAddress[] addressesFromString(String address, int perAddressLimi new InetSocketAddress(handleC.address.address().getAddress(), handleC.address.address().getPort()))}); final Settings hostsSettings = Settings.builder() - .putArray("discovery.zen.ping.unicast.hosts", "UZP_A", "UZP_B", "UZP_C") + .putList("discovery.zen.ping.unicast.hosts", "UZP_A", "UZP_B", "UZP_C") .put("cluster.name", "test") .build(); @@ -589,7 +589,7 @@ public void testResolveReuseExistingNodeConnections() throws ExecutionException, final boolean useHosts = randomBoolean(); final Settings.Builder hostsSettingsBuilder = Settings.builder().put("cluster.name", "test"); if (useHosts) { - hostsSettingsBuilder.putArray("discovery.zen.ping.unicast.hosts", + hostsSettingsBuilder.putList("discovery.zen.ping.unicast.hosts", NetworkAddress.format(new InetSocketAddress(handleB.address.address().getAddress(), handleB.address.address().getPort())) ); } else { diff --git a/core/src/test/java/org/elasticsearch/discovery/zen/ZenDiscoveryIT.java b/core/src/test/java/org/elasticsearch/discovery/zen/ZenDiscoveryIT.java index 7821d4fd944fc..ed13f34b609cc 100644 --- a/core/src/test/java/org/elasticsearch/discovery/zen/ZenDiscoveryIT.java +++ b/core/src/test/java/org/elasticsearch/discovery/zen/ZenDiscoveryIT.java @@ -47,7 +47,6 @@ import org.elasticsearch.transport.TransportException; import org.elasticsearch.transport.TransportResponse; import org.elasticsearch.transport.TransportService; -import org.hamcrest.Matchers; import java.io.IOException; import java.net.UnknownHostException; @@ -255,6 +254,11 @@ public void testDiscoveryStats() throws Exception { " \"total\" : 0,\n" + " \"pending\" : 0,\n" + " \"committed\" : 0\n" + + " },\n" + + " \"published_cluster_states\" : {\n" + + " \"full_states\" : 0,\n" + + " \"incompatible_diffs\" : 0,\n" + + " \"compatible_diffs\" : 0\n" + " }\n" + " }\n" + "}"; @@ -275,6 +279,11 @@ public void testDiscoveryStats() throws Exception { assertThat(stats.getQueueStats().getCommitted(), equalTo(0)); assertThat(stats.getQueueStats().getPending(), equalTo(0)); + assertThat(stats.getPublishStats(), notNullValue()); + assertThat(stats.getPublishStats().getFullClusterStateReceivedCount(), equalTo(0L)); + assertThat(stats.getPublishStats().getIncompatibleClusterStateDiffReceivedCount(), equalTo(0L)); + 
assertThat(stats.getPublishStats().getCompatibleClusterStateDiffReceivedCount(), equalTo(0L)); + XContentBuilder builder = XContentFactory.jsonBuilder().prettyPrint(); builder.startObject(); stats.toXContent(builder, ToXContent.EMPTY_PARAMS); diff --git a/core/src/test/java/org/elasticsearch/discovery/zen/ZenDiscoveryUnitTests.java b/core/src/test/java/org/elasticsearch/discovery/zen/ZenDiscoveryUnitTests.java index bc653e14e3275..b0dc783349ca8 100644 --- a/core/src/test/java/org/elasticsearch/discovery/zen/ZenDiscoveryUnitTests.java +++ b/core/src/test/java/org/elasticsearch/discovery/zen/ZenDiscoveryUnitTests.java @@ -320,7 +320,8 @@ public void onNewClusterState(String source, Supplier clusterState } }; ZenDiscovery zenDiscovery = new ZenDiscovery(settings, threadPool, service, new NamedWriteableRegistry(ClusterModule.getNamedWriteables()), - masterService, clusterApplier, clusterSettings, Collections::emptyList, ESAllocationTestCase.createAllocationService()); + masterService, clusterApplier, clusterSettings, Collections::emptyList, ESAllocationTestCase.createAllocationService(), + Collections.emptyList()); zenDiscovery.start(); return zenDiscovery; } @@ -342,7 +343,10 @@ public void testValidateOnUnsupportedIndexVersionCreated() throws Exception { ClusterState.Builder stateBuilder = ClusterState.builder(ClusterName.DEFAULT); final DiscoveryNode otherNode = new DiscoveryNode("other_node", buildNewFakeTransportAddress(), emptyMap(), EnumSet.allOf(DiscoveryNode.Role.class), Version.CURRENT); - MembershipAction.ValidateJoinRequestRequestHandler request = new MembershipAction.ValidateJoinRequestRequestHandler(); + final DiscoveryNode localNode = new DiscoveryNode("other_node", buildNewFakeTransportAddress(), emptyMap(), + EnumSet.allOf(DiscoveryNode.Role.class), Version.CURRENT); + MembershipAction.ValidateJoinRequestRequestHandler request = new MembershipAction.ValidateJoinRequestRequestHandler + (() -> localNode, ZenDiscovery.addBuiltInJoinValidators(Collections.emptyList())); final boolean incompatible = randomBoolean(); IndexMetaData indexMetaData = IndexMetaData.builder("test").settings(Settings.builder() .put(SETTING_VERSION_CREATED, incompatible ? 
VersionUtils.getPreviousVersion(Version.CURRENT.minimumIndexCompatibilityVersion()) diff --git a/core/src/test/java/org/elasticsearch/env/EnvironmentTests.java b/core/src/test/java/org/elasticsearch/env/EnvironmentTests.java index 51391a8643b48..6ddf6b3ba73b1 100644 --- a/core/src/test/java/org/elasticsearch/env/EnvironmentTests.java +++ b/core/src/test/java/org/elasticsearch/env/EnvironmentTests.java @@ -42,15 +42,15 @@ public Environment newEnvironment(Settings settings) throws IOException { Settings build = Settings.builder() .put(settings) .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toAbsolutePath()) - .putArray(Environment.PATH_DATA_SETTING.getKey(), tmpPaths()).build(); - return new Environment(build); + .putList(Environment.PATH_DATA_SETTING.getKey(), tmpPaths()).build(); + return new Environment(build, null); } public void testRepositoryResolution() throws IOException { Environment environment = newEnvironment(); assertThat(environment.resolveRepoFile("/test/repos/repo1"), nullValue()); assertThat(environment.resolveRepoFile("test/repos/repo1"), nullValue()); - environment = newEnvironment(Settings.builder().putArray(Environment.PATH_REPO_SETTING.getKey(), "/test/repos", "/another/repos", "/test/repos/../other").build()); + environment = newEnvironment(Settings.builder().putList(Environment.PATH_REPO_SETTING.getKey(), "/test/repos", "/another/repos", "/test/repos/../other").build()); assertThat(environment.resolveRepoFile("/test/repos/repo1"), notNullValue()); assertThat(environment.resolveRepoFile("test/repos/repo1"), notNullValue()); assertThat(environment.resolveRepoFile("/another/repos/repo1"), notNullValue()); @@ -76,21 +76,21 @@ public void testRepositoryResolution() throws IOException { public void testPathDataWhenNotSet() { final Path pathHome = createTempDir().toAbsolutePath(); final Settings settings = Settings.builder().put("path.home", pathHome).build(); - final Environment environment = new Environment(settings); + final Environment environment = new Environment(settings, null); assertThat(environment.dataFiles(), equalTo(new Path[]{pathHome.resolve("data")})); } public void testPathDataNotSetInEnvironmentIfNotSet() { final Settings settings = Settings.builder().put("path.home", createTempDir().toAbsolutePath()).build(); assertFalse(Environment.PATH_DATA_SETTING.exists(settings)); - final Environment environment = new Environment(settings); + final Environment environment = new Environment(settings, null); assertFalse(Environment.PATH_DATA_SETTING.exists(environment.settings())); } public void testPathLogsWhenNotSet() { final Path pathHome = createTempDir().toAbsolutePath(); final Settings settings = Settings.builder().put("path.home", pathHome).build(); - final Environment environment = new Environment(settings); + final Environment environment = new Environment(settings, null); assertThat(environment.logsFile(), equalTo(pathHome.resolve("logs"))); } @@ -111,7 +111,7 @@ public void testConfigPath() { public void testConfigPathWhenNotSet() { final Path pathHome = createTempDir().toAbsolutePath(); final Settings settings = Settings.builder().put("path.home", pathHome).build(); - final Environment environment = new Environment(settings); + final Environment environment = new Environment(settings, null); assertThat(environment.configFile(), equalTo(pathHome.resolve("config"))); } diff --git a/core/src/test/java/org/elasticsearch/env/NodeEnvironmentTests.java b/core/src/test/java/org/elasticsearch/env/NodeEnvironmentTests.java index 
42cb4a5811b2e..90161e5faaf9f 100644 --- a/core/src/test/java/org/elasticsearch/env/NodeEnvironmentTests.java +++ b/core/src/test/java/org/elasticsearch/env/NodeEnvironmentTests.java @@ -80,12 +80,12 @@ public void testNodeLockSingleEnvironment() throws IOException { // Reuse the same location and attempt to lock again IllegalStateException ex = - expectThrows(IllegalStateException.class, () -> new NodeEnvironment(settings, new Environment(settings))); + expectThrows(IllegalStateException.class, () -> new NodeEnvironment(settings, TestEnvironment.newEnvironment(settings))); assertThat(ex.getMessage(), containsString("failed to obtain node lock")); // Close the environment that holds the lock and make sure we can get the lock after release env.close(); - env = new NodeEnvironment(settings, new Environment(settings)); + env = new NodeEnvironment(settings, TestEnvironment.newEnvironment(settings)); assertThat(env.nodeDataPaths(), arrayWithSize(dataPaths.size())); for (int i = 0; i < dataPaths.size(); i++) { @@ -120,7 +120,7 @@ public void testNodeLockMultipleEnvironment() throws IOException { final Settings settings = buildEnvSettings(Settings.builder().put("node.max_local_storage_nodes", 2).build()); final NodeEnvironment first = newNodeEnvironment(settings); List dataPaths = Environment.PATH_DATA_SETTING.get(settings); - NodeEnvironment second = new NodeEnvironment(settings, new Environment(settings)); + NodeEnvironment second = new NodeEnvironment(settings, TestEnvironment.newEnvironment(settings)); assertEquals(first.nodeDataPaths().length, dataPaths.size()); assertEquals(second.nodeDataPaths().length, dataPaths.size()); for (int i = 0; i < dataPaths.size(); i++) { @@ -477,13 +477,13 @@ public NodeEnvironment newNodeEnvironment() throws IOException { @Override public NodeEnvironment newNodeEnvironment(Settings settings) throws IOException { Settings build = buildEnvSettings(settings); - return new NodeEnvironment(build, new Environment(build)); + return new NodeEnvironment(build, TestEnvironment.newEnvironment(build)); } public Settings buildEnvSettings(Settings settings) { return Settings.builder() .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toAbsolutePath().toString()) - .putArray(Environment.PATH_DATA_SETTING.getKey(), tmpPaths()) + .putList(Environment.PATH_DATA_SETTING.getKey(), tmpPaths()) .put(settings).build(); } @@ -491,8 +491,8 @@ public NodeEnvironment newNodeEnvironment(String[] dataPaths, Settings settings) Settings build = Settings.builder() .put(settings) .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toAbsolutePath().toString()) - .putArray(Environment.PATH_DATA_SETTING.getKey(), dataPaths).build(); - return new NodeEnvironment(build, new Environment(build)); + .putList(Environment.PATH_DATA_SETTING.getKey(), dataPaths).build(); + return new NodeEnvironment(build, TestEnvironment.newEnvironment(build)); } public NodeEnvironment newNodeEnvironment(String[] dataPaths, String sharedDataPath, Settings settings) throws IOException { @@ -500,7 +500,7 @@ public NodeEnvironment newNodeEnvironment(String[] dataPaths, String sharedDataP .put(settings) .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toAbsolutePath().toString()) .put(Environment.PATH_SHARED_DATA_SETTING.getKey(), sharedDataPath) - .putArray(Environment.PATH_DATA_SETTING.getKey(), dataPaths).build(); - return new NodeEnvironment(build, new Environment(build)); + .putList(Environment.PATH_DATA_SETTING.getKey(), dataPaths).build(); + return new NodeEnvironment(build, 
TestEnvironment.newEnvironment(build)); } } diff --git a/core/src/test/java/org/elasticsearch/gateway/MetaDataWriteDataNodesIT.java b/core/src/test/java/org/elasticsearch/gateway/MetaDataWriteDataNodesIT.java index 3dfeca3053ed7..fc4caa0bc09be 100644 --- a/core/src/test/java/org/elasticsearch/gateway/MetaDataWriteDataNodesIT.java +++ b/core/src/test/java/org/elasticsearch/gateway/MetaDataWriteDataNodesIT.java @@ -48,7 +48,7 @@ public void testMetaWrittenAlsoOnDataNode() throws Exception { // this test checks that index state is written on data only nodes if they have a shard allocated String masterNode = internalCluster().startMasterOnlyNode(Settings.EMPTY); String dataNode = internalCluster().startDataOnlyNode(Settings.EMPTY); - assertAcked(prepareCreate("test").setSettings("index.number_of_replicas", 0)); + assertAcked(prepareCreate("test").setSettings(Settings.builder().put("index.number_of_replicas", 0))); index("test", "doc", "1", jsonBuilder().startObject().field("text", "some text").endObject()); ensureGreen("test"); assertIndexInMetaState(dataNode, "test"); diff --git a/core/src/test/java/org/elasticsearch/gateway/PrimaryShardAllocatorTests.java b/core/src/test/java/org/elasticsearch/gateway/PrimaryShardAllocatorTests.java index e91017ecdf913..e3687548190a3 100644 --- a/core/src/test/java/org/elasticsearch/gateway/PrimaryShardAllocatorTests.java +++ b/core/src/test/java/org/elasticsearch/gateway/PrimaryShardAllocatorTests.java @@ -388,7 +388,7 @@ private RoutingAllocation getRestoreRoutingAllocation(AllocationDeciders allocat .metaData(metaData) .routingTable(routingTable) .nodes(DiscoveryNodes.builder().add(node1).add(node2).add(node3)).build(); - return new RoutingAllocation(allocationDeciders, new RoutingNodes(state, false), state, null, System.nanoTime(), false); + return new RoutingAllocation(allocationDeciders, new RoutingNodes(state, false), state, null, System.nanoTime()); } private RoutingAllocation routingAllocationWithOnePrimaryNoReplicas(AllocationDeciders deciders, UnassignedInfo.Reason reason, @@ -416,7 +416,7 @@ private RoutingAllocation routingAllocationWithOnePrimaryNoReplicas(AllocationDe .metaData(metaData) .routingTable(routingTableBuilder.build()) .nodes(DiscoveryNodes.builder().add(node1).add(node2).add(node3)).build(); - return new RoutingAllocation(deciders, new RoutingNodes(state, false), state, null, System.nanoTime(), false); + return new RoutingAllocation(deciders, new RoutingNodes(state, false), state, null, System.nanoTime()); } private void assertClusterHealthStatus(RoutingAllocation allocation, ClusterHealthStatus expectedStatus) { diff --git a/core/src/test/java/org/elasticsearch/gateway/RecoveryFromGatewayIT.java b/core/src/test/java/org/elasticsearch/gateway/RecoveryFromGatewayIT.java index 9a5201c0ea876..23254e81060a0 100644 --- a/core/src/test/java/org/elasticsearch/gateway/RecoveryFromGatewayIT.java +++ b/core/src/test/java/org/elasticsearch/gateway/RecoveryFromGatewayIT.java @@ -159,10 +159,8 @@ public void testSingleNodeNoFlush() throws Exception { .endObject().endObject().string(); // note: default replica settings are tied to #data nodes-1 which is 0 here. We can do with 1 in this test. 
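[Editor's aside, not part of the patch] The hunks in this file, and many of those below, migrate from the key/value varargs form of setSettings and from putArray to an explicit Settings.builder() with putList; a minimal sketch of the new pattern, assuming an ESIntegTestCase subclass with the usual static imports of SETTING_NUMBER_OF_SHARDS, SETTING_NUMBER_OF_REPLICAS and assertAcked:

// Build the index settings explicitly instead of passing key/value pairs.
Settings indexSettings = Settings.builder()
        .put(SETTING_NUMBER_OF_SHARDS, 1)                    // one primary shard
        .put(SETTING_NUMBER_OF_REPLICAS, 0)                  // no replicas needed for the test
        .putList("index.sort.field", "date", "numeric_dv")   // putList() replaces the removed putArray()
        .build();

// The built Settings (or the builder itself) is passed to the create-index request.
assertAcked(prepareCreate("test").setSettings(indexSettings));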
int numberOfShards = numberOfShards(); - assertAcked(prepareCreate("test").setSettings( - SETTING_NUMBER_OF_SHARDS, numberOfShards(), - SETTING_NUMBER_OF_REPLICAS, randomIntBetween(0, 1) - ).addMapping("type1", mapping, XContentType.JSON)); + assertAcked(prepareCreate("test").setSettings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, numberOfShards()) + .put(SETTING_NUMBER_OF_REPLICAS, randomIntBetween(0, 1))).addMapping("type1", mapping, XContentType.JSON)); int value1Docs; int value2Docs; @@ -517,7 +515,7 @@ public void testRecoveryDifferentNodeOrderStartup() throws Exception { public void testStartedShardFoundIfStateNotYetProcessed() throws Exception { // nodes may need to report the shards they processed the initial recovered cluster state from the master final String nodeName = internalCluster().startNode(); - assertAcked(prepareCreate("test").setSettings(SETTING_NUMBER_OF_SHARDS, 1)); + assertAcked(prepareCreate("test").setSettings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 1))); final Index index = resolveIndex("test"); final ShardId shardId = new ShardId(index, 0); index("test", "type", "1"); diff --git a/core/src/test/java/org/elasticsearch/gateway/ReplicaShardAllocatorTests.java b/core/src/test/java/org/elasticsearch/gateway/ReplicaShardAllocatorTests.java index 133c8e3381605..f53c8da2f2d96 100644 --- a/core/src/test/java/org/elasticsearch/gateway/ReplicaShardAllocatorTests.java +++ b/core/src/test/java/org/elasticsearch/gateway/ReplicaShardAllocatorTests.java @@ -316,7 +316,7 @@ private RoutingAllocation onePrimaryOnNode1And1Replica(AllocationDeciders decide .metaData(metaData) .routingTable(routingTable) .nodes(DiscoveryNodes.builder().add(node1).add(node2).add(node3)).build(); - return new RoutingAllocation(deciders, new RoutingNodes(state, false), state, ClusterInfo.EMPTY, System.nanoTime(), false); + return new RoutingAllocation(deciders, new RoutingNodes(state, false), state, ClusterInfo.EMPTY, System.nanoTime()); } private RoutingAllocation onePrimaryOnNode1And1ReplicaRecovering(AllocationDeciders deciders) { @@ -338,7 +338,7 @@ private RoutingAllocation onePrimaryOnNode1And1ReplicaRecovering(AllocationDecid .metaData(metaData) .routingTable(routingTable) .nodes(DiscoveryNodes.builder().add(node1).add(node2).add(node3)).build(); - return new RoutingAllocation(deciders, new RoutingNodes(state, false), state, ClusterInfo.EMPTY, System.nanoTime(), false); + return new RoutingAllocation(deciders, new RoutingNodes(state, false), state, ClusterInfo.EMPTY, System.nanoTime()); } class TestAllocator extends ReplicaShardAllocator { diff --git a/core/src/test/java/org/elasticsearch/get/GetActionIT.java b/core/src/test/java/org/elasticsearch/get/GetActionIT.java index f9c4b0d960638..c6135c8f2286a 100644 --- a/core/src/test/java/org/elasticsearch/get/GetActionIT.java +++ b/core/src/test/java/org/elasticsearch/get/GetActionIT.java @@ -19,7 +19,6 @@ package org.elasticsearch.get; -import org.elasticsearch.ElasticsearchException; import org.elasticsearch.Version; import org.elasticsearch.action.DocWriteResponse; import org.elasticsearch.action.ShardOperationFailedException; @@ -132,16 +131,17 @@ public void testSimpleGet() { assertThat(response.getField("field1").getValues().get(0).toString(), equalTo("value1")); assertThat(response.getField("field2"), nullValue()); - logger.info("--> flush the index, so we load it from it"); - flush(); - logger.info("--> realtime get 1 (loaded from index)"); + logger.info("--> realtime get 1"); response = client().prepareGet(indexOrAlias(), 
"type1", "1").get(); assertThat(response.isExists(), equalTo(true)); assertThat(response.getIndex(), equalTo("test")); assertThat(response.getSourceAsMap().get("field1").toString(), equalTo("value1")); assertThat(response.getSourceAsMap().get("field2").toString(), equalTo("value2")); + logger.info("--> refresh the index, so we load it from it"); + refresh(); + logger.info("--> non realtime get 1 (loaded from index)"); response = client().prepareGet(indexOrAlias(), "type1", "1").setRealtime(false).get(); assertThat(response.isExists(), equalTo(true)); @@ -303,7 +303,9 @@ public void testGetDocWithMultivaluedFieldsMultiTypeBWC() throws Exception { assertAcked(prepareCreate("test") .addMapping("type1", mapping1, XContentType.JSON) .addMapping("type2", mapping2, XContentType.JSON) - .setSettings("index.refresh_interval", -1, "index.version.created", Version.V_5_6_0.id)); // multi types in 5.6 + // multi types in 5.6 + .setSettings(Settings.builder().put("index.refresh_interval", -1).put("index.version.created", Version.V_5_6_0.id))); + ensureGreen(); GetResponse response = client().prepareGet("test", "type1", "1").get(); @@ -577,7 +579,8 @@ public void testGetFieldsMetaDataWithRouting() throws Exception { assertAcked(prepareCreate("test") .addMapping("doc", "field1", "type=keyword,store=true") .addAlias(new Alias("alias")) - .setSettings("index.refresh_interval", -1, "index.version.created", Version.V_5_6_0.id)); // multi types in 5.6 + .setSettings(Settings.builder().put("index.refresh_interval", -1).put("index.version.created", Version.V_5_6_0.id))); + // multi types in 5.6 client().prepareIndex("test", "doc", "1") .setRouting("1") @@ -613,7 +616,8 @@ public void testGetFieldsMetaDataWithParentChild() throws Exception { .addMapping("parent") .addMapping("my-type1", "_parent", "type=parent", "field1", "type=keyword,store=true") .addAlias(new Alias("alias")) - .setSettings("index.refresh_interval", -1, "index.version.created", Version.V_5_6_0.id)); // multi types in 5.6 + .setSettings(Settings.builder().put("index.refresh_interval", -1).put("index.version.created", Version.V_5_6_0.id))); + // multi types in 5.6 client().prepareIndex("test", "my-type1", "1") .setRouting("1") @@ -677,7 +681,8 @@ public void testGetFieldsNonLeafField() throws Exception { public void testGetFieldsComplexField() throws Exception { assertAcked(prepareCreate("my-index") - .setSettings("index.refresh_interval", -1, "index.version.created", Version.V_5_6_0.id) // multi types in 5.6 + // multi types in 5.6 + .setSettings(Settings.builder().put("index.refresh_interval", -1).put("index.version.created", Version.V_5_6_0.id)) .addMapping("my-type2", jsonBuilder().startObject().startObject("my-type2").startObject("properties") .startObject("field1").field("type", "object").startObject("properties") .startObject("field2").field("type", "object").startObject("properties") @@ -913,68 +918,6 @@ void indexSingleDocumentWithStringFieldsGeneratedFromText(boolean stored, boolea index("test", "doc", "1", doc); } - public void testGeneratedNumberFieldsUnstored() throws IOException { - indexSingleDocumentWithNumericFieldsGeneratedFromText(false, randomBoolean()); - String[] fieldsList = {"token_count", "text.token_count"}; - // before refresh - document is only in translog - assertGetFieldsAlwaysNull(indexOrAlias(), "doc", "1", fieldsList); - refresh(); - //after refresh - document is in translog and also indexed - assertGetFieldsAlwaysNull(indexOrAlias(), "doc", "1", fieldsList); - flush(); - //after flush - document is in not anymore 
translog - only indexed - assertGetFieldsAlwaysNull(indexOrAlias(), "doc", "1", fieldsList); - } - - public void testGeneratedNumberFieldsStored() throws IOException { - indexSingleDocumentWithNumericFieldsGeneratedFromText(true, randomBoolean()); - String[] fieldsList = {"token_count", "text.token_count"}; - assertGetFieldsAlwaysWorks(indexOrAlias(), "doc", "1", fieldsList); - flush(); - //after flush - document is in not anymore translog - only indexed - assertGetFieldsAlwaysWorks(indexOrAlias(), "doc", "1", fieldsList); - } - - void indexSingleDocumentWithNumericFieldsGeneratedFromText(boolean stored, boolean sourceEnabled) { - String storedString = stored ? "true" : "false"; - String createIndexSource = "{\n" + - " \"settings\": {\n" + - " \"index.translog.flush_threshold_size\": \"1pb\",\n" + - " \"refresh_interval\": \"-1\"\n" + - " },\n" + - " \"mappings\": {\n" + - " \"doc\": {\n" + - " \"_source\" : {\"enabled\" : " + sourceEnabled + "}," + - " \"properties\": {\n" + - " \"token_count\": {\n" + - " \"type\": \"token_count\",\n" + - " \"analyzer\": \"standard\",\n" + - " \"store\": \"" + storedString + "\"" + - " },\n" + - " \"text\": {\n" + - " \"type\": \"text\",\n" + - " \"fields\": {\n" + - " \"token_count\": {\n" + - " \"type\": \"token_count\",\n" + - " \"analyzer\": \"standard\",\n" + - " \"store\": \"" + storedString + "\"" + - " }\n" + - " }\n" + - " }" + - " }\n" + - " }\n" + - " }\n" + - "}"; - - assertAcked(prepareCreate("test").addAlias(new Alias("alias")).setSource(createIndexSource, XContentType.JSON)); - ensureGreen(); - String doc = "{\n" + - " \"token_count\": \"A text with five words.\",\n" + - " \"text\": \"A text with five words.\"\n" + - "}\n"; - index("test", "doc", "1", doc); - } - private void assertGetFieldsAlwaysWorks(String index, String type, String docId, String[] fields) { assertGetFieldsAlwaysWorks(index, type, docId, fields, null); } @@ -997,18 +940,6 @@ private void assertGetFieldWorks(String index, String type, String docId, String assertNotNull(response.getField(field)); } - private void assertGetFieldException(String index, String type, String docId, String field) { - try { - client().prepareGet().setIndex(index).setType(type).setId(docId).setStoredFields(field); - fail(); - } catch (ElasticsearchException e) { - assertTrue(e.getMessage().contains("You can only get this field after refresh() has been called.")); - } - MultiGetResponse multiGetResponse = client().prepareMultiGet().add(new MultiGetRequest.Item(index, type, docId).storedFields(field)).get(); - assertNull(multiGetResponse.getResponses()[0].getResponse()); - assertTrue(multiGetResponse.getResponses()[0].getFailure().getMessage().contains("You can only get this field after refresh() has been called.")); - } - protected void assertGetFieldsNull(String index, String type, String docId, String[] fields) { assertGetFieldsNull(index, type, docId, fields, null); } diff --git a/core/src/test/java/org/elasticsearch/index/IndexModuleTests.java b/core/src/test/java/org/elasticsearch/index/IndexModuleTests.java index 4b9645a3af87d..f1037d67ff4aa 100644 --- a/core/src/test/java/org/elasticsearch/index/IndexModuleTests.java +++ b/core/src/test/java/org/elasticsearch/index/IndexModuleTests.java @@ -42,6 +42,7 @@ import org.elasticsearch.env.Environment; import org.elasticsearch.env.NodeEnvironment; import org.elasticsearch.env.ShardLock; +import org.elasticsearch.env.TestEnvironment; import org.elasticsearch.index.analysis.AnalysisRegistry; import 
org.elasticsearch.index.cache.query.DisabledQueryCache; import org.elasticsearch.index.cache.query.IndexQueryCache; @@ -118,7 +119,7 @@ public void setUp() throws Exception { indicesQueryCache = new IndicesQueryCache(settings); indexSettings = IndexSettingsModule.newIndexSettings("foo", settings); index = indexSettings.getIndex(); - environment = new Environment(settings); + environment = TestEnvironment.newEnvironment(settings); emptyAnalysisRegistry = new AnalysisRegistry(environment, emptyMap(), emptyMap(), emptyMap(), emptyMap(), emptyMap(), emptyMap(), emptyMap(), emptyMap()); threadPool = new TestThreadPool("test"); @@ -191,7 +192,7 @@ public void beforeIndexRemoved(IndexService indexService, IndexRemovalReason rea module.addIndexEventListener(eventListener); IndexService indexService = newIndexService(module); IndexSettings x = indexService.getIndexSettings(); - assertEquals(x.getSettings().getAsMap(), indexSettings.getSettings().getAsMap()); + assertEquals(x.getSettings(), indexSettings.getSettings()); assertEquals(x.getIndex(), index); indexService.getIndexEventListener().beforeIndexRemoved(null, null); assertTrue(atomicBoolean.get()); diff --git a/core/src/test/java/org/elasticsearch/index/IndexSettingsTests.java b/core/src/test/java/org/elasticsearch/index/IndexSettingsTests.java index ad1c3f4143af7..6be786aff88b5 100644 --- a/core/src/test/java/org/elasticsearch/index/IndexSettingsTests.java +++ b/core/src/test/java/org/elasticsearch/index/IndexSettingsTests.java @@ -32,6 +32,7 @@ import org.elasticsearch.test.VersionUtils; import java.util.Arrays; +import java.util.Collections; import java.util.HashSet; import java.util.Set; import java.util.concurrent.TimeUnit; @@ -59,7 +60,7 @@ public void testRunListener() { assertEquals("0xdeadbeef", settings.getUUID()); assertFalse(settings.updateIndexMetaData(metaData)); - assertEquals(metaData.getSettings().getAsMap(), settings.getSettings().getAsMap()); + assertEquals(metaData.getSettings(), settings.getSettings()); assertEquals(0, integer.get()); assertTrue(settings.updateIndexMetaData(newIndexMeta("index", Settings.builder().put(theSettings).put("index.test.setting.int", 42) .build()))); @@ -82,7 +83,7 @@ public void testSettingsUpdateValidator() { assertEquals("0xdeadbeef", settings.getUUID()); assertFalse(settings.updateIndexMetaData(metaData)); - assertEquals(metaData.getSettings().getAsMap(), settings.getSettings().getAsMap()); + assertEquals(metaData.getSettings(), settings.getSettings()); assertEquals(0, integer.get()); expectThrows(IllegalArgumentException.class, () -> settings.updateIndexMetaData(newIndexMeta("index", Settings.builder().put(theSettings).put("index.test.setting.int", 42).build()))); @@ -155,7 +156,7 @@ public void testSettingsConsistency() { } catch (IllegalArgumentException ex) { assertEquals("uuid mismatch on settings update expected: 0xdeadbeef but was: _na_", ex.getMessage()); } - assertEquals(metaData.getSettings().getAsMap(), settings.getSettings().getAsMap()); + assertEquals(metaData.getSettings(), settings.getSettings()); } public IndexSettings newIndexSettings(IndexMetaData metaData, Settings nodeSettings, Setting... 
settings) { @@ -289,6 +290,58 @@ public void testMaxResultWindow() { assertEquals(IndexSettings.MAX_RESULT_WINDOW_SETTING.get(Settings.EMPTY).intValue(), settings.getMaxResultWindow()); } + public void testMaxInnerResultWindow() { + IndexMetaData metaData = newIndexMeta("index", Settings.builder() + .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) + .put(IndexSettings.MAX_INNER_RESULT_WINDOW_SETTING.getKey(), 200) + .build()); + IndexSettings settings = new IndexSettings(metaData, Settings.EMPTY); + assertEquals(200, settings.getMaxInnerResultWindow()); + settings.updateIndexMetaData(newIndexMeta("index", Settings.builder().put(IndexSettings.MAX_INNER_RESULT_WINDOW_SETTING.getKey(), + 50).build())); + assertEquals(50, settings.getMaxInnerResultWindow()); + settings.updateIndexMetaData(newIndexMeta("index", Settings.EMPTY)); + assertEquals(IndexSettings.MAX_INNER_RESULT_WINDOW_SETTING.get(Settings.EMPTY).intValue(), settings.getMaxInnerResultWindow()); + + metaData = newIndexMeta("index", Settings.builder() + .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) + .build()); + settings = new IndexSettings(metaData, Settings.EMPTY); + assertEquals(IndexSettings.MAX_INNER_RESULT_WINDOW_SETTING.get(Settings.EMPTY).intValue(), settings.getMaxInnerResultWindow()); + } + + public void testMaxDocvalueFields() { + IndexMetaData metaData = newIndexMeta("index", Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) + .put(IndexSettings.MAX_DOCVALUE_FIELDS_SEARCH_SETTING.getKey(), 200).build()); + IndexSettings settings = new IndexSettings(metaData, Settings.EMPTY); + assertEquals(200, settings.getMaxDocvalueFields()); + settings.updateIndexMetaData( + newIndexMeta("index", Settings.builder().put(IndexSettings.MAX_DOCVALUE_FIELDS_SEARCH_SETTING.getKey(), 50).build())); + assertEquals(50, settings.getMaxDocvalueFields()); + settings.updateIndexMetaData(newIndexMeta("index", Settings.EMPTY)); + assertEquals(IndexSettings.MAX_DOCVALUE_FIELDS_SEARCH_SETTING.get(Settings.EMPTY).intValue(), settings.getMaxDocvalueFields()); + + metaData = newIndexMeta("index", Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build()); + settings = new IndexSettings(metaData, Settings.EMPTY); + assertEquals(IndexSettings.MAX_DOCVALUE_FIELDS_SEARCH_SETTING.get(Settings.EMPTY).intValue(), settings.getMaxDocvalueFields()); + } + + public void testMaxScriptFields() { + IndexMetaData metaData = newIndexMeta("index", Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) + .put(IndexSettings.MAX_SCRIPT_FIELDS_SETTING.getKey(), 100).build()); + IndexSettings settings = new IndexSettings(metaData, Settings.EMPTY); + assertEquals(100, settings.getMaxScriptFields()); + settings.updateIndexMetaData( + newIndexMeta("index", Settings.builder().put(IndexSettings.MAX_SCRIPT_FIELDS_SETTING.getKey(), 20).build())); + assertEquals(20, settings.getMaxScriptFields()); + settings.updateIndexMetaData(newIndexMeta("index", Settings.EMPTY)); + assertEquals(IndexSettings.MAX_SCRIPT_FIELDS_SETTING.get(Settings.EMPTY).intValue(), settings.getMaxScriptFields()); + + metaData = newIndexMeta("index", Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build()); + settings = new IndexSettings(metaData, Settings.EMPTY); + assertEquals(IndexSettings.MAX_SCRIPT_FIELDS_SETTING.get(Settings.EMPTY).intValue(), settings.getMaxScriptFields()); + } + public void testMaxAdjacencyMatrixFiltersSetting() { IndexMetaData metaData = 
newIndexMeta("index", Settings.builder() .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) @@ -327,7 +380,7 @@ public void testGCDeletesSetting() { assertEquals(TimeValue.parseTimeValue(newGCDeleteSetting.getStringRep(), new TimeValue(1, TimeUnit.DAYS), IndexSettings.INDEX_GC_DELETES_SETTING.getKey()).getMillis(), settings.getGcDeletesInMillis()); settings.updateIndexMetaData(newIndexMeta("index", Settings.builder().put(IndexSettings.INDEX_GC_DELETES_SETTING.getKey(), - randomBoolean() ? -1 : new TimeValue(-1, TimeUnit.MILLISECONDS)).build())); + (randomBoolean() ? -1 : new TimeValue(-1, TimeUnit.MILLISECONDS)).toString()).build())); assertEquals(-1, settings.getGcDeletesInMillis()); } @@ -479,4 +532,19 @@ public void testSingleTypeSetting() { } } } + + public void testQueryDefaultField() { + IndexSettings index = newIndexSettings( + newIndexMeta("index", Settings.EMPTY), Settings.EMPTY + ); + assertThat(index.getDefaultFields(), equalTo(Collections.singletonList("*"))); + index = newIndexSettings( + newIndexMeta("index", Settings.EMPTY), Settings.builder().put("index.query.default_field", "body").build() + ); + assertThat(index.getDefaultFields(), equalTo(Collections.singletonList("body"))); + index.updateIndexMetaData( + newIndexMeta("index", Settings.builder().putList("index.query.default_field", "body", "title").build()) + ); + assertThat(index.getDefaultFields(), equalTo(Arrays.asList("body", "title"))); + } } diff --git a/core/src/test/java/org/elasticsearch/index/IndexSortIT.java b/core/src/test/java/org/elasticsearch/index/IndexSortIT.java index bb59bc948805c..c981d88a3d1a8 100644 --- a/core/src/test/java/org/elasticsearch/index/IndexSortIT.java +++ b/core/src/test/java/org/elasticsearch/index/IndexSortIT.java @@ -26,8 +26,6 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.test.ESIntegTestCase; -import org.junit.AfterClass; -import org.junit.BeforeClass; import java.io.IOException; @@ -80,7 +78,7 @@ public void testIndexSort() { .put(indexSettings()) .put("index.number_of_shards", "1") .put("index.number_of_replicas", "1") - .putArray("index.sort.field", "date", "numeric_dv", "keyword_dv") + .putList("index.sort.field", "date", "numeric_dv", "keyword_dv") ) .addMapping("test", TEST_MAPPING) .get(); @@ -99,7 +97,7 @@ public void testInvalidIndexSort() { () -> prepareCreate("test") .setSettings(Settings.builder() .put(indexSettings()) - .putArray("index.sort.field", "invalid_field") + .putList("index.sort.field", "invalid_field") ) .addMapping("test", TEST_MAPPING) .get() @@ -110,7 +108,7 @@ public void testInvalidIndexSort() { () -> prepareCreate("test") .setSettings(Settings.builder() .put(indexSettings()) - .putArray("index.sort.field", "numeric") + .putList("index.sort.field", "numeric") ) .addMapping("test", TEST_MAPPING) .get() @@ -121,7 +119,7 @@ public void testInvalidIndexSort() { () -> prepareCreate("test") .setSettings(Settings.builder() .put(indexSettings()) - .putArray("index.sort.field", "keyword") + .putList("index.sort.field", "keyword") ) .addMapping("test", TEST_MAPPING) .get() diff --git a/core/src/test/java/org/elasticsearch/index/IndexSortSettingsTests.java b/core/src/test/java/org/elasticsearch/index/IndexSortSettingsTests.java index 74ec1cc02d93f..78569d927be76 100644 --- a/core/src/test/java/org/elasticsearch/index/IndexSortSettingsTests.java +++ b/core/src/test/java/org/elasticsearch/index/IndexSortSettingsTests.java @@ -76,9 +76,9 @@ public void 
testSimpleIndexSort() throws IOException { public void testIndexSortWithArrays() throws IOException { Settings settings = Settings.builder() - .putArray("index.sort.field", "field1", "field2") - .putArray("index.sort.order", "asc", "desc") - .putArray("index.sort.missing", "_last", "_first") + .putList("index.sort.field", "field1", "field2") + .putList("index.sort.order", "asc", "desc") + .putList("index.sort.missing", "_last", "_first") .build(); IndexSettings indexSettings = indexSettings(settings); IndexSortConfig config = indexSettings.getIndexSortConfig(); @@ -108,7 +108,7 @@ public void testInvalidIndexSort() throws IOException { public void testInvalidIndexSortWithArray() throws IOException { final Settings settings = Settings.builder() .put("index.sort.field", "field1") - .putArray("index.sort.order", new String[] {"asc", "desc"}) + .putList("index.sort.order", new String[] {"asc", "desc"}) .build(); IllegalArgumentException exc = expectThrows(IllegalArgumentException.class, () -> indexSettings(settings)); diff --git a/core/src/test/java/org/elasticsearch/index/IndexingSlowLogTests.java b/core/src/test/java/org/elasticsearch/index/IndexingSlowLogTests.java index a3d14fc518499..45b0d0aa2475c 100644 --- a/core/src/test/java/org/elasticsearch/index/IndexingSlowLogTests.java +++ b/core/src/test/java/org/elasticsearch/index/IndexingSlowLogTests.java @@ -22,6 +22,7 @@ import org.apache.lucene.document.NumericDocValuesField; import org.elasticsearch.Version; import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; @@ -62,6 +63,16 @@ public void testSlowLogParsedDocumentPrinterSourceToLog() throws IOException { p = new SlowLogParsedDocumentPrinter(index, pd, 10, true, 3); assertThat(p.toString(), containsString("source[{\"f]")); assertThat(p.toString(), startsWith("[foo/123] took")); + + // Throwing a error if source cannot be converted + source = new BytesArray("invalid"); + pd = new ParsedDocument(new NumericDocValuesField("version", 1), SeqNoFieldMapper.SequenceIDFields.emptySeqID(), "id", + "test", null, null, source, XContentType.JSON, null); + p = new SlowLogParsedDocumentPrinter(index, pd, 10, true, 3); + + assertThat(p.toString(), containsString("_failed_to_convert_[Unrecognized token 'invalid':" + + " was expecting ('true', 'false' or 'null')\n" + + " at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper")); } public void testReformatSetting() { diff --git a/core/src/test/java/org/elasticsearch/index/analysis/AnalysisRegistryTests.java b/core/src/test/java/org/elasticsearch/index/analysis/AnalysisRegistryTests.java index 9303159c265b9..9c0f2b3c7a550 100644 --- a/core/src/test/java/org/elasticsearch/index/analysis/AnalysisRegistryTests.java +++ b/core/src/test/java/org/elasticsearch/index/analysis/AnalysisRegistryTests.java @@ -30,6 +30,7 @@ import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; +import org.elasticsearch.env.TestEnvironment; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.indices.analysis.AnalysisModule; import org.elasticsearch.indices.analysis.AnalysisModule.AnalysisProvider; @@ -56,8 +57,8 @@ private static AnalyzerProvider analyzerProvider(final String name) { } private static AnalysisRegistry 
emptyAnalysisRegistry(Settings settings) { - return new AnalysisRegistry(new Environment(settings), emptyMap(), emptyMap(), emptyMap(), emptyMap(), emptyMap(), emptyMap(), - emptyMap(), emptyMap()); + return new AnalysisRegistry(TestEnvironment.newEnvironment(settings), emptyMap(), emptyMap(), emptyMap(), emptyMap(), emptyMap(), + emptyMap(), emptyMap(), emptyMap()); } private static IndexSettings indexSettingsOfCurrentVersion(Settings.Builder settings) { @@ -129,9 +130,9 @@ public void testConfigureCamelCaseTokenFilter() throws IOException { .put("index.analysis.filter.testFilter.type", "mock") .put("index.analysis.filter.test_filter.type", "mock") .put("index.analysis.analyzer.custom_analyzer_with_camel_case.tokenizer", "standard") - .putArray("index.analysis.analyzer.custom_analyzer_with_camel_case.filter", "lowercase", "testFilter") + .putList("index.analysis.analyzer.custom_analyzer_with_camel_case.filter", "lowercase", "testFilter") .put("index.analysis.analyzer.custom_analyzer_with_snake_case.tokenizer", "standard") - .putArray("index.analysis.analyzer.custom_analyzer_with_snake_case.filter", "lowercase", "test_filter").build(); + .putList("index.analysis.analyzer.custom_analyzer_with_snake_case.filter", "lowercase", "test_filter").build(); IndexSettings idxSettings = IndexSettingsModule.newIndexSettings("index", indexSettings); @@ -157,8 +158,8 @@ public Map> getTokenFilters() { return singletonMap("mock", MockFactory::new); } }; - IndexAnalyzers indexAnalyzers = new AnalysisModule(new Environment(settings), singletonList(plugin)).getAnalysisRegistry() - .build(idxSettings); + IndexAnalyzers indexAnalyzers = new AnalysisModule(TestEnvironment.newEnvironment(settings), + singletonList(plugin)).getAnalysisRegistry().build(idxSettings); // This shouldn't contain English stopwords try (NamedAnalyzer custom_analyser = indexAnalyzers.get("custom_analyzer_with_camel_case")) { @@ -209,8 +210,8 @@ public void testNoTypeOrTokenizerErrorMessage() throws IOException { .builder() .put(IndexMetaData.SETTING_VERSION_CREATED, version) .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) - .putArray("index.analysis.analyzer.test_analyzer.filter", new String[] {"lowercase", "stop", "shingle"}) - .putArray("index.analysis.analyzer.test_analyzer.char_filter", new String[] {"html_strip"}) + .putList("index.analysis.analyzer.test_analyzer.filter", new String[] {"lowercase", "stop", "shingle"}) + .putList("index.analysis.analyzer.test_analyzer.char_filter", new String[] {"html_strip"}) .build(); IndexSettings idxSettings = IndexSettingsModule.newIndexSettings("index", settings); diff --git a/core/src/test/java/org/elasticsearch/index/analysis/AnalysisTests.java b/core/src/test/java/org/elasticsearch/index/analysis/AnalysisTests.java index 4073bbdbbc9c7..4ed2202f585ea 100644 --- a/core/src/test/java/org/elasticsearch/index/analysis/AnalysisTests.java +++ b/core/src/test/java/org/elasticsearch/index/analysis/AnalysisTests.java @@ -22,6 +22,7 @@ import org.apache.lucene.analysis.CharArraySet; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; +import org.elasticsearch.env.TestEnvironment; import org.elasticsearch.test.ESTestCase; import java.io.BufferedWriter; @@ -29,7 +30,6 @@ import java.io.IOException; import java.io.OutputStream; import java.nio.charset.CharacterCodingException; -import java.nio.charset.Charset; import java.nio.charset.MalformedInputException; import java.nio.charset.StandardCharsets; import java.nio.file.Files; @@ -50,7 
+50,7 @@ public void testParseStemExclusion() { assertThat(set.contains("baz"), is(false)); /* Array */ - settings = Settings.builder().putArray("stem_exclusion", "foo","bar").build(); + settings = Settings.builder().putList("stem_exclusion", "foo","bar").build(); set = Analysis.parseStemExclusion(settings, CharArraySet.EMPTY_SET); assertThat(set.contains("foo"), is(true)); assertThat(set.contains("bar"), is(true)); @@ -62,7 +62,7 @@ public void testParseNonExistingFile() { Settings nodeSettings = Settings.builder() .put("foo.bar_path", tempDir.resolve("foo.dict")) .put(Environment.PATH_HOME_SETTING.getKey(), tempDir).build(); - Environment env = new Environment(nodeSettings); + Environment env = TestEnvironment.newEnvironment(nodeSettings); IllegalArgumentException ex = expectThrows(IllegalArgumentException.class, () -> Analysis.getWordList(env, nodeSettings, "foo.bar")); assertEquals("IOException while reading foo.bar_path: " + tempDir.resolve("foo.dict").toString(), ex.getMessage()); @@ -81,7 +81,7 @@ public void testParseFalseEncodedFile() throws IOException { writer.write(new byte[]{(byte) 0xff, 0x00, 0x00}); // some invalid UTF-8 writer.write('\n'); } - Environment env = new Environment(nodeSettings); + Environment env = TestEnvironment.newEnvironment(nodeSettings); IllegalArgumentException ex = expectThrows(IllegalArgumentException.class, () -> Analysis.getWordList(env, nodeSettings, "foo.bar")); assertEquals("Unsupported character encoding detected while reading foo.bar_path: " + tempDir.resolve("foo.dict").toString() @@ -102,7 +102,7 @@ public void testParseWordList() throws IOException { writer.write("world"); writer.write('\n'); } - Environment env = new Environment(nodeSettings); + Environment env = TestEnvironment.newEnvironment(nodeSettings); List wordList = Analysis.getWordList(env, nodeSettings, "foo.bar"); assertEquals(Arrays.asList("hello", "world"), wordList); diff --git a/core/src/test/java/org/elasticsearch/index/analysis/CustomNormalizerTests.java b/core/src/test/java/org/elasticsearch/index/analysis/CustomNormalizerTests.java index 66b28ec419a7f..7d8d64e6962d5 100644 --- a/core/src/test/java/org/elasticsearch/index/analysis/CustomNormalizerTests.java +++ b/core/src/test/java/org/elasticsearch/index/analysis/CustomNormalizerTests.java @@ -42,7 +42,7 @@ public class CustomNormalizerTests extends ESTokenStreamTestCase { public void testBasics() throws IOException { Settings settings = Settings.builder() - .putArray("index.analysis.normalizer.my_normalizer.filter", "lowercase") + .putList("index.analysis.normalizer.my_normalizer.filter", "lowercase") .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) .build(); ESTestCase.TestAnalysis analysis = AnalysisTestsHelper.createTestAnalysisFromSettings(settings, MOCK_ANALYSIS_PLUGIN); @@ -57,7 +57,7 @@ public void testBasics() throws IOException { public void testUnknownType() { Settings settings = Settings.builder() .put("index.analysis.normalizer.my_normalizer.type", "foobar") - .putArray("index.analysis.normalizer.my_normalizer.filter", "lowercase", "asciifolding") + .putList("index.analysis.normalizer.my_normalizer.filter", "lowercase", "asciifolding") .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) .build(); IllegalArgumentException e = expectThrows(IllegalArgumentException.class, @@ -78,7 +78,7 @@ public void testTokenizer() throws IOException { public void testCharFilters() throws IOException { Settings settings = Settings.builder() 
.put("index.analysis.char_filter.my_mapping.type", "mock_char_filter") - .putArray("index.analysis.normalizer.my_normalizer.char_filter", "my_mapping") + .putList("index.analysis.normalizer.my_normalizer.char_filter", "my_mapping") .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) .build(); ESTestCase.TestAnalysis analysis = AnalysisTestsHelper.createTestAnalysisFromSettings(settings, MOCK_ANALYSIS_PLUGIN); @@ -92,7 +92,7 @@ public void testCharFilters() throws IOException { public void testIllegalFilters() throws IOException { Settings settings = Settings.builder() - .putArray("index.analysis.normalizer.my_normalizer.filter", "mock_forbidden") + .putList("index.analysis.normalizer.my_normalizer.filter", "mock_forbidden") .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) .build(); IllegalArgumentException e = expectThrows(IllegalArgumentException.class, @@ -102,7 +102,7 @@ public void testIllegalFilters() throws IOException { public void testIllegalCharFilters() throws IOException { Settings settings = Settings.builder() - .putArray("index.analysis.normalizer.my_normalizer.char_filter", "mock_forbidden") + .putList("index.analysis.normalizer.my_normalizer.char_filter", "mock_forbidden") .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) .build(); IllegalArgumentException e = expectThrows(IllegalArgumentException.class, diff --git a/core/src/test/java/org/elasticsearch/index/analysis/ShingleTokenFilterFactoryTests.java b/core/src/test/java/org/elasticsearch/index/analysis/ShingleTokenFilterFactoryTests.java index 3997ece13610a..3af58d4ef73af 100644 --- a/core/src/test/java/org/elasticsearch/index/analysis/ShingleTokenFilterFactoryTests.java +++ b/core/src/test/java/org/elasticsearch/index/analysis/ShingleTokenFilterFactoryTests.java @@ -27,6 +27,7 @@ import org.apache.lucene.analysis.Tokenizer; import org.apache.lucene.analysis.core.WhitespaceTokenizer; import org.apache.lucene.analysis.miscellaneous.DisableGraphAttribute; +import org.elasticsearch.index.IndexSettings; import org.elasticsearch.test.ESTestCase; import org.elasticsearch.test.ESTokenStreamTestCase; @@ -102,4 +103,25 @@ public void testDisableGraph() throws IOException { assertFalse(stream.hasAttribute(DisableGraphAttribute.class)); } } + + /* + * test that throws an error when trying to get a ShingleTokenFilter where the difference between max_shingle_size and min_shingle_size + * is greater than the allowed value of max_shingle_diff + */ + public void testMaxShingleDiffException() throws Exception { + String RESOURCE2 = "/org/elasticsearch/index/analysis/shingle_analysis2.json"; + int maxAllowedShingleDiff = 3; + int shingleDiff = 8; + try { + ESTestCase.TestAnalysis analysis = AnalysisTestsHelper.createTestAnalysisFromClassPath(createTempDir(), RESOURCE2); + analysis.tokenFilter.get("shingle"); + fail(); + } catch (IllegalArgumentException ex) { + assertEquals( + "In Shingle TokenFilter the difference between max_shingle_size and min_shingle_size (and +1 if outputting unigrams)" + + " must be less than or equal to: [" + maxAllowedShingleDiff + "] but was [" + shingleDiff + "].
This limit" + + " can be set by changing the [" + IndexSettings.MAX_SHINGLE_DIFF_SETTING.getKey() + "] index level setting.", + ex.getMessage()); + } + } } diff --git a/core/src/test/java/org/elasticsearch/index/analysis/StopAnalyzerTests.java b/core/src/test/java/org/elasticsearch/index/analysis/StopAnalyzerTests.java index e166f4b7b9e30..8a60e9be65ddd 100644 --- a/core/src/test/java/org/elasticsearch/index/analysis/StopAnalyzerTests.java +++ b/core/src/test/java/org/elasticsearch/index/analysis/StopAnalyzerTests.java @@ -33,7 +33,7 @@ public class StopAnalyzerTests extends ESTokenStreamTestCase { public void testDefaultsCompoundAnalysis() throws Exception { String json = "/org/elasticsearch/index/analysis/stop.json"; Settings settings = Settings.builder() - .loadFromStream(json, getClass().getResourceAsStream(json)) + .loadFromStream(json, getClass().getResourceAsStream(json), false) .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) .build(); diff --git a/core/src/test/java/org/elasticsearch/index/analysis/WhitespaceTokenizerFactoryTests.java b/core/src/test/java/org/elasticsearch/index/analysis/WhitespaceTokenizerFactoryTests.java new file mode 100644 index 0000000000000..6dbb5e174b145 --- /dev/null +++ b/core/src/test/java/org/elasticsearch/index/analysis/WhitespaceTokenizerFactoryTests.java @@ -0,0 +1,83 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.index.analysis; + +import com.carrotsearch.randomizedtesting.generators.RandomStrings; + +import org.apache.lucene.analysis.core.WhitespaceTokenizer; +import org.elasticsearch.Version; +import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.Index; +import org.elasticsearch.index.IndexSettings; +import org.elasticsearch.test.ESTestCase; +import org.elasticsearch.test.IndexSettingsModule; + +import java.io.IOException; +import java.io.Reader; +import java.io.StringReader; + +import static org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents; + +public class WhitespaceTokenizerFactoryTests extends ESTestCase { + + public void testSimpleWhiteSpaceTokenizer() throws IOException { + final Settings indexSettings = Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build(); + IndexSettings indexProperties = IndexSettingsModule.newIndexSettings(new Index("test", "_na_"), indexSettings); + WhitespaceTokenizer tokenizer = (WhitespaceTokenizer) new WhitespaceTokenizerFactory(indexProperties, null, "whitespace_maxlen", + Settings.EMPTY).create(); + + try (Reader reader = new StringReader("one, two, three")) { + tokenizer.setReader(reader); + assertTokenStreamContents(tokenizer, new String[] { "one,", "two,", "three" }); + } + } + + public void testMaxTokenLength() throws IOException { + final Settings indexSettings = Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build(); + IndexSettings indexProperties = IndexSettingsModule.newIndexSettings(new Index("test", "_na_"), indexSettings); + final Settings settings = Settings.builder().put(WhitespaceTokenizerFactory.MAX_TOKEN_LENGTH, 2).build(); + WhitespaceTokenizer tokenizer = (WhitespaceTokenizer) new WhitespaceTokenizerFactory(indexProperties, null, "whitespace_maxlen", + settings).create(); + try (Reader reader = new StringReader("one, two, three")) { + tokenizer.setReader(reader); + assertTokenStreamContents(tokenizer, new String[] { "on", "e,", "tw", "o,", "th", "re", "e" }); + } + + final Settings defaultSettings = Settings.EMPTY; + tokenizer = (WhitespaceTokenizer) new WhitespaceTokenizerFactory(indexProperties, null, "whitespace_maxlen", defaultSettings) + .create(); + String veryLongToken = RandomStrings.randomAsciiAlphanumOfLength(random(), 256); + try (Reader reader = new StringReader(veryLongToken)) { + tokenizer.setReader(reader); + assertTokenStreamContents(tokenizer, new String[] { veryLongToken.substring(0, 255), veryLongToken.substring(255) }); + } + + final Settings tooLongSettings = Settings.builder().put(WhitespaceTokenizerFactory.MAX_TOKEN_LENGTH, 1024 * 1024 + 1).build(); + IllegalArgumentException e = expectThrows(IllegalArgumentException.class, + () -> new WhitespaceTokenizerFactory(indexProperties, null, "whitespace_maxlen", tooLongSettings).create()); + assertEquals("maxTokenLen must be greater than 0 and less than 1048576 passed: 1048577", e.getMessage()); + + final Settings negativeSettings = Settings.builder().put(WhitespaceTokenizerFactory.MAX_TOKEN_LENGTH, -1).build(); + e = expectThrows(IllegalArgumentException.class, + () -> new WhitespaceTokenizerFactory(indexProperties, null, "whitespace_maxlen", negativeSettings).create()); + assertEquals("maxTokenLen must be greater than 0 and less than 1048576 passed: -1", e.getMessage()); + } +} diff --git 
a/core/src/test/java/org/elasticsearch/index/analysis/synonyms/SynonymsAnalysisTests.java b/core/src/test/java/org/elasticsearch/index/analysis/synonyms/SynonymsAnalysisTests.java index b5640cdd1206f..36c9dee10919f 100644 --- a/core/src/test/java/org/elasticsearch/index/analysis/synonyms/SynonymsAnalysisTests.java +++ b/core/src/test/java/org/elasticsearch/index/analysis/synonyms/SynonymsAnalysisTests.java @@ -23,11 +23,9 @@ import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.analysis.tokenattributes.CharTermAttribute; -import org.apache.lucene.queryparser.classic.ParseException; import org.elasticsearch.Version; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.logging.Loggers; -import org.elasticsearch.common.lucene.all.AllTokenStream; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; import org.elasticsearch.index.IndexSettings; @@ -60,7 +58,7 @@ public void testSynonymsAnalysis() throws IOException { String json = "/org/elasticsearch/index/analysis/synonyms/synonyms.json"; Settings settings = Settings.builder(). - loadFromStream(json, getClass().getResourceAsStream(json)) + loadFromStream(json, getClass().getResourceAsStream(json), false) .put(Environment.PATH_HOME_SETTING.getKey(), home) .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build(); @@ -85,12 +83,12 @@ public void testSynonymWordDeleteByAnalyzer() throws IOException { .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) .put("path.home", createTempDir().toString()) .put("index.analysis.filter.synonym.type", "synonym") - .putArray("index.analysis.filter.synonym.synonyms", "kimchy => shay", "dude => elasticsearch", "abides => man!") + .putList("index.analysis.filter.synonym.synonyms", "kimchy => shay", "dude => elasticsearch", "abides => man!") .put("index.analysis.filter.stop_within_synonym.type", "stop") - .putArray("index.analysis.filter.stop_within_synonym.stopwords", "kimchy", "elasticsearch") + .putList("index.analysis.filter.stop_within_synonym.stopwords", "kimchy", "elasticsearch") .put("index.analysis.analyzer.synonymAnalyzerWithStopSynonymBeforeSynonym.tokenizer", "whitespace") - .putArray("index.analysis.analyzer.synonymAnalyzerWithStopSynonymBeforeSynonym.filter", "stop_within_synonym","synonym") - .put().build(); + .putList("index.analysis.analyzer.synonymAnalyzerWithStopSynonymBeforeSynonym.filter", "stop_within_synonym","synonym") + .build(); IndexSettings idxSettings = IndexSettingsModule.newIndexSettings("index", settings); try { indexAnalyzers = createTestAnalysis(idxSettings, settings).indexAnalyzers; @@ -106,12 +104,12 @@ public void testExpandSynonymWordDeleteByAnalyzer() throws IOException { .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) .put("path.home", createTempDir().toString()) .put("index.analysis.filter.synonym_expand.type", "synonym") - .putArray("index.analysis.filter.synonym_expand.synonyms", "kimchy, shay", "dude, elasticsearch", "abides, man!") + .putList("index.analysis.filter.synonym_expand.synonyms", "kimchy, shay", "dude, elasticsearch", "abides, man!") .put("index.analysis.filter.stop_within_synonym.type", "stop") - .putArray("index.analysis.filter.stop_within_synonym.stopwords", "kimchy", "elasticsearch") + .putList("index.analysis.filter.stop_within_synonym.stopwords", "kimchy", "elasticsearch") .put("index.analysis.analyzer.synonymAnalyzerExpandWithStopBeforeSynonym.tokenizer", "whitespace") - 
.putArray("index.analysis.analyzer.synonymAnalyzerExpandWithStopBeforeSynonym.filter", "stop_within_synonym","synonym_expand") - .put().build(); + .putList("index.analysis.analyzer.synonymAnalyzerExpandWithStopBeforeSynonym.filter", "stop_within_synonym","synonym_expand") + .build(); IndexSettings idxSettings = IndexSettingsModule.newIndexSettings("index", settings); try { indexAnalyzers = createTestAnalysis(idxSettings, settings).indexAnalyzers; @@ -126,7 +124,7 @@ public void testExpandSynonymWordDeleteByAnalyzer() throws IOException { private void match(String analyzerName, String source, String target) throws IOException { Analyzer analyzer = indexAnalyzers.get(analyzerName).analyzer(); - TokenStream stream = AllTokenStream.allTokenStream("_all", source, 1.0f, analyzer); + TokenStream stream = analyzer.tokenStream("", source); stream.reset(); CharTermAttribute termAtt = stream.addAttribute(CharTermAttribute.class); diff --git a/core/src/test/java/org/elasticsearch/index/engine/CombinedDeletionPolicyTests.java b/core/src/test/java/org/elasticsearch/index/engine/CombinedDeletionPolicyTests.java index d1eef05c2efa1..5d4385cbd384b 100644 --- a/core/src/test/java/org/elasticsearch/index/engine/CombinedDeletionPolicyTests.java +++ b/core/src/test/java/org/elasticsearch/index/engine/CombinedDeletionPolicyTests.java @@ -30,7 +30,7 @@ import java.util.Collections; import java.util.List; -import static org.elasticsearch.index.translog.TranslogDeletionPolicyTests.createTranslogDeletionPolicy; +import static org.elasticsearch.index.translog.TranslogDeletionPolicies.createTranslogDeletionPolicy; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.times; import static org.mockito.Mockito.verify; diff --git a/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java b/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java index 8116cea1391f9..e196c6b4d0bbe 100644 --- a/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java +++ b/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java @@ -26,9 +26,7 @@ import org.apache.logging.log4j.core.LogEvent; import org.apache.logging.log4j.core.appender.AbstractAppender; import org.apache.logging.log4j.core.filter.RegexFilter; -import org.apache.lucene.analysis.Analyzer; -import org.apache.lucene.analysis.standard.StandardAnalyzer; -import org.apache.lucene.codecs.Codec; +import org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat; import org.apache.lucene.document.Field; import org.apache.lucene.document.LongPoint; import org.apache.lucene.document.NumericDocValuesField; @@ -45,7 +43,6 @@ import org.apache.lucene.index.LiveIndexWriterConfig; import org.apache.lucene.index.LogByteSizeMergePolicy; import org.apache.lucene.index.LogDocMergePolicy; -import org.apache.lucene.index.MergePolicy; import org.apache.lucene.index.NoMergePolicy; import org.apache.lucene.index.NumericDocValues; import org.apache.lucene.index.PointValues; @@ -76,14 +73,13 @@ import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.cluster.routing.ShardRoutingState; import org.elasticsearch.cluster.routing.TestShardRouting; -import org.elasticsearch.common.Nullable; import org.elasticsearch.common.Randomness; import org.elasticsearch.common.Strings; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.logging.Loggers; -import 
org.elasticsearch.common.lucene.Lucene; +import org.elasticsearch.common.lucene.index.ElasticsearchDirectoryReader; import org.elasticsearch.common.lucene.uid.Versions; import org.elasticsearch.common.lucene.uid.VersionsAndSeqNoResolver; import org.elasticsearch.common.lucene.uid.VersionsAndSeqNoResolver.DocIdAndSeqNo; @@ -92,54 +88,37 @@ import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.common.util.concurrent.AbstractRunnable; import org.elasticsearch.common.util.concurrent.ConcurrentCollections; -import org.elasticsearch.common.xcontent.NamedXContentRegistry; -import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentType; -import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.VersionType; -import org.elasticsearch.index.analysis.AnalyzerScope; -import org.elasticsearch.index.analysis.IndexAnalyzers; -import org.elasticsearch.index.analysis.NamedAnalyzer; import org.elasticsearch.index.codec.CodecService; import org.elasticsearch.index.engine.Engine.Searcher; import org.elasticsearch.index.fieldvisitor.FieldsVisitor; import org.elasticsearch.index.mapper.ContentPath; -import org.elasticsearch.index.mapper.DocumentMapper; -import org.elasticsearch.index.mapper.DocumentMapperForType; import org.elasticsearch.index.mapper.IdFieldMapper; import org.elasticsearch.index.mapper.Mapper.BuilderContext; -import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.mapper.Mapping; import org.elasticsearch.index.mapper.MetadataFieldMapper; +import org.elasticsearch.index.mapper.ParseContext; import org.elasticsearch.index.mapper.ParseContext.Document; import org.elasticsearch.index.mapper.ParsedDocument; import org.elasticsearch.index.mapper.RootObjectMapper; import org.elasticsearch.index.mapper.SeqNoFieldMapper; import org.elasticsearch.index.mapper.SourceFieldMapper; import org.elasticsearch.index.mapper.Uid; +import org.elasticsearch.index.seqno.SeqNoStats; import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.index.seqno.SequenceNumbersService; import org.elasticsearch.index.shard.IndexSearcherWrapper; -import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.index.shard.ShardUtils; -import org.elasticsearch.index.similarity.SimilarityService; -import org.elasticsearch.index.store.DirectoryService; import org.elasticsearch.index.store.DirectoryUtils; import org.elasticsearch.index.store.Store; import org.elasticsearch.index.translog.Translog; import org.elasticsearch.index.translog.TranslogConfig; -import org.elasticsearch.indices.IndicesModule; -import org.elasticsearch.indices.mapper.MapperRegistry; -import org.elasticsearch.test.DummyShardLock; -import org.elasticsearch.test.ESTestCase; import org.elasticsearch.test.IndexSettingsModule; -import org.elasticsearch.threadpool.TestThreadPool; -import org.elasticsearch.threadpool.ThreadPool; import org.hamcrest.MatcherAssert; -import org.junit.After; -import org.junit.Before; +import org.hamcrest.Matchers; import java.io.IOException; import java.io.UncheckedIOException; @@ -167,21 +146,19 @@ import java.util.concurrent.atomic.AtomicLong; import java.util.concurrent.atomic.AtomicReference; import java.util.function.BiFunction; -import java.util.function.Function; import java.util.function.LongSupplier; import java.util.function.Supplier; +import java.util.function.ToLongBiFunction; import 
java.util.stream.Collectors; import java.util.stream.LongStream; -import static java.util.Collections.emptyList; import static java.util.Collections.emptyMap; import static java.util.Collections.shuffle; import static org.elasticsearch.index.engine.Engine.Operation.Origin.LOCAL_TRANSLOG_RECOVERY; import static org.elasticsearch.index.engine.Engine.Operation.Origin.PEER_RECOVERY; import static org.elasticsearch.index.engine.Engine.Operation.Origin.PRIMARY; import static org.elasticsearch.index.engine.Engine.Operation.Origin.REPLICA; -import static org.elasticsearch.index.mapper.SourceToParse.source; -import static org.elasticsearch.index.translog.TranslogDeletionPolicyTests.createTranslogDeletionPolicy; +import static org.elasticsearch.index.translog.TranslogDeletionPolicies.createTranslogDeletionPolicy; import static org.hamcrest.CoreMatchers.instanceOf; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.everyItem; @@ -193,274 +170,7 @@ import static org.hamcrest.Matchers.notNullValue; import static org.hamcrest.Matchers.nullValue; -public class InternalEngineTests extends ESTestCase { - - protected final ShardId shardId = new ShardId(new Index("index", "_na_"), 0); - private static final IndexSettings INDEX_SETTINGS = IndexSettingsModule.newIndexSettings("index", Settings.EMPTY); - - protected ThreadPool threadPool; - - private Store store; - private Store storeReplica; - - protected InternalEngine engine; - protected InternalEngine replicaEngine; - - private IndexSettings defaultSettings; - private String codecName; - private Path primaryTranslogDir; - private Path replicaTranslogDir; - - @Override - @Before - public void setUp() throws Exception { - super.setUp(); - - CodecService codecService = new CodecService(null, logger); - String name = Codec.getDefault().getName(); - if (Arrays.asList(codecService.availableCodecs()).contains(name)) { - // some codecs are read only so we only take the ones that we have in the service and randomly - // selected by lucene test case. 
- codecName = name; - } else { - codecName = "default"; - } - defaultSettings = IndexSettingsModule.newIndexSettings("test", Settings.builder() - .put(IndexSettings.INDEX_GC_DELETES_SETTING.getKey(), "1h") // make sure this doesn't kick in on us - .put(EngineConfig.INDEX_CODEC_SETTING.getKey(), codecName) - .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) - .put(IndexSettings.MAX_REFRESH_LISTENERS_PER_SHARD.getKey(), - between(10, 10 * IndexSettings.MAX_REFRESH_LISTENERS_PER_SHARD.get(Settings.EMPTY))) - .build()); // TODO randomize more settings - threadPool = new TestThreadPool(getClass().getName()); - store = createStore(); - storeReplica = createStore(); - Lucene.cleanLuceneIndex(store.directory()); - Lucene.cleanLuceneIndex(storeReplica.directory()); - primaryTranslogDir = createTempDir("translog-primary"); - engine = createEngine(store, primaryTranslogDir); - LiveIndexWriterConfig currentIndexWriterConfig = engine.getCurrentIndexWriterConfig(); - - assertEquals(engine.config().getCodec().getName(), codecService.codec(codecName).getName()); - assertEquals(currentIndexWriterConfig.getCodec().getName(), codecService.codec(codecName).getName()); - if (randomBoolean()) { - engine.config().setEnableGcDeletes(false); - } - replicaTranslogDir = createTempDir("translog-replica"); - replicaEngine = createEngine(storeReplica, replicaTranslogDir); - currentIndexWriterConfig = replicaEngine.getCurrentIndexWriterConfig(); - - assertEquals(replicaEngine.config().getCodec().getName(), codecService.codec(codecName).getName()); - assertEquals(currentIndexWriterConfig.getCodec().getName(), codecService.codec(codecName).getName()); - if (randomBoolean()) { - engine.config().setEnableGcDeletes(false); - } - } - - public EngineConfig copy(EngineConfig config, EngineConfig.OpenMode openMode) { - return copy(config, openMode, config.getAnalyzer()); - } - - public EngineConfig copy(EngineConfig config, EngineConfig.OpenMode openMode, Analyzer analyzer) { - return new EngineConfig(openMode, config.getShardId(), config.getThreadPool(), config.getIndexSettings(), config.getWarmer(), - config.getStore(), config.getMergePolicy(), analyzer, config.getSimilarity(), - new CodecService(null, logger), config.getEventListener(), config.getQueryCache(), - config.getQueryCachingPolicy(), config.getTranslogConfig(), - config.getFlushMergesAfter(), config.getRefreshListeners(), config.getIndexSort(), config.getTranslogRecoveryRunner()); - } - - @Override - @After - public void tearDown() throws Exception { - super.tearDown(); - if (engine != null && engine.isClosed.get() == false) { - engine.getTranslog().getDeletionPolicy().assertNoOpenTranslogRefs(); - } - if (replicaEngine != null && replicaEngine.isClosed.get() == false) { - replicaEngine.getTranslog().getDeletionPolicy().assertNoOpenTranslogRefs(); - } - IOUtils.close( - replicaEngine, storeReplica, - engine, store); - terminate(threadPool); - } - - - private static Document testDocumentWithTextField() { - return testDocumentWithTextField("test"); - } - - private static Document testDocumentWithTextField(String value) { - Document document = testDocument(); - document.add(new TextField("value", value, Field.Store.YES)); - return document; - } - - - private static Document testDocument() { - return new Document(); - } - - public static ParsedDocument createParsedDoc(String id, String routing) { - return testParsedDocument(id, routing, testDocumentWithTextField(), new BytesArray("{ \"value\" : \"test\" }"), null); - } - - private static ParsedDocument 
testParsedDocument(String id, String routing, Document document, BytesReference source, Mapping mappingUpdate) { - Field uidField = new Field("_id", Uid.encodeId(id), IdFieldMapper.Defaults.FIELD_TYPE); - Field versionField = new NumericDocValuesField("_version", 0); - SeqNoFieldMapper.SequenceIDFields seqID = SeqNoFieldMapper.SequenceIDFields.emptySeqID(); - document.add(uidField); - document.add(versionField); - document.add(seqID.seqNo); - document.add(seqID.seqNoDocValue); - document.add(seqID.primaryTerm); - BytesRef ref = source.toBytesRef(); - document.add(new StoredField(SourceFieldMapper.NAME, ref.bytes, ref.offset, ref.length)); - return new ParsedDocument(versionField, seqID, id, "test", routing, Arrays.asList(document), source, XContentType.JSON, - mappingUpdate); - } - - protected Store createStore() throws IOException { - return createStore(newDirectory()); - } - - protected Store createStore(final Directory directory) throws IOException { - return createStore(INDEX_SETTINGS, directory); - } - - protected Store createStore(final IndexSettings indexSettings, final Directory directory) throws IOException { - final DirectoryService directoryService = new DirectoryService(shardId, indexSettings) { - @Override - public Directory newDirectory() throws IOException { - return directory; - } - }; - return new Store(shardId, indexSettings, directoryService, new DummyShardLock(shardId)); - } - - protected Translog createTranslog() throws IOException { - return createTranslog(primaryTranslogDir); - } - - protected Translog createTranslog(Path translogPath) throws IOException { - TranslogConfig translogConfig = new TranslogConfig(shardId, translogPath, INDEX_SETTINGS, BigArrays.NON_RECYCLING_INSTANCE); - return new Translog(translogConfig, null, createTranslogDeletionPolicy(INDEX_SETTINGS), () -> SequenceNumbersService.UNASSIGNED_SEQ_NO); - } - - protected InternalEngine createEngine(Store store, Path translogPath) throws IOException { - return createEngine(defaultSettings, store, translogPath, newMergePolicy(), null); - } - - protected InternalEngine createEngine(Store store, Path translogPath, - Function sequenceNumbersServiceSupplier) throws IOException { - return createEngine(defaultSettings, store, translogPath, newMergePolicy(), null, sequenceNumbersServiceSupplier); - } - - protected InternalEngine createEngine(IndexSettings indexSettings, Store store, Path translogPath, MergePolicy mergePolicy) throws IOException { - return createEngine(indexSettings, store, translogPath, mergePolicy, null); - - } - - protected InternalEngine createEngine(IndexSettings indexSettings, Store store, Path translogPath, MergePolicy mergePolicy, - @Nullable IndexWriterFactory indexWriterFactory) throws IOException { - return createEngine(indexSettings, store, translogPath, mergePolicy, indexWriterFactory, null); - } - - protected InternalEngine createEngine( - IndexSettings indexSettings, - Store store, - Path translogPath, - MergePolicy mergePolicy, - @Nullable IndexWriterFactory indexWriterFactory, - @Nullable Function sequenceNumbersServiceSupplier) throws IOException { - return createEngine(indexSettings, store, translogPath, mergePolicy, indexWriterFactory, sequenceNumbersServiceSupplier, null); - } - - protected InternalEngine createEngine( - IndexSettings indexSettings, - Store store, - Path translogPath, - MergePolicy mergePolicy, - @Nullable IndexWriterFactory indexWriterFactory, - @Nullable Function sequenceNumbersServiceSupplier, - @Nullable Sort indexSort) throws IOException { - EngineConfig 
config = config(indexSettings, store, translogPath, mergePolicy, null, indexSort); - InternalEngine internalEngine = createInternalEngine(indexWriterFactory, sequenceNumbersServiceSupplier, config); - if (config.getOpenMode() == EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG) { - internalEngine.recoverFromTranslog(); - } - return internalEngine; - } - - @FunctionalInterface - public interface IndexWriterFactory { - - IndexWriter createWriter(Directory directory, IndexWriterConfig iwc) throws IOException; - } - - public static InternalEngine createInternalEngine(@Nullable final IndexWriterFactory indexWriterFactory, - @Nullable final Function sequenceNumbersServiceSupplier, - final EngineConfig config) { - return new InternalEngine(config) { - @Override - IndexWriter createWriter(Directory directory, IndexWriterConfig iwc) throws IOException { - return (indexWriterFactory != null) ? - indexWriterFactory.createWriter(directory, iwc) : - super.createWriter(directory, iwc); - } - - @Override - public SequenceNumbersService seqNoService() { - return (sequenceNumbersServiceSupplier != null) ? sequenceNumbersServiceSupplier.apply(config) : super.seqNoService(); - } - }; - } - - public EngineConfig config(IndexSettings indexSettings, Store store, Path translogPath, MergePolicy mergePolicy, - ReferenceManager.RefreshListener refreshListener) { - return config(indexSettings, store, translogPath, mergePolicy, refreshListener, null); - } - - public EngineConfig config(IndexSettings indexSettings, Store store, Path translogPath, MergePolicy mergePolicy, - ReferenceManager.RefreshListener refreshListener, Sort indexSort) { - IndexWriterConfig iwc = newIndexWriterConfig(); - TranslogConfig translogConfig = new TranslogConfig(shardId, translogPath, indexSettings, BigArrays.NON_RECYCLING_INSTANCE); - final EngineConfig.OpenMode openMode; - try { - if (Lucene.indexExists(store.directory()) == false) { - openMode = EngineConfig.OpenMode.CREATE_INDEX_AND_TRANSLOG; - } else { - openMode = EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG; - } - } catch (IOException e) { - throw new ElasticsearchException("can't find index?", e); - } - Engine.EventListener listener = new Engine.EventListener() { - @Override - public void onFailedEngine(String reason, @Nullable Exception e) { - // we don't need to notify anybody in this test - } - }; - final TranslogHandler handler = new TranslogHandler(xContentRegistry(), IndexSettingsModule.newIndexSettings(shardId.getIndexName(), - indexSettings.getSettings())); - final List refreshListenerList = - refreshListener == null ? 
emptyList() : Collections.singletonList(refreshListener); - EngineConfig config = new EngineConfig(openMode, shardId, threadPool, indexSettings, null, store, - mergePolicy, iwc.getAnalyzer(), iwc.getSimilarity(), new CodecService(null, logger), listener, - IndexSearcher.getDefaultQueryCache(), IndexSearcher.getDefaultQueryCachingPolicy(), translogConfig, - TimeValue.timeValueMinutes(5), refreshListenerList, indexSort, handler); - - return config; - } - - private static final BytesReference B_1 = new BytesArray(new byte[]{1}); - private static final BytesReference B_2 = new BytesArray(new byte[]{2}); - private static final BytesReference B_3 = new BytesArray(new byte[]{3}); - private static final BytesArray SOURCE = bytesArray("{}"); - - private static BytesArray bytesArray(String string) { - return new BytesArray(string.getBytes(Charset.defaultCharset())); - } +public class InternalEngineTests extends EngineTestCase { public void testSegments() throws Exception { try (Store store = createStore(); @@ -495,6 +205,7 @@ public void testSegments() throws Exception { assertThat(segments.get(0).getDeletedDocs(), equalTo(0)); assertThat(segments.get(0).isCompound(), equalTo(true)); assertThat(segments.get(0).ramTree, nullValue()); + assertThat(segments.get(0).getAttributes().keySet(), Matchers.contains(Lucene50StoredFieldsFormat.MODE_KEY)); engine.flush(); @@ -666,8 +377,8 @@ public void testSegmentsWithMergeFlag() throws Exception { public void testSegmentsWithIndexSort() throws Exception { Sort indexSort = new Sort(new SortedSetSortField("_type", false)); try (Store store = createStore(); - Engine engine = createEngine(defaultSettings, store, createTempDir(), NoMergePolicy.INSTANCE, - null, null, indexSort)) { + Engine engine = + createEngine(defaultSettings, store, createTempDir(), NoMergePolicy.INSTANCE, null, null, null, indexSort)) { List segments = engine.segments(true); assertThat(segments.isEmpty(), equalTo(true)); @@ -718,17 +429,33 @@ public void testSegmentsStatsIncludingFileSizes() throws Exception { } public void testCommitStats() throws IOException { - final AtomicLong maxSeqNo = new AtomicLong(SequenceNumbersService.NO_OPS_PERFORMED); - final AtomicLong localCheckpoint = new AtomicLong(SequenceNumbersService.NO_OPS_PERFORMED); - final AtomicLong globalCheckpoint = new AtomicLong(SequenceNumbersService.UNASSIGNED_SEQ_NO); + final AtomicLong maxSeqNo = new AtomicLong(SequenceNumbers.NO_OPS_PERFORMED); + final AtomicLong localCheckpoint = new AtomicLong(SequenceNumbers.NO_OPS_PERFORMED); + final AtomicLong globalCheckpoint = new AtomicLong(SequenceNumbers.UNASSIGNED_SEQ_NO); try ( Store store = createStore(); - InternalEngine engine = createEngine(store, createTempDir(), (config) -> new SequenceNumbersService( - config.getShardId(), - config.getIndexSettings(), - maxSeqNo.get(), - localCheckpoint.get(), - globalCheckpoint.get()) + InternalEngine engine = createEngine(store, createTempDir(), (config, seqNoStats) -> new SequenceNumbersService( + config.getShardId(), + config.getAllocationId(), + config.getIndexSettings(), + seqNoStats.getMaxSeqNo(), + seqNoStats.getLocalCheckpoint(), + seqNoStats.getGlobalCheckpoint()) { + @Override + public long getMaxSeqNo() { + return maxSeqNo.get(); + } + + @Override + public long getLocalCheckpoint() { + return localCheckpoint.get(); + } + + @Override + public long getGlobalCheckpoint() { + return globalCheckpoint.get(); + } + } )) { CommitStats stats1 = engine.commitStats(); assertThat(stats1.getGeneration(), greaterThan(0L)); @@ -737,19 +464,19 
@@ public void testCommitStats() throws IOException { assertThat(stats1.getUserData(), hasKey(SequenceNumbers.LOCAL_CHECKPOINT_KEY)); assertThat( Long.parseLong(stats1.getUserData().get(SequenceNumbers.LOCAL_CHECKPOINT_KEY)), - equalTo(SequenceNumbersService.NO_OPS_PERFORMED)); + equalTo(SequenceNumbers.NO_OPS_PERFORMED)); assertThat(stats1.getUserData(), hasKey(SequenceNumbers.MAX_SEQ_NO)); assertThat( Long.parseLong(stats1.getUserData().get(SequenceNumbers.MAX_SEQ_NO)), - equalTo(SequenceNumbersService.NO_OPS_PERFORMED)); + equalTo(SequenceNumbers.NO_OPS_PERFORMED)); - maxSeqNo.set(rarely() ? SequenceNumbersService.NO_OPS_PERFORMED : randomIntBetween(0, 1024)); + maxSeqNo.set(rarely() ? SequenceNumbers.NO_OPS_PERFORMED : randomIntBetween(0, 1024)); localCheckpoint.set( - rarely() || maxSeqNo.get() == SequenceNumbersService.NO_OPS_PERFORMED ? - SequenceNumbersService.NO_OPS_PERFORMED : randomIntBetween(0, 1024)); - globalCheckpoint.set(rarely() || localCheckpoint.get() == SequenceNumbersService.NO_OPS_PERFORMED ? - SequenceNumbersService.UNASSIGNED_SEQ_NO : randomIntBetween(0, (int) localCheckpoint.get())); + rarely() || maxSeqNo.get() == SequenceNumbers.NO_OPS_PERFORMED ? + SequenceNumbers.NO_OPS_PERFORMED : randomIntBetween(0, 1024)); + globalCheckpoint.set(rarely() || localCheckpoint.get() == SequenceNumbers.NO_OPS_PERFORMED ? + SequenceNumbers.UNASSIGNED_SEQ_NO : randomIntBetween(0, (int) localCheckpoint.get())); engine.flush(true, true); @@ -824,11 +551,11 @@ public void testTranslogMultipleOperationsSameDocument() throws IOException { for (int i = 0; i < ops; i++) { final ParsedDocument doc = testParsedDocument("1", null, testDocumentWithTextField(), SOURCE, null); if (randomBoolean()) { - final Engine.Index operation = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, i, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime(), -1, false); + final Engine.Index operation = new Engine.Index(newUid(doc), doc, SequenceNumbers.UNASSIGNED_SEQ_NO, 0, i, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime(), -1, false); operations.add(operation); initialEngine.index(operation); } else { - final Engine.Delete operation = new Engine.Delete("test", "1", newUid(doc), SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, i, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime()); + final Engine.Delete operation = new Engine.Delete("test", "1", newUid(doc), SequenceNumbers.UNASSIGNED_SEQ_NO, 0, i, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime()); operations.add(operation); initialEngine.delete(operation); } @@ -895,19 +622,11 @@ public void testTranslogRecoveryWithMultipleGenerations() throws IOException { Store store = createStore(); final AtomicInteger counter = new AtomicInteger(); try { - initialEngine = createEngine(store, createTempDir(), (config) -> - new SequenceNumbersService( - config.getShardId(), - config.getIndexSettings(), - SequenceNumbersService.NO_OPS_PERFORMED, - SequenceNumbersService.NO_OPS_PERFORMED, - SequenceNumbersService.UNASSIGNED_SEQ_NO) { - @Override - public long generateSeqNo() { - return seqNos.get(counter.getAndIncrement()); - } - } - ); + initialEngine = createEngine( + store, + createTempDir(), + InternalEngine::sequenceNumberService, + (engine, operation) -> seqNos.get(counter.getAndIncrement())); for (int i = 0; i < docs; i++) { final String id = Integer.toString(i); final ParsedDocument doc = testParsedDocument(id, null, testDocumentWithTextField(), SOURCE, 
null); @@ -935,7 +654,7 @@ public void testConcurrentGetAndFlush() throws Exception { engine.index(indexForDoc(doc)); final AtomicReference latestGetResult = new AtomicReference<>(); - final Function searcherFactory = engine::acquireSearcher; + final BiFunction searcherFactory = engine::acquireSearcher; latestGetResult.set(engine.get(newGet(true, doc), searcherFactory)); final AtomicBoolean flushFinished = new AtomicBoolean(false); final CyclicBarrier barrier = new CyclicBarrier(2); @@ -970,7 +689,7 @@ public void testSimpleOperations() throws Exception { MatcherAssert.assertThat(searchResult, EngineSearcherTotalHitsMatcher.engineSearcherTotalHits(0)); searchResult.close(); - final Function searcherFactory = engine::acquireSearcher; + final BiFunction searcherFactory = engine::acquireSearcher; // create a document Document document = testDocumentWithTextField(); @@ -995,6 +714,12 @@ public void testSimpleOperations() throws Exception { assertThat(getResult.docIdAndVersion(), notNullValue()); getResult.release(); + // but it is not yet visible to a non-realtime get + getResult = engine.get(newGet(false, doc), searcherFactory); + assertThat(getResult.exists(), equalTo(false)); + getResult.release(); + + // refresh and it should be there engine.refresh("test"); @@ -1212,7 +937,7 @@ public void testRenewSyncFlush() throws Exception { final boolean forceMergeFlushes = randomBoolean(); final ParsedDocument parsedDoc3 = testParsedDocument("3", null, testDocumentWithTextField(), B_1, null); if (forceMergeFlushes) { - engine.index(new Engine.Index(newUid(parsedDoc3), parsedDoc3, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_ANY, VersionType.INTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime() - engine.engineConfig.getFlushMergesAfter().nanos(), -1, false)); + engine.index(new Engine.Index(newUid(parsedDoc3), parsedDoc3, SequenceNumbers.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_ANY, VersionType.INTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime() - engine.engineConfig.getFlushMergesAfter().nanos(), -1, false)); } else { engine.index(indexForDoc(parsedDoc3)); } @@ -1230,6 +955,7 @@ public void testRenewSyncFlush() throws Exception { assertTrue(engine.tryRenewSyncCommit()); assertEquals(1, engine.segments(false).size()); } else { + engine.refresh("test"); assertBusy(() -> assertEquals(1, engine.segments(false).size())); } assertEquals(store.readLastCommittedSegmentsInfo().getUserData().get(Engine.SYNC_COMMIT_ID), syncId); @@ -1299,9 +1025,85 @@ public void testVersioningNewCreate() throws IOException { Engine.IndexResult indexResult = engine.index(create); assertThat(indexResult.getVersion(), equalTo(1L)); - create = new Engine.Index(newUid(doc), doc, indexResult.getSeqNo(), create.primaryTerm(), indexResult.getVersion(), create.versionType().versionTypeForReplicationAndRecovery(), REPLICA, 0, -1, false); + create = new Engine.Index(newUid(doc), doc, indexResult.getSeqNo(), create.primaryTerm(), indexResult.getVersion(), + create.versionType().versionTypeForReplicationAndRecovery(), REPLICA, 0, -1, false); + indexResult = replicaEngine.index(create); + assertThat(indexResult.getVersion(), equalTo(1L)); + } + + public void testReplicatedVersioningWithFlush() throws IOException { + ParsedDocument doc = testParsedDocument("1", null, testDocument(), B_1, null); + Engine.Index create = new Engine.Index(newUid(doc), doc, Versions.MATCH_DELETED); + Engine.IndexResult indexResult = engine.index(create); + assertThat(indexResult.getVersion(), equalTo(1L)); + assertTrue(indexResult.isCreated()); +
+ + create = new Engine.Index(newUid(doc), doc, indexResult.getSeqNo(), create.primaryTerm(), indexResult.getVersion(), + create.versionType().versionTypeForReplicationAndRecovery(), REPLICA, 0, -1, false); indexResult = replicaEngine.index(create); assertThat(indexResult.getVersion(), equalTo(1L)); + assertTrue(indexResult.isCreated()); + + if (randomBoolean()) { + engine.flush(); + } + if (randomBoolean()) { + replicaEngine.flush(); + } + + Engine.Index update = new Engine.Index(newUid(doc), doc, 1); + Engine.IndexResult updateResult = engine.index(update); + assertThat(updateResult.getVersion(), equalTo(2L)); + assertFalse(updateResult.isCreated()); + + + update = new Engine.Index(newUid(doc), doc, updateResult.getSeqNo(), update.primaryTerm(), updateResult.getVersion(), + update.versionType().versionTypeForReplicationAndRecovery(), REPLICA, 0, -1, false); + updateResult = replicaEngine.index(update); + assertThat(updateResult.getVersion(), equalTo(2L)); + assertFalse(updateResult.isCreated()); + replicaEngine.refresh("test"); + try (Searcher searcher = replicaEngine.acquireSearcher("test")) { + assertEquals(1, searcher.getDirectoryReader().numDocs()); + } + + engine.refresh("test"); + try (Searcher searcher = engine.acquireSearcher("test")) { + assertEquals(1, searcher.getDirectoryReader().numDocs()); + } + } + + /** + * simulates what an upsert / update API does + */ + public void testVersionedUpdate() throws IOException { + final BiFunction searcherFactory = engine::acquireSearcher; + + ParsedDocument doc = testParsedDocument("1", null, testDocument(), B_1, null); + Engine.Index create = new Engine.Index(newUid(doc), doc, Versions.MATCH_DELETED); + Engine.IndexResult indexResult = engine.index(create); + assertThat(indexResult.getVersion(), equalTo(1L)); + try (Engine.GetResult get = engine.get(new Engine.Get(true, doc.type(), doc.id(), create.uid()), searcherFactory)) { + assertEquals(1, get.version()); + } + + Engine.Index update_1 = new Engine.Index(newUid(doc), doc, 1); + Engine.IndexResult update_1_result = engine.index(update_1); + assertThat(update_1_result.getVersion(), equalTo(2L)); + + try (Engine.GetResult get = engine.get(new Engine.Get(true, doc.type(), doc.id(), create.uid()), searcherFactory)) { + assertEquals(2, get.version()); + } + + Engine.Index update_2 = new Engine.Index(newUid(doc), doc, 2); + Engine.IndexResult update_2_result = engine.index(update_2); + assertThat(update_2_result.getVersion(), equalTo(3L)); + + try (Engine.GetResult get = engine.get(new Engine.Get(true, doc.type(), doc.id(), create.uid()), searcherFactory)) { + assertEquals(3, get.version()); + } + } public void testVersioningNewIndex() throws IOException { @@ -1330,12 +1132,14 @@ public void testForceMerge() throws IOException { assertEquals(numDocs, test.reader().numDocs()); } engine.forceMerge(true, 1, false, false, false); + engine.refresh("test"); assertEquals(engine.segments(true).size(), 1); ParsedDocument doc = testParsedDocument(Integer.toString(0), null, testDocument(), B_1, null); Engine.Index index = indexForDoc(doc); engine.delete(new Engine.Delete(index.type(), index.id(), index.uid())); engine.forceMerge(true, 10, true, false, false); //expunge deletes + engine.refresh("test"); assertEquals(engine.segments(true).size(), 1); try (Engine.Searcher test = engine.acquireSearcher("test")) { @@ -1347,7 +1151,7 @@ public void testForceMerge() throws IOException { index = indexForDoc(doc); engine.delete(new Engine.Delete(index.type(), index.id(), index.uid())); engine.forceMerge(true, 
10, false, false, false); //expunge deletes - + engine.refresh("test"); assertEquals(engine.segments(true).size(), 1); try (Engine.Searcher test = engine.acquireSearcher("test")) { assertEquals(numDocs - 2, test.reader().numDocs()); @@ -1414,11 +1218,11 @@ public void run() { public void testVersioningCreateExistsException() throws IOException { ParsedDocument doc = testParsedDocument("1", null, testDocument(), B_1, null); - Engine.Index create = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, 0, -1, false); + Engine.Index create = new Engine.Index(newUid(doc), doc, SequenceNumbers.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, 0, -1, false); Engine.IndexResult indexResult = engine.index(create); assertThat(indexResult.getVersion(), equalTo(1L)); - create = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, 0, -1, false); + create = new Engine.Index(newUid(doc), doc, SequenceNumbers.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, 0, -1, false); indexResult = engine.index(create); assertTrue(indexResult.hasFailure()); assertThat(indexResult.getFailure(), instanceOf(VersionConflictEngineException.class)); @@ -1459,7 +1263,7 @@ protected List generateSingleDocHistory(boolean forReplica, Ve } if (randomBoolean()) { op = new Engine.Index(id, testParsedDocument("1", null, testDocumentWithTextField(valuePrefix + i), B_1, null), - forReplica && i >= startWithSeqNo ? i * 2 : SequenceNumbersService.UNASSIGNED_SEQ_NO, + forReplica && i >= startWithSeqNo ? i * 2 : SequenceNumbers.UNASSIGNED_SEQ_NO, forReplica && i >= startWithSeqNo && incrementTermWhenIntroducingSeqNo ? primaryTerm + 1 : primaryTerm, version, forReplica ? versionType.versionTypeForReplicationAndRecovery() : versionType, @@ -1468,7 +1272,7 @@ protected List generateSingleDocHistory(boolean forReplica, Ve ); } else { op = new Engine.Delete("test", "1", id, - forReplica && i >= startWithSeqNo ? i * 2 : SequenceNumbersService.UNASSIGNED_SEQ_NO, + forReplica && i >= startWithSeqNo ? i * 2 : SequenceNumbers.UNASSIGNED_SEQ_NO, forReplica && i >= startWithSeqNo && incrementTermWhenIntroducingSeqNo ? primaryTerm + 1 : primaryTerm, version, forReplica ? 
versionType.versionTypeForReplicationAndRecovery() : versionType, @@ -1554,6 +1358,7 @@ private void assertOpsOnReplica(List ops, InternalEngine repli } if (randomBoolean()) { engine.flush(); + engine.refresh("test"); } firstOp = false; } @@ -1709,11 +1514,12 @@ private int assertOpsOnPrimary(List ops, long currentOpVersion } if (randomBoolean()) { engine.flush(); + engine.refresh("test"); } if (rarely()) { // simulate GC deletes - engine.refresh("gc_simulation"); + engine.refresh("gc_simulation", Engine.SearcherScope.INTERNAL); engine.clearDeletedTombstones(); if (docDeleted) { lastOpVersion = Versions.NOT_FOUND; @@ -1798,6 +1604,7 @@ public void testNonInternalVersioningOnPrimary() throws IOException { } if (randomBoolean()) { engine.flush(); + engine.refresh("test"); } } @@ -1877,7 +1684,7 @@ class OpAndVersion { ParsedDocument doc = testParsedDocument("1", null, testDocument(), bytesArray(""), null); final Term uidTerm = newUid(doc); engine.index(indexForDoc(doc)); - final Function searcherFactory = engine::acquireSearcher; + final BiFunction searcherFactory = engine::acquireSearcher; for (int i = 0; i < thread.length; i++) { thread[i] = new Thread(() -> { startGun.countDown(); @@ -1897,7 +1704,7 @@ class OpAndVersion { Engine.Index index = new Engine.Index(uidTerm, testParsedDocument("1", null, testDocument(), bytesArray(Strings.collectionToCommaDelimitedString(values)), null), - SequenceNumbersService.UNASSIGNED_SEQ_NO, 2, + SequenceNumbers.UNASSIGNED_SEQ_NO, 2, get.version(), VersionType.INTERNAL, PRIMARY, System.currentTimeMillis(), -1, false); Engine.IndexResult indexResult = engine.index(index); @@ -2014,23 +1821,23 @@ public void testIndexWriterInfoStream() throws IllegalAccessException, IOExcepti public void testSeqNoAndCheckpoints() throws IOException { final int opCount = randomIntBetween(1, 256); - long primarySeqNo = SequenceNumbersService.NO_OPS_PERFORMED; + long primarySeqNo = SequenceNumbers.NO_OPS_PERFORMED; final String[] ids = new String[]{"1", "2", "3"}; final Set indexedIds = new HashSet<>(); - long localCheckpoint = SequenceNumbersService.NO_OPS_PERFORMED; - long replicaLocalCheckpoint = SequenceNumbersService.NO_OPS_PERFORMED; - long globalCheckpoint = SequenceNumbersService.UNASSIGNED_SEQ_NO; - long maxSeqNo = SequenceNumbersService.NO_OPS_PERFORMED; + long localCheckpoint = SequenceNumbers.NO_OPS_PERFORMED; + long replicaLocalCheckpoint = SequenceNumbers.NO_OPS_PERFORMED; + final long globalCheckpoint; + long maxSeqNo = SequenceNumbers.NO_OPS_PERFORMED; InternalEngine initialEngine = null; try { initialEngine = engine; - final ShardRouting primary = TestShardRouting.newShardRouting(shardId, "node1", true, ShardRoutingState.STARTED); + final ShardRouting primary = TestShardRouting.newShardRouting("test", shardId.id(), "node1", null, true, ShardRoutingState.STARTED, allocationId); final ShardRouting replica = TestShardRouting.newShardRouting(shardId, "node2", false, ShardRoutingState.STARTED); initialEngine.seqNoService().updateAllocationIdsFromMaster(1L, new HashSet<>(Arrays.asList(primary.allocationId().getId(), replica.allocationId().getId())), new IndexShardRoutingTable.Builder(shardId).addShard(primary).addShard(replica).build(), Collections.emptySet()); - initialEngine.seqNoService().activatePrimaryMode(primary.allocationId().getId(), primarySeqNo); + initialEngine.seqNoService().activatePrimaryMode(primarySeqNo); for (int op = 0; op < opCount; op++) { final String id; // mostly index, sometimes delete @@ -2038,7 +1845,7 @@ public void 
testSeqNoAndCheckpoints() throws IOException { // we have some docs indexed, so delete one of them id = randomFrom(indexedIds); final Engine.Delete delete = new Engine.Delete( - "test", id, newUid(id), SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, + "test", id, newUid(id), SequenceNumbers.UNASSIGNED_SEQ_NO, 0, rarely() ? 100 : Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, 0); final Engine.DeleteResult result = initialEngine.delete(delete); if (!result.hasFailure()) { @@ -2047,7 +1854,7 @@ public void testSeqNoAndCheckpoints() throws IOException { indexedIds.remove(id); primarySeqNo++; } else { - assertThat(result.getSeqNo(), equalTo(SequenceNumbersService.UNASSIGNED_SEQ_NO)); + assertThat(result.getSeqNo(), equalTo(SequenceNumbers.UNASSIGNED_SEQ_NO)); assertThat(initialEngine.seqNoService().getMaxSeqNo(), equalTo(primarySeqNo)); } } else { @@ -2055,7 +1862,7 @@ public void testSeqNoAndCheckpoints() throws IOException { id = randomFrom(ids); ParsedDocument doc = testParsedDocument(id, null, testDocumentWithTextField(), SOURCE, null); final Engine.Index index = new Engine.Index(newUid(doc), doc, - SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, + SequenceNumbers.UNASSIGNED_SEQ_NO, 0, rarely() ? 100 : Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, 0, -1, false); final Engine.IndexResult result = initialEngine.index(index); @@ -2065,7 +1872,7 @@ public void testSeqNoAndCheckpoints() throws IOException { indexedIds.add(id); primarySeqNo++; } else { - assertThat(result.getSeqNo(), equalTo(SequenceNumbersService.UNASSIGNED_SEQ_NO)); + assertThat(result.getSeqNo(), equalTo(SequenceNumbers.UNASSIGNED_SEQ_NO)); assertThat(initialEngine.seqNoService().getMaxSeqNo(), equalTo(primarySeqNo)); } } @@ -2189,17 +1996,17 @@ public void testConcurrentWritesAndCommits() throws Exception { } while (doneIndexing == false); // now, verify all the commits have the correct docs according to the user commit data - long prevLocalCheckpoint = SequenceNumbersService.NO_OPS_PERFORMED; - long prevMaxSeqNo = SequenceNumbersService.NO_OPS_PERFORMED; + long prevLocalCheckpoint = SequenceNumbers.NO_OPS_PERFORMED; + long prevMaxSeqNo = SequenceNumbers.NO_OPS_PERFORMED; for (Engine.IndexCommitRef commitRef : commits) { final IndexCommit commit = commitRef.getIndexCommit(); Map userData = commit.getUserData(); long localCheckpoint = userData.containsKey(SequenceNumbers.LOCAL_CHECKPOINT_KEY) ? Long.parseLong(userData.get(SequenceNumbers.LOCAL_CHECKPOINT_KEY)) : - SequenceNumbersService.NO_OPS_PERFORMED; + SequenceNumbers.NO_OPS_PERFORMED; long maxSeqNo = userData.containsKey(SequenceNumbers.MAX_SEQ_NO) ? 
Long.parseLong(userData.get(SequenceNumbers.MAX_SEQ_NO)) : - SequenceNumbersService.UNASSIGNED_SEQ_NO; + SequenceNumbers.UNASSIGNED_SEQ_NO; // local checkpoint and max seq no shouldn't go backwards assertThat(localCheckpoint, greaterThanOrEqualTo(prevLocalCheckpoint)); assertThat(maxSeqNo, greaterThanOrEqualTo(prevMaxSeqNo)); @@ -2209,7 +2016,7 @@ public void testConcurrentWritesAndCommits() throws Exception { if (highest != null) { highestSeqNo = highest.longValue(); } else { - highestSeqNo = SequenceNumbersService.NO_OPS_PERFORMED; + highestSeqNo = SequenceNumbers.NO_OPS_PERFORMED; } // make sure localCheckpoint <= highest seq no found <= maxSeqNo assertThat(highestSeqNo, greaterThanOrEqualTo(localCheckpoint)); @@ -2307,17 +2114,17 @@ public void testEnableGcDeletes() throws Exception { Engine engine = new InternalEngine(config(defaultSettings, store, createTempDir(), newMergePolicy(), null))) { engine.config().setEnableGcDeletes(false); - final Function searcherFactory = engine::acquireSearcher; + final BiFunction searcherFactory = engine::acquireSearcher; // Add document Document document = testDocument(); document.add(new TextField("value", "test1", Field.Store.YES)); ParsedDocument doc = testParsedDocument("1", null, document, B_2, null); - engine.index(new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 1, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime(), -1, false)); + engine.index(new Engine.Index(newUid(doc), doc, SequenceNumbers.UNASSIGNED_SEQ_NO, 0, 1, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime(), -1, false)); // Delete document we just added: - engine.delete(new Engine.Delete("test", "1", newUid(doc), SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 10, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime())); + engine.delete(new Engine.Delete("test", "1", newUid(doc), SequenceNumbers.UNASSIGNED_SEQ_NO, 0, 10, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime())); // Get should not find the document Engine.GetResult getResult = engine.get(newGet(true, doc), searcherFactory); @@ -2331,14 +2138,14 @@ public void testEnableGcDeletes() throws Exception { } // Delete non-existent document - engine.delete(new Engine.Delete("test", "2", newUid("2"), SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 10, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime())); + engine.delete(new Engine.Delete("test", "2", newUid("2"), SequenceNumbers.UNASSIGNED_SEQ_NO, 0, 10, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime())); // Get should not find the document (we never indexed uid=2): getResult = engine.get(new Engine.Get(true, "type", "2", newUid("2")), searcherFactory); assertThat(getResult.exists(), equalTo(false)); // Try to index uid=1 with a too-old version, should fail: - Engine.Index index = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 2, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime(), -1, false); + Engine.Index index = new Engine.Index(newUid(doc), doc, SequenceNumbers.UNASSIGNED_SEQ_NO, 0, 2, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime(), -1, false); Engine.IndexResult indexResult = engine.index(index); assertTrue(indexResult.hasFailure()); assertThat(indexResult.getFailure(), instanceOf(VersionConflictEngineException.class)); @@ -2348,7 +2155,7 @@ public void testEnableGcDeletes() throws Exception { assertThat(getResult.exists(), equalTo(false)); // 
Try to index uid=2 with a too-old version, should fail: - Engine.Index index1 = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 2, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime(), -1, false); + Engine.Index index1 = new Engine.Index(newUid(doc), doc, SequenceNumbers.UNASSIGNED_SEQ_NO, 0, 2, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime(), -1, false); indexResult = engine.index(index1); assertTrue(indexResult.hasFailure()); assertThat(indexResult.getFailure(), instanceOf(VersionConflictEngineException.class)); @@ -2359,29 +2166,6 @@ public void testEnableGcDeletes() throws Exception { } } - protected Term newUid(String id) { - return new Term("_id", Uid.encodeId(id)); - } - - protected Term newUid(ParsedDocument doc) { - return newUid(doc.id()); - } - - protected Engine.Get newGet(boolean realtime, ParsedDocument doc) { - return new Engine.Get(realtime, doc.type(), doc.id(), newUid(doc)); - } - - private Engine.Index indexForDoc(ParsedDocument doc) { - return new Engine.Index(newUid(doc), doc); - } - - private Engine.Index replicaIndexForDoc(ParsedDocument doc, long version, long seqNo, - boolean isRetry) { - return new Engine.Index(newUid(doc), doc, seqNo, 1, version, VersionType.EXTERNAL, - Engine.Operation.Origin.REPLICA, System.nanoTime(), - IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP, isRetry); - } - public void testExtractShardId() { try (Engine.Searcher test = this.engine.acquireSearcher("test")) { ShardId shardId = ShardUtils.extractShardId(test.getDirectoryReader()); @@ -2466,7 +2250,7 @@ public void testTranslogReplayWithFailure() throws IOException { final int numDocs = randomIntBetween(1, 10); for (int i = 0; i < numDocs; i++) { ParsedDocument doc = testParsedDocument(Integer.toString(i), null, testDocument(), new BytesArray("{}"), null); - Engine.Index firstIndexRequest = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, System.nanoTime(), -1, false); + Engine.Index firstIndexRequest = new Engine.Index(newUid(doc), doc, SequenceNumbers.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, System.nanoTime(), -1, false); Engine.IndexResult indexResult = engine.index(firstIndexRequest); assertThat(indexResult.getVersion(), equalTo(1L)); } @@ -2565,7 +2349,7 @@ public void testSkipTranslogReplay() throws IOException { final int numDocs = randomIntBetween(1, 10); for (int i = 0; i < numDocs; i++) { ParsedDocument doc = testParsedDocument(Integer.toString(i), null, testDocument(), new BytesArray("{}"), null); - Engine.Index firstIndexRequest = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, System.nanoTime(), -1, false); + Engine.Index firstIndexRequest = new Engine.Index(newUid(doc), doc, SequenceNumbers.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, System.nanoTime(), -1, false); Engine.IndexResult indexResult = engine.index(firstIndexRequest); assertThat(indexResult.getVersion(), equalTo(1L)); } @@ -2600,7 +2384,7 @@ public void testTranslogReplay() throws IOException { final int numDocs = randomIntBetween(1, 10); for (int i = 0; i < numDocs; i++) { ParsedDocument doc = testParsedDocument(Integer.toString(i), null, testDocument(), new BytesArray("{}"), null); - Engine.Index firstIndexRequest = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 
Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, System.nanoTime(), -1, false); + Engine.Index firstIndexRequest = new Engine.Index(newUid(doc), doc, SequenceNumbers.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, System.nanoTime(), -1, false); Engine.IndexResult indexResult = engine.index(firstIndexRequest); assertThat(indexResult.getVersion(), equalTo(1L)); } @@ -2615,7 +2399,7 @@ public void testTranslogReplay() throws IOException { assertVisibleCount(engine, numDocs, false); parser = (TranslogHandler) engine.config().getTranslogRecoveryRunner(); - assertEquals(numDocs, parser.appliedOperations.get()); + assertEquals(numDocs, parser.appliedOperations()); if (parser.mappingUpdate != null) { assertEquals(1, parser.getRecoveredTypes().size()); assertTrue(parser.getRecoveredTypes().containsKey("test")); @@ -2627,20 +2411,21 @@ public void testTranslogReplay() throws IOException { engine = createEngine(store, primaryTranslogDir); assertVisibleCount(engine, numDocs, false); parser = (TranslogHandler) engine.config().getTranslogRecoveryRunner(); - assertEquals(0, parser.appliedOperations.get()); + assertEquals(0, parser.appliedOperations()); final boolean flush = randomBoolean(); int randomId = randomIntBetween(numDocs + 1, numDocs + 10); ParsedDocument doc = testParsedDocument(Integer.toString(randomId), null, testDocument(), new BytesArray("{}"), null); - Engine.Index firstIndexRequest = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 1, VersionType.EXTERNAL, PRIMARY, System.nanoTime(), -1, false); + Engine.Index firstIndexRequest = new Engine.Index(newUid(doc), doc, SequenceNumbers.UNASSIGNED_SEQ_NO, 0, 1, VersionType.EXTERNAL, PRIMARY, System.nanoTime(), -1, false); Engine.IndexResult indexResult = engine.index(firstIndexRequest); assertThat(indexResult.getVersion(), equalTo(1L)); if (flush) { engine.flush(); + engine.refresh("test"); } doc = testParsedDocument(Integer.toString(randomId), null, testDocument(), new BytesArray("{}"), null); - Engine.Index idxRequest = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 2, VersionType.EXTERNAL, PRIMARY, System.nanoTime(), -1, false); + Engine.Index idxRequest = new Engine.Index(newUid(doc), doc, SequenceNumbers.UNASSIGNED_SEQ_NO, 0, 2, VersionType.EXTERNAL, PRIMARY, System.nanoTime(), -1, false); Engine.IndexResult result = engine.index(idxRequest); engine.refresh("test"); assertThat(result.getVersion(), equalTo(2L)); @@ -2656,7 +2441,7 @@ public void testTranslogReplay() throws IOException { assertThat(topDocs.totalHits, equalTo(numDocs + 1L)); } parser = (TranslogHandler) engine.config().getTranslogRecoveryRunner(); - assertEquals(flush ? 1 : 2, parser.appliedOperations.get()); + assertEquals(flush ? 
1 : 2, parser.appliedOperations()); engine.delete(new Engine.Delete("test", Integer.toString(randomId), newUid(doc))); if (randomBoolean()) { engine.refresh("test"); @@ -2670,102 +2455,11 @@ public void testTranslogReplay() throws IOException { } } - public static class TranslogHandler implements EngineConfig.TranslogRecoveryRunner { - - private final MapperService mapperService; - public Mapping mappingUpdate = null; - private final Map recoveredTypes = new HashMap<>(); - private final AtomicLong appliedOperations = new AtomicLong(); - - public TranslogHandler(NamedXContentRegistry xContentRegistry, IndexSettings indexSettings) { - NamedAnalyzer defaultAnalyzer = new NamedAnalyzer("default", AnalyzerScope.INDEX, new StandardAnalyzer()); - IndexAnalyzers indexAnalyzers = new IndexAnalyzers(indexSettings, defaultAnalyzer, defaultAnalyzer, defaultAnalyzer, Collections.emptyMap(), Collections.emptyMap()); - SimilarityService similarityService = new SimilarityService(indexSettings, null, Collections.emptyMap()); - MapperRegistry mapperRegistry = new IndicesModule(Collections.emptyList()).getMapperRegistry(); - mapperService = new MapperService(indexSettings, indexAnalyzers, xContentRegistry, similarityService, mapperRegistry, - () -> null); - } - - private DocumentMapperForType docMapper(String type) { - RootObjectMapper.Builder rootBuilder = new RootObjectMapper.Builder(type); - DocumentMapper.Builder b = new DocumentMapper.Builder(rootBuilder, mapperService); - return new DocumentMapperForType(b.build(mapperService), mappingUpdate); - } - - private void applyOperation(Engine engine, Engine.Operation operation) throws IOException { - switch (operation.operationType()) { - case INDEX: - Engine.Index engineIndex = (Engine.Index) operation; - Mapping update = engineIndex.parsedDoc().dynamicMappingsUpdate(); - if (engineIndex.parsedDoc().dynamicMappingsUpdate() != null) { - recoveredTypes.compute(engineIndex.type(), (k, mapping) -> mapping == null ? 
update : mapping.merge(update, false)); - } - engine.index(engineIndex); - break; - case DELETE: - engine.delete((Engine.Delete) operation); - break; - case NO_OP: - engine.noOp((Engine.NoOp) operation); - break; - default: - throw new IllegalStateException("No operation defined for [" + operation + "]"); - } - } - - /** - * Returns the recovered types modifying the mapping during the recovery - */ - public Map getRecoveredTypes() { - return recoveredTypes; - } - - @Override - public int run(Engine engine, Translog.Snapshot snapshot) throws IOException { - int opsRecovered = 0; - Translog.Operation operation; - while ((operation = snapshot.next()) != null) { - applyOperation(engine, convertToEngineOp(operation, Engine.Operation.Origin.LOCAL_TRANSLOG_RECOVERY)); - opsRecovered++; - appliedOperations.incrementAndGet(); - } - return opsRecovered; - } - - private Engine.Operation convertToEngineOp(Translog.Operation operation, Engine.Operation.Origin origin) { - switch (operation.opType()) { - case INDEX: - final Translog.Index index = (Translog.Index) operation; - final String indexName = mapperService.index().getName(); - final Engine.Index engineIndex = IndexShard.prepareIndex(docMapper(index.type()), - mapperService.getIndexSettings().getIndexVersionCreated(), - source(indexName, index.type(), index.id(), index.source(), XContentFactory.xContentType(index.source())) - .routing(index.routing()).parent(index.parent()), index.seqNo(), index.primaryTerm(), - index.version(), index.versionType().versionTypeForReplicationAndRecovery(), origin, - index.getAutoGeneratedIdTimestamp(), true); - return engineIndex; - case DELETE: - final Translog.Delete delete = (Translog.Delete) operation; - final Engine.Delete engineDelete = new Engine.Delete(delete.type(), delete.id(), delete.uid(), delete.seqNo(), - delete.primaryTerm(), delete.version(), delete.versionType().versionTypeForReplicationAndRecovery(), - origin, System.nanoTime()); - return engineDelete; - case NO_OP: - final Translog.NoOp noOp = (Translog.NoOp) operation; - final Engine.NoOp engineNoOp = - new Engine.NoOp(noOp.seqNo(), noOp.primaryTerm(), origin, System.nanoTime(), noOp.reason()); - return engineNoOp; - default: - throw new IllegalStateException("No operation defined for [" + operation + "]"); - } - } - } - public void testRecoverFromForeignTranslog() throws IOException { final int numDocs = randomIntBetween(1, 10); for (int i = 0; i < numDocs; i++) { ParsedDocument doc = testParsedDocument(Integer.toString(i), null, testDocument(), new BytesArray("{}"), null); - Engine.Index firstIndexRequest = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, System.nanoTime(), -1, false); + Engine.Index firstIndexRequest = new Engine.Index(newUid(doc), doc, SequenceNumbers.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, System.nanoTime(), -1, false); Engine.IndexResult index = engine.index(firstIndexRequest); assertThat(index.getVersion(), equalTo(1L)); } @@ -2775,7 +2469,7 @@ public void testRecoverFromForeignTranslog() throws IOException { Translog translog = new Translog( new TranslogConfig(shardId, createTempDir(), INDEX_SETTINGS, BigArrays.NON_RECYCLING_INSTANCE), - null, createTranslogDeletionPolicy(INDEX_SETTINGS), () -> SequenceNumbersService.UNASSIGNED_SEQ_NO); + null, createTranslogDeletionPolicy(INDEX_SETTINGS), () -> SequenceNumbers.UNASSIGNED_SEQ_NO); translog.add(new Translog.Index("test", "SomeBogusId", 0, 
"{}".getBytes(Charset.forName("UTF-8")))); assertEquals(generation.translogFileGeneration, translog.currentFileGeneration()); translog.close(); @@ -2785,12 +2479,11 @@ public void testRecoverFromForeignTranslog() throws IOException { TranslogConfig translogConfig = new TranslogConfig(shardId, translog.location(), config.getIndexSettings(), BigArrays.NON_RECYCLING_INSTANCE); - EngineConfig brokenConfig = new EngineConfig(EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG, shardId, threadPool, - config.getIndexSettings(), null, store, newMergePolicy(), config.getAnalyzer(), - config.getSimilarity(), new CodecService(null, logger), config.getEventListener(), - IndexSearcher.getDefaultQueryCache(), IndexSearcher.getDefaultQueryCachingPolicy(), translogConfig, - TimeValue.timeValueMinutes(5), config.getRefreshListeners(), null, - config.getTranslogRecoveryRunner()); + EngineConfig brokenConfig = new EngineConfig(EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG, shardId, allocationId.getId(), + threadPool, config.getIndexSettings(), null, store, newMergePolicy(), config.getAnalyzer(), config.getSimilarity(), + new CodecService(null, logger), config.getEventListener(), IndexSearcher.getDefaultQueryCache(), + IndexSearcher.getDefaultQueryCachingPolicy(), false, translogConfig, TimeValue.timeValueMinutes(5), + config.getRefreshListeners(), null, config.getTranslogRecoveryRunner()); try { InternalEngine internalEngine = new InternalEngine(brokenConfig); @@ -2802,6 +2495,89 @@ public void testRecoverFromForeignTranslog() throws IOException { assertVisibleCount(engine, numDocs, false); } + public void testHistoryUUIDIsSetIfMissing() throws IOException { + final int numDocs = randomIntBetween(0, 3); + for (int i = 0; i < numDocs; i++) { + ParsedDocument doc = testParsedDocument(Integer.toString(i), null, testDocument(), new BytesArray("{}"), null); + Engine.Index firstIndexRequest = new Engine.Index(newUid(doc), doc, SequenceNumbers.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, System.nanoTime(), -1, false); + Engine.IndexResult index = engine.index(firstIndexRequest); + assertThat(index.getVersion(), equalTo(1L)); + } + assertVisibleCount(engine, numDocs); + engine.close(); + + IndexWriterConfig iwc = new IndexWriterConfig(null) + .setCommitOnClose(false) + // we don't want merges to happen here - we call maybe merge on the engine + // later once we stared it up otherwise we would need to wait for it here + // we also don't specify a codec here and merges should use the engines for this index + .setMergePolicy(NoMergePolicy.INSTANCE) + .setOpenMode(IndexWriterConfig.OpenMode.APPEND); + try (IndexWriter writer = new IndexWriter(store.directory(), iwc)) { + Map newCommitData = new HashMap<>(); + for (Map.Entry entry: writer.getLiveCommitData()) { + if (entry.getKey().equals(Engine.HISTORY_UUID_KEY) == false) { + newCommitData.put(entry.getKey(), entry.getValue()); + } + } + writer.setLiveCommitData(newCommitData.entrySet()); + writer.commit(); + } + + final IndexSettings indexSettings = IndexSettingsModule.newIndexSettings("test", Settings.builder() + .put(defaultSettings.getSettings()) + .put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_6_0_0_beta1) + .build()); + + EngineConfig config = engine.config(); + + EngineConfig newConfig = new EngineConfig( + randomBoolean() ? 
EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG : EngineConfig.OpenMode.OPEN_INDEX_CREATE_TRANSLOG, + shardId, allocationId.getId(), + threadPool, indexSettings, null, store, newMergePolicy(), config.getAnalyzer(), config.getSimilarity(), + new CodecService(null, logger), config.getEventListener(), IndexSearcher.getDefaultQueryCache(), + IndexSearcher.getDefaultQueryCachingPolicy(), false, config.getTranslogConfig(), TimeValue.timeValueMinutes(5), + config.getRefreshListeners(), null, config.getTranslogRecoveryRunner()); + engine = new InternalEngine(newConfig); + if (newConfig.getOpenMode() == EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG) { + engine.recoverFromTranslog(); + assertVisibleCount(engine, numDocs, false); + } else { + assertVisibleCount(engine, 0, false); + } + assertThat(engine.getHistoryUUID(), notNullValue()); + } + + public void testHistoryUUIDCanBeForced() throws IOException { + final int numDocs = randomIntBetween(0, 3); + for (int i = 0; i < numDocs; i++) { + ParsedDocument doc = testParsedDocument(Integer.toString(i), null, testDocument(), new BytesArray("{}"), null); + Engine.Index firstIndexRequest = new Engine.Index(newUid(doc), doc, SequenceNumbers.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, System.nanoTime(), -1, false); + Engine.IndexResult index = engine.index(firstIndexRequest); + assertThat(index.getVersion(), equalTo(1L)); + } + assertVisibleCount(engine, numDocs); + final String oldHistoryUUID = engine.getHistoryUUID(); + engine.close(); + EngineConfig config = engine.config(); + + EngineConfig newConfig = new EngineConfig( + randomBoolean() ? EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG : EngineConfig.OpenMode.OPEN_INDEX_CREATE_TRANSLOG, + shardId, allocationId.getId(), + threadPool, config.getIndexSettings(), null, store, newMergePolicy(), config.getAnalyzer(), config.getSimilarity(), + new CodecService(null, logger), config.getEventListener(), IndexSearcher.getDefaultQueryCache(), + IndexSearcher.getDefaultQueryCachingPolicy(), true, config.getTranslogConfig(), TimeValue.timeValueMinutes(5), + config.getRefreshListeners(), null, config.getTranslogRecoveryRunner()); + engine = new InternalEngine(newConfig); + if (newConfig.getOpenMode() == EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG) { + engine.recoverFromTranslog(); + assertVisibleCount(engine, numDocs, false); + } else { + assertVisibleCount(engine, 0, false); + } + assertThat(engine.getHistoryUUID(), not(equalTo(oldHistoryUUID))); + } + public void testShardNotAvailableExceptionWhenEngineClosedConcurrently() throws IOException, InterruptedException { AtomicReference exception = new AtomicReference<>(); String operation = randomFrom("optimize", "refresh", "flush"); @@ -2841,7 +2617,7 @@ public void run() { } /** - * Tests that when the the close method returns the engine is actually guaranteed to have cleaned up and that resources are closed + * Tests that when the close method returns the engine is actually guaranteed to have cleaned up and that resources are closed */ public void testConcurrentEngineClosed() throws BrokenBarrierException, InterruptedException { Thread[] closingThreads = new Thread[3]; @@ -2901,7 +2677,7 @@ public void testCurrentTranslogIDisCommitted() throws IOException { // create { ParsedDocument doc = testParsedDocument(Integer.toString(0), null, testDocument(), new BytesArray("{}"), null); - Engine.Index firstIndexRequest = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, 
VersionType.INTERNAL, PRIMARY, System.nanoTime(), -1, false); + Engine.Index firstIndexRequest = new Engine.Index(newUid(doc), doc, SequenceNumbers.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, System.nanoTime(), -1, false); try (InternalEngine engine = new InternalEngine(copy(config, EngineConfig.OpenMode.CREATE_INDEX_AND_TRANSLOG))){ assertFalse(engine.isRecovering()); @@ -3251,7 +3027,7 @@ public void testRetryWithAutogeneratedIdWorksAndNoDuplicateDocs() throws IOExcep boolean isRetry = false; long autoGeneratedIdTimestamp = 0; - Engine.Index index = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), autoGeneratedIdTimestamp, isRetry); + Engine.Index index = new Engine.Index(newUid(doc), doc, SequenceNumbers.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), autoGeneratedIdTimestamp, isRetry); Engine.IndexResult indexResult = engine.index(index); assertThat(indexResult.getVersion(), equalTo(1L)); @@ -3260,7 +3036,7 @@ public void testRetryWithAutogeneratedIdWorksAndNoDuplicateDocs() throws IOExcep assertThat(indexResult.getVersion(), equalTo(1L)); isRetry = true; - index = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), autoGeneratedIdTimestamp, isRetry); + index = new Engine.Index(newUid(doc), doc, SequenceNumbers.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), autoGeneratedIdTimestamp, isRetry); indexResult = engine.index(index); assertThat(indexResult.getVersion(), equalTo(1L)); engine.refresh("test"); @@ -3285,7 +3061,7 @@ public void testRetryWithAutogeneratedIdsAndWrongOrderWorksAndNoDuplicateDocs() boolean isRetry = true; long autoGeneratedIdTimestamp = 0; - Engine.Index firstIndexRequest = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), autoGeneratedIdTimestamp, isRetry); + Engine.Index firstIndexRequest = new Engine.Index(newUid(doc), doc, SequenceNumbers.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), autoGeneratedIdTimestamp, isRetry); Engine.IndexResult result = engine.index(firstIndexRequest); assertThat(result.getVersion(), equalTo(1L)); @@ -3294,7 +3070,7 @@ public void testRetryWithAutogeneratedIdsAndWrongOrderWorksAndNoDuplicateDocs() assertThat(indexReplicaResult.getVersion(), equalTo(1L)); isRetry = false; - Engine.Index secondIndexRequest = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), autoGeneratedIdTimestamp, isRetry); + Engine.Index secondIndexRequest = new Engine.Index(newUid(doc), doc, SequenceNumbers.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), autoGeneratedIdTimestamp, isRetry); Engine.IndexResult indexResult = engine.index(secondIndexRequest); assertTrue(indexResult.isCreated()); engine.refresh("test"); @@ -3321,7 +3097,7 @@ public Engine.Index randomAppendOnly(ParsedDocument doc, boolean retry, final lo } public Engine.Index appendOnlyPrimary(ParsedDocument doc, boolean retry, final long autoGeneratedIdTimestamp) { - return new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_ANY, + return new 
Engine.Index(newUid(doc), doc, SequenceNumbers.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_ANY, VersionType.INTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime(), autoGeneratedIdTimestamp, retry); } @@ -3558,7 +3334,7 @@ public void afterRefresh(boolean didRefresh) throws IOException { public void testSequenceIDs() throws Exception { Tuple seqID = getSequenceID(engine, new Engine.Get(false, "type", "2", newUid("1"))); // Non-existent doc returns no seqnum and no primary term - assertThat(seqID.v1(), equalTo(SequenceNumbersService.UNASSIGNED_SEQ_NO)); + assertThat(seqID.v1(), equalTo(SequenceNumbers.UNASSIGNED_SEQ_NO)); assertThat(seqID.v2(), equalTo(0L)); // create a document @@ -3589,7 +3365,7 @@ public void testSequenceIDs() throws Exception { document = testDocumentWithTextField(); document.add(new Field(SourceFieldMapper.NAME, BytesReference.toBytes(B_1), SourceFieldMapper.Defaults.FIELD_TYPE)); doc = testParsedDocument("1", null, document, B_1, null); - engine.index(new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 3, + engine.index(new Engine.Index(newUid(doc), doc, SequenceNumbers.UNASSIGNED_SEQ_NO, 3, Versions.MATCH_ANY, VersionType.INTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime(), -1, false)); engine.refresh("test"); @@ -3607,46 +3383,38 @@ public void testSequenceIDs() throws Exception { } /** - * A sequence number service that will generate a sequence number and if {@code stall} is set to {@code true} will wait on the barrier - * and the referenced latch before returning. If the local checkpoint should advance (because {@code stall} is {@code false}), then the - * value of {@code expectedLocalCheckpoint} is set accordingly. + * A sequence number generator that will generate a sequence number and if {@code stall} is set to true will wait on the barrier and the + * referenced latch before returning. If the local checkpoint should advance (because {@code stall} is false), then the value of + * {@code expectedLocalCheckpoint} is set accordingly.
* * @param latchReference to latch the thread for the purpose of stalling * @param barrier to signal the thread has generated a new sequence number * @param stall whether or not the thread should stall * @param expectedLocalCheckpoint the expected local checkpoint after generating a new sequence * number - * @return a sequence number service + * @return a sequence number generator */ - private SequenceNumbersService getStallingSeqNoService( + private ToLongBiFunction getStallingSeqNoGenerator( final AtomicReference latchReference, final CyclicBarrier barrier, final AtomicBoolean stall, final AtomicLong expectedLocalCheckpoint) { - return new SequenceNumbersService( - shardId, - defaultSettings, - SequenceNumbersService.NO_OPS_PERFORMED, - SequenceNumbersService.NO_OPS_PERFORMED, - SequenceNumbersService.UNASSIGNED_SEQ_NO) { - @Override - public long generateSeqNo() { - final long seqNo = super.generateSeqNo(); - final CountDownLatch latch = latchReference.get(); - if (stall.get()) { - try { - barrier.await(); - latch.await(); - } catch (BrokenBarrierException | InterruptedException e) { - throw new RuntimeException(e); - } - } else { - if (expectedLocalCheckpoint.get() + 1 == seqNo) { - expectedLocalCheckpoint.set(seqNo); - } + return (engine, operation) -> { + final long seqNo = engine.seqNoService().generateSeqNo(); + final CountDownLatch latch = latchReference.get(); + if (stall.get()) { + try { + barrier.await(); + latch.await(); + } catch (BrokenBarrierException | InterruptedException e) { + throw new RuntimeException(e); + } + } else { + if (expectedLocalCheckpoint.get() + 1 == seqNo) { + expectedLocalCheckpoint.set(seqNo); } - return seqNo; } + return seqNo; }; } @@ -3658,10 +3426,10 @@ public void testSequenceNumberAdvancesToMaxSeqOnEngineOpenOnPrimary() throws Bro final AtomicReference latchReference = new AtomicReference<>(new CountDownLatch(1)); final CyclicBarrier barrier = new CyclicBarrier(2); final AtomicBoolean stall = new AtomicBoolean(); - final AtomicLong expectedLocalCheckpoint = new AtomicLong(SequenceNumbersService.NO_OPS_PERFORMED); + final AtomicLong expectedLocalCheckpoint = new AtomicLong(SequenceNumbers.NO_OPS_PERFORMED); final List threads = new ArrayList<>(); - final SequenceNumbersService seqNoService = getStallingSeqNoService(latchReference, barrier, stall, expectedLocalCheckpoint); - initialEngine = createEngine(defaultSettings, store, primaryTranslogDir, newMergePolicy(), null, (config) -> seqNoService); + initialEngine = + createEngine(defaultSettings, store, primaryTranslogDir, newMergePolicy(), null, InternalEngine::sequenceNumberService, getStallingSeqNoGenerator(latchReference, barrier, stall, expectedLocalCheckpoint)); final InternalEngine finalInitialEngine = initialEngine; for (int i = 0; i < docs; i++) { final String id = Integer.toString(i); @@ -3753,11 +3521,11 @@ public void testOutOfOrderSequenceNumbersWithVersionConflict() throws IOExceptio final AtomicLong sequenceNumber = new AtomicLong(); final Engine.Operation.Origin origin = randomFrom(LOCAL_TRANSLOG_RECOVERY, PEER_RECOVERY, PRIMARY, REPLICA); final LongSupplier sequenceNumberSupplier = - origin == PRIMARY ? () -> SequenceNumbersService.UNASSIGNED_SEQ_NO : sequenceNumber::getAndIncrement; + origin == PRIMARY ? 
() -> SequenceNumbers.UNASSIGNED_SEQ_NO : sequenceNumber::getAndIncrement; document.add(new Field(SourceFieldMapper.NAME, BytesReference.toBytes(B_1), SourceFieldMapper.Defaults.FIELD_TYPE)); final ParsedDocument doc = testParsedDocument("1", null, document, B_1, null); final Term uid = newUid(doc); - final Function searcherFactory = engine::acquireSearcher; + final BiFunction searcherFactory = engine::acquireSearcher; for (int i = 0; i < numberOfOperations; i++) { if (randomBoolean()) { final Engine.Index index = new Engine.Index( @@ -3835,17 +3603,17 @@ public void testNoOps() throws IOException { final int localCheckpoint = randomIntBetween(0, maxSeqNo); final int globalCheckpoint = randomIntBetween(0, localCheckpoint); try { - final SequenceNumbersService seqNoService = - new SequenceNumbersService(shardId, defaultSettings, maxSeqNo, localCheckpoint, globalCheckpoint) { - @Override - public long generateSeqNo() { - throw new UnsupportedOperationException(); - } - }; - noOpEngine = new InternalEngine(copy(engine.config(), EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG)) { + final BiFunction supplier = (engineConfig, ignored) -> new SequenceNumbersService( + engineConfig.getShardId(), + engineConfig.getAllocationId(), + engineConfig.getIndexSettings(), + maxSeqNo, + localCheckpoint, + globalCheckpoint); + noOpEngine = new InternalEngine(copy(engine.config(), EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG), supplier) { @Override - public SequenceNumbersService seqNoService() { - return seqNoService; + protected long doGenerateSeqNoForOperation(Operation operation) { + throw new UnsupportedOperationException(); } }; noOpEngine.recoverFromTranslog(); @@ -3888,10 +3656,10 @@ public void testMinGenerationForSeqNo() throws IOException, BrokenBarrierExcepti final AtomicReference latchReference = new AtomicReference<>(); final CyclicBarrier barrier = new CyclicBarrier(2); final AtomicBoolean stall = new AtomicBoolean(); - final AtomicLong expectedLocalCheckpoint = new AtomicLong(SequenceNumbersService.NO_OPS_PERFORMED); + final AtomicLong expectedLocalCheckpoint = new AtomicLong(SequenceNumbers.NO_OPS_PERFORMED); final Map threads = new LinkedHashMap<>(); - final SequenceNumbersService seqNoService = getStallingSeqNoService(latchReference, barrier, stall, expectedLocalCheckpoint); - actualEngine = createEngine(defaultSettings, store, primaryTranslogDir, newMergePolicy(), null, (config) -> seqNoService); + actualEngine = + createEngine(defaultSettings, store, primaryTranslogDir, newMergePolicy(), null, InternalEngine::sequenceNumberService, getStallingSeqNoGenerator(latchReference, barrier, stall, expectedLocalCheckpoint)); final InternalEngine finalActualEngine = actualEngine; final Translog translog = finalActualEngine.getTranslog(); final long generation = finalActualEngine.getTranslog().currentFileGeneration(); @@ -3964,7 +3732,7 @@ private Tuple getSequenceID(Engine engine, Engine.Get get) throws En DocIdAndSeqNo docIdAndSeqNo = VersionsAndSeqNoResolver.loadDocIdAndSeqNo(searcher.reader(), get.uid()); if (docIdAndSeqNo == null) { primaryTerm = 0; - seqNo = SequenceNumbersService.UNASSIGNED_SEQ_NO; + seqNo = SequenceNumbers.UNASSIGNED_SEQ_NO; } else { seqNo = docIdAndSeqNo.seqNo; primaryTerm = VersionsAndSeqNoResolver.loadPrimaryTerm(docIdAndSeqNo, get.uid().field()); @@ -3980,25 +3748,20 @@ public void testRestoreLocalCheckpointFromTranslog() throws IOException { InternalEngine actualEngine = null; try { final Set completedSeqNos = new HashSet<>(); - final SequenceNumbersService seqNoService = 
- new SequenceNumbersService( - shardId, - defaultSettings, - SequenceNumbersService.NO_OPS_PERFORMED, - SequenceNumbersService.NO_OPS_PERFORMED, - SequenceNumbersService.UNASSIGNED_SEQ_NO) { + final BiFunction supplier = (engineConfig, seqNoStats) -> new SequenceNumbersService( + engineConfig.getShardId(), + engineConfig.getAllocationId(), + engineConfig.getIndexSettings(), + seqNoStats.getMaxSeqNo(), + seqNoStats.getLocalCheckpoint(), + seqNoStats.getGlobalCheckpoint()) { @Override public void markSeqNoAsCompleted(long seqNo) { super.markSeqNoAsCompleted(seqNo); completedSeqNos.add(seqNo); } }; - actualEngine = new InternalEngine(copy(engine.config(), EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG)) { - @Override - public SequenceNumbersService seqNoService() { - return seqNoService; - } - }; + actualEngine = new InternalEngine(copy(engine.config(), EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG), supplier); final int operations = randomIntBetween(0, 1024); final Set expectedCompletedSeqNos = new HashSet<>(); for (int i = 0; i < operations; i++) { @@ -4022,7 +3785,7 @@ public SequenceNumbersService seqNoService() { } final long currentLocalCheckpoint = actualEngine.seqNoService().getLocalCheckpoint(); final long resetLocalCheckpoint = - randomIntBetween(Math.toIntExact(SequenceNumbersService.NO_OPS_PERFORMED), Math.toIntExact(currentLocalCheckpoint)); + randomIntBetween(Math.toIntExact(SequenceNumbers.NO_OPS_PERFORMED), Math.toIntExact(currentLocalCheckpoint)); actualEngine.seqNoService().resetLocalCheckpoint(resetLocalCheckpoint); completedSeqNos.clear(); actualEngine.restoreLocalCheckpointFromTranslog(); @@ -4112,4 +3875,125 @@ public void testFillUpSequenceIdGapsOnRecovery() throws IOException { IOUtils.close(recoveringEngine); } } + + + public void assertSameReader(Searcher left, Searcher right) { + List leftLeaves = ElasticsearchDirectoryReader.unwrap(left.getDirectoryReader()).leaves(); + List rightLeaves = ElasticsearchDirectoryReader.unwrap(right.getDirectoryReader()).leaves(); + assertEquals(rightLeaves.size(), leftLeaves.size()); + for (int i = 0; i < leftLeaves.size(); i++) { + assertSame(leftLeaves.get(i).reader(), rightLeaves.get(0).reader()); + } + } + + public void assertNotSameReader(Searcher left, Searcher right) { + List leftLeaves = ElasticsearchDirectoryReader.unwrap(left.getDirectoryReader()).leaves(); + List rightLeaves = ElasticsearchDirectoryReader.unwrap(right.getDirectoryReader()).leaves(); + if (rightLeaves.size() == leftLeaves.size()) { + for (int i = 0; i < leftLeaves.size(); i++) { + if (leftLeaves.get(i).reader() != rightLeaves.get(0).reader()) { + return; // all is well + } + } + fail("readers are same"); + } + } + + public void testRefreshScopedSearcher() throws IOException { + try (Searcher getSearcher = engine.acquireSearcher("test", Engine.SearcherScope.INTERNAL); + Searcher searchSearcher = engine.acquireSearcher("test", Engine.SearcherScope.EXTERNAL)){ + assertSameReader(getSearcher, searchSearcher); + } + for (int i = 0; i < 10; i++) { + final String docId = Integer.toString(i); + final ParsedDocument doc = + testParsedDocument(docId, null, testDocumentWithTextField(), SOURCE, null); + Engine.Index primaryResponse = indexForDoc(doc); + engine.index(primaryResponse); + } + assertTrue(engine.refreshNeeded()); + engine.refresh("test", Engine.SearcherScope.INTERNAL); + try (Searcher getSearcher = engine.acquireSearcher("test", Engine.SearcherScope.INTERNAL); + Searcher searchSearcher = engine.acquireSearcher("test", Engine.SearcherScope.EXTERNAL)){ + 
assertEquals(10, getSearcher.reader().numDocs()); + assertEquals(0, searchSearcher.reader().numDocs()); + assertNotSameReader(getSearcher, searchSearcher); + } + + engine.refresh("test", Engine.SearcherScope.EXTERNAL); + + try (Searcher getSearcher = engine.acquireSearcher("test", Engine.SearcherScope.INTERNAL); + Searcher searchSearcher = engine.acquireSearcher("test", Engine.SearcherScope.EXTERNAL)){ + assertEquals(10, getSearcher.reader().numDocs()); + assertEquals(10, searchSearcher.reader().numDocs()); + assertSameReader(getSearcher, searchSearcher); + } + } + + public void testSeqNoGenerator() throws IOException { + engine.close(); + final long seqNo = randomIntBetween(Math.toIntExact(SequenceNumbers.NO_OPS_PERFORMED), Integer.MAX_VALUE); + final BiFunction seqNoService = (config, seqNoStats) -> new SequenceNumbersService( + config.getShardId(), + config.getAllocationId(), + config.getIndexSettings(), + SequenceNumbers.NO_OPS_PERFORMED, + SequenceNumbers.NO_OPS_PERFORMED, + SequenceNumbers.UNASSIGNED_SEQ_NO); + final AtomicLong seqNoGenerator = new AtomicLong(seqNo); + try (Engine e = createEngine(defaultSettings, store, primaryTranslogDir, newMergePolicy(), null, seqNoService, (engine, operation) -> seqNoGenerator.getAndIncrement())) { + final String id = "id"; + final Field uidField = new Field("_id", id, IdFieldMapper.Defaults.FIELD_TYPE); + final String type = "type"; + final Field versionField = new NumericDocValuesField("_version", 0); + final SeqNoFieldMapper.SequenceIDFields seqID = SeqNoFieldMapper.SequenceIDFields.emptySeqID(); + final ParseContext.Document document = new ParseContext.Document(); + document.add(uidField); + document.add(versionField); + document.add(seqID.seqNo); + document.add(seqID.seqNoDocValue); + document.add(seqID.primaryTerm); + final BytesReference source = new BytesArray(new byte[]{1}); + final ParsedDocument parsedDocument = new ParsedDocument( + versionField, + seqID, + id, + type, + "routing", + Collections.singletonList(document), + source, + XContentType.JSON, + null); + + final Engine.Index index = new Engine.Index( + new Term("_id", parsedDocument.id()), + parsedDocument, + SequenceNumbers.UNASSIGNED_SEQ_NO, + (long) randomIntBetween(1, 8), + Versions.MATCH_ANY, + VersionType.INTERNAL, + Engine.Operation.Origin.PRIMARY, + System.currentTimeMillis(), + System.currentTimeMillis(), + randomBoolean()); + final Engine.IndexResult indexResult = e.index(index); + assertThat(indexResult.getSeqNo(), equalTo(seqNo)); + assertThat(seqNoGenerator.get(), equalTo(seqNo + 1)); + + final Engine.Delete delete = new Engine.Delete( + type, + id, + new Term("_id", parsedDocument.id()), + SequenceNumbers.UNASSIGNED_SEQ_NO, + (long) randomIntBetween(1, 8), + Versions.MATCH_ANY, + VersionType.INTERNAL, + Engine.Operation.Origin.PRIMARY, + System.currentTimeMillis()); + final Engine.DeleteResult deleteResult = e.delete(delete); + assertThat(deleteResult.getSeqNo(), equalTo(seqNo + 1)); + assertThat(seqNoGenerator.get(), equalTo(seqNo + 2)); + } + } + } diff --git a/core/src/test/java/org/elasticsearch/index/engine/SegmentTests.java b/core/src/test/java/org/elasticsearch/index/engine/SegmentTests.java index 9ee0a343b95e5..f9641ba24d7ac 100644 --- a/core/src/test/java/org/elasticsearch/index/engine/SegmentTests.java +++ b/core/src/test/java/org/elasticsearch/index/engine/SegmentTests.java @@ -31,6 +31,7 @@ import org.elasticsearch.test.ESTestCase; import java.io.IOException; +import java.util.Collections; import java.util.Objects; public class SegmentTests extends 
ESTestCase { @@ -81,6 +82,9 @@ static Segment randomSegment() { segment.mergeId = randomAlphaOfLengthBetween(1, 10); segment.memoryInBytes = randomNonNegativeLong(); segment.segmentSort = randomIndexSort(); + if (randomBoolean()) { + segment.attributes = Collections.singletonMap("foo", "bar"); + } return segment; } diff --git a/core/src/test/java/org/elasticsearch/index/fieldstats/FieldStatsProviderRefreshTests.java b/core/src/test/java/org/elasticsearch/index/fieldstats/FieldStatsProviderRefreshTests.java index cff2d13ce634f..e742afb614154 100644 --- a/core/src/test/java/org/elasticsearch/index/fieldstats/FieldStatsProviderRefreshTests.java +++ b/core/src/test/java/org/elasticsearch/index/fieldstats/FieldStatsProviderRefreshTests.java @@ -24,6 +24,7 @@ import org.elasticsearch.action.search.SearchResponse; import org.elasticsearch.action.search.SearchType; import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.query.QueryBuilders; import org.elasticsearch.indices.IndicesRequestCache; import org.elasticsearch.rest.RestStatus; @@ -38,9 +39,8 @@ public class FieldStatsProviderRefreshTests extends ESSingleNodeTestCase { public void testQueryRewriteOnRefresh() throws Exception { assertAcked(client().admin().indices().prepareCreate("index").addMapping("type", "s", "type=text") - .setSettings(IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED_SETTING.getKey(), true, - IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1, - IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0) + .setSettings(Settings.builder().put(IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED_SETTING.getKey(), true) + .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1).put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)) .get()); // Index some documents diff --git a/core/src/test/java/org/elasticsearch/index/mapper/AllFieldIT.java b/core/src/test/java/org/elasticsearch/index/mapper/AllFieldIT.java deleted file mode 100644 index 2be58b3b68e6b..0000000000000 --- a/core/src/test/java/org/elasticsearch/index/mapper/AllFieldIT.java +++ /dev/null @@ -1,109 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.index.mapper; - -import org.elasticsearch.Version; -import org.elasticsearch.action.search.SearchResponse; -import org.elasticsearch.index.query.QueryBuilders; -import org.elasticsearch.plugins.Plugin; -import org.elasticsearch.test.ESIntegTestCase; -import org.elasticsearch.test.InternalSettingsPlugin; - -import java.util.Arrays; -import java.util.Collection; - -import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder; -import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked; -import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount; -import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchHits; - -public class AllFieldIT extends ESIntegTestCase { - - @Override - protected Collection> nodePlugins() { - return Arrays.asList(InternalSettingsPlugin.class); // uses index.version.created - } - - public void test5xIndicesContinueToUseAll() throws Exception { - // Default 5.x settings - assertAcked(prepareCreate("test").setSettings("index.version.created", Version.V_5_1_1.id)); - client().prepareIndex("test", "type", "1").setSource("body", "foo").get(); - refresh(); - SearchResponse resp = client().prepareSearch("test").setQuery(QueryBuilders.matchQuery("_all", "foo")).get(); - assertHitCount(resp, 1); - assertSearchHits(resp, "1"); - - // _all explicitly enabled - assertAcked(prepareCreate("test2") - .setSource(jsonBuilder() - .startObject() - .startObject("mappings") - .startObject("type") - .startObject("_all") - .field("enabled", true) - .endObject() // _all - .endObject() // type - .endObject() // mappings - .endObject()) - .setSettings("index.version.created", Version.V_5_4_0_ID)); - client().prepareIndex("test2", "type", "1").setSource("foo", "bar").get(); - refresh(); - resp = client().prepareSearch("test2").setQuery(QueryBuilders.matchQuery("_all", "bar")).get(); - assertHitCount(resp, 1); - assertSearchHits(resp, "1"); - - // _all explicitly disabled - assertAcked(prepareCreate("test3") - .setSource(jsonBuilder() - .startObject() - .startObject("mappings") - .startObject("type") - .startObject("_all") - .field("enabled", false) - .endObject() // _all - .endObject() // type - .endObject() // mappings - .endObject()) - .setSettings("index.version.created", Version.V_5_4_0_ID)); - client().prepareIndex("test3", "type", "1").setSource("foo", "baz").get(); - refresh(); - resp = client().prepareSearch("test3").setQuery(QueryBuilders.matchQuery("_all", "baz")).get(); - assertHitCount(resp, 0); - - // _all present, but not enabled or disabled (default settings) - assertAcked(prepareCreate("test4") - .setSource(jsonBuilder() - .startObject() - .startObject("mappings") - .startObject("type") - .startObject("_all") - .endObject() // _all - .endObject() // type - .endObject() // mappings - .endObject()) - .setSettings("index.version.created", Version.V_5_4_0_ID)); - client().prepareIndex("test4", "type", "1").setSource("foo", "eggplant").get(); - refresh(); - resp = client().prepareSearch("test4").setQuery(QueryBuilders.matchQuery("_all", "eggplant")).get(); - assertHitCount(resp, 1); - assertSearchHits(resp, "1"); - } - -} diff --git a/core/src/test/java/org/elasticsearch/index/mapper/BinaryRangeUtilTests.java b/core/src/test/java/org/elasticsearch/index/mapper/BinaryRangeUtilTests.java deleted file mode 100644 index 8a4e6945ffc36..0000000000000 --- a/core/src/test/java/org/elasticsearch/index/mapper/BinaryRangeUtilTests.java +++ /dev/null @@ -1,97 +0,0 @@ -/* - * 
Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ -package org.elasticsearch.index.mapper; - -import org.apache.lucene.util.BytesRef; -import org.elasticsearch.test.ESTestCase; - -public class BinaryRangeUtilTests extends ESTestCase { - - public void testBasics() { - BytesRef encoded1 = new BytesRef(BinaryRangeUtil.encode(Long.MIN_VALUE)); - BytesRef encoded2 = new BytesRef(BinaryRangeUtil.encode(-1L)); - BytesRef encoded3 = new BytesRef(BinaryRangeUtil.encode(0L)); - BytesRef encoded4 = new BytesRef(BinaryRangeUtil.encode(1L)); - BytesRef encoded5 = new BytesRef(BinaryRangeUtil.encode(Long.MAX_VALUE)); - - assertTrue(encoded1.compareTo(encoded2) < 0); - assertTrue(encoded2.compareTo(encoded1) > 0); - assertTrue(encoded2.compareTo(encoded3) < 0); - assertTrue(encoded3.compareTo(encoded2) > 0); - assertTrue(encoded3.compareTo(encoded4) < 0); - assertTrue(encoded4.compareTo(encoded3) > 0); - assertTrue(encoded4.compareTo(encoded5) < 0); - assertTrue(encoded5.compareTo(encoded4) > 0); - - encoded1 = new BytesRef(BinaryRangeUtil.encode(Double.NEGATIVE_INFINITY)); - encoded2 = new BytesRef(BinaryRangeUtil.encode(-1D)); - encoded3 = new BytesRef(BinaryRangeUtil.encode(0D)); - encoded4 = new BytesRef(BinaryRangeUtil.encode(1D)); - encoded5 = new BytesRef(BinaryRangeUtil.encode(Double.POSITIVE_INFINITY)); - - assertTrue(encoded1.compareTo(encoded2) < 0); - assertTrue(encoded2.compareTo(encoded1) > 0); - assertTrue(encoded2.compareTo(encoded3) < 0); - assertTrue(encoded3.compareTo(encoded2) > 0); - assertTrue(encoded3.compareTo(encoded4) < 0); - assertTrue(encoded4.compareTo(encoded3) > 0); - assertTrue(encoded4.compareTo(encoded5) < 0); - assertTrue(encoded5.compareTo(encoded4) > 0); - } - - public void testEncode_long() { - int iters = randomIntBetween(32, 1024); - for (int i = 0; i < iters; i++) { - long number1 = randomLong(); - BytesRef encodedNumber1 = new BytesRef(BinaryRangeUtil.encode(number1)); - long number2 = randomLong(); - BytesRef encodedNumber2 = new BytesRef(BinaryRangeUtil.encode(number2)); - - int cmp = normalize(Long.compare(number1, number2)); - assertEquals(cmp, normalize(encodedNumber1.compareTo(encodedNumber2))); - cmp = normalize(Long.compare(number2, number1)); - assertEquals(cmp, normalize(encodedNumber2.compareTo(encodedNumber1))); - } - } - - public void testEncode_double() { - int iters = randomIntBetween(32, 1024); - for (int i = 0; i < iters; i++) { - double number1 = randomDouble(); - BytesRef encodedNumber1 = new BytesRef(BinaryRangeUtil.encode(number1)); - double number2 = randomDouble(); - BytesRef encodedNumber2 = new BytesRef(BinaryRangeUtil.encode(number2)); - - int cmp = normalize(Double.compare(number1, number2)); - assertEquals(cmp, normalize(encodedNumber1.compareTo(encodedNumber2))); - cmp = normalize(Double.compare(number2, 
number1)); - assertEquals(cmp, normalize(encodedNumber2.compareTo(encodedNumber1))); - } - } - - private static int normalize(int cmp) { - if (cmp < 0) { - return -1; - } else if (cmp > 0) { - return 1; - } - return 0; - } - -} diff --git a/core/src/test/java/org/elasticsearch/index/mapper/CopyToMapperTests.java b/core/src/test/java/org/elasticsearch/index/mapper/CopyToMapperTests.java index a5ba66fd8c22d..3fb3b94b22980 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/CopyToMapperTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/CopyToMapperTests.java @@ -568,4 +568,95 @@ private void assertFieldValue(Document doc, String field, Number... expected) { assertArrayEquals(expected, actual); } + public void testCopyToMultiField() throws Exception { + String mapping = jsonBuilder().startObject().startObject("doc") + .startObject("properties") + .startObject("my_field") + .field("type", "keyword") + .field("copy_to", "my_field.bar") + .startObject("fields") + .startObject("bar") + .field("type", "text") + .endObject() + .endObject() + .endObject() + .endObject() + .endObject().endObject().string(); + + MapperService mapperService = createIndex("test").mapperService(); + IllegalArgumentException e = expectThrows(IllegalArgumentException.class, + () -> mapperService.merge("doc", new CompressedXContent(mapping), MergeReason.MAPPING_UPDATE, randomBoolean())); + assertEquals("[copy_to] may not be used to copy to a multi-field: [my_field.bar]", e.getMessage()); + } + + public void testNestedCopyTo() throws Exception { + String mapping = jsonBuilder().startObject().startObject("doc") + .startObject("properties") + .startObject("n") + .field("type", "nested") + .startObject("properties") + .startObject("foo") + .field("type", "keyword") + .field("copy_to", "n.bar") + .endObject() + .startObject("bar") + .field("type", "text") + .endObject() + .endObject() + .endObject() + .endObject() + .endObject().endObject().string(); + + MapperService mapperService = createIndex("test").mapperService(); + mapperService.merge("doc", new CompressedXContent(mapping), MergeReason.MAPPING_UPDATE, randomBoolean()); // no exception + } + + public void testNestedCopyToMultiField() throws Exception { + String mapping = jsonBuilder().startObject().startObject("doc") + .startObject("properties") + .startObject("n") + .field("type", "nested") + .startObject("properties") + .startObject("my_field") + .field("type", "keyword") + .field("copy_to", "n.my_field.bar") + .startObject("fields") + .startObject("bar") + .field("type", "text") + .endObject() + .endObject() + .endObject() + .endObject() + .endObject() + .endObject() + .endObject().endObject().string(); + + MapperService mapperService = createIndex("test").mapperService(); + IllegalArgumentException e = expectThrows(IllegalArgumentException.class, + () -> mapperService.merge("doc", new CompressedXContent(mapping), MergeReason.MAPPING_UPDATE, randomBoolean())); + assertEquals("[copy_to] may not be used to copy to a multi-field: [n.my_field.bar]", e.getMessage()); + } + + public void testCopyFromMultiField() throws Exception { + String mapping = jsonBuilder().startObject().startObject("doc") + .startObject("properties") + .startObject("my_field") + .field("type", "keyword") + .startObject("fields") + .startObject("bar") + .field("type", "text") + .field("copy_to", "my_field.baz") + .endObject() + .endObject() + .endObject() + .endObject() + .endObject().endObject().string(); + + MapperService mapperService = createIndex("test").mapperService(); + 
MapperParsingException e = expectThrows(MapperParsingException.class, + () -> mapperService.merge("doc", new CompressedXContent(mapping), MergeReason.MAPPING_UPDATE, randomBoolean())); + assertThat(e.getMessage(), + Matchers.containsString("copy_to in multi fields is not allowed. Found the copy_to in field [bar] " + + "which is within a multi field.")); + } } diff --git a/core/src/test/java/org/elasticsearch/index/mapper/DateFieldTypeTests.java b/core/src/test/java/org/elasticsearch/index/mapper/DateFieldTypeTests.java index 425d2d8d4ded2..43136b67e8ccf 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/DateFieldTypeTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/DateFieldTypeTests.java @@ -200,11 +200,11 @@ public void testRangeQuery() throws IOException { LongPoint.newRangeQuery("field", instant1, instant2), SortedNumericDocValuesField.newSlowRangeQuery("field", instant1, instant2)); assertEquals(expected, - ft.rangeQuery(date1, date2, true, true, context).rewrite(new MultiReader())); + ft.rangeQuery(date1, date2, true, true, null, null, null, context).rewrite(new MultiReader())); ft.setIndexOptions(IndexOptions.NONE); IllegalArgumentException e = expectThrows(IllegalArgumentException.class, - () -> ft.rangeQuery(date1, date2, true, true, context)); + () -> ft.rangeQuery(date1, date2, true, true, null, null, null, context)); assertEquals("Cannot search on field [field] since it is not indexed.", e.getMessage()); } } diff --git a/core/src/test/java/org/elasticsearch/index/mapper/DocumentFieldMapperTests.java b/core/src/test/java/org/elasticsearch/index/mapper/DocumentFieldMapperTests.java index 398708d75f9c7..4e79a68c50e5c 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/DocumentFieldMapperTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/DocumentFieldMapperTests.java @@ -23,14 +23,18 @@ import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.analysis.Tokenizer; import org.apache.lucene.analysis.tokenattributes.CharTermAttribute; -import org.apache.lucene.document.Field; import org.apache.lucene.index.IndexableField; +import org.apache.lucene.index.Term; +import org.apache.lucene.search.DocValuesFieldExistsQuery; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermQuery; import org.apache.lucene.util.LuceneTestCase; import org.elasticsearch.Version; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.analysis.AnalyzerScope; import org.elasticsearch.index.analysis.NamedAnalyzer; +import org.elasticsearch.index.query.QueryShardContext; import java.io.IOException; import java.io.StringReader; @@ -88,6 +92,15 @@ public String typeName() { return "fake"; } + @Override + public Query existsQuery(QueryShardContext context) { + if (hasDocValues()) { + return new DocValuesFieldExistsQuery(name()); + } else { + return new TermQuery(new Term(FieldNamesFieldMapper.NAME, name())); + } + } + } static class FakeFieldMapper extends FieldMapper { diff --git a/core/src/test/java/org/elasticsearch/index/mapper/DocumentParserTests.java b/core/src/test/java/org/elasticsearch/index/mapper/DocumentParserTests.java index 4d83cc998462c..cbf890ef47687 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/DocumentParserTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/DocumentParserTests.java @@ -19,6 +19,7 @@ package org.elasticsearch.index.mapper; +import org.elasticsearch.ExceptionsHelper; import 
org.elasticsearch.Version; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.bytes.BytesArray; @@ -1375,4 +1376,26 @@ public void testDynamicFieldsStartingAndEndingWithDot() throws Exception { containsString("object field starting or ending with a [.] makes object resolution ambiguous: [top..foo..bar]")); } } + + public void testBlankFieldNames() throws Exception { + final BytesReference bytes = XContentFactory.jsonBuilder() + .startObject() + .field("", "foo") + .endObject().bytes(); + + MapperParsingException err = expectThrows(MapperParsingException.class, () -> + client().prepareIndex("idx", "type").setSource(bytes, XContentType.JSON).get()); + assertThat(ExceptionsHelper.detailedMessage(err), containsString("field name cannot be an empty string")); + + final BytesReference bytes2 = XContentFactory.jsonBuilder() + .startObject() + .startObject("foo") + .field("", "bar") + .endObject() + .endObject().bytes(); + + err = expectThrows(MapperParsingException.class, () -> + client().prepareIndex("idx", "type").setSource(bytes2, XContentType.JSON).get()); + assertThat(ExceptionsHelper.detailedMessage(err), containsString("field name cannot be an empty string")); + } } diff --git a/core/src/test/java/org/elasticsearch/index/mapper/DynamicMappingDisabledTests.java b/core/src/test/java/org/elasticsearch/index/mapper/DynamicMappingDisabledTests.java deleted file mode 100644 index 686bbafbcd23a..0000000000000 --- a/core/src/test/java/org/elasticsearch/index/mapper/DynamicMappingDisabledTests.java +++ /dev/null @@ -1,138 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.index.mapper; - -import org.elasticsearch.action.ActionListener; -import org.elasticsearch.action.bulk.BulkItemResponse; -import org.elasticsearch.action.bulk.BulkRequest; -import org.elasticsearch.action.bulk.BulkResponse; -import org.elasticsearch.action.bulk.TransportBulkAction; -import org.elasticsearch.action.bulk.TransportShardBulkAction; -import org.elasticsearch.action.index.IndexRequest; -import org.elasticsearch.action.support.ActionFilters; -import org.elasticsearch.action.support.AutoCreateIndex; -import org.elasticsearch.action.update.UpdateHelper; -import org.elasticsearch.client.Requests; -import org.elasticsearch.cluster.action.shard.ShardStateAction; -import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; -import org.elasticsearch.cluster.service.ClusterService; -import org.elasticsearch.common.io.stream.NamedWriteableRegistry; -import org.elasticsearch.common.network.NetworkService; -import org.elasticsearch.common.settings.ClusterSettings; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.util.BigArrays; -import org.elasticsearch.index.IndexNotFoundException; -import org.elasticsearch.indices.IndicesService; -import org.elasticsearch.indices.breaker.NoneCircuitBreakerService; -import org.elasticsearch.test.ESSingleNodeTestCase; -import org.elasticsearch.threadpool.TestThreadPool; -import org.elasticsearch.threadpool.ThreadPool; -import org.elasticsearch.transport.MockTcpTransport; -import org.elasticsearch.transport.Transport; -import org.elasticsearch.transport.TransportService; -import org.junit.After; -import org.junit.AfterClass; -import org.junit.BeforeClass; - -import java.util.Collections; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.atomic.AtomicBoolean; - -import static org.elasticsearch.test.ClusterServiceUtils.createClusterService; -import static org.hamcrest.CoreMatchers.instanceOf; - -public class DynamicMappingDisabledTests extends ESSingleNodeTestCase { - - private static ThreadPool threadPool; - private ClusterService clusterService; - private TransportService transportService; - private TransportBulkAction transportBulkAction; - - @BeforeClass - public static void createThreadPool() { - threadPool = new TestThreadPool("DynamicMappingDisabledTests"); - } - - @Override - public void setUp() throws Exception { - super.setUp(); - Settings settings = Settings.builder() - .put(MapperService.INDEX_MAPPER_DYNAMIC_SETTING.getKey(), false) - .build(); - clusterService = createClusterService(threadPool); - Transport transport = new MockTcpTransport(settings, threadPool, BigArrays.NON_RECYCLING_INSTANCE, - new NoneCircuitBreakerService(), new NamedWriteableRegistry(Collections.emptyList()), - new NetworkService(Collections.emptyList())); - transportService = new TransportService(clusterService.getSettings(), transport, threadPool, - TransportService.NOOP_TRANSPORT_INTERCEPTOR, x -> clusterService.localNode(), null); - IndicesService indicesService = getInstanceFromNode(IndicesService.class); - ShardStateAction shardStateAction = new ShardStateAction(settings, clusterService, transportService, null, null, threadPool); - ActionFilters actionFilters = new ActionFilters(Collections.emptySet()); - IndexNameExpressionResolver indexNameExpressionResolver = new IndexNameExpressionResolver(settings); - AutoCreateIndex autoCreateIndex = new AutoCreateIndex(settings, new ClusterSettings(settings, - ClusterSettings.BUILT_IN_CLUSTER_SETTINGS), indexNameExpressionResolver); 
- UpdateHelper updateHelper = new UpdateHelper(settings, null); - TransportShardBulkAction shardBulkAction = new TransportShardBulkAction(settings, transportService, clusterService, - indicesService, threadPool, shardStateAction, null, updateHelper, actionFilters, indexNameExpressionResolver); - transportBulkAction = new TransportBulkAction(settings, threadPool, transportService, clusterService, - null, shardBulkAction, null, actionFilters, indexNameExpressionResolver, autoCreateIndex, System::currentTimeMillis); - } - - @After - public void tearDown() throws Exception { - super.tearDown(); - clusterService.close(); - transportService.close(); - } - - - @AfterClass - public static void destroyThreadPool() { - ThreadPool.terminate(threadPool, 30, TimeUnit.SECONDS); - // since static must set to null to be eligible for collection - threadPool = null; - } - - public void testDynamicDisabled() { - IndexRequest request = new IndexRequest("index", "type", "1"); - request.source(Requests.INDEX_CONTENT_TYPE, "foo", 3); - BulkRequest bulkRequest = new BulkRequest(); - bulkRequest.add(request); - final AtomicBoolean gotResponse = new AtomicBoolean(); - - transportBulkAction.execute(bulkRequest, new ActionListener() { - @Override - public void onResponse(BulkResponse bulkResponse) { - BulkItemResponse itemResponse = bulkResponse.getItems()[0]; - assertTrue(itemResponse.isFailed()); - assertThat(itemResponse.getFailure().getCause(), instanceOf(IndexNotFoundException.class)); - assertEquals("no such index and [index.mapper.dynamic] is [false]", itemResponse.getFailure().getCause().getMessage()); - gotResponse.set(true); - } - - @Override - public void onFailure(Exception e) { - fail("unexpected failure in bulk action, expected failed bulk item"); - } - }); - - assertTrue(gotResponse.get()); - } -} diff --git a/core/src/test/java/org/elasticsearch/index/mapper/DynamicMappingIT.java b/core/src/test/java/org/elasticsearch/index/mapper/DynamicMappingIT.java index d183242ee19fe..5ee0740505cb8 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/DynamicMappingIT.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/DynamicMappingIT.java @@ -98,7 +98,8 @@ public void testMappingsPropagatedToMasterNodeImmediately() throws IOException { } public void testMappingsPropagatedToMasterNodeImmediatelyMultiType() throws IOException { - assertAcked(prepareCreate("index").setSettings("index.version.created", Version.V_5_6_0.id)); // allows for multiple types + assertAcked(prepareCreate("index").setSettings(Settings.builder().put("index.version.created", Version.V_5_6_0.id))); + // allows for multiple types // works when the type has been dynamically created client().prepareIndex("index", "type", "1").setSource("foo", 3).get(); @@ -153,34 +154,4 @@ public void run() { assertTrue(client().prepareGet("index", "type", Integer.toString(i)).get().isExists()); } } - - public void testAutoCreateWithDisabledDynamicMappings() throws Exception { - assertAcked(client().admin().indices().preparePutTemplate("my_template") - .setCreate(true) - .setPatterns(Collections.singletonList("index_*")) - .addMapping("foo", "field", "type=keyword") - .setSettings(Settings.builder().put("index.mapper.dynamic", false).build()) - .get()); - - // succeeds since 'foo' has an explicit mapping in the template - indexRandom(true, false, client().prepareIndex("index_1", "foo", "1").setSource("field", "abc")); - - // fails since 'bar' does not have an explicit mapping in the template and dynamic template creation is disabled - 
TypeMissingException e1 = expectThrows(TypeMissingException.class, - () -> client().prepareIndex("index_2", "bar", "1").setSource("field", "abc").get()); - assertEquals("type[bar] missing", e1.getMessage()); - assertEquals("trying to auto create mapping, but dynamic mapping is disabled", e1.getCause().getMessage()); - - BulkResponse bulkResponse = client().prepareBulk().add(new IndexRequest("index_2", "bar", "2").source("field", "abc")).get(); - assertTrue(bulkResponse.hasFailures()); - BulkItemResponse.Failure firstFailure = bulkResponse.getItems()[0].getFailure(); - assertThat(firstFailure.getCause(), instanceOf(TypeMissingException.class)); - assertEquals("type[bar] missing", firstFailure.getCause().getMessage()); - assertEquals("trying to auto create mapping, but dynamic mapping is disabled", firstFailure.getCause().getCause().getMessage()); - - // make sure no mappings were created for bar - GetIndexResponse getIndexResponse = client().admin().indices().prepareGetIndex().addIndices("index_2").get(); - assertFalse(getIndexResponse.mappings().containsKey("bar")); - } - } diff --git a/core/src/test/java/org/elasticsearch/index/mapper/DynamicMappingTests.java b/core/src/test/java/org/elasticsearch/index/mapper/DynamicMappingTests.java index 06c31f4dd1849..023d2249f2f82 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/DynamicMappingTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/DynamicMappingTests.java @@ -217,7 +217,7 @@ private Mapper parse(DocumentMapper mapper, DocumentMapperParser parser, XConten ParseContext.InternalParseContext ctx = new ParseContext.InternalParseContext(settings, parser, mapper, source, xContentParser); assertEquals(XContentParser.Token.START_OBJECT, ctx.parser().nextToken()); ctx.parser().nextToken(); - DocumentParser.parseObjectOrNested(ctx, mapper.root(), true); + DocumentParser.parseObjectOrNested(ctx, mapper.root()); Mapping mapping = DocumentParser.createDynamicUpdate(mapper.mapping(), mapper, ctx.getDynamicMappers()); return mapping == null ? null : mapping.root(); } @@ -639,8 +639,7 @@ private void doTestDefaultFloatingPointMappings(DocumentMapper mapper, XContentB .field("baz", (double) 3.2f) // double that can be accurately represented as a float .field("quux", "3.2") // float detected through numeric detection .endObject().bytes(); - ParsedDocument parsedDocument = mapper.parse(SourceToParse.source("index", "type", "id", source, - XContentType.JSON)); + ParsedDocument parsedDocument = mapper.parse(SourceToParse.source("index", "type", "id", source, builder.contentType())); Mapping update = parsedDocument.dynamicMappingsUpdate(); assertNotNull(update); assertThat(((FieldMapper) update.root().getMapper("foo")).fieldType().typeName(), equalTo("float")); diff --git a/core/src/test/java/org/elasticsearch/index/mapper/DynamicMappingVersionTests.java b/core/src/test/java/org/elasticsearch/index/mapper/DynamicMappingVersionTests.java new file mode 100644 index 0000000000000..37c887401f24a --- /dev/null +++ b/core/src/test/java/org/elasticsearch/index/mapper/DynamicMappingVersionTests.java @@ -0,0 +1,69 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.index.mapper; + +import org.elasticsearch.common.settings.Setting; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.plugins.Plugin; +import org.elasticsearch.test.ESSingleNodeTestCase; +import org.elasticsearch.test.InternalSettingsPlugin; + +import java.io.IOException; +import java.util.Collection; + +public class DynamicMappingVersionTests extends ESSingleNodeTestCase { + + @Override + protected Collection> getPlugins() { + return pluginList(InternalSettingsPlugin.class); + } + + public void testDynamicMappingDefault() throws IOException { + MapperService mapperService = createIndex("my-index").mapperService(); + DocumentMapper documentMapper = mapperService + .documentMapperWithAutoCreate("my-type").getDocumentMapper(); + + ParsedDocument parsedDoc = documentMapper.parse( + SourceToParse.source("my-index", "my-type", "1", XContentFactory.jsonBuilder() + .startObject() + .field("foo", 3) + .endObject() + .bytes(), XContentType.JSON)); + + String expectedMapping = XContentFactory.jsonBuilder().startObject() + .startObject("my-type") + .startObject("properties") + .startObject("foo").field("type", "long") + .endObject().endObject().endObject().endObject().string(); + assertEquals(expectedMapping, parsedDoc.dynamicMappingsUpdate().toString()); + } + + public void testDynamicMappingSettingRemoval() { + Settings settings = Settings.builder() + .put(MapperService.INDEX_MAPPER_DYNAMIC_SETTING.getKey(), false) + .build(); + Exception e = expectThrows(IllegalArgumentException.class, () -> createIndex("test-index", settings)); + assertEquals(e.getMessage(), "Setting index.mapper.dynamic was removed after version 6.0.0"); + assertSettingDeprecationsAndWarnings(new Setting[] { MapperService.INDEX_MAPPER_DYNAMIC_SETTING }); + } + +} diff --git a/core/src/test/java/org/elasticsearch/index/mapper/ExternalMapper.java b/core/src/test/java/org/elasticsearch/index/mapper/ExternalMapper.java index 4a2c36d829f0a..33e3bc201835d 100755 --- a/core/src/test/java/org/elasticsearch/index/mapper/ExternalMapper.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/ExternalMapper.java @@ -20,12 +20,17 @@ package org.elasticsearch.index.mapper; import org.apache.lucene.index.IndexableField; -import org.locationtech.spatial4j.shape.Point; +import org.apache.lucene.index.Term; +import org.apache.lucene.search.DocValuesFieldExistsQuery; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermQuery; import org.elasticsearch.common.collect.Iterators; import org.elasticsearch.common.geo.GeoPoint; import org.elasticsearch.common.geo.builders.ShapeBuilders; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.index.query.QueryShardContext; +import org.locationtech.spatial4j.shape.Point; import java.io.IOException; import java.nio.charset.Charset; @@ -128,6 +133,15 @@ public MappedFieldType clone() { public String 
typeName() { return "faketype"; } + + @Override + public Query existsQuery(QueryShardContext context) { + if (hasDocValues()) { + return new DocValuesFieldExistsQuery(name()); + } else { + return new TermQuery(new Term(FieldNamesFieldMapper.NAME, name())); + } + } } private final String generatedValue; diff --git a/core/src/test/java/org/elasticsearch/index/mapper/FakeStringFieldMapper.java b/core/src/test/java/org/elasticsearch/index/mapper/FakeStringFieldMapper.java index 642282c9d5c62..464b0d9f8406a 100755 --- a/core/src/test/java/org/elasticsearch/index/mapper/FakeStringFieldMapper.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/FakeStringFieldMapper.java @@ -23,16 +23,14 @@ import org.apache.lucene.document.SortedSetDocValuesField; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; +import org.apache.lucene.index.Term; +import org.apache.lucene.search.DocValuesFieldExistsQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermQuery; import org.apache.lucene.util.BytesRef; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.index.mapper.FieldMapper; -import org.elasticsearch.index.mapper.MappedFieldType; -import org.elasticsearch.index.mapper.Mapper; -import org.elasticsearch.index.mapper.MapperParsingException; -import org.elasticsearch.index.mapper.ParseContext; -import org.elasticsearch.index.mapper.StringFieldType; +import org.elasticsearch.index.query.QueryShardContext; import java.io.IOException; import java.util.List; @@ -114,6 +112,15 @@ public Query nullValueQuery() { } return termQuery(nullValue(), null); } + + @Override + public Query existsQuery(QueryShardContext context) { + if (hasDocValues()) { + return new DocValuesFieldExistsQuery(name()); + } else { + return new TermQuery(new Term(FieldNamesFieldMapper.NAME, name())); + } + } } protected FakeStringFieldMapper(String simpleName, FakeStringFieldType fieldType, MappedFieldType defaultFieldType, diff --git a/core/src/test/java/org/elasticsearch/index/mapper/FieldNamesFieldMapperTests.java b/core/src/test/java/org/elasticsearch/index/mapper/FieldNamesFieldMapperTests.java index dde37962af586..3655f04fcbba1 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/FieldNamesFieldMapperTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/FieldNamesFieldMapperTests.java @@ -19,28 +19,17 @@ package org.elasticsearch.index.mapper; -import org.apache.lucene.document.Field; import org.apache.lucene.index.IndexOptions; -import org.apache.lucene.index.IndexableField; -import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.compress.CompressedXContent; -import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentType; -import org.elasticsearch.index.IndexService; -import org.elasticsearch.index.query.QueryShardContext; -import org.elasticsearch.indices.IndicesModule; -import org.elasticsearch.indices.mapper.MapperRegistry; import org.elasticsearch.test.ESSingleNodeTestCase; -import java.io.IOException; import java.util.Arrays; import java.util.Collections; -import java.util.List; -import java.util.Map; +import java.util.Set; import java.util.SortedSet; import java.util.TreeSet; -import java.util.function.Supplier; public class FieldNamesFieldMapperTests extends ESSingleNodeTestCase { @@ -56,7 +45,7 @@ private static SortedSet 
set(T... values) { return new TreeSet<>(Arrays.asList(values)); } - void assertFieldNames(SortedSet expected, ParsedDocument doc) { + void assertFieldNames(Set expected, ParsedDocument doc) { String[] got = doc.rootDoc().getValues("_field_names"); assertEquals(expected, set(got)); } @@ -99,12 +88,13 @@ public void testInjectIntoDocDuringParsing() throws Exception { .bytes(), XContentType.JSON)); - assertFieldNames(set("a", "a.keyword", "b", "b.c", "_id", "_version", "_seq_no", "_primary_term", "_source"), doc); + assertFieldNames(Collections.emptySet(), doc); } public void testExplicitEnabled() throws Exception { String mapping = XContentFactory.jsonBuilder().startObject().startObject("type") .startObject("_field_names").field("enabled", true).endObject() + .startObject("properties").startObject("field").field("type", "keyword").field("doc_values", false).endObject().endObject() .endObject().endObject().string(); DocumentMapper docMapper = createIndex("test").mapperService().documentMapperParser().parse("type", new CompressedXContent(mapping)); FieldNamesFieldMapper fieldNamesMapper = docMapper.metadataMapper(FieldNamesFieldMapper.class); @@ -117,7 +107,7 @@ public void testExplicitEnabled() throws Exception { .bytes(), XContentType.JSON)); - assertFieldNames(set("field", "field.keyword", "_id", "_version", "_seq_no", "_primary_term", "_source"), doc); + assertFieldNames(set("field"), doc); } public void testDisabled() throws Exception { @@ -154,110 +144,4 @@ public void testMergingMappings() throws Exception { mapperEnabled = mapperService.merge("type", new CompressedXContent(enabledMapping), MapperService.MergeReason.MAPPING_UPDATE, false); assertTrue(mapperEnabled.metadataMapper(FieldNamesFieldMapper.class).fieldType().isEnabled()); } - - private static class DummyMetadataFieldMapper extends MetadataFieldMapper { - - public static class TypeParser implements MetadataFieldMapper.TypeParser { - - @Override - public Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { - return new MetadataFieldMapper.Builder("_dummy", FIELD_TYPE, FIELD_TYPE) { - @Override - public DummyMetadataFieldMapper build(BuilderContext context) { - return new DummyMetadataFieldMapper(context.indexSettings()); - } - }; - } - - @Override - public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { - final Settings indexSettings = context.mapperService().getIndexSettings().getSettings(); - return new DummyMetadataFieldMapper(indexSettings); - } - - } - - private static class DummyFieldType extends TermBasedFieldType { - - DummyFieldType() { - super(); - } - - private DummyFieldType(MappedFieldType other) { - super(other); - } - - @Override - public MappedFieldType clone() { - return new DummyFieldType(this); - } - - @Override - public String typeName() { - return "_dummy"; - } - - } - - private static final MappedFieldType FIELD_TYPE = new DummyFieldType(); - static { - FIELD_TYPE.setTokenized(false); - FIELD_TYPE.setIndexOptions(IndexOptions.DOCS); - FIELD_TYPE.setName("_dummy"); - FIELD_TYPE.freeze(); - } - - protected DummyMetadataFieldMapper(Settings indexSettings) { - super("_dummy", FIELD_TYPE, FIELD_TYPE, indexSettings); - } - - @Override - public void preParse(ParseContext context) throws IOException { - } - - @Override - public void postParse(ParseContext context) throws IOException { - context.doc().add(new Field("_dummy", "dummy", FIELD_TYPE)); - } - - @Override - protected void parseCreateField(ParseContext context, List fields) 
throws IOException { - } - - @Override - protected String contentType() { - return "_dummy"; - } - - } - - public void testSeesFieldsFromPlugins() throws IOException { - IndexService indexService = createIndex("test"); - IndicesModule indicesModule = newTestIndicesModule( - Collections.emptyMap(), - Collections.singletonMap("_dummy", new DummyMetadataFieldMapper.TypeParser()) - ); - final MapperRegistry mapperRegistry = indicesModule.getMapperRegistry(); - Supplier queryShardContext = () -> { - return indexService.newQueryShardContext(0, null, () -> { throw new UnsupportedOperationException(); }, null); - }; - MapperService mapperService = new MapperService(indexService.getIndexSettings(), indexService.getIndexAnalyzers(), - indexService.xContentRegistry(), indexService.similarityService(), mapperRegistry, queryShardContext); - DocumentMapperParser parser = new DocumentMapperParser(indexService.getIndexSettings(), mapperService, - indexService.getIndexAnalyzers(), indexService.xContentRegistry(), indexService.similarityService(), mapperRegistry, - queryShardContext); - String mapping = XContentFactory.jsonBuilder().startObject().startObject("type").endObject().endObject().string(); - DocumentMapper mapper = parser.parse("type", new CompressedXContent(mapping)); - ParsedDocument parsedDocument = mapper.parse(SourceToParse.source("index", "type", "id", new BytesArray("{}"), - XContentType.JSON)); - IndexableField[] fields = parsedDocument.rootDoc().getFields(FieldNamesFieldMapper.NAME); - boolean found = false; - for (IndexableField f : fields) { - if ("_dummy".equals(f.stringValue())) { - found = true; - break; - } - } - assertTrue("Could not find the dummy field among " + Arrays.toString(fields), found); - } } diff --git a/core/src/test/java/org/elasticsearch/index/mapper/FieldNamesFieldTypeTests.java b/core/src/test/java/org/elasticsearch/index/mapper/FieldNamesFieldTypeTests.java index b3c9da806fa76..945407fc39492 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/FieldNamesFieldTypeTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/FieldNamesFieldTypeTests.java @@ -21,10 +21,18 @@ import org.apache.lucene.index.Term; import org.apache.lucene.search.Query; import org.apache.lucene.search.TermQuery; -import org.elasticsearch.index.mapper.FieldNamesFieldMapper; -import org.elasticsearch.index.mapper.MappedFieldType; +import org.elasticsearch.Version; +import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.IndexSettings; +import org.elasticsearch.index.query.QueryShardContext; import org.junit.Before; +import java.util.Collections; + +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + public class FieldNamesFieldTypeTests extends FieldTypeTestCase { @Override protected MappedFieldType createDefaultFieldType() { @@ -43,13 +51,28 @@ public void modify(MappedFieldType ft) { } public void testTermQuery() { - FieldNamesFieldMapper.FieldNamesFieldType type = new FieldNamesFieldMapper.FieldNamesFieldType(); - type.setName(FieldNamesFieldMapper.CONTENT_TYPE); - type.setEnabled(true); - Query termQuery = type.termQuery("field_name", null); + + FieldNamesFieldMapper.FieldNamesFieldType fieldNamesFieldType = new FieldNamesFieldMapper.FieldNamesFieldType(); + fieldNamesFieldType.setName(FieldNamesFieldMapper.CONTENT_TYPE); + KeywordFieldMapper.KeywordFieldType fieldType = new KeywordFieldMapper.KeywordFieldType(); + fieldType.setName("field_name"); + + 
Settings settings = settings(Version.CURRENT).build(); + IndexSettings indexSettings = new IndexSettings( + new IndexMetaData.Builder("foo").settings(settings).numberOfShards(1).numberOfReplicas(0).build(), settings); + MapperService mapperService = mock(MapperService.class); + when(mapperService.fullName("_field_names")).thenReturn(fieldNamesFieldType); + when(mapperService.fullName("field_name")).thenReturn(fieldType); + when(mapperService.simpleMatchToIndexNames("field_name")).thenReturn(Collections.singletonList("field_name")); + + QueryShardContext queryShardContext = new QueryShardContext(0, + indexSettings, null, null, mapperService, null, null, null, null, null, null, () -> 0L, null); + fieldNamesFieldType.setEnabled(true); + Query termQuery = fieldNamesFieldType.termQuery("field_name", queryShardContext); assertEquals(new TermQuery(new Term(FieldNamesFieldMapper.CONTENT_TYPE, "field_name")), termQuery); - type.setEnabled(false); - IllegalStateException e = expectThrows(IllegalStateException.class, () -> type.termQuery("field_name", null)); + assertWarnings("terms query on the _field_names field is deprecated and will be removed, use exists query instead"); + fieldNamesFieldType.setEnabled(false); + IllegalStateException e = expectThrows(IllegalStateException.class, () -> fieldNamesFieldType.termQuery("field_name", null)); assertEquals("Cannot run [exists] queries if the [_field_names] field is disabled", e.getMessage()); } } diff --git a/core/src/test/java/org/elasticsearch/index/mapper/FieldTypeLookupTests.java b/core/src/test/java/org/elasticsearch/index/mapper/FieldTypeLookupTests.java index 4ae9b004413c3..fe885a46b87ef 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/FieldTypeLookupTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/FieldTypeLookupTests.java @@ -19,6 +19,11 @@ package org.elasticsearch.index.mapper; +import org.apache.lucene.index.Term; +import org.apache.lucene.search.DocValuesFieldExistsQuery; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermQuery; +import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.test.ESTestCase; import java.util.Arrays; @@ -223,5 +228,14 @@ public MappedFieldType clone() { public String typeName() { return "otherfaketype"; } + + @Override + public Query existsQuery(QueryShardContext context) { + if (hasDocValues()) { + return new DocValuesFieldExistsQuery(name()); + } else { + return new TermQuery(new Term(FieldNamesFieldMapper.NAME, name())); + } + } } } diff --git a/core/src/test/java/org/elasticsearch/index/mapper/IdFieldTypeTests.java b/core/src/test/java/org/elasticsearch/index/mapper/IdFieldTypeTests.java index ffd83475ab887..5be1923cbed3c 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/IdFieldTypeTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/IdFieldTypeTests.java @@ -42,7 +42,7 @@ public void testRangeQuery() { MappedFieldType ft = createDefaultFieldType(); ft.setName("_id"); IllegalArgumentException e = expectThrows(IllegalArgumentException.class, - () -> ft.rangeQuery(null, null, randomBoolean(), randomBoolean(), null)); + () -> ft.rangeQuery(null, null, randomBoolean(), randomBoolean(), null, null, null, null)); assertEquals("Field [_id] of type [_id] does not support range queries", e.getMessage()); } diff --git a/core/src/test/java/org/elasticsearch/index/mapper/IpFieldMapperTests.java b/core/src/test/java/org/elasticsearch/index/mapper/IpFieldMapperTests.java index 88db0b1b274fd..8632a936de0ef 100644 --- 
a/core/src/test/java/org/elasticsearch/index/mapper/IpFieldMapperTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/IpFieldMapperTests.java @@ -262,7 +262,6 @@ public void testSerializeDefaults() throws Exception { // a whole lot of bogus settings right now it picks up from calling super.doXContentBody... assertTrue(got, got.contains("\"null_value\":null")); assertTrue(got, got.contains("\"ignore_malformed\":false")); - assertTrue(got, got.contains("\"include_in_all\":false")); } public void testEmptyName() throws IOException { diff --git a/core/src/test/java/org/elasticsearch/index/mapper/IpFieldTypeTests.java b/core/src/test/java/org/elasticsearch/index/mapper/IpFieldTypeTests.java index 5c65aa5a09de7..1c0024b769ff4 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/IpFieldTypeTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/IpFieldTypeTests.java @@ -106,83 +106,84 @@ public void testRangeQuery() { InetAddressPoint.newRangeQuery("field", InetAddresses.forString("::"), InetAddressPoint.MAX_VALUE), - ft.rangeQuery(null, null, randomBoolean(), randomBoolean(), null)); + ft.rangeQuery(null, null, randomBoolean(), randomBoolean(), null, null, null, null)); assertEquals( InetAddressPoint.newRangeQuery("field", InetAddresses.forString("::"), InetAddresses.forString("192.168.2.0")), - ft.rangeQuery(null, "192.168.2.0", randomBoolean(), true, null)); + ft.rangeQuery(null, "192.168.2.0", randomBoolean(), true, null, null, null, null)); assertEquals( InetAddressPoint.newRangeQuery("field", InetAddresses.forString("::"), InetAddresses.forString("192.168.1.255")), - ft.rangeQuery(null, "192.168.2.0", randomBoolean(), false, null)); + ft.rangeQuery(null, "192.168.2.0", randomBoolean(), false, null, null, null, null)); assertEquals( InetAddressPoint.newRangeQuery("field", InetAddresses.forString("2001:db8::"), InetAddressPoint.MAX_VALUE), - ft.rangeQuery("2001:db8::", null, true, randomBoolean(), null)); + ft.rangeQuery("2001:db8::", null, true, randomBoolean(), null, null, null, null)); assertEquals( InetAddressPoint.newRangeQuery("field", InetAddresses.forString("2001:db8::1"), InetAddressPoint.MAX_VALUE), - ft.rangeQuery("2001:db8::", null, false, randomBoolean(), null)); + ft.rangeQuery("2001:db8::", null, false, randomBoolean(), null, null, null, null)); assertEquals( InetAddressPoint.newRangeQuery("field", InetAddresses.forString("2001:db8::"), InetAddresses.forString("2001:db8::ffff")), - ft.rangeQuery("2001:db8::", "2001:db8::ffff", true, true, null)); + ft.rangeQuery("2001:db8::", "2001:db8::ffff", true, true, null, null, null, null)); assertEquals( InetAddressPoint.newRangeQuery("field", InetAddresses.forString("2001:db8::1"), InetAddresses.forString("2001:db8::fffe")), - ft.rangeQuery("2001:db8::", "2001:db8::ffff", false, false, null)); + ft.rangeQuery("2001:db8::", "2001:db8::ffff", false, false, null, null, null, null)); assertEquals( InetAddressPoint.newRangeQuery("field", InetAddresses.forString("2001:db8::2"), InetAddresses.forString("2001:db8::")), // same lo/hi values but inclusive=false so this won't match anything - ft.rangeQuery("2001:db8::1", "2001:db8::1", false, false, null)); + ft.rangeQuery("2001:db8::1", "2001:db8::1", false, false, null, null, null, null)); // Upper bound is the min IP and is not inclusive assertEquals(new MatchNoDocsQuery(), - ft.rangeQuery("::", "::", true, false, null)); + ft.rangeQuery("::", "::", true, false, null, null, null, null)); // Lower bound is the max IP and is not inclusive assertEquals(new 
MatchNoDocsQuery(), - ft.rangeQuery("ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff", "ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff", false, true, null)); + ft.rangeQuery("ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff", "ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff", + false, true, null, null, null, null)); assertEquals( InetAddressPoint.newRangeQuery("field", InetAddresses.forString("::"), InetAddresses.forString("::fffe:ffff:ffff")), // same lo/hi values but inclusive=false so this won't match anything - ft.rangeQuery("::", "0.0.0.0", true, false, null)); + ft.rangeQuery("::", "0.0.0.0", true, false, null, null, null, null)); assertEquals( InetAddressPoint.newRangeQuery("field", InetAddresses.forString("::1:0:0:0"), InetAddressPoint.MAX_VALUE), // same lo/hi values but inclusive=false so this won't match anything - ft.rangeQuery("255.255.255.255", "ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff", false, true, null)); + ft.rangeQuery("255.255.255.255", "ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff", false, true, null, null, null, null)); assertEquals( // lower bound is ipv4, upper bound is ipv6 InetAddressPoint.newRangeQuery("field", InetAddresses.forString("192.168.1.7"), InetAddresses.forString("2001:db8::")), - ft.rangeQuery("::ffff:c0a8:107", "2001:db8::", true, true, null)); + ft.rangeQuery("::ffff:c0a8:107", "2001:db8::", true, true, null, null, null, null)); ft.setIndexOptions(IndexOptions.NONE); IllegalArgumentException e = expectThrows(IllegalArgumentException.class, - () -> ft.rangeQuery("::1", "2001::", true, true, null)); + () -> ft.rangeQuery("::1", "2001::", true, true, null, null, null, null)); assertEquals("Cannot search on field [field] since it is not indexed.", e.getMessage()); } } diff --git a/core/src/test/java/org/elasticsearch/index/mapper/KeywordFieldMapperTests.java b/core/src/test/java/org/elasticsearch/index/mapper/KeywordFieldMapperTests.java index 3ecef3aa0f514..e67b25b051b4e 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/KeywordFieldMapperTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/KeywordFieldMapperTests.java @@ -70,9 +70,9 @@ protected Collection> getPlugins() { public void setup() { indexService = createIndex("test", Settings.builder() .put("index.analysis.normalizer.my_lowercase.type", "custom") - .putArray("index.analysis.normalizer.my_lowercase.filter", "lowercase") + .putList("index.analysis.normalizer.my_lowercase.filter", "lowercase") .put("index.analysis.normalizer.my_other_lowercase.type", "custom") - .putArray("index.analysis.normalizer.my_other_lowercase.filter", "mock_other_lowercase").build()); + .putList("index.analysis.normalizer.my_other_lowercase.filter", "mock_other_lowercase").build()); parser = indexService.mapperService().documentMapperParser(); } diff --git a/core/src/test/java/org/elasticsearch/index/mapper/MapperServiceTests.java b/core/src/test/java/org/elasticsearch/index/mapper/MapperServiceTests.java index 74a98a9930857..3ab9ba8406b7f 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/MapperServiceTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/MapperServiceTests.java @@ -245,21 +245,6 @@ public void testOtherDocumentMappersOnlyUpdatedWhenChangingFieldType() throws IO assertNotSame(indexService.mapperService().documentMapper("type1"), documentMapper); } - public void testAllEnabled() throws Exception { - IndexService indexService = createIndex("test"); - assertFalse(indexService.mapperService().allEnabled()); - - CompressedXContent enabledAll = new 
CompressedXContent(XContentFactory.jsonBuilder().startObject() - .startObject("_all") - .field("enabled", true) - .endObject().endObject().bytes()); - - Exception e = expectThrows(MapperParsingException.class, - () -> indexService.mapperService().merge(MapperService.DEFAULT_MAPPING, enabledAll, - MergeReason.MAPPING_UPDATE, random().nextBoolean())); - assertThat(e.getMessage(), containsString("[_all] is disabled in 6.0")); - } - public void testPartitionedConstraints() { // partitioned index must have routing IllegalArgumentException noRoutingException = expectThrows(IllegalArgumentException.class, () -> { diff --git a/core/src/test/java/org/elasticsearch/index/mapper/MapperTests.java b/core/src/test/java/org/elasticsearch/index/mapper/MapperTests.java index 72b1c95d8bd02..ebebf476d19e8 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/MapperTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/MapperTests.java @@ -45,37 +45,4 @@ public void testBuilderContextWithIndexSettingsAsNull() { NullPointerException e = expectThrows(NullPointerException.class, () -> new Mapper.BuilderContext(null, new ContentPath(1))); } - public void testExceptionForIncludeInAll() throws IOException { - XContentBuilder mapping = createMappingWithIncludeInAll(); - Settings settings = Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build(); - - final MapperService currentMapperService = MapperTestUtils.newMapperService(xContentRegistry(), createTempDir(), settings, "test"); - Exception e = expectThrows(MapperParsingException.class, () -> - currentMapperService.parse("type", new CompressedXContent(mapping.string()), true)); - assertEquals("[include_in_all] is not allowed for indices created on or after version 6.0.0 as [_all] is deprecated. " + - "As a replacement, you can use an [copy_to] on mapping fields to create your own catch all field.", - e.getMessage()); - - settings = Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_5_3_0).build(); - - // Create the mapping service with an older index creation version - final MapperService oldMapperService = MapperTestUtils.newMapperService(xContentRegistry(), createTempDir(), settings, "test"); - // Should not throw an exception now - oldMapperService.parse("type", new CompressedXContent(mapping.string()), true); - } - - private static XContentBuilder createMappingWithIncludeInAll() throws IOException { - return jsonBuilder() - .startObject() - .startObject("type") - .startObject("properties") - .startObject("a") - .field("type", "text") - .field("include_in_all", randomBoolean()) - .endObject() - .endObject() - .endObject() - .endObject(); - } - } diff --git a/core/src/test/java/org/elasticsearch/index/mapper/MultiFieldIncludeInAllMapperTests.java b/core/src/test/java/org/elasticsearch/index/mapper/MultiFieldIncludeInAllMapperTests.java deleted file mode 100644 index c4195b776a6d9..0000000000000 --- a/core/src/test/java/org/elasticsearch/index/mapper/MultiFieldIncludeInAllMapperTests.java +++ /dev/null @@ -1,66 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ -package org.elasticsearch.index.mapper; - -import org.elasticsearch.common.compress.CompressedXContent; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.index.MapperTestUtils; -import org.elasticsearch.test.ESTestCase; - -import java.io.IOException; - -import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder; - - -public class MultiFieldIncludeInAllMapperTests extends ESTestCase { - public void testExceptionForIncludeInAllInMultiFields() throws IOException { - XContentBuilder mapping = createMappingWithIncludeInAllInMultiField(); - - // first check that for newer versions we throw exception if include_in_all is found withing multi field - MapperService mapperService = MapperTestUtils.newMapperService(xContentRegistry(), createTempDir(), Settings.EMPTY, "test"); - Exception e = expectThrows(MapperParsingException.class, () -> - mapperService.parse("type", new CompressedXContent(mapping.string()), true)); - assertEquals("include_in_all in multi fields is not allowed. Found the include_in_all in field [c] which is within a multi field.", - e.getMessage()); - } - - private static XContentBuilder createMappingWithIncludeInAllInMultiField() throws IOException { - XContentBuilder mapping = jsonBuilder(); - mapping.startObject() - .startObject("type") - .startObject("properties") - .startObject("a") - .field("type", "text") - .endObject() - .startObject("b") - .field("type", "text") - .startObject("fields") - .startObject("c") - .field("type", "text") - .field("include_in_all", false) - .endObject() - .endObject() - .endObject() - .endObject() - .endObject() - .endObject(); - return mapping; - } -} diff --git a/core/src/test/java/org/elasticsearch/index/mapper/MultiFieldTests.java b/core/src/test/java/org/elasticsearch/index/mapper/MultiFieldTests.java index eb1148e9f4598..adc84277a6ed6 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/MultiFieldTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/MultiFieldTests.java @@ -35,7 +35,9 @@ import java.io.IOException; import java.util.Arrays; +import java.util.HashSet; import java.util.Map; +import java.util.Set; import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder; import static org.elasticsearch.test.StreamsUtils.copyToBytesFromClasspath; @@ -106,14 +108,6 @@ private void testMultiField(String mapping) throws Exception { assertThat(docMapper.mappers().getMapper("name.test1").fieldType().tokenized(), equalTo(true)); assertThat(docMapper.mappers().getMapper("name.test1").fieldType().eagerGlobalOrdinals(), equalTo(true)); - assertThat(docMapper.mappers().getMapper("name.test2"), notNullValue()); - assertThat(docMapper.mappers().getMapper("name.test2"), instanceOf(TokenCountFieldMapper.class)); - assertNotSame(IndexOptions.NONE, docMapper.mappers().getMapper("name.test2").fieldType().indexOptions()); - assertThat(docMapper.mappers().getMapper("name.test2").fieldType().stored(), equalTo(true)); - assertThat(docMapper.mappers().getMapper("name.test2").fieldType().tokenized(), 
equalTo(false)); - assertThat(((TokenCountFieldMapper) docMapper.mappers().getMapper("name.test2")).analyzer(), equalTo("simple")); - assertThat(((TokenCountFieldMapper) docMapper.mappers().getMapper("name.test2")).analyzer(), equalTo("simple")); - assertThat(docMapper.mappers().getMapper("object1.multi1"), notNullValue()); assertThat(docMapper.mappers().getMapper("object1.multi1"), instanceOf(DateFieldMapper.class)); assertThat(docMapper.mappers().getMapper("object1.multi1.string"), notNullValue()); @@ -163,8 +157,9 @@ public void testBuildThenParse() throws Exception { // can to unnecessary re-syncing of the mappings between the local instance and cluster state public void testMultiFieldsInConsistentOrder() throws Exception { String[] multiFieldNames = new String[randomIntBetween(2, 10)]; + Set seenFields = new HashSet<>(); for (int i = 0; i < multiFieldNames.length; i++) { - multiFieldNames[i] = randomAlphaOfLength(4); + multiFieldNames[i] = randomValueOtherThanMany(s -> !seenFields.add(s), () -> randomAlphaOfLength(4)); } XContentBuilder builder = jsonBuilder().startObject().startObject("type").startObject("properties") diff --git a/core/src/test/java/org/elasticsearch/index/mapper/MultiFieldsIntegrationIT.java b/core/src/test/java/org/elasticsearch/index/mapper/MultiFieldsIntegrationIT.java index ae922e6a731f8..8dbddcc5daa54 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/MultiFieldsIntegrationIT.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/MultiFieldsIntegrationIT.java @@ -130,42 +130,6 @@ public void testGeoPointMultiField() throws Exception { assertThat(countResponse.getHits().getTotalHits(), equalTo(1L)); } - public void testTokenCountMultiField() throws Exception { - assertAcked( - client().admin().indices().prepareCreate("my-index") - .addMapping("my-type", XContentFactory.jsonBuilder().startObject().startObject("my-type") - .startObject("properties") - .startObject("a") - .field("type", "token_count") - .field("analyzer", "simple") - .startObject("fields") - .startObject("b") - .field("type", "keyword") - .endObject() - .endObject() - .endObject() - .endObject() - .endObject().endObject()) - ); - - GetMappingsResponse getMappingsResponse = client().admin().indices().prepareGetMappings("my-index").get(); - MappingMetaData mappingMetaData = getMappingsResponse.mappings().get("my-index").get("my-type"); - assertThat(mappingMetaData, not(nullValue())); - Map mappingSource = mappingMetaData.sourceAsMap(); - Map aField = ((Map) XContentMapValues.extractValue("properties.a", mappingSource)); - assertThat(aField.size(), equalTo(3)); - assertThat(aField.get("type").toString(), equalTo("token_count")); - assertThat(aField.get("fields"), notNullValue()); - - Map bField = ((Map) XContentMapValues.extractValue("properties.a.fields.b", mappingSource)); - assertThat(bField.size(), equalTo(1)); - assertThat(bField.get("type").toString(), equalTo("keyword")); - - client().prepareIndex("my-index", "my-type", "1").setSource("a", "my tokens").setRefreshPolicy(IMMEDIATE).get(); - SearchResponse countResponse = client().prepareSearch("my-index").setSize(0).setQuery(matchQuery("a.b", "my tokens")).get(); - assertThat(countResponse.getHits().getTotalHits(), equalTo(1L)); - } - public void testCompletionMultiField() throws Exception { assertAcked( client().admin().indices().prepareCreate("my-index") diff --git a/core/src/test/java/org/elasticsearch/index/mapper/NestedObjectMapperTests.java b/core/src/test/java/org/elasticsearch/index/mapper/NestedObjectMapperTests.java 
index 157033d414884..39d4de2359e78 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/NestedObjectMapperTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/NestedObjectMapperTests.java @@ -19,6 +19,9 @@ package org.elasticsearch.index.mapper; +import java.util.HashMap; +import java.util.HashSet; +import org.apache.lucene.index.IndexableField; import org.elasticsearch.Version; import org.elasticsearch.common.compress.CompressedXContent; import org.elasticsearch.common.settings.Settings; @@ -36,6 +39,7 @@ import java.util.Collections; import java.util.function.Function; +import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder; import static org.hamcrest.Matchers.containsString; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.nullValue; @@ -332,6 +336,67 @@ public void testMultiRootAndNested1() throws Exception { assertThat(doc.docs().get(6).getFields("nested1.nested2.field2").length, equalTo(4)); } + /** + * Checks that multiple levels of nested includes where a node is both directly and transitively + * included in root by {@code include_in_root} and a chain of {@code include_in_parent} does not + * lead to duplicate fields on the root document. + */ + public void testMultipleLevelsIncludeRoot1() throws Exception { + String mapping = XContentFactory.jsonBuilder() + .startObject().startObject("type").startObject("properties") + .startObject("nested1").field("type", "nested").field("include_in_root", true).field("include_in_parent", true).startObject("properties") + .startObject("nested2").field("type", "nested").field("include_in_root", true).field("include_in_parent", true) + .endObject().endObject().endObject() + .endObject().endObject().endObject().string(); + + DocumentMapper docMapper = createIndex("test").mapperService().documentMapperParser().parse("type", new CompressedXContent(mapping)); + + ParsedDocument doc = docMapper.parse(SourceToParse.source("test", "type", "1", XContentFactory.jsonBuilder() + .startObject().startArray("nested1") + .startObject().startArray("nested2").startObject().field("foo", "bar") + .endObject().endArray().endObject().endArray() + .endObject() + .bytes(), + XContentType.JSON)); + + final Collection fields = doc.rootDoc().getFields(); + assertThat(fields.size(), equalTo(new HashSet<>(fields).size())); + } + + /** + * Same as {@link NestedObjectMapperTests#testMultipleLevelsIncludeRoot1()} but tests for the + * case where the transitive {@code include_in_parent} and redundant {@code include_in_root} + * happen on a chain of nodes that starts from a parent node that is not directly connected to + * root by a chain of {@code include_in_parent}, i.e. that has {@code include_in_parent} set to + * {@code false} and {@code include_in_root} set to {@code true}. 
+ */ + public void testMultipleLevelsIncludeRoot2() throws Exception { + String mapping = XContentFactory.jsonBuilder() + .startObject().startObject("type").startObject("properties") + .startObject("nested1").field("type", "nested") + .field("include_in_root", true).field("include_in_parent", true).startObject("properties") + .startObject("nested2").field("type", "nested") + .field("include_in_root", true).field("include_in_parent", false).startObject("properties") + .startObject("nested3").field("type", "nested") + .field("include_in_root", true).field("include_in_parent", true) + .endObject().endObject().endObject().endObject().endObject() + .endObject().endObject().endObject().string(); + + DocumentMapper docMapper = createIndex("test").mapperService().documentMapperParser().parse("type", new CompressedXContent(mapping)); + + ParsedDocument doc = docMapper.parse(SourceToParse.source("test", "type", "1", XContentFactory.jsonBuilder() + .startObject().startArray("nested1") + .startObject().startArray("nested2") + .startObject().startArray("nested3").startObject().field("foo", "bar") + .endObject().endArray().endObject().endArray().endObject().endArray() + .endObject() + .bytes(), + XContentType.JSON)); + + final Collection fields = doc.rootDoc().getFields(); + assertThat(fields.size(), equalTo(new HashSet<>(fields).size())); + } + public void testNestedArrayStrict() throws Exception { String mapping = XContentFactory.jsonBuilder().startObject().startObject("type").startObject("properties") .startObject("nested1").field("type", "nested").field("dynamic", "strict").startObject("properties") @@ -428,4 +493,35 @@ public void testLimitOfNestedFieldsWithMultiTypePerIndex() throws Exception { createIndex("test5", Settings.builder().put(MapperService.INDEX_MAPPING_NESTED_FIELDS_LIMIT_SETTING.getKey(), 0).build()) .mapperService().merge("type", new CompressedXContent(mapping.apply("type")), MergeReason.MAPPING_RECOVERY, false); } + + public void testParentObjectMapperAreNested() throws Exception { + MapperService mapperService = createIndex("index1", Settings.EMPTY, "doc", jsonBuilder().startObject() + .startObject("properties") + .startObject("comments") + .field("type", "nested") + .startObject("properties") + .startObject("messages") + .field("type", "nested").endObject() + .endObject() + .endObject() + .endObject() + .endObject()).mapperService(); + ObjectMapper objectMapper = mapperService.getObjectMapper("comments.messages"); + assertTrue(objectMapper.parentObjectMapperAreNested(mapperService)); + + mapperService = createIndex("index2", Settings.EMPTY, "doc", jsonBuilder().startObject() + .startObject("properties") + .startObject("comments") + .field("type", "object") + .startObject("properties") + .startObject("messages") + .field("type", "nested").endObject() + .endObject() + .endObject() + .endObject() + .endObject()).mapperService(); + objectMapper = mapperService.getObjectMapper("comments.messages"); + assertFalse(objectMapper.parentObjectMapperAreNested(mapperService)); + } + } diff --git a/core/src/test/java/org/elasticsearch/index/mapper/NumberFieldMapperTests.java b/core/src/test/java/org/elasticsearch/index/mapper/NumberFieldMapperTests.java index 3ace8c0451320..afbf63a23bd32 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/NumberFieldMapperTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/NumberFieldMapperTests.java @@ -28,10 +28,12 @@ import org.elasticsearch.index.mapper.NumberFieldMapper.NumberType; import 
org.elasticsearch.index.mapper.NumberFieldTypeTests.OutOfRangeSpec; +import java.io.ByteArrayInputStream; import java.io.IOException; -import java.util.List; +import java.math.BigInteger; import java.util.Arrays; import java.util.HashSet; +import java.util.List; import static org.hamcrest.Matchers.containsString; @@ -39,7 +41,7 @@ public class NumberFieldMapperTests extends AbstractNumericFieldMapperTestCase { @Override protected void setTypeList() { - TYPES = new HashSet<>(Arrays.asList("byte", "short", "integer", "long", "float", "double")); + TYPES = new HashSet<>(Arrays.asList("byte", "short", "integer", "long", "float", "double", "half_float")); WHOLE_TYPES = new HashSet<>(Arrays.asList("byte", "short", "integer", "long")); } @@ -234,6 +236,7 @@ private void doTestIgnoreMalformed(String type) throws IOException { .bytes(), XContentType.JSON)); MapperParsingException e = expectThrows(MapperParsingException.class, runnable); + assertThat(e.getCause().getMessage(), containsString("For input string: \"a\"")); mapping = XContentFactory.jsonBuilder().startObject().startObject("type") @@ -255,7 +258,7 @@ private void doTestIgnoreMalformed(String type) throws IOException { public void testRejectNorms() throws IOException { // not supported as of 5.0 - for (String type : Arrays.asList("byte", "short", "integer", "long", "float", "double")) { + for (String type : TYPES) { DocumentMapperParser parser = createIndex("index-" + type).mapperService().documentMapperParser(); String mapping = XContentFactory.jsonBuilder().startObject().startObject("type") .startObject("properties") @@ -270,6 +273,25 @@ public void testRejectNorms() throws IOException { } } + /** + * `index_options` was deprecated and is rejected as of 7.0 + */ + public void testRejectIndexOptions() throws IOException { + for (String type : TYPES) { + DocumentMapperParser parser = createIndex("index-" + type).mapperService().documentMapperParser(); + String mapping = XContentFactory.jsonBuilder().startObject().startObject("type") + .startObject("properties") + .startObject("foo") + .field("type", type) + .field("index_options", randomFrom(new String[] { "docs", "freqs", "positions", "offsets" })) + .endObject() + .endObject().endObject().endObject().string(); + MapperParsingException e = expectThrows(MapperParsingException.class, + () -> parser.parse("type", new CompressedXContent(mapping))); + assertThat(e.getMessage(), containsString("index_options not allowed in field [foo] of type [" + type +"]")); + } + } + @Override protected void doTestNullValue(String type) throws IOException { String mapping = XContentFactory.jsonBuilder().startObject() @@ -293,7 +315,7 @@ protected void doTestNullValue(String type) throws IOException { assertArrayEquals(new IndexableField[0], doc.rootDoc().getFields("field")); Object missing; - if (Arrays.asList("float", "double").contains(type)) { + if (Arrays.asList("float", "double", "half_float").contains(type)) { missing = 123d; } else { missing = 123L; @@ -345,6 +367,26 @@ public void testEmptyName() throws IOException { public void testOutOfRangeValues() throws IOException { final List> inputs = Arrays.asList( + OutOfRangeSpec.of(NumberType.BYTE, "128", "is out of range for a byte"), + OutOfRangeSpec.of(NumberType.SHORT, "32768", "is out of range for a short"), + OutOfRangeSpec.of(NumberType.INTEGER, "2147483648", "is out of range for an integer"), + OutOfRangeSpec.of(NumberType.LONG, "9223372036854775808", "out of range for a long"), + + OutOfRangeSpec.of(NumberType.BYTE, "-129", "is out of range for 
a byte"), + OutOfRangeSpec.of(NumberType.SHORT, "-32769", "is out of range for a short"), + OutOfRangeSpec.of(NumberType.INTEGER, "-2147483649", "is out of range for an integer"), + OutOfRangeSpec.of(NumberType.LONG, "-9223372036854775809", "out of range for a long"), + + OutOfRangeSpec.of(NumberType.BYTE, 128, "is out of range for a byte"), + OutOfRangeSpec.of(NumberType.SHORT, 32768, "out of range of Java short"), + OutOfRangeSpec.of(NumberType.INTEGER, 2147483648L, " out of range of int"), + OutOfRangeSpec.of(NumberType.LONG, new BigInteger("9223372036854775808"), "out of range of long"), + + OutOfRangeSpec.of(NumberType.BYTE, -129, "is out of range for a byte"), + OutOfRangeSpec.of(NumberType.SHORT, -32769, "out of range of Java short"), + OutOfRangeSpec.of(NumberType.INTEGER, -2147483649L, " out of range of int"), + OutOfRangeSpec.of(NumberType.LONG, new BigInteger("-9223372036854775809"), "out of range of long"), + OutOfRangeSpec.of(NumberType.HALF_FLOAT, "65520", "[half_float] supports only finite values"), OutOfRangeSpec.of(NumberType.FLOAT, "3.4028235E39", "[float] supports only finite values"), OutOfRangeSpec.of(NumberType.DOUBLE, "1.7976931348623157E309", "[double] supports only finite values"), @@ -398,6 +440,13 @@ private DocumentMapper createDocumentMapper(NumberType type) throws IOException } private BytesReference createIndexRequest(Object value) throws IOException { - return XContentFactory.jsonBuilder().startObject().field("field", value).endObject().bytes(); + if (value instanceof BigInteger) { + return XContentFactory.jsonBuilder() + .startObject() + .rawField("field", new ByteArrayInputStream(value.toString().getBytes("UTF-8")), XContentType.JSON) + .endObject().bytes(); + } else { + return XContentFactory.jsonBuilder().startObject().field("field", value).endObject().bytes(); + } } } diff --git a/core/src/test/java/org/elasticsearch/index/mapper/NumberFieldTypeTests.java b/core/src/test/java/org/elasticsearch/index/mapper/NumberFieldTypeTests.java index 13e5e35df685c..6d5ca1add74d5 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/NumberFieldTypeTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/NumberFieldTypeTests.java @@ -46,6 +46,7 @@ import java.io.IOException; import java.math.BigDecimal; +import java.math.BigInteger; import java.nio.charset.StandardCharsets; import java.util.Arrays; import java.util.List; @@ -136,76 +137,116 @@ public void testRangeQueryWithNegativeBounds() { MappedFieldType ftInt = new NumberFieldMapper.NumberFieldType(NumberType.INTEGER); ftInt.setName("field"); ftInt.setIndexOptions(IndexOptions.DOCS); - assertEquals(ftInt.rangeQuery(-3, -3, true, true, null), ftInt.rangeQuery(-3.5, -2.5, true, true, null)); - assertEquals(ftInt.rangeQuery(-3, -3, true, true, null), ftInt.rangeQuery(-3.5, -2.5, false, false, null)); - assertEquals(ftInt.rangeQuery(0, 0, true, true, null), ftInt.rangeQuery(-0.5, 0.5, true, true, null)); - assertEquals(ftInt.rangeQuery(0, 0, true, true, null), ftInt.rangeQuery(-0.5, 0.5, false, false, null)); - assertEquals(ftInt.rangeQuery(1, 2, true, true, null), ftInt.rangeQuery(0.5, 2.5, true, true, null)); - assertEquals(ftInt.rangeQuery(1, 2, true, true, null), ftInt.rangeQuery(0.5, 2.5, false, false, null)); - assertEquals(ftInt.rangeQuery(0, 2, true, true, null), ftInt.rangeQuery(-0.5, 2.5, true, true, null)); - assertEquals(ftInt.rangeQuery(0, 2, true, true, null), ftInt.rangeQuery(-0.5, 2.5, false, false, null)); - - assertEquals(ftInt.rangeQuery(-2, 0, true, true, null), 
ftInt.rangeQuery(-2.5, 0.5, true, true, null)); - assertEquals(ftInt.rangeQuery(-2, 0, true, true, null), ftInt.rangeQuery(-2.5, 0.5, false, false, null)); - assertEquals(ftInt.rangeQuery(-2, -1, true, true, null), ftInt.rangeQuery(-2.5, -0.5, true, true, null)); - assertEquals(ftInt.rangeQuery(-2, -1, true, true, null), ftInt.rangeQuery(-2.5, -0.5, false, false, null)); + assertEquals(ftInt.rangeQuery(-3, -3, true, true, null, null, null, null), + ftInt.rangeQuery(-3.5, -2.5, true, true, null, null, null, null)); + assertEquals(ftInt.rangeQuery(-3, -3, true, true, null, null, null, null), + ftInt.rangeQuery(-3.5, -2.5, false, false, null, null, null, null)); + assertEquals(ftInt.rangeQuery(0, 0, true, true, null, null, null, null), + ftInt.rangeQuery(-0.5, 0.5, true, true, null, null, null, null)); + assertEquals(ftInt.rangeQuery(0, 0, true, true, null, null, null, null), + ftInt.rangeQuery(-0.5, 0.5, false, false, null, null, null, null)); + assertEquals(ftInt.rangeQuery(1, 2, true, true, null, null, null, null), + ftInt.rangeQuery(0.5, 2.5, true, true, null, null, null, null)); + assertEquals(ftInt.rangeQuery(1, 2, true, true, null, null, null, null), + ftInt.rangeQuery(0.5, 2.5, false, false, null, null, null, null)); + assertEquals(ftInt.rangeQuery(0, 2, true, true, null, null, null, null), + ftInt.rangeQuery(-0.5, 2.5, true, true, null, null, null, null)); + assertEquals(ftInt.rangeQuery(0, 2, true, true, null, null, null, null), + ftInt.rangeQuery(-0.5, 2.5, false, false, null, null, null, null)); + + assertEquals(ftInt.rangeQuery(-2, 0, true, true, null, null, null, null), + ftInt.rangeQuery(-2.5, 0.5, true, true, null, null, null, null)); + assertEquals(ftInt.rangeQuery(-2, 0, true, true, null, null, null, null), + ftInt.rangeQuery(-2.5, 0.5, false, false, null, null, null, null)); + assertEquals(ftInt.rangeQuery(-2, -1, true, true, null, null, null, null), + ftInt.rangeQuery(-2.5, -0.5, true, true, null, null, null, null)); + assertEquals(ftInt.rangeQuery(-2, -1, true, true, null, null, null, null), + ftInt.rangeQuery(-2.5, -0.5, false, false, null, null, null, null)); MappedFieldType ftLong = new NumberFieldMapper.NumberFieldType(NumberType.LONG); ftLong.setName("field"); ftLong.setIndexOptions(IndexOptions.DOCS); - assertEquals(ftLong.rangeQuery(-3, -3, true, true, null), ftLong.rangeQuery(-3.5, -2.5, true, true, null)); - assertEquals(ftLong.rangeQuery(-3, -3, true, true, null), ftLong.rangeQuery(-3.5, -2.5, false, false, null)); - assertEquals(ftLong.rangeQuery(0, 0, true, true, null), ftLong.rangeQuery(-0.5, 0.5, true, true, null)); - assertEquals(ftLong.rangeQuery(0, 0, true, true, null), ftLong.rangeQuery(-0.5, 0.5, false, false, null)); - assertEquals(ftLong.rangeQuery(1, 2, true, true, null), ftLong.rangeQuery(0.5, 2.5, true, true, null)); - assertEquals(ftLong.rangeQuery(1, 2, true, true, null), ftLong.rangeQuery(0.5, 2.5, false, false, null)); - assertEquals(ftLong.rangeQuery(0, 2, true, true, null), ftLong.rangeQuery(-0.5, 2.5, true, true, null)); - assertEquals(ftLong.rangeQuery(0, 2, true, true, null), ftLong.rangeQuery(-0.5, 2.5, false, false, null)); - - assertEquals(ftLong.rangeQuery(-2, 0, true, true, null), ftLong.rangeQuery(-2.5, 0.5, true, true, null)); - assertEquals(ftLong.rangeQuery(-2, 0, true, true, null), ftLong.rangeQuery(-2.5, 0.5, false, false, null)); - assertEquals(ftLong.rangeQuery(-2, -1, true, true, null), ftLong.rangeQuery(-2.5, -0.5, true, true, null)); - assertEquals(ftLong.rangeQuery(-2, -1, true, true, null), ftLong.rangeQuery(-2.5, -0.5, 
false, false, null)); + assertEquals(ftLong.rangeQuery(-3, -3, true, true, null, null, null, null), + ftLong.rangeQuery(-3.5, -2.5, true, true, null, null, null, null)); + assertEquals(ftLong.rangeQuery(-3, -3, true, true, null, null, null, null), + ftLong.rangeQuery(-3.5, -2.5, false, false, null, null, null, null)); + assertEquals(ftLong.rangeQuery(0, 0, true, true, null, null, null, null), + ftLong.rangeQuery(-0.5, 0.5, true, true, null, null, null, null)); + assertEquals(ftLong.rangeQuery(0, 0, true, true, null, null, null, null), + ftLong.rangeQuery(-0.5, 0.5, false, false, null, null, null, null)); + assertEquals(ftLong.rangeQuery(1, 2, true, true, null, null, null, null), + ftLong.rangeQuery(0.5, 2.5, true, true, null, null, null, null)); + assertEquals(ftLong.rangeQuery(1, 2, true, true, null, null, null, null), + ftLong.rangeQuery(0.5, 2.5, false, false, null, null, null, null)); + assertEquals(ftLong.rangeQuery(0, 2, true, true, null, null, null, null), + ftLong.rangeQuery(-0.5, 2.5, true, true, null, null, null, null)); + assertEquals(ftLong.rangeQuery(0, 2, true, true, null, null, null, null), + ftLong.rangeQuery(-0.5, 2.5, false, false, null, null, null, null)); + + assertEquals(ftLong.rangeQuery(-2, 0, true, true, null, null, null, null), + ftLong.rangeQuery(-2.5, 0.5, true, true, null, null, null, null)); + assertEquals(ftLong.rangeQuery(-2, 0, true, true, null, null, null, null), + ftLong.rangeQuery(-2.5, 0.5, false, false, null, null, null, null)); + assertEquals(ftLong.rangeQuery(-2, -1, true, true, null, null, null, null), + ftLong.rangeQuery(-2.5, -0.5, true, true, null, null, null, null)); + assertEquals(ftLong.rangeQuery(-2, -1, true, true, null, null, null, null), + ftLong.rangeQuery(-2.5, -0.5, false, false, null, null, null, null)); } public void testByteRangeQueryWithDecimalParts() { MappedFieldType ft = new NumberFieldMapper.NumberFieldType(NumberType.BYTE); ft.setName("field"); ft.setIndexOptions(IndexOptions.DOCS); - assertEquals(ft.rangeQuery(2, 10, true, true, null), ft.rangeQuery(1.1, 10, true, true, null)); - assertEquals(ft.rangeQuery(2, 10, true, true, null), ft.rangeQuery(1.1, 10, false, true, null)); - assertEquals(ft.rangeQuery(1, 10, true, true, null), ft.rangeQuery(1, 10.1, true, true, null)); - assertEquals(ft.rangeQuery(1, 10, true, true, null), ft.rangeQuery(1, 10.1, true, false, null)); + assertEquals(ft.rangeQuery(2, 10, true, true, null, null, null, null), + ft.rangeQuery(1.1, 10, true, true, null, null, null, null)); + assertEquals(ft.rangeQuery(2, 10, true, true, null, null, null, null), + ft.rangeQuery(1.1, 10, false, true, null, null, null, null)); + assertEquals(ft.rangeQuery(1, 10, true, true, null, null, null, null), + ft.rangeQuery(1, 10.1, true, true, null, null, null, null)); + assertEquals(ft.rangeQuery(1, 10, true, true, null, null, null, null), + ft.rangeQuery(1, 10.1, true, false, null, null, null, null)); } public void testShortRangeQueryWithDecimalParts() { MappedFieldType ft = new NumberFieldMapper.NumberFieldType(NumberType.SHORT); ft.setName("field"); ft.setIndexOptions(IndexOptions.DOCS); - assertEquals(ft.rangeQuery(2, 10, true, true, null), ft.rangeQuery(1.1, 10, true, true, null)); - assertEquals(ft.rangeQuery(2, 10, true, true, null), ft.rangeQuery(1.1, 10, false, true, null)); - assertEquals(ft.rangeQuery(1, 10, true, true, null), ft.rangeQuery(1, 10.1, true, true, null)); - assertEquals(ft.rangeQuery(1, 10, true, true, null), ft.rangeQuery(1, 10.1, true, false, null)); + assertEquals(ft.rangeQuery(2, 10, true, true, 
null, null, null, null), + ft.rangeQuery(1.1, 10, true, true, null, null, null, null)); + assertEquals(ft.rangeQuery(2, 10, true, true, null, null, null, null), + ft.rangeQuery(1.1, 10, false, true, null, null, null, null)); + assertEquals(ft.rangeQuery(1, 10, true, true, null, null, null, null), + ft.rangeQuery(1, 10.1, true, true, null, null, null, null)); + assertEquals(ft.rangeQuery(1, 10, true, true, null, null, null, null), + ft.rangeQuery(1, 10.1, true, false, null, null, null, null)); } public void testIntegerRangeQueryWithDecimalParts() { MappedFieldType ft = new NumberFieldMapper.NumberFieldType(NumberType.INTEGER); ft.setName("field"); ft.setIndexOptions(IndexOptions.DOCS); - assertEquals(ft.rangeQuery(2, 10, true, true, null), ft.rangeQuery(1.1, 10, true, true, null)); - assertEquals(ft.rangeQuery(2, 10, true, true, null), ft.rangeQuery(1.1, 10, false, true, null)); - assertEquals(ft.rangeQuery(1, 10, true, true, null), ft.rangeQuery(1, 10.1, true, true, null)); - assertEquals(ft.rangeQuery(1, 10, true, true, null), ft.rangeQuery(1, 10.1, true, false, null)); + assertEquals(ft.rangeQuery(2, 10, true, true, null, null, null, null), + ft.rangeQuery(1.1, 10, true, true, null, null, null, null)); + assertEquals(ft.rangeQuery(2, 10, true, true, null, null, null, null), + ft.rangeQuery(1.1, 10, false, true, null, null, null, null)); + assertEquals(ft.rangeQuery(1, 10, true, true, null, null, null, null), + ft.rangeQuery(1, 10.1, true, true, null, null, null, null)); + assertEquals(ft.rangeQuery(1, 10, true, true, null, null, null, null), + ft.rangeQuery(1, 10.1, true, false, null, null, null, null)); } public void testLongRangeQueryWithDecimalParts() { MappedFieldType ft = new NumberFieldMapper.NumberFieldType(NumberType.LONG); ft.setName("field"); ft.setIndexOptions(IndexOptions.DOCS); - assertEquals(ft.rangeQuery(2, 10, true, true, null), ft.rangeQuery(1.1, 10, true, true, null)); - assertEquals(ft.rangeQuery(2, 10, true, true, null), ft.rangeQuery(1.1, 10, false, true, null)); - assertEquals(ft.rangeQuery(1, 10, true, true, null), ft.rangeQuery(1, 10.1, true, true, null)); - assertEquals(ft.rangeQuery(1, 10, true, true, null), ft.rangeQuery(1, 10.1, true, false, null)); + assertEquals(ft.rangeQuery(2, 10, true, true, null, null, null, null), + ft.rangeQuery(1.1, 10, true, true, null, null, null, null)); + assertEquals(ft.rangeQuery(2, 10, true, true, null, null, null, null), + ft.rangeQuery(1.1, 10, false, true, null, null, null, null)); + assertEquals(ft.rangeQuery(1, 10, true, true, null, null, null, null), + ft.rangeQuery(1, 10.1, true, true, null, null, null, null)); + assertEquals(ft.rangeQuery(1, 10, true, true, null, null, null, null), + ft.rangeQuery(1, 10.1, true, false, null, null, null, null)); } public void testRangeQuery() { @@ -215,11 +256,11 @@ public void testRangeQuery() { Query expected = new IndexOrDocValuesQuery( LongPoint.newRangeQuery("field", 1, 3), SortedNumericDocValuesField.newSlowRangeQuery("field", 1, 3)); - assertEquals(expected, ft.rangeQuery("1", "3", true, true, null)); + assertEquals(expected, ft.rangeQuery("1", "3", true, true, null, null, null, null)); ft.setIndexOptions(IndexOptions.NONE); IllegalArgumentException e = expectThrows(IllegalArgumentException.class, - () -> ft.rangeQuery("1", "3", true, true, null)); + () -> ft.rangeQuery("1", "3", true, true, null, null, null, null)); assertEquals("Cannot search on field [field] since it is not indexed.", e.getMessage()); } @@ -389,6 +430,22 @@ public void doTestDocValueRangeQueries(NumberType 
type, Supplier valueSu public void testParseOutOfRangeValues() throws IOException { final List> inputs = Arrays.asList( + OutOfRangeSpec.of(NumberType.BYTE, "128", "out of range for a byte"), + OutOfRangeSpec.of(NumberType.BYTE, 128, "is out of range for a byte"), + OutOfRangeSpec.of(NumberType.BYTE, -129, "is out of range for a byte"), + + OutOfRangeSpec.of(NumberType.SHORT, "32768", "out of range for a short"), + OutOfRangeSpec.of(NumberType.SHORT, 32768, "is out of range for a short"), + OutOfRangeSpec.of(NumberType.SHORT, -32769, "is out of range for a short"), + + OutOfRangeSpec.of(NumberType.INTEGER, "2147483648", "out of range for an integer"), + OutOfRangeSpec.of(NumberType.INTEGER, 2147483648L, "is out of range for an integer"), + OutOfRangeSpec.of(NumberType.INTEGER, -2147483649L, "is out of range for an integer"), + + OutOfRangeSpec.of(NumberType.LONG, "9223372036854775808", "out of range for a long"), + OutOfRangeSpec.of(NumberType.LONG, new BigInteger("9223372036854775808"), " is out of range for a long"), + OutOfRangeSpec.of(NumberType.LONG, new BigInteger("-9223372036854775809"), " is out of range for a long"), + OutOfRangeSpec.of(NumberType.HALF_FLOAT, "65520", "[half_float] supports only finite values"), OutOfRangeSpec.of(NumberType.FLOAT, "3.4028235E39", "[float] supports only finite values"), OutOfRangeSpec.of(NumberType.DOUBLE, "1.7976931348623157E309", "[double] supports only finite values"), @@ -441,4 +498,25 @@ static OutOfRangeSpec of(NumberType t, V v, String m) { message = m; } } + + public void testDisplayValue() { + for (NumberFieldMapper.NumberType type : NumberFieldMapper.NumberType.values()) { + NumberFieldMapper.NumberFieldType fieldType = new NumberFieldMapper.NumberFieldType(type); + assertNull(fieldType.valueForDisplay(null)); + } + assertEquals(Byte.valueOf((byte) 3), + new NumberFieldMapper.NumberFieldType(NumberFieldMapper.NumberType.BYTE).valueForDisplay(3)); + assertEquals(Short.valueOf((short) 3), + new NumberFieldMapper.NumberFieldType(NumberFieldMapper.NumberType.SHORT).valueForDisplay(3)); + assertEquals(Integer.valueOf(3), + new NumberFieldMapper.NumberFieldType(NumberFieldMapper.NumberType.INTEGER).valueForDisplay(3)); + assertEquals(Long.valueOf(3), + new NumberFieldMapper.NumberFieldType(NumberFieldMapper.NumberType.LONG).valueForDisplay(3L)); + assertEquals(Double.valueOf(1.2), + new NumberFieldMapper.NumberFieldType(NumberFieldMapper.NumberType.HALF_FLOAT).valueForDisplay(1.2)); + assertEquals(Double.valueOf(1.2), + new NumberFieldMapper.NumberFieldType(NumberFieldMapper.NumberType.FLOAT).valueForDisplay(1.2)); + assertEquals(Double.valueOf(1.2), + new NumberFieldMapper.NumberFieldType(NumberFieldMapper.NumberType.DOUBLE).valueForDisplay(1.2)); + } } diff --git a/core/src/test/java/org/elasticsearch/index/mapper/ObjectMapperTests.java b/core/src/test/java/org/elasticsearch/index/mapper/ObjectMapperTests.java index 0e1bead111452..bba2007285bcc 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/ObjectMapperTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/ObjectMapperTests.java @@ -182,15 +182,12 @@ public void testMerge() throws IOException { .endObject().endObject().string(); MapperService mapperService = createIndex("test").mapperService(); DocumentMapper mapper = mapperService.merge("type", new CompressedXContent(mapping), MergeReason.MAPPING_UPDATE, false); - assertNull(mapper.root().includeInAll()); assertNull(mapper.root().dynamic()); String update = XContentFactory.jsonBuilder().startObject() 
.startObject("type") - .field("include_in_all", false) .field("dynamic", "strict") .endObject().endObject().string(); mapper = mapperService.merge("type", new CompressedXContent(update), MergeReason.MAPPING_UPDATE, false); - assertFalse(mapper.root().includeInAll()); assertEquals(Dynamic.STRICT, mapper.root().dynamic()); } diff --git a/core/src/test/java/org/elasticsearch/index/mapper/SourceFieldMapperTests.java b/core/src/test/java/org/elasticsearch/index/mapper/SourceFieldMapperTests.java index 3b73b5dfd3770..85017cb35cd39 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/SourceFieldMapperTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/SourceFieldMapperTests.java @@ -65,7 +65,7 @@ public void testNoFormat() throws Exception { doc = documentMapper.parse(SourceToParse.source("test", "type", "1", XContentFactory.smileBuilder().startObject() .field("field", "value") .endObject().bytes(), - XContentType.JSON)); + XContentType.SMILE)); assertThat(XContentFactory.xContentType(doc.source()), equalTo(XContentType.SMILE)); } diff --git a/core/src/test/java/org/elasticsearch/index/mapper/UidFieldTypeTests.java b/core/src/test/java/org/elasticsearch/index/mapper/UidFieldTypeTests.java index 2fcd0f8fd4071..9b2e0ceb0721f 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/UidFieldTypeTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/UidFieldTypeTests.java @@ -45,7 +45,7 @@ public void testRangeQuery() { MappedFieldType ft = createDefaultFieldType(); ft.setName("_uid"); IllegalArgumentException e = expectThrows(IllegalArgumentException.class, - () -> ft.rangeQuery(null, null, randomBoolean(), randomBoolean(), null)); + () -> ft.rangeQuery(null, null, randomBoolean(), randomBoolean(), null, null, null, null)); assertEquals("Field [_uid] of type [_uid] does not support range queries", e.getMessage()); } diff --git a/core/src/test/java/org/elasticsearch/index/mapper/UidTests.java b/core/src/test/java/org/elasticsearch/index/mapper/UidTests.java index 10b475e57ff87..c4fb94abd3846 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/UidTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/UidTests.java @@ -79,7 +79,7 @@ public void testEncodeUTF8Ids() { for (int iter = 0; iter < iters; ++iter) { final String id = TestUtil.randomRealisticUnicodeString(random(), 1, 10); BytesRef encoded = Uid.encodeId(id); - assertEquals(id, Uid.decodeId(Arrays.copyOfRange(encoded.bytes, encoded.offset, encoded.offset + encoded.length))); + assertEquals(id, doDecodeId(encoded)); assertTrue(encoded.length <= 1 + new BytesRef(id).length); } } @@ -93,7 +93,7 @@ public void testEncodeNumericIds() { id = "0" + id; } BytesRef encoded = Uid.encodeId(id); - assertEquals(id, Uid.decodeId(Arrays.copyOfRange(encoded.bytes, encoded.offset, encoded.offset + encoded.length))); + assertEquals(id, doDecodeId(encoded)); assertEquals(1 + (id.length() + 1) / 2, encoded.length); } } @@ -105,9 +105,26 @@ public void testEncodeBase64Ids() { random().nextBytes(binaryId); final String id = Base64.getUrlEncoder().withoutPadding().encodeToString(binaryId); BytesRef encoded = Uid.encodeId(id); - assertEquals(id, Uid.decodeId(Arrays.copyOfRange(encoded.bytes, encoded.offset, encoded.offset + encoded.length))); + assertEquals(id, doDecodeId(encoded)); assertTrue(encoded.length <= 1 + binaryId.length); } } + private static String doDecodeId(BytesRef encoded) { + + if (randomBoolean()) { + return Uid.decodeId(Arrays.copyOfRange(encoded.bytes, encoded.offset, encoded.offset + 
encoded.length)); + } else { + if (randomBoolean()) { + BytesRef slicedCopy = new BytesRef(randomIntBetween(encoded.length + 1, encoded.length + 100)); + slicedCopy.offset = randomIntBetween(1, slicedCopy.bytes.length - encoded.length); + slicedCopy.length = encoded.length; + System.arraycopy(encoded.bytes, encoded.offset, slicedCopy.bytes, slicedCopy.offset, encoded.length); + assertArrayEquals(Arrays.copyOfRange(encoded.bytes, encoded.offset, encoded.offset + encoded.length), + Arrays.copyOfRange(slicedCopy.bytes, slicedCopy.offset, slicedCopy.offset + slicedCopy.length)); + encoded = slicedCopy; + } + return Uid.decodeId(encoded.bytes, encoded.offset, encoded.length); + } + } } diff --git a/core/src/test/java/org/elasticsearch/index/mapper/UpdateMappingOnClusterIT.java b/core/src/test/java/org/elasticsearch/index/mapper/UpdateMappingOnClusterIT.java index 9810e737ebe61..d59743340fa8b 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/UpdateMappingOnClusterIT.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/UpdateMappingOnClusterIT.java @@ -22,6 +22,7 @@ import org.elasticsearch.Version; import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse; import org.elasticsearch.client.Client; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.plugins.Plugin; @@ -46,7 +47,8 @@ protected Collection> nodePlugins() { } protected void testConflict(String mapping, String mappingUpdate, Version idxVersion, String... errorMessages) throws InterruptedException { - assertAcked(prepareCreate(INDEX).setSource(mapping, XContentType.JSON).setSettings("index.version.created", idxVersion.id)); + assertAcked(prepareCreate(INDEX).setSource(mapping, XContentType.JSON) + .setSettings(Settings.builder().put("index.version.created", idxVersion.id))); ensureGreen(INDEX); GetMappingsResponse mappingsBeforeUpdateResponse = client().admin().indices().prepareGetMappings(INDEX).addTypes(TYPE).get(); try { @@ -61,44 +63,6 @@ protected void testConflict(String mapping, String mappingUpdate, Version idxVer } - public void testUpdatingAllSettingsOnOlderIndex() throws Exception { - XContentBuilder mapping = jsonBuilder() - .startObject() - .startObject("mappings") - .startObject(TYPE) - .startObject("_all").field("enabled", "true").endObject() - .endObject() - .endObject() - .endObject(); - XContentBuilder mappingUpdate = jsonBuilder() - .startObject() - .startObject("_all").field("enabled", "false").endObject() - .startObject("properties").startObject("text").field("type", "text").endObject() - .endObject() - .endObject(); - String errorMessage = "[_all] enabled is true now encountering false"; - testConflict(mapping.string(), mappingUpdate.string(), Version.V_5_0_0, errorMessage); - } - - public void testUpdatingAllSettingsOnOlderIndexDisabledToEnabled() throws Exception { - XContentBuilder mapping = jsonBuilder() - .startObject() - .startObject("mappings") - .startObject(TYPE) - .startObject("_all").field("enabled", "false").endObject() - .endObject() - .endObject() - .endObject(); - XContentBuilder mappingUpdate = jsonBuilder() - .startObject() - .startObject("_all").field("enabled", "true").endObject() - .startObject("properties").startObject("text").field("type", "text").endObject() - .endObject() - .endObject(); - String errorMessage = "[_all] enabled is false now encountering true"; - testConflict(mapping.string(), mappingUpdate.string(), 
Version.V_5_0_0, errorMessage); - } - private void compareMappingOnNodes(GetMappingsResponse previousMapping) { // make sure all nodes have same cluster state for (Client client : cluster().getClients()) { diff --git a/core/src/test/java/org/elasticsearch/index/query/ExistsQueryBuilderTests.java b/core/src/test/java/org/elasticsearch/index/query/ExistsQueryBuilderTests.java index cfc2d78942050..d4547eee26f89 100644 --- a/core/src/test/java/org/elasticsearch/index/query/ExistsQueryBuilderTests.java +++ b/core/src/test/java/org/elasticsearch/index/query/ExistsQueryBuilderTests.java @@ -22,15 +22,21 @@ import org.apache.lucene.search.BooleanClause; import org.apache.lucene.search.BooleanQuery; import org.apache.lucene.search.ConstantScoreQuery; +import org.apache.lucene.search.DocValuesFieldExistsQuery; import org.apache.lucene.search.MatchNoDocsQuery; +import org.apache.lucene.search.NormsFieldExistsQuery; import org.apache.lucene.search.Query; import org.apache.lucene.search.TermQuery; +import org.elasticsearch.Version; import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.search.internal.SearchContext; import org.elasticsearch.test.AbstractQueryTestCase; import java.io.IOException; +import java.util.ArrayList; import java.util.Collection; +import java.util.List; +import java.util.stream.Collectors; import static org.hamcrest.CoreMatchers.containsString; import static org.hamcrest.CoreMatchers.equalTo; @@ -60,23 +66,70 @@ protected ExistsQueryBuilder doCreateTestQueryBuilder() { protected void doAssertLuceneQuery(ExistsQueryBuilder queryBuilder, Query query, SearchContext context) throws IOException { String fieldPattern = queryBuilder.fieldName(); Collection fields = context.getQueryShardContext().simpleMatchToIndexNames(fieldPattern); + Collection mappedFields = fields.stream().filter((field) -> context.getQueryShardContext().getObjectMapper(field) != null + || context.getQueryShardContext().getMapperService().fullName(field) != null).collect(Collectors.toList()); if (getCurrentTypes().length == 0) { assertThat(query, instanceOf(MatchNoDocsQuery.class)); MatchNoDocsQuery matchNoDocsQuery = (MatchNoDocsQuery) query; assertThat(matchNoDocsQuery.toString(null), containsString("Missing types in \"exists\" query.")); + } else if (context.mapperService().getIndexSettings().getIndexVersionCreated().before(Version.V_6_1_0)) { + if (fields.size() == 1) { + assertThat(query, instanceOf(ConstantScoreQuery.class)); + ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) query; + String field = fields.iterator().next(); + assertThat(constantScoreQuery.getQuery(), instanceOf(TermQuery.class)); + TermQuery termQuery = (TermQuery) constantScoreQuery.getQuery(); + assertEquals(field, termQuery.getTerm().text()); + } else { + assertThat(query, instanceOf(ConstantScoreQuery.class)); + ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) query; + assertThat(constantScoreQuery.getQuery(), instanceOf(BooleanQuery.class)); + BooleanQuery booleanQuery = (BooleanQuery) constantScoreQuery.getQuery(); + assertThat(booleanQuery.clauses().size(), equalTo(mappedFields.size())); + for (int i = 0; i < mappedFields.size(); i++) { + BooleanClause booleanClause = booleanQuery.clauses().get(i); + assertThat(booleanClause.getOccur(), equalTo(BooleanClause.Occur.SHOULD)); + } + } + } else if (fields.size() == 1 && mappedFields.size() == 0) { + assertThat(query, instanceOf(MatchNoDocsQuery.class)); + MatchNoDocsQuery matchNoDocsQuery = (MatchNoDocsQuery) query; + 
assertThat(matchNoDocsQuery.toString(null), + containsString("No field \"" + fields.iterator().next() + "\" exists in mappings.")); } else if (fields.size() == 1) { assertThat(query, instanceOf(ConstantScoreQuery.class)); ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) query; - assertThat(constantScoreQuery.getQuery(), instanceOf(TermQuery.class)); - TermQuery termQuery = (TermQuery) constantScoreQuery.getQuery(); - assertEquals(fields.iterator().next(), termQuery.getTerm().text()); + String field = fields.iterator().next(); + if (context.getQueryShardContext().getObjectMapper(field) != null) { + assertThat(constantScoreQuery.getQuery(), instanceOf(BooleanQuery.class)); + BooleanQuery booleanQuery = (BooleanQuery) constantScoreQuery.getQuery(); + List childFields = new ArrayList<>(); + context.getQueryShardContext().getObjectMapper(field).forEach(mapper -> childFields.add(mapper.name())); + assertThat(booleanQuery.clauses().size(), equalTo(childFields.size())); + for (int i = 0; i < childFields.size(); i++) { + BooleanClause booleanClause = booleanQuery.clauses().get(i); + assertThat(booleanClause.getOccur(), equalTo(BooleanClause.Occur.SHOULD)); + } + } else if (context.getQueryShardContext().getMapperService().fullName(field).hasDocValues()) { + assertThat(constantScoreQuery.getQuery(), instanceOf(DocValuesFieldExistsQuery.class)); + DocValuesFieldExistsQuery dvExistsQuery = (DocValuesFieldExistsQuery) constantScoreQuery.getQuery(); + assertEquals(field, dvExistsQuery.getField()); + } else if (context.getQueryShardContext().getMapperService().fullName(field).omitNorms() == false) { + assertThat(constantScoreQuery.getQuery(), instanceOf(NormsFieldExistsQuery.class)); + NormsFieldExistsQuery normsExistsQuery = (NormsFieldExistsQuery) constantScoreQuery.getQuery(); + assertEquals(field, normsExistsQuery.getField()); + } else { + assertThat(constantScoreQuery.getQuery(), instanceOf(TermQuery.class)); + TermQuery termQuery = (TermQuery) constantScoreQuery.getQuery(); + assertEquals(field, termQuery.getTerm().text()); + } } else { assertThat(query, instanceOf(ConstantScoreQuery.class)); ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) query; assertThat(constantScoreQuery.getQuery(), instanceOf(BooleanQuery.class)); BooleanQuery booleanQuery = (BooleanQuery) constantScoreQuery.getQuery(); - assertThat(booleanQuery.clauses().size(), equalTo(fields.size())); - for (int i = 0; i < fields.size(); i++) { + assertThat(booleanQuery.clauses().size(), equalTo(mappedFields.size())); + for (int i = 0; i < mappedFields.size(); i++) { BooleanClause booleanClause = booleanQuery.clauses().get(i); assertThat(booleanClause.getOccur(), equalTo(BooleanClause.Occur.SHOULD)); } diff --git a/core/src/test/java/org/elasticsearch/index/query/FuzzyQueryBuilderTests.java b/core/src/test/java/org/elasticsearch/index/query/FuzzyQueryBuilderTests.java index 58c70fcfcb39b..4fae80d09a51e 100644 --- a/core/src/test/java/org/elasticsearch/index/query/FuzzyQueryBuilderTests.java +++ b/core/src/test/java/org/elasticsearch/index/query/FuzzyQueryBuilderTests.java @@ -23,6 +23,7 @@ import org.apache.lucene.search.BoostQuery; import org.apache.lucene.search.FuzzyQuery; import org.apache.lucene.search.Query; +import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.unit.Fuzziness; import org.elasticsearch.search.internal.SearchContext; @@ -120,6 +121,92 @@ public void testToQueryWithStringField() throws IOException { 
         assertThat(fuzzyQuery.getPrefixLength(), equalTo(1));
     }
+    public void testToQueryWithStringFieldDefinedFuzziness() throws IOException {
+        assumeTrue("test runs only when at least a type is registered", getCurrentTypes().length > 0);
+        String query = "{\n" +
+            " \"fuzzy\":{\n" +
+            " \"" + STRING_FIELD_NAME + "\":{\n" +
+            " \"value\":\"sh\",\n" +
+            " \"fuzziness\": \"AUTO:2,5\",\n" +
+            " \"prefix_length\":1,\n" +
+            " \"boost\":2.0\n" +
+            " }\n" +
+            " }\n" +
+            "}";
+        Query parsedQuery = parseQuery(query).toQuery(createShardContext());
+        assertThat(parsedQuery, instanceOf(BoostQuery.class));
+        BoostQuery boostQuery = (BoostQuery) parsedQuery;
+        assertThat(boostQuery.getBoost(), equalTo(2.0f));
+        assertThat(boostQuery.getQuery(), instanceOf(FuzzyQuery.class));
+        FuzzyQuery fuzzyQuery = (FuzzyQuery) boostQuery.getQuery();
+        assertThat(fuzzyQuery.getTerm(), equalTo(new Term(STRING_FIELD_NAME, "sh")));
+        assertThat(fuzzyQuery.getMaxEdits(), equalTo(1));
+        assertThat(fuzzyQuery.getPrefixLength(), equalTo(1));
+    }
+
+    public void testToQueryWithStringFieldDefinedWrongFuzziness() throws IOException {
+        assumeTrue("test runs only when at least a type is registered", getCurrentTypes().length > 0);
+        String queryMissingFuzzinessUpLimit = "{\n" +
+            " \"fuzzy\":{\n" +
+            " \"" + STRING_FIELD_NAME + "\":{\n" +
+            " \"value\":\"sh\",\n" +
+            " \"fuzziness\": \"AUTO:2\",\n" +
+            " \"prefix_length\":1,\n" +
+            " \"boost\":2.0\n" +
+            " }\n" +
+            " }\n" +
+            "}";
+        ElasticsearchParseException e = expectThrows(ElasticsearchParseException.class,
+            () -> parseQuery(queryMissingFuzzinessUpLimit).toQuery(createShardContext()));
+        String msg = "failed to find low and high distance values";
+        assertTrue(e.getMessage() + " didn't contain: " + msg + " but: " + e.getMessage(), e.getMessage().contains(msg));
+
+        String queryHavingNegativeFuzzinessLowLimit = "{\n" +
+            " \"fuzzy\":{\n" +
+            " \"" + STRING_FIELD_NAME + "\":{\n" +
+            " \"value\":\"sh\",\n" +
+            " \"fuzziness\": \"AUTO:-1,6\",\n" +
+            " \"prefix_length\":1,\n" +
+            " \"boost\":2.0\n" +
+            " }\n" +
+            " }\n" +
+            "}";
+        String msg2 = "fuzziness wrongly configured";
+        IllegalArgumentException e2 = expectThrows(IllegalArgumentException.class,
+            () -> parseQuery(queryHavingNegativeFuzzinessLowLimit).toQuery(createShardContext()));
+        assertTrue(e2.getMessage() + " didn't contain: " + msg2 + " but: " + e2.getMessage(), e2.getMessage().contains(msg2));
+
+        String queryMissingFuzzinessUpLimit2 = "{\n" +
+            " \"fuzzy\":{\n" +
+            " \"" + STRING_FIELD_NAME + "\":{\n" +
+            " \"value\":\"sh\",\n" +
+            " \"fuzziness\": \"AUTO:1,\",\n" +
+            " \"prefix_length\":1,\n" +
+            " \"boost\":2.0\n" +
+            " }\n" +
+            " }\n" +
+            "}";
+        e = expectThrows(ElasticsearchParseException.class,
+            () -> parseQuery(queryMissingFuzzinessUpLimit2).toQuery(createShardContext()));
+        assertTrue(e.getMessage() + " didn't contain: " + msg + " but: " + e.getMessage(), e.getMessage().contains(msg));
+
+        String queryMissingFuzzinessLowLimit = "{\n" +
+            " \"fuzzy\":{\n" +
+            " \"" + STRING_FIELD_NAME + "\":{\n" +
+            " \"value\":\"sh\",\n" +
+            " \"fuzziness\": \"AUTO:,5\",\n" +
+            " \"prefix_length\":1,\n" +
+            " \"boost\":2.0\n" +
+            " }\n" +
+            " }\n" +
+            "}";
+        e = expectThrows(ElasticsearchParseException.class,
+            () -> parseQuery(queryMissingFuzzinessLowLimit).toQuery(createShardContext()));
+        msg = "failed to parse [AUTO:,5] as a \"auto:int,int\"";
+        assertTrue(e.getMessage() + " didn't contain: " + msg + " but: " + e.getMessage(), e.getMessage().contains(msg));
+    }
+
     public void testToQueryWithNumericField() throws IOException {
assumeTrue("test runs only when at least a type is registered", getCurrentTypes().length > 0); String query = "{\n" + @@ -154,6 +241,7 @@ public void testFromJson() throws IOException { checkGeneratedJson(json, parsed); assertEquals(json, 42.0, parsed.boost(), 0.00001); assertEquals(json, 2, parsed.fuzziness().asFloat(), 0f); + assertEquals(json, false, parsed.transpositions()); } public void testParseFailsWithMultipleFields() throws IOException { @@ -203,4 +291,19 @@ public void testParseFailsWithValueArray() { ParsingException e = expectThrows(ParsingException.class, () -> parseQuery(query)); assertEquals("[fuzzy] unexpected token [START_ARRAY] after [value]", e.getMessage()); } + + public void testToQueryWithTranspositions() throws Exception { + assumeTrue("test runs only when at least a type is registered", getCurrentTypes().length > 0); + Query query = new FuzzyQueryBuilder(STRING_FIELD_NAME, "text").toQuery(createShardContext()); + assertThat(query, instanceOf(FuzzyQuery.class)); + assertEquals(FuzzyQuery.defaultTranspositions, ((FuzzyQuery)query).getTranspositions()); + + query = new FuzzyQueryBuilder(STRING_FIELD_NAME, "text").transpositions(true).toQuery(createShardContext()); + assertThat(query, instanceOf(FuzzyQuery.class)); + assertEquals(true, ((FuzzyQuery)query).getTranspositions()); + + query = new FuzzyQueryBuilder(STRING_FIELD_NAME, "text").transpositions(false).toQuery(createShardContext()); + assertThat(query, instanceOf(FuzzyQuery.class)); + assertEquals(false, ((FuzzyQuery)query).getTranspositions()); + } } diff --git a/core/src/test/java/org/elasticsearch/index/query/IdsQueryBuilderTests.java b/core/src/test/java/org/elasticsearch/index/query/IdsQueryBuilderTests.java index be845bc1f2185..e440fc0277229 100644 --- a/core/src/test/java/org/elasticsearch/index/query/IdsQueryBuilderTests.java +++ b/core/src/test/java/org/elasticsearch/index/query/IdsQueryBuilderTests.java @@ -149,35 +149,4 @@ public void testFromJson() throws IOException { assertThat(parsed.ids(), contains("1","100","4")); assertEquals(json, 0, parsed.types().length); } - - public void testFromJsonDeprecatedSyntax() throws IOException { - IdsQueryBuilder testQuery = new IdsQueryBuilder().types("my_type"); - - //single value type can also be called _type - final String contentString = "{\n" + - " \"ids\" : {\n" + - " \"_type\" : \"my_type\",\n" + - " \"values\" : [ ]\n" + - " }\n" + - "}"; - - IdsQueryBuilder parsed = (IdsQueryBuilder) parseQuery(contentString); - assertEquals(testQuery, parsed); - - parseQuery(contentString); - assertWarnings("Deprecated field [_type] used, expected [type] instead"); - - //array of types can also be called types rather than type - final String contentString2 = "{\n" + - " \"ids\" : {\n" + - " \"types\" : [\"my_type\"],\n" + - " \"values\" : [ ]\n" + - " }\n" + - "}"; - parsed = (IdsQueryBuilder) parseQuery(contentString2); - assertEquals(testQuery, parsed); - - parseQuery(contentString2); - assertWarnings("Deprecated field [types] used, expected [type] instead"); - } } diff --git a/core/src/test/java/org/elasticsearch/index/query/InnerHitBuilderTests.java b/core/src/test/java/org/elasticsearch/index/query/InnerHitBuilderTests.java index 78975bf7b1784..a4e6856166272 100644 --- a/core/src/test/java/org/elasticsearch/index/query/InnerHitBuilderTests.java +++ b/core/src/test/java/org/elasticsearch/index/query/InnerHitBuilderTests.java @@ -139,8 +139,8 @@ public void testEqualsAndHashcode() { public static InnerHitBuilder randomInnerHits() { InnerHitBuilder innerHits = new 
InnerHitBuilder(); innerHits.setName(randomAlphaOfLengthBetween(1, 16)); - innerHits.setFrom(randomIntBetween(0, 128)); - innerHits.setSize(randomIntBetween(0, 128)); + innerHits.setFrom(randomIntBetween(0, 32)); + innerHits.setSize(randomIntBetween(0, 32)); innerHits.setExplain(randomBoolean()); innerHits.setVersion(randomBoolean()); innerHits.setTrackScores(randomBoolean()); diff --git a/core/src/test/java/org/elasticsearch/index/query/MatchQueryBuilderTests.java b/core/src/test/java/org/elasticsearch/index/query/MatchQueryBuilderTests.java index 526210d33703f..9d5ee3e7f76f3 100644 --- a/core/src/test/java/org/elasticsearch/index/query/MatchQueryBuilderTests.java +++ b/core/src/test/java/org/elasticsearch/index/query/MatchQueryBuilderTests.java @@ -24,10 +24,8 @@ import org.apache.lucene.search.BooleanQuery; import org.apache.lucene.search.BoostQuery; import org.apache.lucene.search.FuzzyQuery; -import org.apache.lucene.search.IndexOrDocValuesQuery; import org.apache.lucene.search.MatchAllDocsQuery; import org.apache.lucene.search.MatchNoDocsQuery; -import org.apache.lucene.search.PhraseQuery; import org.apache.lucene.search.PointRangeQuery; import org.apache.lucene.search.Query; import org.apache.lucene.search.TermQuery; @@ -148,23 +146,6 @@ protected void doAssertLuceneQuery(MatchQueryBuilder queryBuilder, Query query, return; } - switch (queryBuilder.type()) { - case BOOLEAN: - assertThat(query, either(instanceOf(BooleanQuery.class)).or(instanceOf(ExtendedCommonTermsQuery.class)) - .or(instanceOf(TermQuery.class)).or(instanceOf(FuzzyQuery.class)).or(instanceOf(MatchNoDocsQuery.class)) - .or(instanceOf(PointRangeQuery.class)).or(instanceOf(IndexOrDocValuesQuery.class))); - break; - case PHRASE: - assertThat(query, either(instanceOf(BooleanQuery.class)).or(instanceOf(PhraseQuery.class)) - .or(instanceOf(TermQuery.class)).or(instanceOf(FuzzyQuery.class)) - .or(instanceOf(PointRangeQuery.class)).or(instanceOf(IndexOrDocValuesQuery.class))); - break; - case PHRASE_PREFIX: - assertThat(query, either(instanceOf(BooleanQuery.class)).or(instanceOf(MultiPhrasePrefixQuery.class)) - .or(instanceOf(TermQuery.class)).or(instanceOf(FuzzyQuery.class)) - .or(instanceOf(PointRangeQuery.class)).or(instanceOf(IndexOrDocValuesQuery.class))); - break; - } QueryShardContext context = searchContext.getQueryShardContext(); MappedFieldType fieldType = context.fieldMapper(queryBuilder.fieldName()); if (query instanceof TermQuery && fieldType != null) { @@ -250,11 +231,6 @@ public void testIllegalValues() { assertEquals("[match] requires operator to be non-null", e.getMessage()); } - { - IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> matchQuery.type(null)); - assertEquals("[match] requires type to be non-null", e.getMessage()); - } - { IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> matchQuery.zeroTermsQuery(null)); assertEquals("[match] requires zeroTermsQuery to be non-null", e.getMessage()); @@ -290,69 +266,6 @@ public void testSimpleMatchQuery() throws IOException { assertEquals(json, Operator.AND, qb.operator()); } - public void testLegacyMatchPhrasePrefixQuery() throws IOException { - MatchQueryBuilder expectedQB = new MatchQueryBuilder("message", "to be or not to be"); - expectedQB.type(Type.PHRASE_PREFIX); - expectedQB.slop(2); - expectedQB.maxExpansions(30); - String json = "{\n" + - " \"match\" : {\n" + - " \"message\" : {\n" + - " \"query\" : \"to be or not to be\",\n" + - " \"type\" : \"phrase_prefix\",\n" + - " \"operator\" : 
\"OR\",\n" + - " \"slop\" : 2,\n" + - " \"prefix_length\" : 0,\n" + - " \"max_expansions\" : 30,\n" + - " \"fuzzy_transpositions\" : true,\n" + - " \"lenient\" : false,\n" + - " \"zero_terms_query\" : \"NONE\",\n" + - " \"auto_generate_synonyms_phrase_query\" : true,\n" + - " \"boost\" : 1.0\n" + - " }\n" + - " }\n" + - "}"; - MatchQueryBuilder qb = (MatchQueryBuilder) parseQuery(json); - checkGeneratedJson(json, qb); - - assertEquals(json, expectedQB, qb); - - assertSerialization(qb); - - assertWarnings("Deprecated field [type] used, replaced by [match_phrase and match_phrase_prefix query]", - "Deprecated field [slop] used, replaced by [match_phrase query]"); - } - - public void testLegacyMatchPhraseQuery() throws IOException { - MatchQueryBuilder expectedQB = new MatchQueryBuilder("message", "to be or not to be"); - expectedQB.type(Type.PHRASE); - expectedQB.slop(2); - String json = "{\n" + - " \"match\" : {\n" + - " \"message\" : {\n" + - " \"query\" : \"to be or not to be\",\n" + - " \"type\" : \"phrase\",\n" + - " \"operator\" : \"OR\",\n" + - " \"slop\" : 2,\n" + - " \"prefix_length\" : 0,\n" + - " \"max_expansions\" : 50,\n" + - " \"fuzzy_transpositions\" : true,\n" + - " \"lenient\" : false,\n" + - " \"zero_terms_query\" : \"NONE\",\n" + - " \"auto_generate_synonyms_phrase_query\" : true,\n" + - " \"boost\" : 1.0\n" + - " }\n" + - " }\n" + - "}"; - MatchQueryBuilder qb = (MatchQueryBuilder) parseQuery(json); - checkGeneratedJson(json, qb); - - assertEquals(json, expectedQB, qb); - assertSerialization(qb); - assertWarnings("Deprecated field [type] used, replaced by [match_phrase and match_phrase_prefix query]", - "Deprecated field [slop] used, replaced by [match_phrase query]"); - } - public void testFuzzinessOnNonStringField() throws Exception { assumeTrue("test runs only when at least a type is registered", getCurrentTypes().length > 0); MatchQueryBuilder query = new MatchQueryBuilder(INT_FIELD_NAME, 42); @@ -437,8 +350,12 @@ public void testExceptionUsingAnalyzerOnNumericField() { @Override protected void initializeAdditionalMappings(MapperService mapperService) throws IOException { - mapperService.merge("doc", new CompressedXContent(PutMappingRequest.buildFromSimplifiedDef("doc", - "string_boost", "type=text,boost=4").string()), MapperService.MergeReason.MAPPING_UPDATE, false); + mapperService.merge("doc", new CompressedXContent(PutMappingRequest.buildFromSimplifiedDef( + "doc", + "string_boost", "type=text,boost=4", "string_no_pos", + "type=text,index_options=docs").string() + ), + MapperService.MergeReason.MAPPING_UPDATE, false); } public void testMatchPhrasePrefixWithBoost() throws Exception { @@ -463,6 +380,16 @@ public void testMatchPhrasePrefixWithBoost() throws Exception { Query query = builder.toQuery(context); assertThat(query, instanceOf(MultiPhrasePrefixQuery.class)); } + } + public void testLenientPhraseQuery() throws Exception { + assumeTrue("test runs only when at least a type is registered", getCurrentTypes().length > 0); + QueryShardContext context = createShardContext(); + MatchQuery b = new MatchQuery(context); + b.setLenient(true); + Query query = b.parse(Type.PHRASE, "string_no_pos", "foo bar"); + assertThat(query, instanceOf(MatchNoDocsQuery.class)); + assertThat(query.toString(), + containsString("field:[string_no_pos] was indexed without position data; cannot run PhraseQuery")); } } diff --git a/core/src/test/java/org/elasticsearch/index/query/MultiMatchQueryBuilderTests.java 
b/core/src/test/java/org/elasticsearch/index/query/MultiMatchQueryBuilderTests.java index c27f0dd311b4c..a0afe28a17bce 100644 --- a/core/src/test/java/org/elasticsearch/index/query/MultiMatchQueryBuilderTests.java +++ b/core/src/test/java/org/elasticsearch/index/query/MultiMatchQueryBuilderTests.java @@ -33,8 +33,8 @@ import org.apache.lucene.search.Query; import org.apache.lucene.search.TermQuery; import org.elasticsearch.common.ParsingException; -import org.elasticsearch.common.lucene.all.AllTermQuery; import org.elasticsearch.common.lucene.search.MultiPhrasePrefixQuery; +import org.elasticsearch.common.unit.Fuzziness; import org.elasticsearch.index.query.MultiMatchQueryBuilder.Type; import org.elasticsearch.index.search.MatchQuery; import org.elasticsearch.search.internal.SearchContext; @@ -124,6 +124,9 @@ protected MultiMatchQueryBuilder doCreateTestQueryBuilder() { if (randomBoolean()) { query.autoGenerateSynonymsPhraseQuery(randomBoolean()); } + if (randomBoolean()) { + query.fuzzyTranspositions(randomBoolean()); + } // test with fields with boost and patterns delegated to the tests further below return query; } @@ -144,7 +147,7 @@ protected Map getAlternateVersions() { @Override protected void doAssertLuceneQuery(MultiMatchQueryBuilder queryBuilder, Query query, SearchContext context) throws IOException { // we rely on integration tests for deeper checks here - assertThat(query, either(instanceOf(BoostQuery.class)).or(instanceOf(TermQuery.class)).or(instanceOf(AllTermQuery.class)) + assertThat(query, either(instanceOf(BoostQuery.class)).or(instanceOf(TermQuery.class)) .or(instanceOf(BooleanQuery.class)).or(instanceOf(DisjunctionMaxQuery.class)) .or(instanceOf(FuzzyQuery.class)).or(instanceOf(MultiPhrasePrefixQuery.class)) .or(instanceOf(MatchAllDocsQuery.class)).or(instanceOf(ExtendedCommonTermsQuery.class)) @@ -242,6 +245,7 @@ public void testFromJson() throws IOException { " \"lenient\" : false,\n" + " \"zero_terms_query\" : \"NONE\",\n" + " \"auto_generate_synonyms_phrase_query\" : true,\n" + + " \"fuzzy_transpositions\" : false,\n" + " \"boost\" : 1.0\n" + " }\n" + "}"; @@ -253,6 +257,7 @@ public void testFromJson() throws IOException { assertEquals(json, 3, parsed.fields().size()); assertEquals(json, MultiMatchQueryBuilder.Type.MOST_FIELDS, parsed.type()); assertEquals(json, Operator.OR, parsed.operator()); + assertEquals(json, false, parsed.fuzzyTranspositions()); } /** @@ -318,4 +323,19 @@ public void testFuzzinessOnNonStringField() throws Exception { query.analyzer(null); query.toQuery(context); // no exception } + + public void testToFuzzyQuery() throws Exception { + assumeTrue("test runs only when at least a type is registered", getCurrentTypes().length > 0); + + MultiMatchQueryBuilder qb = new MultiMatchQueryBuilder("text").field(STRING_FIELD_NAME); + qb.fuzziness(Fuzziness.TWO); + qb.prefixLength(2); + qb.maxExpansions(5); + qb.fuzzyTranspositions(false); + + Query query = qb.toQuery(createShardContext()); + FuzzyQuery expected = new FuzzyQuery(new Term(STRING_FIELD_NAME, "text"), 2, 2, 5, false); + + assertEquals(expected, query); + } } diff --git a/core/src/test/java/org/elasticsearch/index/query/NestedQueryBuilderTests.java b/core/src/test/java/org/elasticsearch/index/query/NestedQueryBuilderTests.java index 0e3928c89883c..9d674a1a0d05a 100644 --- a/core/src/test/java/org/elasticsearch/index/query/NestedQueryBuilderTests.java +++ b/core/src/test/java/org/elasticsearch/index/query/NestedQueryBuilderTests.java @@ -26,6 +26,8 @@ import org.elasticsearch.Version; import 
org.elasticsearch.action.admin.indices.mapping.put.PutMappingRequest; import org.elasticsearch.common.compress.CompressedXContent; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.query.functionscore.FunctionScoreQueryBuilder; import org.elasticsearch.index.search.ESToParentBlockJoinQuery; @@ -41,6 +43,7 @@ import java.util.HashMap; import java.util.Map; +import static org.elasticsearch.index.IndexSettingsTests.newIndexMeta; import static org.elasticsearch.index.query.InnerHitBuilderTests.randomInnerHits; import static org.hamcrest.CoreMatchers.containsString; import static org.hamcrest.CoreMatchers.equalTo; @@ -325,6 +328,11 @@ public void testBuildIgnoreUnmappedNestQuery() throws Exception { SearchContext searchContext = mock(SearchContext.class); when(searchContext.getQueryShardContext()).thenReturn(queryShardContext); + MapperService mapperService = mock(MapperService.class); + IndexSettings settings = new IndexSettings(newIndexMeta("index", Settings.EMPTY), Settings.EMPTY); + when(mapperService.getIndexSettings()).thenReturn(settings); + when(searchContext.mapperService()).thenReturn(mapperService); + InnerHitBuilder leafInnerHits = randomInnerHits(); NestedQueryBuilder query1 = new NestedQueryBuilder("path", new MatchAllQueryBuilder(), ScoreMode.None); query1.innerHit(leafInnerHits); diff --git a/core/src/test/java/org/elasticsearch/index/query/QueryStringQueryBuilderTests.java b/core/src/test/java/org/elasticsearch/index/query/QueryStringQueryBuilderTests.java index b3bc4f5947a24..5aa375672822a 100644 --- a/core/src/test/java/org/elasticsearch/index/query/QueryStringQueryBuilderTests.java +++ b/core/src/test/java/org/elasticsearch/index/query/QueryStringQueryBuilderTests.java @@ -31,6 +31,7 @@ import org.apache.lucene.search.MatchAllDocsQuery; import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.MultiTermQuery; +import org.apache.lucene.search.NormsFieldExistsQuery; import org.apache.lucene.search.PhraseQuery; import org.apache.lucene.search.PrefixQuery; import org.apache.lucene.search.Query; @@ -47,10 +48,10 @@ import org.apache.lucene.util.automaton.TooComplexToDeterminizeException; import org.elasticsearch.Version; import org.elasticsearch.action.admin.indices.mapping.put.PutMappingRequest; -import org.elasticsearch.common.ParsingException; +import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.compress.CompressedXContent; -import org.elasticsearch.common.lucene.all.AllTermQuery; import org.elasticsearch.common.lucene.search.MultiPhrasePrefixQuery; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.Fuzziness; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.json.JsonXContent; @@ -63,6 +64,7 @@ import java.io.IOException; import java.util.ArrayList; +import java.util.Arrays; import java.util.List; import static org.elasticsearch.index.query.AbstractQueryBuilder.parseInnerQueryBuilder; @@ -75,7 +77,6 @@ import static org.hamcrest.Matchers.instanceOf; public class QueryStringQueryBuilderTests extends AbstractQueryTestCase { - @Override protected QueryStringQueryBuilder doCreateTestQueryBuilder() { int numTerms = randomIntBetween(0, 5); @@ -163,6 +164,9 @@ protected QueryStringQueryBuilder doCreateTestQueryBuilder() { if (randomBoolean()) { 
queryStringQueryBuilder.autoGenerateSynonymsPhraseQuery(randomBoolean()); } + if (randomBoolean()) { + queryStringQueryBuilder.fuzzyTranspositions(randomBoolean()); + } queryStringQueryBuilder.type(randomFrom(MultiMatchQueryBuilder.Type.values())); return queryStringQueryBuilder; } @@ -170,7 +174,7 @@ protected QueryStringQueryBuilder doCreateTestQueryBuilder() { @Override protected void doAssertLuceneQuery(QueryStringQueryBuilder queryBuilder, Query query, SearchContext context) throws IOException { - assertThat(query, either(instanceOf(TermQuery.class)).or(instanceOf(AllTermQuery.class)) + assertThat(query, either(instanceOf(TermQuery.class)) .or(instanceOf(BooleanQuery.class)).or(instanceOf(DisjunctionMaxQuery.class)) .or(instanceOf(PhraseQuery.class)).or(instanceOf(BoostQuery.class)) .or(instanceOf(MultiPhrasePrefixQuery.class)).or(instanceOf(PrefixQuery.class)).or(instanceOf(SpanQuery.class)) @@ -799,24 +803,22 @@ public void testToQueryTextParsing() throws IOException { public void testExistsFieldQuery() throws Exception { QueryShardContext context = createShardContext(); - QueryStringQueryBuilder queryBuilder = new QueryStringQueryBuilder("foo:*"); + QueryStringQueryBuilder queryBuilder = new QueryStringQueryBuilder(STRING_FIELD_NAME + ":*"); Query query = queryBuilder.toQuery(context); - Query expected; if (getCurrentTypes().length > 0) { - expected = new ConstantScoreQuery(new TermQuery(new Term("_field_names", "foo"))); + if (context.getIndexSettings().getIndexVersionCreated().onOrAfter(Version.V_6_1_0) + && (context.fieldMapper(STRING_FIELD_NAME).omitNorms() == false)) { + assertThat(query, equalTo(new ConstantScoreQuery(new NormsFieldExistsQuery(STRING_FIELD_NAME)))); + } else { + assertThat(query, equalTo(new ConstantScoreQuery(new TermQuery(new Term("_field_names", STRING_FIELD_NAME))))); + } } else { - expected = new MatchNoDocsQuery(); + assertThat(query, equalTo(new MatchNoDocsQuery())); } - assertThat(query, equalTo(expected)); - - queryBuilder = new QueryStringQueryBuilder("_all:*"); - query = queryBuilder.toQuery(context); - expected = new MatchAllDocsQuery(); - assertThat(query, equalTo(expected)); queryBuilder = new QueryStringQueryBuilder("*:*"); query = queryBuilder.toQuery(context); - expected = new MatchAllDocsQuery(); + Query expected = new MatchAllDocsQuery(); assertThat(query, equalTo(expected)); queryBuilder = new QueryStringQueryBuilder("*"); @@ -870,6 +872,7 @@ public void testFromJson() throws IOException { " \"phrase_slop\" : 0,\n" + " \"escape\" : false,\n" + " \"auto_generate_synonyms_phrase_query\" : true,\n" + + " \"fuzzy_transpositions\" : false,\n" + " \"boost\" : 1.0\n" + " }\n" + "}"; @@ -879,6 +882,7 @@ public void testFromJson() throws IOException { assertEquals(json, "this AND that OR thus", parsed.queryString()); assertEquals(json, "content", parsed.defaultField()); + assertEquals(json, false, parsed.fuzzyTranspositions()); } public void testExpandedTerms() throws Exception { @@ -989,4 +993,69 @@ public void testUnmappedFieldRewriteToMatchNoDocs() throws IOException { .toQuery(createShardContext()); assertEquals(new MatchNoDocsQuery(""), query); } + + public void testDefaultField() throws Exception { + assumeTrue("test runs only when at least a type is registered", getCurrentTypes().length > 0); + QueryShardContext context = createShardContext(); + context.getIndexSettings().updateIndexMetaData( + newIndexMeta("index", context.getIndexSettings().getSettings(), Settings.builder().putList("index.query.default_field", + STRING_FIELD_NAME, 
STRING_FIELD_NAME_2 + "^5").build()) + ); + Query query = new QueryStringQueryBuilder("hello") + .toQuery(context); + Query expected = new DisjunctionMaxQuery( + Arrays.asList( + new TermQuery(new Term(STRING_FIELD_NAME, "hello")), + new BoostQuery(new TermQuery(new Term(STRING_FIELD_NAME_2, "hello")), 5.0f) + ), 0.0f + ); + assertEquals(expected, query); + // Reset the default value + context.getIndexSettings().updateIndexMetaData( + newIndexMeta("index", + context.getIndexSettings().getSettings(), Settings.builder().putList("index.query.default_field", "*").build()) + ); + } + + /** + * the quote analyzer should overwrite any other forced analyzer in quoted parts of the query + */ + public void testQuoteAnalyzer() throws Exception { + assumeTrue("test runs only when at least a type is registered", getCurrentTypes().length > 0); + // Prefix + Query query = new QueryStringQueryBuilder("ONE \"TWO THREE\"") + .field(STRING_FIELD_NAME) + .analyzer("whitespace") + .quoteAnalyzer("simple") + .toQuery(createShardContext()); + Query expectedQuery = + new BooleanQuery.Builder() + .add(new BooleanClause(new TermQuery(new Term(STRING_FIELD_NAME, "ONE")), Occur.SHOULD)) + .add(new BooleanClause(new PhraseQuery.Builder() + .add(new Term(STRING_FIELD_NAME, "two"), 0) + .add(new Term(STRING_FIELD_NAME, "three"), 1) + .build(), Occur.SHOULD)) + .build(); + assertEquals(expectedQuery, query); + } + + public void testToFuzzyQuery() throws Exception { + assumeTrue("test runs only when at least a type is registered", getCurrentTypes().length > 0); + + Query query = new QueryStringQueryBuilder("text~2") + .field(STRING_FIELD_NAME) + .fuzzyPrefixLength(2) + .fuzzyMaxExpansions(5) + .fuzzyTranspositions(false) + .toQuery(createShardContext()); + FuzzyQuery expected = new FuzzyQuery(new Term(STRING_FIELD_NAME, "text"), 2, 2, 5, false); + assertEquals(expected, query); + } + + private static IndexMetaData newIndexMeta(String name, Settings oldIndexSettings, Settings indexSettings) { + Settings build = Settings.builder().put(oldIndexSettings) + .put(indexSettings) + .build(); + return IndexMetaData.builder(name).settings(build).build(); + } } diff --git a/core/src/test/java/org/elasticsearch/index/query/RangeQueryBuilderTests.java b/core/src/test/java/org/elasticsearch/index/query/RangeQueryBuilderTests.java index b57b45c3d7484..2230436b18ef4 100644 --- a/core/src/test/java/org/elasticsearch/index/query/RangeQueryBuilderTests.java +++ b/core/src/test/java/org/elasticsearch/index/query/RangeQueryBuilderTests.java @@ -19,19 +19,20 @@ package org.elasticsearch.index.query; -import com.carrotsearch.randomizedtesting.generators.RandomPicks; - import org.apache.lucene.document.IntPoint; import org.apache.lucene.document.LongPoint; import org.apache.lucene.index.Term; import org.apache.lucene.search.ConstantScoreQuery; +import org.apache.lucene.search.DocValuesFieldExistsQuery; import org.apache.lucene.search.IndexOrDocValuesQuery; import org.apache.lucene.search.MatchNoDocsQuery; +import org.apache.lucene.search.NormsFieldExistsQuery; import org.apache.lucene.search.PointRangeQuery; import org.apache.lucene.search.Query; import org.apache.lucene.search.TermQuery; import org.apache.lucene.search.TermRangeQuery; import org.elasticsearch.ElasticsearchParseException; +import org.elasticsearch.Version; import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.geo.ShapeRelation; import org.elasticsearch.common.lucene.BytesRefs; @@ -64,13 +65,13 @@ protected RangeQueryBuilder doCreateTestQueryBuilder() 
{ switch (randomIntBetween(0, 2)) { case 0: // use mapped integer field for numeric range queries - query = new RangeQueryBuilder(randomBoolean() ? INT_FIELD_NAME : INT_RANGE_FIELD_NAME); + query = new RangeQueryBuilder(INT_FIELD_NAME); query.from(randomIntBetween(1, 100)); query.to(randomIntBetween(101, 200)); break; case 1: // use mapped date field, using date string representation - query = new RangeQueryBuilder(randomBoolean() ? DATE_FIELD_NAME : DATE_RANGE_FIELD_NAME); + query = new RangeQueryBuilder(DATE_FIELD_NAME); query.from(new DateTime(System.currentTimeMillis() - randomIntBetween(0, 1000000), DateTimeZone.UTC).toString()); query.to(new DateTime(System.currentTimeMillis() + randomIntBetween(0, 1000000), DateTimeZone.UTC).toString()); // Create timestamp option only then we have a date mapper, @@ -98,9 +99,6 @@ protected RangeQueryBuilder doCreateTestQueryBuilder() { if (randomBoolean()) { query.to(null); } - if (query.fieldName().equals(INT_RANGE_FIELD_NAME) || query.fieldName().equals(DATE_RANGE_FIELD_NAME)) { - query.relation(RandomPicks.randomFrom(random(), ShapeRelation.values()).getRelationName()); - } return query; } @@ -129,7 +127,15 @@ protected void doAssertLuceneQuery(RangeQueryBuilder queryBuilder, Query query, if (queryBuilder.from() == null && queryBuilder.to() == null) { final Query expectedQuery; if (getCurrentTypes().length > 0) { - expectedQuery = new ConstantScoreQuery(new TermQuery(new Term(FieldNamesFieldMapper.NAME, queryBuilder.fieldName()))); + if (context.mapperService().getIndexSettings().getIndexVersionCreated().onOrAfter(Version.V_6_1_0) + && context.mapperService().fullName(queryBuilder.fieldName()).hasDocValues()) { + expectedQuery = new ConstantScoreQuery(new DocValuesFieldExistsQuery(queryBuilder.fieldName())); + } else if (context.mapperService().getIndexSettings().getIndexVersionCreated().onOrAfter(Version.V_6_1_0) + && context.mapperService().fullName(queryBuilder.fieldName()).omitNorms() == false) { + expectedQuery = new ConstantScoreQuery(new NormsFieldExistsQuery(queryBuilder.fieldName())); + } else { + expectedQuery = new ConstantScoreQuery(new TermQuery(new Term(FieldNamesFieldMapper.NAME, queryBuilder.fieldName()))); + } } else { expectedQuery = new MatchNoDocsQuery("no mappings yet"); } @@ -137,9 +143,7 @@ protected void doAssertLuceneQuery(RangeQueryBuilder queryBuilder, Query query, } else if (getCurrentTypes().length == 0 || (queryBuilder.fieldName().equals(DATE_FIELD_NAME) == false - && queryBuilder.fieldName().equals(INT_FIELD_NAME) == false - && queryBuilder.fieldName().equals(DATE_RANGE_FIELD_NAME) == false - && queryBuilder.fieldName().equals(INT_RANGE_FIELD_NAME) == false)) { + && queryBuilder.fieldName().equals(INT_FIELD_NAME) == false)) { assertThat(query, instanceOf(TermRangeQuery.class)); TermRangeQuery termRangeQuery = (TermRangeQuery) query; assertThat(termRangeQuery.getField(), equalTo(queryBuilder.fieldName())); @@ -215,9 +219,6 @@ protected void doAssertLuceneQuery(RangeQueryBuilder queryBuilder, Query query, maxInt--; } } - } else if (queryBuilder.fieldName().equals(DATE_RANGE_FIELD_NAME) - || queryBuilder.fieldName().equals(INT_RANGE_FIELD_NAME)) { - // todo can't check RangeFieldQuery because its currently package private (this will change) } else { throw new UnsupportedOperationException(); } @@ -234,16 +235,6 @@ public void testIllegalArguments() { expectThrows(IllegalArgumentException.class, () -> rangeQueryBuilder.format("badFormat")); } - /** - * Specifying a timezone together with a numeric range query should 
throw an exception. - */ - public void testToQueryNonDateWithTimezone() throws QueryShardException { - RangeQueryBuilder query = new RangeQueryBuilder(INT_FIELD_NAME); - query.from(1).to(10).timeZone("UTC"); - QueryShardException e = expectThrows(QueryShardException.class, () -> query.toQuery(createShardContext())); - assertThat(e.getMessage(), containsString("[range] time_zone can not be applied")); - } - /** * Specifying a timezone together with an unmapped field should throw an exception. */ @@ -364,7 +355,7 @@ public void testDateRangeQueryTimezone() throws IOException { " }\n" + "}"; QueryBuilder queryBuilder = parseQuery(query); - expectThrows(QueryShardException.class, () -> queryBuilder.toQuery(createShardContext())); + queryBuilder.toQuery(createShardContext()); // no exception } public void testFromJson() throws IOException { @@ -402,25 +393,10 @@ public void testNamedQueryParsing() throws IOException { " }\n" + "}"; assertNotNull(parseQuery(json)); - - final String deprecatedJson = - "{\n" + - " \"range\" : {\n" + - " \"timestamp\" : {\n" + - " \"from\" : \"2015-01-01 00:00:00\",\n" + - " \"to\" : \"now\",\n" + - " \"boost\" : 1.0\n" + - " },\n" + - " \"_name\" : \"my_range\"\n" + - " }\n" + - "}"; - - assertNotNull(parseQuery(deprecatedJson)); - assertWarnings("Deprecated field [_name] used, replaced by [query name is not supported in short version of range query]"); } public void testRewriteDateToMatchAll() throws IOException { - String fieldName = randomAlphaOfLengthBetween(1, 20); + String fieldName = DATE_FIELD_NAME; RangeQueryBuilder query = new RangeQueryBuilder(fieldName) { @Override protected MappedFieldType.Relation getRelation(QueryRewriteContext queryRewriteContext) { @@ -443,7 +419,12 @@ protected MappedFieldType.Relation getRelation(QueryRewriteContext queryRewriteC final Query luceneQuery = rewrittenRange.toQuery(queryShardContext); final Query expectedQuery; if (getCurrentTypes().length > 0) { - expectedQuery = new ConstantScoreQuery(new TermQuery(new Term(FieldNamesFieldMapper.NAME, query.fieldName()))); + if (queryShardContext.getIndexSettings().getIndexVersionCreated().onOrAfter(Version.V_6_1_0) + && queryShardContext.fieldMapper(query.fieldName()).hasDocValues()) { + expectedQuery = new ConstantScoreQuery(new DocValuesFieldExistsQuery(query.fieldName())); + } else { + expectedQuery = new ConstantScoreQuery(new TermQuery(new Term(FieldNamesFieldMapper.NAME, query.fieldName()))); + } } else { expectedQuery = new MatchNoDocsQuery("no mappings yet"); } @@ -451,7 +432,7 @@ protected MappedFieldType.Relation getRelation(QueryRewriteContext queryRewriteC } public void testRewriteDateToMatchAllWithTimezoneAndFormat() throws IOException { - String fieldName = randomAlphaOfLengthBetween(1, 20); + String fieldName = DATE_FIELD_NAME; RangeQueryBuilder query = new RangeQueryBuilder(fieldName) { @Override protected MappedFieldType.Relation getRelation(QueryRewriteContext queryRewriteContext) { @@ -556,4 +537,29 @@ public void testParseFailsWithMultipleFieldsWhenOneIsDate() { ParsingException e = expectThrows(ParsingException.class, () -> parseQuery(json)); assertEquals("[range] query doesn't support multiple fields, found [age] and [" + DATE_FIELD_NAME + "]", e.getMessage()); } + + public void testParseRelation() { + String json = + "{\n" + + " \"range\": {\n" + + " \"age\": {\n" + + " \"gte\": 30,\n" + + " \"lte\": 40,\n" + + " \"relation\": \"disjoint\"\n" + + " }" + + " }\n" + + " }"; + String fieldName = randomAlphaOfLengthBetween(1, 20); + IllegalArgumentException e1 
= expectThrows(IllegalArgumentException.class, () -> parseQuery(json)); + assertEquals("[range] query does not support relation [disjoint]", e1.getMessage()); + RangeQueryBuilder builder = new RangeQueryBuilder(fieldName); + IllegalArgumentException e2 = expectThrows(IllegalArgumentException.class, ()->builder.relation("disjoint")); + assertEquals("[range] query does not support relation [disjoint]", e2.getMessage()); + builder.relation("contains"); + assertEquals(ShapeRelation.CONTAINS, builder.relation()); + builder.relation("within"); + assertEquals(ShapeRelation.WITHIN, builder.relation()); + builder.relation("intersects"); + assertEquals(ShapeRelation.INTERSECTS, builder.relation()); + } } diff --git a/core/src/test/java/org/elasticsearch/index/query/ScriptQueryBuilderTests.java b/core/src/test/java/org/elasticsearch/index/query/ScriptQueryBuilderTests.java index d273825f9794c..3e805f2b8dcac 100644 --- a/core/src/test/java/org/elasticsearch/index/query/ScriptQueryBuilderTests.java +++ b/core/src/test/java/org/elasticsearch/index/query/ScriptQueryBuilderTests.java @@ -52,7 +52,7 @@ protected void doAssertLuceneQuery(ScriptQueryBuilder queryBuilder, Query query, assertThat(query, instanceOf(ScriptQueryBuilder.ScriptQuery.class)); // make sure the query would not get cached ScriptQuery sQuery = (ScriptQuery) query; - ScriptQuery clone = new ScriptQuery(sQuery.script, sQuery.searchScript); + ScriptQuery clone = new ScriptQuery(sQuery.script, sQuery.filterScript); assertFalse(sQuery.equals(clone)); assertFalse(sQuery.hashCode() == clone.hashCode()); } diff --git a/core/src/test/java/org/elasticsearch/index/query/SimpleQueryParserTests.java b/core/src/test/java/org/elasticsearch/index/query/SimpleQueryParserTests.java deleted file mode 100644 index 2516a3abc094f..0000000000000 --- a/core/src/test/java/org/elasticsearch/index/query/SimpleQueryParserTests.java +++ /dev/null @@ -1,208 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.index.query; - -import org.apache.lucene.analysis.Analyzer; -import org.apache.lucene.analysis.MockSynonymAnalyzer; -import org.apache.lucene.analysis.standard.StandardAnalyzer; -import org.apache.lucene.index.Term; -import org.apache.lucene.search.BooleanClause; -import org.apache.lucene.search.BooleanQuery; -import org.apache.lucene.search.PrefixQuery; -import org.apache.lucene.search.Query; -import org.apache.lucene.search.SynonymQuery; -import org.apache.lucene.search.TermQuery; -import org.apache.lucene.search.spans.SpanNearQuery; -import org.apache.lucene.search.spans.SpanOrQuery; -import org.apache.lucene.search.spans.SpanQuery; -import org.apache.lucene.search.spans.SpanTermQuery; -import org.elasticsearch.Version; -import org.elasticsearch.cluster.metadata.IndexMetaData; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.index.IndexSettings; -import org.elasticsearch.index.mapper.MappedFieldType; -import org.elasticsearch.index.mapper.MockFieldMapper; -import org.elasticsearch.test.ESTestCase; - -import java.util.Collections; -import java.util.HashMap; -import java.util.Map; - -import static org.hamcrest.Matchers.equalTo; - -public class SimpleQueryParserTests extends ESTestCase { - private static class MockSimpleQueryParser extends SimpleQueryParser { - MockSimpleQueryParser(Analyzer analyzer, Map weights, int flags, Settings settings) { - super(analyzer, weights, flags, settings, null); - } - - @Override - protected Query newTermQuery(Term term) { - return new TermQuery(term); - } - } - - public void testAnalyzeWildcard() { - SimpleQueryParser.Settings settings = new SimpleQueryParser.Settings(); - settings.analyzeWildcard(true); - Map weights = new HashMap<>(); - weights.put("field1", 1.0f); - SimpleQueryParser parser = new MockSimpleQueryParser(new StandardAnalyzer(), weights, -1, settings); - for (Operator op : Operator.values()) { - BooleanClause.Occur defaultOp = op.toBooleanClauseOccur(); - parser.setDefaultOperator(defaultOp); - Query query = parser.parse("first foo-bar-foobar* last"); - Query expectedQuery = - new BooleanQuery.Builder() - .add(new BooleanClause(new TermQuery(new Term("field1", "first")), defaultOp)) - .add(new BooleanQuery.Builder() - .add(new BooleanClause(new TermQuery(new Term("field1", "foo")), defaultOp)) - .add(new BooleanClause(new TermQuery(new Term("field1", "bar")), defaultOp)) - .add(new BooleanClause(new PrefixQuery(new Term("field1", "foobar")), defaultOp)) - .build(), defaultOp) - .add(new BooleanClause(new TermQuery(new Term("field1", "last")), defaultOp)) - .build(); - assertThat(query, equalTo(expectedQuery)); - } - } - - public void testAnalyzerWildcardWithSynonyms() { - SimpleQueryParser.Settings settings = new SimpleQueryParser.Settings(); - settings.analyzeWildcard(true); - Map weights = new HashMap<>(); - weights.put("field1", 1.0f); - SimpleQueryParser parser = new MockSimpleQueryParser(new MockRepeatAnalyzer(), weights, -1, settings); - - for (Operator op : Operator.values()) { - BooleanClause.Occur defaultOp = op.toBooleanClauseOccur(); - parser.setDefaultOperator(defaultOp); - Query query = parser.parse("first foo-bar-foobar* last"); - - Query expectedQuery = new BooleanQuery.Builder() - .add(new BooleanClause(new SynonymQuery(new Term("field1", "first"), - new Term("field1", "first")), defaultOp)) - .add(new BooleanQuery.Builder() - .add(new BooleanClause(new SynonymQuery(new Term("field1", "foo"), - new Term("field1", "foo")), defaultOp)) - .add(new BooleanClause(new 
SynonymQuery(new Term("field1", "bar"), - new Term("field1", "bar")), defaultOp)) - .add(new BooleanQuery.Builder() - .add(new BooleanClause(new PrefixQuery(new Term("field1", "foobar")), - BooleanClause.Occur.SHOULD)) - .add(new BooleanClause(new PrefixQuery(new Term("field1", "foobar")), - BooleanClause.Occur.SHOULD)) - .build(), defaultOp) - .build(), defaultOp) - .add(new BooleanClause(new SynonymQuery(new Term("field1", "last"), - new Term("field1", "last")), defaultOp)) - .build(); - assertThat(query, equalTo(expectedQuery)); - } - } - - public void testAnalyzerWithGraph() { - SimpleQueryParser.Settings settings = new SimpleQueryParser.Settings(); - settings.analyzeWildcard(true); - Map weights = new HashMap<>(); - weights.put("field1", 1.0f); - SimpleQueryParser parser = new MockSimpleQueryParser(new MockSynonymAnalyzer(), weights, -1, settings); - - for (Operator op : Operator.values()) { - BooleanClause.Occur defaultOp = op.toBooleanClauseOccur(); - parser.setDefaultOperator(defaultOp); - - // non-phrase won't detect multi-word synonym because of whitespace splitting - Query query = parser.parse("guinea pig"); - - Query expectedQuery = new BooleanQuery.Builder() - .add(new BooleanClause(new TermQuery(new Term("field1", "guinea")), defaultOp)) - .add(new BooleanClause(new TermQuery(new Term("field1", "pig")), defaultOp)) - .build(); - assertThat(query, equalTo(expectedQuery)); - - // phrase will pick it up - query = parser.parse("\"guinea pig\""); - SpanTermQuery span1 = new SpanTermQuery(new Term("field1", "guinea")); - SpanTermQuery span2 = new SpanTermQuery(new Term("field1", "pig")); - expectedQuery = new SpanOrQuery( - new SpanNearQuery(new SpanQuery[] { span1, span2 }, 0, true), - new SpanTermQuery(new Term("field1", "cavy"))); - - assertThat(query, equalTo(expectedQuery)); - - // phrase with slop - query = parser.parse("big \"tiny guinea pig\"~2"); - - expectedQuery = new BooleanQuery.Builder() - .add(new TermQuery(new Term("field1", "big")), defaultOp) - .add(new SpanNearQuery(new SpanQuery[] { - new SpanTermQuery(new Term("field1", "tiny")), - new SpanOrQuery( - new SpanNearQuery(new SpanQuery[] { span1, span2 }, 0, true), - new SpanTermQuery(new Term("field1", "cavy")) - ) - }, 2, true), defaultOp) - .build(); - assertThat(query, equalTo(expectedQuery)); - } - } - - public void testQuoteFieldSuffix() { - SimpleQueryParser.Settings sqpSettings = new SimpleQueryParser.Settings(); - sqpSettings.quoteFieldSuffix(".quote"); - - Settings indexSettings = Settings.builder() - .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1) - .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0) - .put(IndexMetaData.SETTING_INDEX_UUID, "some_uuid") - .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) - .build(); - IndexMetaData indexState = IndexMetaData.builder("index").settings(indexSettings).build(); - IndexSettings settings = new IndexSettings(indexState, Settings.EMPTY); - QueryShardContext mockShardContext = new QueryShardContext(0, settings, null, null, null, null, null, xContentRegistry(), - writableRegistry(), null, null, System::currentTimeMillis, null) { - @Override - public MappedFieldType fieldMapper(String name) { - return new MockFieldMapper.FakeFieldType(); - } - }; - - SimpleQueryParser parser = new SimpleQueryParser(new StandardAnalyzer(), - Collections.singletonMap("foo", 1f), -1, sqpSettings, mockShardContext); - assertEquals(new TermQuery(new Term("foo", "bar")), parser.parse("bar")); - assertEquals(new TermQuery(new Term("foo.quote", "bar")), parser.parse("\"bar\"")); - 
- // Now check what happens if foo.quote does not exist - mockShardContext = new QueryShardContext(0, settings, null, null, null, null, null, xContentRegistry(), - writableRegistry(), null, null, System::currentTimeMillis, null) { - @Override - public MappedFieldType fieldMapper(String name) { - if (name.equals("foo.quote")) { - return null; - } - return new MockFieldMapper.FakeFieldType(); - } - }; - parser = new SimpleQueryParser(new StandardAnalyzer(), - Collections.singletonMap("foo", 1f), -1, sqpSettings, mockShardContext); - assertEquals(new TermQuery(new Term("foo", "bar")), parser.parse("bar")); - assertEquals(new TermQuery(new Term("foo", "bar")), parser.parse("\"bar\"")); - } -} diff --git a/core/src/test/java/org/elasticsearch/index/query/SimpleQueryStringBuilderTests.java b/core/src/test/java/org/elasticsearch/index/query/SimpleQueryStringBuilderTests.java index c0c70559110a6..bfc6fd0600493 100644 --- a/core/src/test/java/org/elasticsearch/index/query/SimpleQueryStringBuilderTests.java +++ b/core/src/test/java/org/elasticsearch/index/query/SimpleQueryStringBuilderTests.java @@ -19,6 +19,8 @@ package org.elasticsearch.index.query; +import org.apache.lucene.analysis.MockSynonymAnalyzer; +import org.apache.lucene.analysis.standard.StandardAnalyzer; import org.apache.lucene.index.Term; import org.apache.lucene.search.BooleanClause; import org.apache.lucene.search.BooleanQuery; @@ -29,27 +31,32 @@ import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.PrefixQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.SynonymQuery; import org.apache.lucene.search.TermQuery; +import org.apache.lucene.search.spans.SpanNearQuery; +import org.apache.lucene.search.spans.SpanOrQuery; +import org.apache.lucene.search.spans.SpanQuery; +import org.apache.lucene.search.spans.SpanTermQuery; import org.apache.lucene.util.TestUtil; import org.elasticsearch.Version; -import org.elasticsearch.cluster.metadata.MetaData; -import org.elasticsearch.common.ParsingException; -import org.elasticsearch.index.mapper.MapperService; +import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.search.SimpleQueryStringQueryParser; import org.elasticsearch.search.internal.SearchContext; import org.elasticsearch.test.AbstractQueryTestCase; import java.io.IOException; +import java.util.Arrays; +import java.util.Collections; import java.util.HashMap; import java.util.HashSet; -import java.util.Iterator; import java.util.Locale; import java.util.Map; import java.util.Set; import static org.hamcrest.Matchers.anyOf; -import static org.hamcrest.Matchers.containsString; +import static org.hamcrest.Matchers.either; import static org.hamcrest.Matchers.equalTo; -import static org.hamcrest.Matchers.greaterThan; import static org.hamcrest.Matchers.instanceOf; import static org.hamcrest.Matchers.is; import static org.hamcrest.Matchers.notNullValue; @@ -85,19 +92,28 @@ protected SimpleQueryStringBuilder doCreateTestQueryBuilder() { } } - int fieldCount = randomIntBetween(0, 10); + int fieldCount = randomIntBetween(0, 2); Map fields = new HashMap<>(); for (int i = 0; i < fieldCount; i++) { if (randomBoolean()) { - fields.put(randomAlphaOfLengthBetween(1, 10), AbstractQueryBuilder.DEFAULT_BOOST); + fields.put(STRING_FIELD_NAME, AbstractQueryBuilder.DEFAULT_BOOST); } else { - fields.put(randomBoolean() ? 
STRING_FIELD_NAME : randomAlphaOfLengthBetween(1, 10), 2.0f / randomIntBetween(1, 20)); + fields.put(STRING_FIELD_NAME_2, 2.0f / randomIntBetween(1, 20)); } } result.fields(fields); if (randomBoolean()) { result.autoGenerateSynonymsPhraseQuery(randomBoolean()); } + if (randomBoolean()) { + result.fuzzyPrefixLength(randomIntBetween(0, 5)); + } + if (randomBoolean()) { + result.fuzzyMaxExpansions(randomIntBetween(1, 5)); + } + if (randomBoolean()) { + result.fuzzyTranspositions(randomBoolean()); + } return result; } @@ -119,6 +135,18 @@ public void testDefaults() { assertEquals("Wrong default default lenient.", false, qb.lenient()); assertEquals("Wrong default default lenient field.", false, SimpleQueryStringBuilder.DEFAULT_LENIENT); + + assertEquals("Wrong default default fuzzy prefix length.", FuzzyQuery.defaultPrefixLength, qb.fuzzyPrefixLength()); + assertEquals("Wrong default default fuzzy prefix length field.", + FuzzyQuery.defaultPrefixLength, SimpleQueryStringBuilder.DEFAULT_FUZZY_PREFIX_LENGTH); + + assertEquals("Wrong default default fuzzy max expansions.", FuzzyQuery.defaultMaxExpansions, qb.fuzzyMaxExpansions()); + assertEquals("Wrong default default fuzzy max expansions field.", + FuzzyQuery.defaultMaxExpansions, SimpleQueryStringBuilder.DEFAULT_FUZZY_MAX_EXPANSIONS); + + assertEquals("Wrong default default fuzzy transpositions.", FuzzyQuery.defaultTranspositions, qb.fuzzyTranspositions()); + assertEquals("Wrong default default fuzzy transpositions field.", + FuzzyQuery.defaultTranspositions, SimpleQueryStringBuilder.DEFAULT_FUZZY_TRANSPOSITIONS); } public void testDefaultNullComplainFlags() { @@ -234,52 +262,38 @@ public void testDefaultFieldParsing() throws IOException { protected void doAssertLuceneQuery(SimpleQueryStringBuilder queryBuilder, Query query, SearchContext context) throws IOException { assertThat(query, notNullValue()); - if ("".equals(queryBuilder.value())) { + if (queryBuilder.value().isEmpty()) { assertThat(query, instanceOf(MatchNoDocsQuery.class)); } else if (queryBuilder.fields().size() > 1) { - assertThat(query, anyOf(instanceOf(BooleanQuery.class), instanceOf(DisjunctionMaxQuery.class))); - if (query instanceof BooleanQuery) { - BooleanQuery boolQuery = (BooleanQuery) query; - for (BooleanClause clause : boolQuery.clauses()) { - if (clause.getQuery() instanceof TermQuery) { - TermQuery inner = (TermQuery) clause.getQuery(); - assertThat(inner.getTerm().bytes().toString(), is(inner.getTerm().bytes().toString().toLowerCase(Locale.ROOT))); - } - } - assertThat(boolQuery.clauses().size(), equalTo(queryBuilder.fields().size())); - Iterator> fieldsIterator = queryBuilder.fields().entrySet().iterator(); - for (BooleanClause booleanClause : boolQuery) { - Map.Entry field = fieldsIterator.next(); - assertTermOrBoostQuery(booleanClause.getQuery(), field.getKey(), queryBuilder.value(), field.getValue()); - } - if (queryBuilder.minimumShouldMatch() != null) { - assertThat(boolQuery.getMinimumNumberShouldMatch(), greaterThan(0)); - } - } else if (query instanceof DisjunctionMaxQuery) { - DisjunctionMaxQuery maxQuery = (DisjunctionMaxQuery) query; - for (Query disjunct : maxQuery.getDisjuncts()) { - if (disjunct instanceof TermQuery) { - TermQuery inner = (TermQuery) disjunct; - assertThat(inner.getTerm().bytes().toString(), is(inner.getTerm().bytes().toString().toLowerCase(Locale.ROOT))); - } + assertThat(query, instanceOf(DisjunctionMaxQuery.class)); + DisjunctionMaxQuery maxQuery = (DisjunctionMaxQuery) query; + for (Query disjunct : maxQuery.getDisjuncts()) { + 
assertThat(disjunct, either(instanceOf(TermQuery.class)) + .or(instanceOf(BoostQuery.class)) + .or(instanceOf(MatchNoDocsQuery.class))); + Query termQuery = disjunct; + if (disjunct instanceof BoostQuery) { + termQuery = ((BoostQuery) disjunct).getQuery(); } - assertThat(maxQuery.getDisjuncts().size(), equalTo(queryBuilder.fields().size())); - Iterator> fieldsIterator = queryBuilder.fields().entrySet().iterator(); - for (Query disjunct : maxQuery) { - Map.Entry field = fieldsIterator.next(); - assertTermOrBoostQuery(disjunct, field.getKey(), queryBuilder.value(), field.getValue()); + if (termQuery instanceof TermQuery) { + TermQuery inner = (TermQuery) termQuery; + assertThat(inner.getTerm().bytes().toString(), is(inner.getTerm().bytes().toString().toLowerCase(Locale.ROOT))); + } else { + assertThat(termQuery, instanceOf(MatchNoDocsQuery.class)); } } } else if (queryBuilder.fields().size() == 1) { Map.Entry field = queryBuilder.fields().entrySet().iterator().next(); - assertTermOrBoostQuery(query, field.getKey(), queryBuilder.value(), field.getValue()); + if (query instanceof MatchNoDocsQuery == false) { + assertTermOrBoostQuery(query, field.getKey(), queryBuilder.value(), field.getValue()); + } } else if (queryBuilder.fields().size() == 0) { - MapperService ms = context.mapperService(); - if (ms.allEnabled()) { - assertTermQuery(query, MetaData.ALL, queryBuilder.value()); - } else { - assertThat(query.getClass(), - anyOf(equalTo(BooleanQuery.class), equalTo(DisjunctionMaxQuery.class), equalTo(MatchNoDocsQuery.class))); + assertThat(query, either(instanceOf(DisjunctionMaxQuery.class)) + .or(instanceOf(MatchNoDocsQuery.class)).or(instanceOf(TermQuery.class))); + if (query instanceof DisjunctionMaxQuery) { + for (Query disjunct : (DisjunctionMaxQuery) query) { + assertThat(disjunct, either(instanceOf(TermQuery.class)).or(instanceOf(MatchNoDocsQuery.class))); + } } } else { fail("Encountered lucene query type we do not have a validation implementation for in our " @@ -335,7 +349,7 @@ public void testFromJson() throws IOException { "{\n" + " \"simple_query_string\" : {\n" + " \"query\" : \"\\\"fried eggs\\\" +(eggplant | potato) -frittata\",\n" + - " \"fields\" : [ \"_all^1.0\", \"body^5.0\" ],\n" + + " \"fields\" : [ \"body^5.0\" ],\n" + " \"analyzer\" : \"snowball\",\n" + " \"flags\" : -1,\n" + " \"default_operator\" : \"and\",\n" + @@ -343,6 +357,9 @@ public void testFromJson() throws IOException { " \"analyze_wildcard\" : false,\n" + " \"quote_field_suffix\" : \".quote\",\n" + " \"auto_generate_synonyms_phrase_query\" : true,\n" + + " \"fuzzy_prefix_length\" : 1,\n" + + " \"fuzzy_max_expansions\" : 5,\n" + + " \"fuzzy_transpositions\" : false,\n" + " \"boost\" : 1.0\n" + " }\n" + "}"; @@ -351,12 +368,16 @@ public void testFromJson() throws IOException { checkGeneratedJson(json, parsed); assertEquals(json, "\"fried eggs\" +(eggplant | potato) -frittata", parsed.value()); - assertEquals(json, 2, parsed.fields().size()); + assertEquals(json, 1, parsed.fields().size()); assertEquals(json, "snowball", parsed.analyzer()); assertEquals(json, ".quote", parsed.quoteFieldSuffix()); + assertEquals(json, 1, parsed.fuzzyPrefixLength()); + assertEquals(json, 5, parsed.fuzzyMaxExpansions()); + assertEquals(json, false, parsed.fuzzyTranspositions()); } public void testMinimumShouldMatch() throws IOException { + assumeTrue("test runs only when at least a type is registered", getCurrentTypes().length > 0); QueryShardContext shardContext = createShardContext(); int numberOfTerms = randomIntBetween(1, 4); 
StringBuilder queryString = new StringBuilder(); @@ -369,7 +390,7 @@ public void testMinimumShouldMatch() throws IOException { } int numberOfFields = randomIntBetween(1, 4); for (int i = 0; i < numberOfFields; i++) { - simpleQueryStringBuilder.field("f" + i); + simpleQueryStringBuilder.field(STRING_FIELD_NAME); } int percent = randomIntBetween(1, 100); simpleQueryStringBuilder.minimumShouldMatch(percent + "%"); @@ -379,7 +400,7 @@ public void testMinimumShouldMatch() throws IOException { if (numberOfFields * numberOfTerms == 1) { assertThat(query, instanceOf(TermQuery.class)); } else if (numberOfTerms == 1) { - assertThat(query, instanceOf(DisjunctionMaxQuery.class)); + assertThat(query, either(instanceOf(DisjunctionMaxQuery.class)).or(instanceOf(TermQuery.class))); } else { assertThat(query, instanceOf(BooleanQuery.class)); BooleanQuery boolQuery = (BooleanQuery) query; @@ -403,6 +424,7 @@ public void testIndexMetaField() throws IOException { } public void testExpandedTerms() throws Exception { + assumeTrue("test runs only when at least a type is registered", getCurrentTypes().length > 0); // Prefix Query query = new SimpleQueryStringBuilder("aBc*") .field(STRING_FIELD_NAME) @@ -430,18 +452,165 @@ public void testExpandedTerms() throws Exception { assertEquals(expected, query); } - public void testAllFieldsWithFields() throws IOException { - String json = - "{\n" + - " \"simple_query_string\" : {\n" + - " \"query\" : \"this that thus\",\n" + - " \"fields\" : [\"foo\"],\n" + - " \"all_fields\" : true\n" + - " }\n" + - "}"; + public void testAnalyzeWildcard() throws IOException { + assumeTrue("test runs only when at least a type is registered", getCurrentTypes().length > 0); + SimpleQueryStringQueryParser.Settings settings = new SimpleQueryStringQueryParser.Settings(); + settings.analyzeWildcard(true); + SimpleQueryStringQueryParser parser = new SimpleQueryStringQueryParser(new StandardAnalyzer(), + Collections.singletonMap(STRING_FIELD_NAME, 1.0f), -1, settings, createShardContext()); + for (Operator op : Operator.values()) { + BooleanClause.Occur defaultOp = op.toBooleanClauseOccur(); + parser.setDefaultOperator(defaultOp); + Query query = parser.parse("first foo-bar-foobar* last"); + Query expectedQuery = + new BooleanQuery.Builder() + .add(new BooleanClause(new TermQuery(new Term(STRING_FIELD_NAME, "first")), defaultOp)) + .add(new BooleanQuery.Builder() + .add(new BooleanClause(new TermQuery(new Term(STRING_FIELD_NAME, "foo")), defaultOp)) + .add(new BooleanClause(new TermQuery(new Term(STRING_FIELD_NAME, "bar")), defaultOp)) + .add(new BooleanClause(new PrefixQuery(new Term(STRING_FIELD_NAME, "foobar")), defaultOp)) + .build(), defaultOp) + .add(new BooleanClause(new TermQuery(new Term(STRING_FIELD_NAME, "last")), defaultOp)) + .build(); + assertThat(query, equalTo(expectedQuery)); + } + } + + public void testAnalyzerWildcardWithSynonyms() throws IOException { + assumeTrue("test runs only when at least a type is registered", getCurrentTypes().length > 0); + SimpleQueryStringQueryParser.Settings settings = new SimpleQueryStringQueryParser.Settings(); + settings.analyzeWildcard(true); + SimpleQueryStringQueryParser parser = new SimpleQueryStringQueryParser(new MockRepeatAnalyzer(), + Collections.singletonMap(STRING_FIELD_NAME, 1.0f), -1, settings, createShardContext()); + for (Operator op : Operator.values()) { + BooleanClause.Occur defaultOp = op.toBooleanClauseOccur(); + parser.setDefaultOperator(defaultOp); + Query query = parser.parse("first foo-bar-foobar* last"); + Query 
expectedQuery = new BooleanQuery.Builder() + .add(new BooleanClause(new SynonymQuery(new Term(STRING_FIELD_NAME, "first"), + new Term(STRING_FIELD_NAME, "first")), defaultOp)) + .add(new BooleanQuery.Builder() + .add(new BooleanClause(new SynonymQuery(new Term(STRING_FIELD_NAME, "foo"), + new Term(STRING_FIELD_NAME, "foo")), defaultOp)) + .add(new BooleanClause(new SynonymQuery(new Term(STRING_FIELD_NAME, "bar"), + new Term(STRING_FIELD_NAME, "bar")), defaultOp)) + .add(new BooleanQuery.Builder() + .add(new BooleanClause(new PrefixQuery(new Term(STRING_FIELD_NAME, "foobar")), + BooleanClause.Occur.SHOULD)) + .add(new BooleanClause(new PrefixQuery(new Term(STRING_FIELD_NAME, "foobar")), + BooleanClause.Occur.SHOULD)) + .build(), defaultOp) + .build(), defaultOp) + .add(new BooleanClause(new SynonymQuery(new Term(STRING_FIELD_NAME, "last"), + new Term(STRING_FIELD_NAME, "last")), defaultOp)) + .build(); + assertThat(query, equalTo(expectedQuery)); + } + } + + public void testAnalyzerWithGraph() { + assumeTrue("test runs only when at least a type is registered", getCurrentTypes().length > 0); + SimpleQueryStringQueryParser.Settings settings = new SimpleQueryStringQueryParser.Settings(); + settings.analyzeWildcard(true); + SimpleQueryStringQueryParser parser = new SimpleQueryStringQueryParser(new MockSynonymAnalyzer(), + Collections.singletonMap(STRING_FIELD_NAME, 1.0f), -1, settings, createShardContext()); + for (Operator op : Operator.values()) { + BooleanClause.Occur defaultOp = op.toBooleanClauseOccur(); + parser.setDefaultOperator(defaultOp); + // non-phrase won't detect multi-word synonym because of whitespace splitting + Query query = parser.parse("guinea pig"); + + Query expectedQuery = new BooleanQuery.Builder() + .add(new BooleanClause(new TermQuery(new Term(STRING_FIELD_NAME, "guinea")), defaultOp)) + .add(new BooleanClause(new TermQuery(new Term(STRING_FIELD_NAME, "pig")), defaultOp)) + .build(); + assertThat(query, equalTo(expectedQuery)); + + // phrase will pick it up + query = parser.parse("\"guinea pig\""); + SpanTermQuery span1 = new SpanTermQuery(new Term(STRING_FIELD_NAME, "guinea")); + SpanTermQuery span2 = new SpanTermQuery(new Term(STRING_FIELD_NAME, "pig")); + expectedQuery = new SpanOrQuery( + new SpanNearQuery(new SpanQuery[] { span1, span2 }, 0, true), + new SpanTermQuery(new Term(STRING_FIELD_NAME, "cavy"))); + + assertThat(query, equalTo(expectedQuery)); + + // phrase with slop + query = parser.parse("big \"tiny guinea pig\"~2"); + + expectedQuery = new BooleanQuery.Builder() + .add(new TermQuery(new Term(STRING_FIELD_NAME, "big")), defaultOp) + .add(new SpanNearQuery(new SpanQuery[] { + new SpanTermQuery(new Term(STRING_FIELD_NAME, "tiny")), + new SpanOrQuery( + new SpanNearQuery(new SpanQuery[] { span1, span2 }, 0, true), + new SpanTermQuery(new Term(STRING_FIELD_NAME, "cavy")) + ) + }, 2, true), defaultOp) + .build(); + assertThat(query, equalTo(expectedQuery)); + } + } + + public void testQuoteFieldSuffix() { + assumeTrue("test runs only when at least a type is registered", getCurrentTypes().length > 0); + SimpleQueryStringQueryParser.Settings settings = new SimpleQueryStringQueryParser.Settings(); + settings.analyzeWildcard(true); + settings.quoteFieldSuffix("_2"); + SimpleQueryStringQueryParser parser = new SimpleQueryStringQueryParser(new MockSynonymAnalyzer(), + Collections.singletonMap(STRING_FIELD_NAME, 1.0f), -1, settings, createShardContext()); + assertEquals(new TermQuery(new Term(STRING_FIELD_NAME, "bar")), parser.parse("bar")); + assertEquals(new 
TermQuery(new Term(STRING_FIELD_NAME_2, "bar")), parser.parse("\"bar\"")); + + // Now check what happens if the quote field does not exist + settings.quoteFieldSuffix(".quote"); + parser = new SimpleQueryStringQueryParser(new MockSynonymAnalyzer(), + Collections.singletonMap(STRING_FIELD_NAME, 1.0f), -1, settings, createShardContext()); + assertEquals(new TermQuery(new Term(STRING_FIELD_NAME, "bar")), parser.parse("bar")); + assertEquals(new TermQuery(new Term(STRING_FIELD_NAME, "bar")), parser.parse("\"bar\"")); + } + + public void testDefaultField() throws Exception { + assumeTrue("test runs only when at least a type is registered", getCurrentTypes().length > 0); + QueryShardContext context = createShardContext(); + context.getIndexSettings().updateIndexMetaData( + newIndexMeta("index", context.getIndexSettings().getSettings(), Settings.builder().putList("index.query.default_field", + STRING_FIELD_NAME, STRING_FIELD_NAME_2 + "^5").build()) + ); + Query query = new SimpleQueryStringBuilder("hello") + .toQuery(context); + Query expected = new DisjunctionMaxQuery( + Arrays.asList( + new TermQuery(new Term(STRING_FIELD_NAME, "hello")), + new BoostQuery(new TermQuery(new Term(STRING_FIELD_NAME_2, "hello")), 5.0f) + ), 1.0f + ); + assertEquals(expected, query); + // Reset the default value + context.getIndexSettings().updateIndexMetaData( + newIndexMeta("index", + context.getIndexSettings().getSettings(), Settings.builder().putList("index.query.default_field", "*").build()) + ); + } + + public void testToFuzzyQuery() throws Exception { + assumeTrue("test runs only when at least a type is registered", getCurrentTypes().length > 0); + + Query query = new SimpleQueryStringBuilder("text~2") + .field(STRING_FIELD_NAME) + .fuzzyPrefixLength(2) + .fuzzyMaxExpansions(5) + .fuzzyTranspositions(false) + .toQuery(createShardContext()); + FuzzyQuery expected = new FuzzyQuery(new Term(STRING_FIELD_NAME, "text"), 2, 2, 5, false); + assertEquals(expected, query); + } - ParsingException e = expectThrows(ParsingException.class, () -> parseQuery(json)); - assertThat(e.getMessage(), - containsString("cannot use [all_fields] parameter in conjunction with [fields]")); + private static IndexMetaData newIndexMeta(String name, Settings oldIndexSettings, Settings indexSettings) { + Settings build = Settings.builder().put(oldIndexSettings) + .put(indexSettings) + .build(); + return IndexMetaData.builder(name).settings(build).build(); } } diff --git a/core/src/test/java/org/elasticsearch/index/query/TermsQueryBuilderTests.java b/core/src/test/java/org/elasticsearch/index/query/TermsQueryBuilderTests.java index 6abff5fbcdec6..79f9af61408b2 100644 --- a/core/src/test/java/org/elasticsearch/index/query/TermsQueryBuilderTests.java +++ b/core/src/test/java/org/elasticsearch/index/query/TermsQueryBuilderTests.java @@ -79,8 +79,7 @@ protected TermsQueryBuilder doCreateTestQueryBuilder() { String fieldName; do { fieldName = getRandomFieldName(); - } while (fieldName.equals(GEO_POINT_FIELD_NAME) || fieldName.equals(GEO_SHAPE_FIELD_NAME) - || fieldName.equals(INT_RANGE_FIELD_NAME) || fieldName.equals(DATE_RANGE_FIELD_NAME)); + } while (fieldName.equals(GEO_POINT_FIELD_NAME) || fieldName.equals(GEO_SHAPE_FIELD_NAME)); Object[] values = new Object[randomInt(5)]; for (int i = 0; i < values.length; i++) { values[i] = getRandomValueForFieldName(fieldName); diff --git a/core/src/test/java/org/elasticsearch/index/query/TermsSetQueryBuilderTests.java b/core/src/test/java/org/elasticsearch/index/query/TermsSetQueryBuilderTests.java new 
file mode 100644
index 0000000000000..f3226acc2eae3
--- /dev/null
+++ b/core/src/test/java/org/elasticsearch/index/query/TermsSetQueryBuilderTests.java
@@ -0,0 +1,248 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.elasticsearch.index.query;
+
+import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
+import org.apache.lucene.document.Document;
+import org.apache.lucene.document.Field;
+import org.apache.lucene.document.NumericDocValuesField;
+import org.apache.lucene.document.SortedNumericDocValuesField;
+import org.apache.lucene.document.TextField;
+import org.apache.lucene.index.DirectoryReader;
+import org.apache.lucene.index.IndexReader;
+import org.apache.lucene.index.IndexWriter;
+import org.apache.lucene.index.IndexWriterConfig;
+import org.apache.lucene.index.NoMergePolicy;
+import org.apache.lucene.search.CoveringQuery;
+import org.apache.lucene.search.IndexSearcher;
+import org.apache.lucene.search.MatchNoDocsQuery;
+import org.apache.lucene.search.Query;
+import org.apache.lucene.search.Sort;
+import org.apache.lucene.search.SortField;
+import org.apache.lucene.search.TopDocs;
+import org.apache.lucene.store.Directory;
+import org.elasticsearch.action.admin.indices.mapping.put.PutMappingRequest;
+import org.elasticsearch.common.compress.CompressedXContent;
+import org.elasticsearch.index.fielddata.ScriptDocValues;
+import org.elasticsearch.index.mapper.MapperService;
+import org.elasticsearch.plugins.Plugin;
+import org.elasticsearch.script.MockScriptEngine;
+import org.elasticsearch.script.MockScriptPlugin;
+import org.elasticsearch.script.Script;
+import org.elasticsearch.script.ScriptType;
+import org.elasticsearch.search.internal.SearchContext;
+import org.elasticsearch.test.AbstractQueryTestCase;
+import org.elasticsearch.test.rest.yaml.ObjectPath;
+
+import java.io.IOException;
+import java.io.UncheckedIOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.function.Function;
+
+import static org.hamcrest.Matchers.containsString;
+import static org.hamcrest.Matchers.equalTo;
+import static org.hamcrest.Matchers.instanceOf;
+
+public class TermsSetQueryBuilderTests extends AbstractQueryTestCase<TermsSetQueryBuilder> {
+
+    @Override
+    protected Collection<Class<? extends Plugin>> getPlugins() {
+        return Collections.singleton(CustomScriptPlugin.class);
+    }
+
+    @Override
+    protected void initializeAdditionalMappings(MapperService mapperService) throws IOException {
+        String docType = "doc";
+        mapperService.merge(docType, new CompressedXContent(PutMappingRequest.buildFromSimplifiedDef(docType,
+                "m_s_m", "type=long"
+        ).string()), MapperService.MergeReason.MAPPING_UPDATE, false);
+    }
+
+    @Override
+    protected TermsSetQueryBuilder doCreateTestQueryBuilder() {
+        String fieldName;
+        do {
+            fieldName = randomFrom(MAPPED_FIELD_NAMES);
+        } while (fieldName.equals(GEO_POINT_FIELD_NAME) || fieldName.equals(GEO_SHAPE_FIELD_NAME));
+        int numValues = randomIntBetween(0, 10);
+        List<Object> randomTerms = new ArrayList<>(numValues);
+        for (int i = 0; i < numValues; i++) {
+            randomTerms.add(getRandomValueForFieldName(fieldName));
+        }
+        TermsSetQueryBuilder queryBuilder = new TermsSetQueryBuilder(STRING_FIELD_NAME, randomTerms);
+        if (randomBoolean()) {
+            queryBuilder.setMinimumShouldMatchField("m_s_m");
+        } else {
+            queryBuilder.setMinimumShouldMatchScript(
+                new Script(ScriptType.INLINE, MockScriptEngine.NAME, "_script", Collections.emptyMap()));
+        }
+        return queryBuilder;
+    }
+
+    @Override
+    protected void doAssertLuceneQuery(TermsSetQueryBuilder queryBuilder, Query query, SearchContext context) throws IOException {
+        if (queryBuilder.getValues().isEmpty()) {
+            assertThat(query, instanceOf(MatchNoDocsQuery.class));
+            MatchNoDocsQuery matchNoDocsQuery = (MatchNoDocsQuery) query;
+            assertThat(matchNoDocsQuery.toString(), containsString("No terms supplied for \"terms_set\" query."));
+        } else {
+            assertThat(query, instanceOf(CoveringQuery.class));
+        }
+    }
+
+    @Override
+    protected boolean isCachable(TermsSetQueryBuilder queryBuilder) {
+        return queryBuilder.getMinimumShouldMatchField() != null ||
+                (queryBuilder.getMinimumShouldMatchScript() != null && queryBuilder.getValues().isEmpty());
+    }
+
+    @Override
+    protected boolean builderGeneratesCacheableQueries() {
+        return false;
+    }
+
+    public void testBothFieldAndScriptSpecified() {
+        TermsSetQueryBuilder queryBuilder = new TermsSetQueryBuilder("_field", Collections.emptyList());
+        queryBuilder.setMinimumShouldMatchScript(new Script(""));
+        expectThrows(IllegalArgumentException.class, () -> queryBuilder.setMinimumShouldMatchField("_field"));
+
+        queryBuilder.setMinimumShouldMatchScript(null);
+        queryBuilder.setMinimumShouldMatchField("_field");
+        expectThrows(IllegalArgumentException.class, () -> queryBuilder.setMinimumShouldMatchScript(new Script("")));
+    }
+
+    public void testDoToQuery() throws Exception {
+        try (Directory directory = newDirectory()) {
+            IndexWriterConfig config = new IndexWriterConfig(new WhitespaceAnalyzer());
+            config.setMergePolicy(NoMergePolicy.INSTANCE);
+            try (IndexWriter iw = new IndexWriter(directory, config)) {
+                Document document = new Document();
+                document.add(new TextField("message", "a b", Field.Store.NO));
+                document.add(new SortedNumericDocValuesField("m_s_m", 1));
+                iw.addDocument(document);
+
+                document = new Document();
+                document.add(new TextField("message", "a b c", Field.Store.NO));
+                document.add(new SortedNumericDocValuesField("m_s_m", 1));
+                iw.addDocument(document);
+
+                document = new Document();
+                document.add(new TextField("message", "a b c", Field.Store.NO));
+                document.add(new SortedNumericDocValuesField("m_s_m", 2));
+                iw.addDocument(document);
+
+                document = new Document();
+                document.add(new TextField("message", "a b c d", Field.Store.NO));
+                document.add(new SortedNumericDocValuesField("m_s_m", 1));
+                iw.addDocument(document);
+
+                document = new Document();
+                document.add(new TextField("message", "a b c d", Field.Store.NO));
+                document.add(new SortedNumericDocValuesField("m_s_m", 2));
+                iw.addDocument(document);
+
+                document = new Document();
+                document.add(new TextField("message", "a b c d", Field.Store.NO));
+                document.add(new SortedNumericDocValuesField("m_s_m", 3));
+                iw.addDocument(document);
+            }
+
+            try (IndexReader ir = DirectoryReader.open(directory)) {
+                QueryShardContext context = createShardContext();
+                Query query = new TermsSetQueryBuilder("message", Arrays.asList("c", "d"))
+                        .setMinimumShouldMatchField("m_s_m").doToQuery(context);
+                IndexSearcher searcher = new IndexSearcher(ir);
+                TopDocs topDocs = searcher.search(query, 10, new Sort(SortField.FIELD_DOC));
+                assertThat(topDocs.totalHits, equalTo(3L));
+                assertThat(topDocs.scoreDocs[0].doc, equalTo(1));
+                assertThat(topDocs.scoreDocs[1].doc, equalTo(3));
+                assertThat(topDocs.scoreDocs[2].doc, equalTo(4));
+            }
+        }
+    }
+
+    public void testDoToQuery_msmScriptField() throws Exception {
+        try (Directory directory = newDirectory()) {
+            IndexWriterConfig config = new IndexWriterConfig(new WhitespaceAnalyzer());
+            config.setMergePolicy(NoMergePolicy.INSTANCE);
+            try (IndexWriter iw = new IndexWriter(directory, config)) {
+                Document document = new Document();
+                document.add(new TextField("message", "a b x y", Field.Store.NO));
+                document.add(new SortedNumericDocValuesField("m_s_m", 50));
+                iw.addDocument(document);
+
+                document = new Document();
+                document.add(new TextField("message", "a b x y", Field.Store.NO));
+                document.add(new SortedNumericDocValuesField("m_s_m", 75));
+                iw.addDocument(document);
+
+                document = new Document();
+                document.add(new TextField("message", "a b c x", Field.Store.NO));
+                document.add(new SortedNumericDocValuesField("m_s_m", 75));
+                iw.addDocument(document);
+
+                document = new Document();
+                document.add(new TextField("message", "a b c x", Field.Store.NO));
+                document.add(new SortedNumericDocValuesField("m_s_m", 100));
+                iw.addDocument(document);
+
+                document = new Document();
+                document.add(new TextField("message", "a b c d", Field.Store.NO));
+                document.add(new SortedNumericDocValuesField("m_s_m", 100));
+                iw.addDocument(document);
+            }
+
+            try (IndexReader ir = DirectoryReader.open(directory)) {
+                QueryShardContext context = createShardContext();
+                Script script = new Script(ScriptType.INLINE, MockScriptEngine.NAME, "_script", Collections.emptyMap());
+                Query query = new TermsSetQueryBuilder("message", Arrays.asList("a", "b", "c", "d"))
+                        .setMinimumShouldMatchScript(script).doToQuery(context);
+                IndexSearcher searcher = new IndexSearcher(ir);
+                TopDocs topDocs = searcher.search(query, 10, new Sort(SortField.FIELD_DOC));
+                assertThat(topDocs.totalHits, equalTo(3L));
+                assertThat(topDocs.scoreDocs[0].doc, equalTo(0));
+                assertThat(topDocs.scoreDocs[1].doc, equalTo(2));
+                assertThat(topDocs.scoreDocs[2].doc, equalTo(4));
+            }
+        }
+    }
+
+    public static class CustomScriptPlugin extends MockScriptPlugin {
+
+        @Override
+        protected Map<String, Function<Map<String, Object>, Object>> pluginScripts() {
+            return Collections.singletonMap("_script", args -> {
+                try {
+                    int clauseCount = ObjectPath.evaluate(args, "params.num_terms");
+                    long msm = ((ScriptDocValues.Longs) ObjectPath.evaluate(args, "doc.m_s_m")).getValue();
+                    return clauseCount * (msm / 100d);
+                } catch (IOException e) {
+                    throw new UncheckedIOException(e);
+                }
+            });
+        }
+    }
+
+}
+
diff --git a/core/src/test/java/org/elasticsearch/index/replication/ESIndexLevelReplicationTestCase.java b/core/src/test/java/org/elasticsearch/index/replication/ESIndexLevelReplicationTestCase.java
index c38d3434c3b8f..7e8949cd15fbf 100644
--- a/core/src/test/java/org/elasticsearch/index/replication/ESIndexLevelReplicationTestCase.java
+++ b/core/src/test/java/org/elasticsearch/index/replication/ESIndexLevelReplicationTestCase.java
@@ -141,7 +141,7 @@ protected class ReplicationGroup implements AutoCloseable, 
Iterable ReplicationGroup(final IndexMetaData indexMetaData) throws IOException { final ShardRouting primaryRouting = this.createShardRouting("s0", true); - primary = newShard(primaryRouting, indexMetaData, null, getEngineFactory(primaryRouting)); + primary = newShard(primaryRouting, indexMetaData, null, getEngineFactory(primaryRouting), () -> {}); replicas = new ArrayList<>(); this.indexMetaData = indexMetaData; updateAllocationIDsOnPrimary(); @@ -238,7 +238,7 @@ public void startPrimary() throws IOException { public IndexShard addReplica() throws IOException { final ShardRouting replicaRouting = createShardRouting("s" + replicaId.incrementAndGet(), false); final IndexShard replica = - newShard(replicaRouting, indexMetaData, null, getEngineFactory(replicaRouting)); + newShard(replicaRouting, indexMetaData, null, getEngineFactory(replicaRouting), () -> {}); addReplica(replica); return replica; } @@ -259,8 +259,8 @@ public synchronized IndexShard addReplicaWithExistingPath(final ShardPath shardP false, ShardRoutingState.INITIALIZING, RecoverySource.PeerRecoverySource.INSTANCE); - final IndexShard newReplica = newShard(shardRouting, shardPath, indexMetaData, null, - getEngineFactory(shardRouting)); + final IndexShard newReplica = + newShard(shardRouting, shardPath, indexMetaData, null, getEngineFactory(shardRouting), () -> {}); replicas.add(newReplica); updateAllocationIDsOnPrimary(); return newReplica; @@ -315,7 +315,7 @@ private synchronized IndexShardRoutingTable routingTable(Function globalCheckpointMatcher; if (shardRouting.primary()) { - globalCheckpointMatcher = numDocs == 0 ? equalTo(SequenceNumbersService.NO_OPS_PERFORMED) : equalTo(numDocs - 1L); + globalCheckpointMatcher = numDocs == 0 ? equalTo(SequenceNumbers.NO_OPS_PERFORMED) : equalTo(numDocs - 1L); } else { - globalCheckpointMatcher = numDocs == 0 ? equalTo(SequenceNumbersService.NO_OPS_PERFORMED) + globalCheckpointMatcher = numDocs == 0 ? 
equalTo(SequenceNumbers.NO_OPS_PERFORMED) : anyOf(equalTo(numDocs - 1L), equalTo(numDocs - 2L)); } assertThat(shardRouting + " global checkpoint mismatch", shardStats.getGlobalCheckpoint(), globalCheckpointMatcher); @@ -177,7 +177,7 @@ public void testCheckpointsAdvance() throws Exception { // simulate a background global checkpoint sync at which point we expect the global checkpoint to advance on the replicas shards.syncGlobalCheckpoint(); - final long noOpsPerformed = SequenceNumbersService.NO_OPS_PERFORMED; + final long noOpsPerformed = SequenceNumbers.NO_OPS_PERFORMED; for (IndexShard shard : shards) { final SeqNoStats shardStats = shard.seqNoStats(); final ShardRouting shardRouting = shard.routingEntry(); @@ -316,7 +316,7 @@ public long addDocument(Iterable doc) throws IOExcepti assert documentFailureMessage != null; throw new IOException(documentFailureMessage); } - }, null, config); + }, null, null, config); } } diff --git a/core/src/test/java/org/elasticsearch/index/replication/RecoveryDuringReplicationTests.java b/core/src/test/java/org/elasticsearch/index/replication/RecoveryDuringReplicationTests.java index 562e40a790dda..844d6b0aaf957 100644 --- a/core/src/test/java/org/elasticsearch/index/replication/RecoveryDuringReplicationTests.java +++ b/core/src/test/java/org/elasticsearch/index/replication/RecoveryDuringReplicationTests.java @@ -293,7 +293,6 @@ public void testResyncAfterPrimaryPromotion() throws Exception { final IndexShard oldPrimary = shards.getPrimary(); final IndexShard newPrimary = shards.getReplicas().get(0); - final IndexShard otherReplica = shards.getReplicas().get(1); // simulate docs that were inflight when primary failed final int extraDocs = randomIntBetween(0, 5); @@ -638,6 +637,7 @@ public long addDocument(final Iterable doc) throws IOE } }, null, + null, config); } diff --git a/core/src/test/java/org/elasticsearch/index/search/MatchQueryIT.java b/core/src/test/java/org/elasticsearch/index/search/MatchQueryIT.java index ec5e92ef6e376..aa154d9392574 100644 --- a/core/src/test/java/org/elasticsearch/index/search/MatchQueryIT.java +++ b/core/src/test/java/org/elasticsearch/index/search/MatchQueryIT.java @@ -52,15 +52,15 @@ public void setUp() throws Exception { Settings.builder() .put(indexSettings()) .put("index.analysis.filter.syns.type", "synonym") - .putArray("index.analysis.filter.syns.synonyms", "wtf, what the fudge", "foo, bar baz") + .putList("index.analysis.filter.syns.synonyms", "wtf, what the fudge", "foo, bar baz") .put("index.analysis.analyzer.lower_syns.type", "custom") .put("index.analysis.analyzer.lower_syns.tokenizer", "standard") - .putArray("index.analysis.analyzer.lower_syns.filter", "lowercase", "syns") + .putList("index.analysis.analyzer.lower_syns.filter", "lowercase", "syns") .put("index.analysis.filter.graphsyns.type", "synonym_graph") - .putArray("index.analysis.filter.graphsyns.synonyms", "wtf, what the fudge", "foo, bar baz") + .putList("index.analysis.filter.graphsyns.synonyms", "wtf, what the fudge", "foo, bar baz") .put("index.analysis.analyzer.lower_graphsyns.type", "custom") .put("index.analysis.analyzer.lower_graphsyns.tokenizer", "standard") - .putArray("index.analysis.analyzer.lower_graphsyns.filter", "lowercase", "graphsyns") + .putList("index.analysis.analyzer.lower_graphsyns.filter", "lowercase", "graphsyns") ); assertAcked(builder.addMapping(INDEX, createMapping())); diff --git a/core/src/test/java/org/elasticsearch/index/search/MultiMatchQueryTests.java 
b/core/src/test/java/org/elasticsearch/index/search/MultiMatchQueryTests.java index 5b63fc4bdb011..5695094553de9 100644 --- a/core/src/test/java/org/elasticsearch/index/search/MultiMatchQueryTests.java +++ b/core/src/test/java/org/elasticsearch/index/search/MultiMatchQueryTests.java @@ -30,7 +30,7 @@ import org.apache.lucene.search.TermQuery; import org.apache.lucene.util.BytesRef; import org.elasticsearch.common.compress.CompressedXContent; -import org.elasticsearch.common.lucene.search.MultiPhrasePrefixQuery; +import org.elasticsearch.common.lucene.search.Queries; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.IndexService; import org.elasticsearch.index.engine.Engine; @@ -47,7 +47,6 @@ import static org.elasticsearch.index.query.QueryBuilders.multiMatchQuery; import static org.hamcrest.Matchers.equalTo; -import static org.hamcrest.Matchers.instanceOf; public class MultiMatchQueryTests extends ESSingleNodeTestCase { @@ -57,7 +56,7 @@ public class MultiMatchQueryTests extends ESSingleNodeTestCase { public void setup() throws IOException { Settings settings = Settings.builder() .put("index.analysis.filter.syns.type","synonym") - .putArray("index.analysis.filter.syns.synonyms","quick,fast") + .putList("index.analysis.filter.syns.synonyms","quick,fast") .put("index.analysis.analyzer.syns.tokenizer","standard") .put("index.analysis.analyzer.syns.filter","syns").build(); IndexService indexService = createIndex("test", settings); @@ -112,7 +111,7 @@ public void testBlendTerms() { Query expected = BlendedTermQuery.dismaxBlendedQuery(terms, boosts, 1.0f); Query actual = MultiMatchQuery.blendTerm( indexService.newQueryShardContext(randomInt(20), null, () -> { throw new UnsupportedOperationException(); }, null), - new BytesRef("baz"), null, 1f, new FieldAndFieldType(ft1, 2), new FieldAndFieldType(ft2, 3)); + new BytesRef("baz"), null, 1f, false, new FieldAndFieldType(ft1, 2), new FieldAndFieldType(ft2, 3)); assertEquals(expected, actual); } @@ -128,11 +127,11 @@ public void testBlendTermsWithFieldBoosts() { Query expected = BlendedTermQuery.dismaxBlendedQuery(terms, boosts, 1.0f); Query actual = MultiMatchQuery.blendTerm( indexService.newQueryShardContext(randomInt(20), null, () -> { throw new UnsupportedOperationException(); }, null), - new BytesRef("baz"), null, 1f, new FieldAndFieldType(ft1, 2), new FieldAndFieldType(ft2, 3)); + new BytesRef("baz"), null, 1f, false, new FieldAndFieldType(ft1, 2), new FieldAndFieldType(ft2, 3)); assertEquals(expected, actual); } - public void testBlendTermsUnsupportedValue() { + public void testBlendTermsUnsupportedValueWithLenient() { FakeFieldType ft1 = new FakeFieldType(); ft1.setName("foo"); FakeFieldType ft2 = new FakeFieldType() { @@ -144,13 +143,29 @@ public Query termQuery(Object value, QueryShardContext context) { ft2.setName("bar"); Term[] terms = new Term[] { new Term("foo", "baz") }; float[] boosts = new float[] {2}; - Query expected = BlendedTermQuery.dismaxBlendedQuery(terms, boosts, 1.0f); + Query expected = new DisjunctionMaxQuery(Arrays.asList( + Queries.newMatchNoDocsQuery("failed [" + ft2.name() + "] query, caused by illegal_argument_exception:[null]"), + BlendedTermQuery.dismaxBlendedQuery(terms, boosts, 1.0f) + ), 1f); Query actual = MultiMatchQuery.blendTerm( indexService.newQueryShardContext(randomInt(20), null, () -> { throw new UnsupportedOperationException(); }, null), - new BytesRef("baz"), null, 1f, new FieldAndFieldType(ft1, 2), new FieldAndFieldType(ft2, 3)); + new BytesRef("baz"), null, 1f, true, 
new FieldAndFieldType(ft1, 2), new FieldAndFieldType(ft2, 3)); assertEquals(expected, actual); } + public void testBlendTermsUnsupportedValueWithoutLenient() { + FakeFieldType ft = new FakeFieldType() { + @Override + public Query termQuery(Object value, QueryShardContext context) { + throw new IllegalArgumentException(); + } + }; + ft.setName("bar"); + expectThrows(IllegalArgumentException.class, () -> MultiMatchQuery.blendTerm( + indexService.newQueryShardContext(randomInt(20), null, () -> { throw new UnsupportedOperationException(); }, null), + new BytesRef("baz"), null, 1f, false, new FieldAndFieldType(ft, 1))); + } + public void testBlendNoTermQuery() { FakeFieldType ft1 = new FakeFieldType(); ft1.setName("foo"); @@ -172,20 +187,10 @@ public Query termQuery(Object value, QueryShardContext context) { ), 1.0f); Query actual = MultiMatchQuery.blendTerm( indexService.newQueryShardContext(randomInt(20), null, () -> { throw new UnsupportedOperationException(); }, null), - new BytesRef("baz"), null, 1f, new FieldAndFieldType(ft1, 2), new FieldAndFieldType(ft2, 3)); + new BytesRef("baz"), null, 1f, false, new FieldAndFieldType(ft1, 2), new FieldAndFieldType(ft2, 3)); assertEquals(expected, actual); } - public void testMultiMatchPrefixWithAllField() throws IOException { - QueryShardContext queryShardContext = indexService.newQueryShardContext( - randomInt(20), null, () -> { throw new UnsupportedOperationException(); }, null); - queryShardContext.setAllowUnmappedFields(true); - Query parsedQuery = - multiMatchQuery("foo").field("_all").type(MultiMatchQueryBuilder.Type.PHRASE_PREFIX).toQuery(queryShardContext); - assertThat(parsedQuery, instanceOf(MultiPhrasePrefixQuery.class)); - assertThat(parsedQuery.toString(), equalTo("_all:\"foo*\"")); - } - public void testMultiMatchCrossFieldsWithSynonyms() throws IOException { QueryShardContext queryShardContext = indexService.newQueryShardContext( randomInt(20), null, () -> { throw new UnsupportedOperationException(); }, null); diff --git a/core/src/test/java/org/elasticsearch/index/search/NestedHelperTests.java b/core/src/test/java/org/elasticsearch/index/search/NestedHelperTests.java index 1cb8451b0aac4..e781a3311b383 100644 --- a/core/src/test/java/org/elasticsearch/index/search/NestedHelperTests.java +++ b/core/src/test/java/org/elasticsearch/index/search/NestedHelperTests.java @@ -146,28 +146,28 @@ public void testTermQuery() { } public void testRangeQuery() { - Query rangeQuery = mapperService.fullName("foo2").rangeQuery(2, 5, true, true, null); + Query rangeQuery = mapperService.fullName("foo2").rangeQuery(2, 5, true, true, null, null, null, null); assertFalse(new NestedHelper(mapperService).mightMatchNestedDocs(rangeQuery)); assertTrue(new NestedHelper(mapperService).mightMatchNonNestedDocs(rangeQuery, "nested1")); assertTrue(new NestedHelper(mapperService).mightMatchNonNestedDocs(rangeQuery, "nested2")); assertTrue(new NestedHelper(mapperService).mightMatchNonNestedDocs(rangeQuery, "nested3")); assertTrue(new NestedHelper(mapperService).mightMatchNonNestedDocs(rangeQuery, "nested_missing")); - rangeQuery = mapperService.fullName("nested1.foo2").rangeQuery(2, 5, true, true, null); + rangeQuery = mapperService.fullName("nested1.foo2").rangeQuery(2, 5, true, true, null, null, null, null); assertTrue(new NestedHelper(mapperService).mightMatchNestedDocs(rangeQuery)); assertFalse(new NestedHelper(mapperService).mightMatchNonNestedDocs(rangeQuery, "nested1")); assertTrue(new NestedHelper(mapperService).mightMatchNonNestedDocs(rangeQuery, "nested2")); 
assertTrue(new NestedHelper(mapperService).mightMatchNonNestedDocs(rangeQuery, "nested3")); assertTrue(new NestedHelper(mapperService).mightMatchNonNestedDocs(rangeQuery, "nested_missing")); - rangeQuery = mapperService.fullName("nested2.foo2").rangeQuery(2, 5, true, true, null); + rangeQuery = mapperService.fullName("nested2.foo2").rangeQuery(2, 5, true, true, null, null, null, null); assertTrue(new NestedHelper(mapperService).mightMatchNestedDocs(rangeQuery)); assertTrue(new NestedHelper(mapperService).mightMatchNonNestedDocs(rangeQuery, "nested1")); assertTrue(new NestedHelper(mapperService).mightMatchNonNestedDocs(rangeQuery, "nested2")); assertTrue(new NestedHelper(mapperService).mightMatchNonNestedDocs(rangeQuery, "nested3")); assertTrue(new NestedHelper(mapperService).mightMatchNonNestedDocs(rangeQuery, "nested_missing")); - rangeQuery = mapperService.fullName("nested3.foo2").rangeQuery(2, 5, true, true, null); + rangeQuery = mapperService.fullName("nested3.foo2").rangeQuery(2, 5, true, true, null, null, null, null); assertTrue(new NestedHelper(mapperService).mightMatchNestedDocs(rangeQuery)); assertTrue(new NestedHelper(mapperService).mightMatchNonNestedDocs(rangeQuery, "nested1")); assertTrue(new NestedHelper(mapperService).mightMatchNonNestedDocs(rangeQuery, "nested2")); diff --git a/core/src/test/java/org/elasticsearch/index/seqno/GlobalCheckpointSyncIT.java b/core/src/test/java/org/elasticsearch/index/seqno/GlobalCheckpointSyncIT.java new file mode 100644 index 0000000000000..b2c828cb73f0c --- /dev/null +++ b/core/src/test/java/org/elasticsearch/index/seqno/GlobalCheckpointSyncIT.java @@ -0,0 +1,210 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.index.seqno; + +import org.elasticsearch.action.admin.indices.stats.IndexShardStats; +import org.elasticsearch.action.admin.indices.stats.IndexStats; +import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse; +import org.elasticsearch.action.admin.indices.stats.ShardStats; +import org.elasticsearch.client.Client; +import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.cluster.node.DiscoveryNodes; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.index.IndexService; +import org.elasticsearch.plugins.Plugin; +import org.elasticsearch.test.ESIntegTestCase; +import org.elasticsearch.test.InternalSettingsPlugin; +import org.elasticsearch.test.transport.MockTransportService; +import org.elasticsearch.transport.TransportRequest; +import org.elasticsearch.transport.TransportRequestOptions; +import org.elasticsearch.transport.TransportService; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collection; +import java.util.List; +import java.util.Optional; +import java.util.concurrent.BrokenBarrierException; +import java.util.concurrent.CyclicBarrier; +import java.util.function.Consumer; +import java.util.stream.Collectors; +import java.util.stream.Stream; + +import static org.hamcrest.Matchers.equalTo; + +public class GlobalCheckpointSyncIT extends ESIntegTestCase { + + @Override + protected Collection> nodePlugins() { + return Stream.concat( + super.nodePlugins().stream(), + Stream.of(InternalSettingsPlugin.class, MockTransportService.TestPlugin.class)) + .collect(Collectors.toList()); + } + + public void testPostOperationGlobalCheckpointSync() throws Exception { + // set the sync interval high so it does not execute during this test + runGlobalCheckpointSyncTest(TimeValue.timeValueHours(24), client -> {}, client -> {}); + } + + /* + * This test swallows the post-operation global checkpoint syncs, and then restores the ability to send these requests at the end of the + * test so that a background sync can fire and sync the global checkpoint. 
+ */ + public void testBackgroundGlobalCheckpointSync() throws Exception { + runGlobalCheckpointSyncTest( + TimeValue.timeValueSeconds(randomIntBetween(1, 3)), + client -> { + // prevent global checkpoint syncs between all nodes + final DiscoveryNodes nodes = client.admin().cluster().prepareState().get().getState().getNodes(); + for (final DiscoveryNode node : nodes) { + for (final DiscoveryNode other : nodes) { + if (node == other) { + continue; + } + final MockTransportService senderTransportService = + (MockTransportService) internalCluster().getInstance(TransportService.class, node.getName()); + final MockTransportService receiverTransportService = + (MockTransportService) internalCluster().getInstance(TransportService.class, other.getName()); + + senderTransportService.addDelegate(receiverTransportService, + new MockTransportService.DelegateTransport(senderTransportService.original()) { + @Override + protected void sendRequest( + final Connection connection, + final long requestId, + final String action, + final TransportRequest request, + final TransportRequestOptions options) throws IOException { + if ("indices:admin/seq_no/global_checkpoint_sync[r]".equals(action)) { + throw new IllegalStateException("blocking indices:admin/seq_no/global_checkpoint_sync[r]"); + } else { + super.sendRequest(connection, requestId, action, request, options); + } + } + }); + } + } + }, + client -> { + // restore global checkpoint syncs between all nodes + final DiscoveryNodes nodes = client.admin().cluster().prepareState().get().getState().getNodes(); + for (final DiscoveryNode node : nodes) { + for (final DiscoveryNode other : nodes) { + if (node == other) { + continue; + } + final MockTransportService senderTransportService = + (MockTransportService) internalCluster().getInstance(TransportService.class, node.getName()); + final MockTransportService receiverTransportService = + (MockTransportService) internalCluster().getInstance(TransportService.class, other.getName()); + senderTransportService.clearRule(receiverTransportService); + } + } + }); + } + + private void runGlobalCheckpointSyncTest( + final TimeValue globalCheckpointSyncInterval, + final Consumer beforeIndexing, + final Consumer afterIndexing) throws Exception { + final int numberOfReplicas = randomIntBetween(1, 4); + internalCluster().ensureAtLeastNumDataNodes(1 + numberOfReplicas); + prepareCreate( + "test", + Settings.builder() + .put(IndexService.GLOBAL_CHECKPOINT_SYNC_INTERVAL_SETTING.getKey(), globalCheckpointSyncInterval) + .put("index.number_of_replicas", numberOfReplicas)) + .get(); + if (randomBoolean()) { + ensureGreen(); + } + + beforeIndexing.accept(client()); + + final int numberOfDocuments = randomIntBetween(0, 256); + + final int numberOfThreads = randomIntBetween(1, 4); + final CyclicBarrier barrier = new CyclicBarrier(1 + numberOfThreads); + + // start concurrent indexing threads + final List threads = new ArrayList<>(numberOfThreads); + for (int i = 0; i < numberOfThreads; i++) { + final int index = i; + final Thread thread = new Thread(() -> { + try { + barrier.await(); + } catch (BrokenBarrierException | InterruptedException e) { + throw new RuntimeException(e); + } + for (int j = 0; j < numberOfDocuments; j++) { + final String id = Integer.toString(index * numberOfDocuments + j); + client().prepareIndex("test", "test", id).setSource("{\"foo\": " + id + "}", XContentType.JSON).get(); + } + try { + barrier.await(); + } catch (BrokenBarrierException | InterruptedException e) { + throw new RuntimeException(e); + } + }); + 
threads.add(thread); + thread.start(); + } + + // synchronize the start of the threads + barrier.await(); + + // wait for the threads to finish + barrier.await(); + + afterIndexing.accept(client()); + + assertBusy(() -> { + final IndicesStatsResponse stats = client().admin().indices().prepareStats().clear().get(); + final IndexStats indexStats = stats.getIndex("test"); + for (final IndexShardStats indexShardStats : indexStats.getIndexShards().values()) { + Optional maybePrimary = + Stream.of(indexShardStats.getShards()) + .filter(s -> s.getShardRouting().active() && s.getShardRouting().primary()) + .findFirst(); + if (!maybePrimary.isPresent()) { + continue; + } + final ShardStats primary = maybePrimary.get(); + final SeqNoStats primarySeqNoStats = primary.getSeqNoStats(); + for (final ShardStats shardStats : indexShardStats) { + final SeqNoStats seqNoStats = shardStats.getSeqNoStats(); + if (seqNoStats == null) { + // the shard is initializing + continue; + } + assertThat(seqNoStats.getGlobalCheckpoint(), equalTo(primarySeqNoStats.getGlobalCheckpoint())); + } + } + }); + + for (final Thread thread : threads) { + thread.join(); + } + } + +} diff --git a/core/src/test/java/org/elasticsearch/index/seqno/GlobalCheckpointTrackerTests.java b/core/src/test/java/org/elasticsearch/index/seqno/GlobalCheckpointTrackerTests.java index 2f7d2dd15ceb5..dcaab38be5cfb 100644 --- a/core/src/test/java/org/elasticsearch/index/seqno/GlobalCheckpointTrackerTests.java +++ b/core/src/test/java/org/elasticsearch/index/seqno/GlobalCheckpointTrackerTests.java @@ -30,13 +30,14 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.util.set.Sets; +import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.test.ESTestCase; import org.elasticsearch.test.IndexSettingsModule; -import org.junit.Before; import java.io.IOException; import java.util.Arrays; +import java.util.Collection; import java.util.Collections; import java.util.HashMap; import java.util.HashSet; @@ -46,40 +47,25 @@ import java.util.concurrent.BrokenBarrierException; import java.util.concurrent.CyclicBarrier; import java.util.concurrent.atomic.AtomicBoolean; -import java.util.concurrent.atomic.AtomicInteger; import java.util.function.Function; import java.util.stream.Collectors; import java.util.stream.IntStream; import java.util.stream.Stream; import static java.util.Collections.emptySet; -import static org.elasticsearch.index.seqno.SequenceNumbersService.NO_OPS_PERFORMED; -import static org.elasticsearch.index.seqno.SequenceNumbersService.UNASSIGNED_SEQ_NO; +import static org.elasticsearch.index.seqno.SequenceNumbers.NO_OPS_PERFORMED; +import static org.elasticsearch.index.seqno.SequenceNumbers.UNASSIGNED_SEQ_NO; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.greaterThan; import static org.hamcrest.Matchers.not; public class GlobalCheckpointTrackerTests extends ESTestCase { - GlobalCheckpointTracker tracker; - - @Override - @Before - public void setUp() throws Exception { - super.setUp(); - tracker = - new GlobalCheckpointTracker( - new ShardId("test", "_na_", 0), - IndexSettingsModule.newIndexSettings("test", Settings.EMPTY), - UNASSIGNED_SEQ_NO); - } - public void testEmptyShards() { + final GlobalCheckpointTracker tracker = newTracker(AllocationId.newInitializing()); assertThat(tracker.getGlobalCheckpoint(), equalTo(UNASSIGNED_SEQ_NO)); } - private final AtomicInteger 
aIdGenerator = new AtomicInteger(); - private Map randomAllocationsWithLocalCheckpoints(int min, int max) { Map allocations = new HashMap<>(); for (int i = randomIntBetween(min, max); i > 0; i--) { @@ -88,13 +74,24 @@ private Map randomAllocationsWithLocalCheckpoints(int min, i return allocations; } - private static IndexShardRoutingTable routingTable(Set initializingIds) { + private static IndexShardRoutingTable routingTable(final Set initializingIds, final AllocationId primaryId) { + final ShardId shardId = new ShardId("test", "_na_", 0); + final ShardRouting primaryShard = + TestShardRouting.newShardRouting(shardId, randomAlphaOfLength(10), null, true, ShardRoutingState.STARTED, primaryId); + return routingTable(initializingIds, primaryShard); + } + + private static IndexShardRoutingTable routingTable(final Set initializingIds, final ShardRouting primaryShard) { + assert !initializingIds.contains(primaryShard.allocationId()); ShardId shardId = new ShardId("test", "_na_", 0); IndexShardRoutingTable.Builder builder = new IndexShardRoutingTable.Builder(shardId); for (AllocationId initializingId : initializingIds) { - builder.addShard(TestShardRouting.newShardRouting(shardId, randomAlphaOfLength(10), null, false, ShardRoutingState.INITIALIZING, - initializingId)); + builder.addShard(TestShardRouting.newShardRouting( + shardId, randomAlphaOfLength(10), null, false, ShardRoutingState.INITIALIZING, initializingId)); } + + builder.addShard(primaryShard); + return builder.build(); } @@ -117,6 +114,9 @@ public void testGlobalCheckpointUpdate() { // it is however nice not to assume this on this level and check we do the right thing. final long minLocalCheckpoint = allocations.values().stream().min(Long::compare).orElse(UNASSIGNED_SEQ_NO); + + final AllocationId primaryId = active.iterator().next(); + final GlobalCheckpointTracker tracker = newTracker(primaryId); assertThat(tracker.getGlobalCheckpoint(), equalTo(UNASSIGNED_SEQ_NO)); logger.info("--> using allocations"); @@ -132,8 +132,8 @@ public void testGlobalCheckpointUpdate() { logger.info(" - [{}], local checkpoint [{}], [{}]", aId, allocations.get(aId), type); }); - tracker.updateFromMaster(initialClusterStateVersion, ids(active), routingTable(initializing), emptySet()); - tracker.activatePrimaryMode(active.iterator().next().getId(), NO_OPS_PERFORMED); + tracker.updateFromMaster(initialClusterStateVersion, ids(active), routingTable(initializing, primaryId), emptySet()); + tracker.activatePrimaryMode(NO_OPS_PERFORMED); initializing.forEach(aId -> markAllocationIdAsInSyncQuietly(tracker, aId.getId(), NO_OPS_PERFORMED)); allocations.keySet().forEach(aId -> tracker.updateLocalCheckpoint(aId.getId(), allocations.get(aId))); @@ -152,12 +152,12 @@ public void testGlobalCheckpointUpdate() { // first check that adding it without the master blessing doesn't change anything. 
tracker.updateLocalCheckpoint(extraId.getId(), minLocalCheckpointAfterUpdates + 1 + randomInt(4)); - assertNull(tracker.localCheckpoints.get(extraId)); + assertNull(tracker.checkpoints.get(extraId)); expectThrows(IllegalStateException.class, () -> tracker.initiateTracking(extraId.getId())); Set newInitializing = new HashSet<>(initializing); newInitializing.add(extraId); - tracker.updateFromMaster(initialClusterStateVersion + 1, ids(active), routingTable(newInitializing), emptySet()); + tracker.updateFromMaster(initialClusterStateVersion + 1, ids(active), routingTable(newInitializing, primaryId), emptySet()); tracker.initiateTracking(extraId.getId()); @@ -179,9 +179,10 @@ public void testMissingActiveIdsPreventAdvance() { final Map assigned = new HashMap<>(); assigned.putAll(active); assigned.putAll(initializing); - tracker.updateFromMaster(randomNonNegativeLong(), ids(active.keySet()), routingTable(initializing.keySet()), emptySet()); - AllocationId primary = active.keySet().iterator().next(); - tracker.activatePrimaryMode(primary.getId(), NO_OPS_PERFORMED); + AllocationId primaryId = active.keySet().iterator().next(); + final GlobalCheckpointTracker tracker = newTracker(primaryId); + tracker.updateFromMaster(randomNonNegativeLong(), ids(active.keySet()), routingTable(initializing.keySet(), primaryId), emptySet()); + tracker.activatePrimaryMode(NO_OPS_PERFORMED); randomSubsetOf(initializing.keySet()).forEach(k -> markAllocationIdAsInSyncQuietly(tracker, k.getId(), NO_OPS_PERFORMED)); final AllocationId missingActiveID = randomFrom(active.keySet()); assigned @@ -190,7 +191,7 @@ public void testMissingActiveIdsPreventAdvance() { .filter(e -> !e.getKey().equals(missingActiveID)) .forEach(e -> tracker.updateLocalCheckpoint(e.getKey().getId(), e.getValue())); - if (missingActiveID.equals(primary) == false) { + if (missingActiveID.equals(primaryId) == false) { assertThat(tracker.getGlobalCheckpoint(), equalTo(UNASSIGNED_SEQ_NO)); } // now update all knowledge of all shards @@ -202,9 +203,11 @@ public void testMissingInSyncIdsPreventAdvance() { final Map active = randomAllocationsWithLocalCheckpoints(1, 5); final Map initializing = randomAllocationsWithLocalCheckpoints(2, 5); logger.info("active: {}, initializing: {}", active, initializing); - tracker.updateFromMaster(randomNonNegativeLong(), ids(active.keySet()), routingTable(initializing.keySet()), emptySet()); - AllocationId primary = active.keySet().iterator().next(); - tracker.activatePrimaryMode(primary.getId(), NO_OPS_PERFORMED); + + AllocationId primaryId = active.keySet().iterator().next(); + final GlobalCheckpointTracker tracker = newTracker(primaryId); + tracker.updateFromMaster(randomNonNegativeLong(), ids(active.keySet()), routingTable(initializing.keySet(), primaryId), emptySet()); + tracker.activatePrimaryMode(NO_OPS_PERFORMED); randomSubsetOf(randomIntBetween(1, initializing.size() - 1), initializing.keySet()).forEach(aId -> markAllocationIdAsInSyncQuietly(tracker, aId.getId(), NO_OPS_PERFORMED)); @@ -221,8 +224,10 @@ public void testInSyncIdsAreIgnoredIfNotValidatedByMaster() { final Map active = randomAllocationsWithLocalCheckpoints(1, 5); final Map initializing = randomAllocationsWithLocalCheckpoints(1, 5); final Map nonApproved = randomAllocationsWithLocalCheckpoints(1, 5); - tracker.updateFromMaster(randomNonNegativeLong(), ids(active.keySet()), routingTable(initializing.keySet()), emptySet()); - tracker.activatePrimaryMode(active.keySet().iterator().next().getId(), NO_OPS_PERFORMED); + final AllocationId primaryId = 
active.keySet().iterator().next(); + final GlobalCheckpointTracker tracker = newTracker(primaryId); + tracker.updateFromMaster(randomNonNegativeLong(), ids(active.keySet()), routingTable(initializing.keySet(), primaryId), emptySet()); + tracker.activatePrimaryMode(NO_OPS_PERFORMED); initializing.keySet().forEach(k -> markAllocationIdAsInSyncQuietly(tracker, k.getId(), NO_OPS_PERFORMED)); nonApproved.keySet().forEach(k -> expectThrows(IllegalStateException.class, () -> markAllocationIdAsInSyncQuietly(tracker, k.getId(), NO_OPS_PERFORMED))); @@ -243,6 +248,10 @@ public void testInSyncIdsAreRemovedIfNotValidatedByMaster() { final Set active = Sets.union(activeToStay.keySet(), activeToBeRemoved.keySet()); final Set initializing = Sets.union(initializingToStay.keySet(), initializingToBeRemoved.keySet()); final Map allocations = new HashMap<>(); + final AllocationId primaryId = active.iterator().next(); + if (activeToBeRemoved.containsKey(primaryId)) { + activeToStay.put(primaryId, activeToBeRemoved.remove(primaryId)); + } allocations.putAll(activeToStay); if (randomBoolean()) { allocations.putAll(activeToBeRemoved); @@ -251,8 +260,9 @@ public void testInSyncIdsAreRemovedIfNotValidatedByMaster() { if (randomBoolean()) { allocations.putAll(initializingToBeRemoved); } - tracker.updateFromMaster(initialClusterStateVersion, ids(active), routingTable(initializing), emptySet()); - tracker.activatePrimaryMode(active.iterator().next().getId(), NO_OPS_PERFORMED); + final GlobalCheckpointTracker tracker = newTracker(primaryId); + tracker.updateFromMaster(initialClusterStateVersion, ids(active), routingTable(initializing, primaryId), emptySet()); + tracker.activatePrimaryMode(NO_OPS_PERFORMED); if (randomBoolean()) { initializingToStay.keySet().forEach(k -> markAllocationIdAsInSyncQuietly(tracker, k.getId(), NO_OPS_PERFORMED)); } else { @@ -264,13 +274,19 @@ public void testInSyncIdsAreRemovedIfNotValidatedByMaster() { // now remove shards if (randomBoolean()) { - tracker.updateFromMaster(initialClusterStateVersion + 1, ids(activeToStay.keySet()), routingTable(initializingToStay.keySet()), - emptySet()); + tracker.updateFromMaster( + initialClusterStateVersion + 1, + ids(activeToStay.keySet()), + routingTable(initializingToStay.keySet(), primaryId), + emptySet()); allocations.forEach((aid, ckp) -> tracker.updateLocalCheckpoint(aid.getId(), ckp + 10L)); } else { allocations.forEach((aid, ckp) -> tracker.updateLocalCheckpoint(aid.getId(), ckp + 10L)); - tracker.updateFromMaster(initialClusterStateVersion + 2, ids(activeToStay.keySet()), routingTable(initializingToStay.keySet()), - emptySet()); + tracker.updateFromMaster( + initialClusterStateVersion + 2, + ids(activeToStay.keySet()), + routingTable(initializingToStay.keySet(), primaryId), + emptySet()); } final long checkpoint = Stream.concat(activeToStay.values().stream(), initializingToStay.values().stream()) @@ -286,9 +302,10 @@ public void testWaitForAllocationIdToBeInSync() throws Exception { final AtomicBoolean complete = new AtomicBoolean(); final AllocationId inSyncAllocationId = AllocationId.newInitializing(); final AllocationId trackingAllocationId = AllocationId.newInitializing(); + final GlobalCheckpointTracker tracker = newTracker(inSyncAllocationId); tracker.updateFromMaster(randomNonNegativeLong(), Collections.singleton(inSyncAllocationId.getId()), - routingTable(Collections.singleton(trackingAllocationId)), emptySet()); - tracker.activatePrimaryMode(inSyncAllocationId.getId(), globalCheckpoint); + 
routingTable(Collections.singleton(trackingAllocationId), inSyncAllocationId), emptySet()); + tracker.activatePrimaryMode(globalCheckpoint); final Thread thread = new Thread(() -> { try { // synchronize starting with the test thread @@ -326,6 +343,14 @@ public void testWaitForAllocationIdToBeInSync() throws Exception { thread.join(); } + private GlobalCheckpointTracker newTracker(final AllocationId allocationId) { + return new GlobalCheckpointTracker( + new ShardId("test", "_na_", 0), + allocationId.getId(), + IndexSettingsModule.newIndexSettings("test", Settings.EMPTY), + UNASSIGNED_SEQ_NO); + } + public void testWaitForAllocationIdToBeInSyncCanBeInterrupted() throws BrokenBarrierException, InterruptedException { final int localCheckpoint = randomIntBetween(1, 32); final int globalCheckpoint = randomIntBetween(localCheckpoint + 1, 64); @@ -333,9 +358,10 @@ public void testWaitForAllocationIdToBeInSyncCanBeInterrupted() throws BrokenBar final AtomicBoolean interrupted = new AtomicBoolean(); final AllocationId inSyncAllocationId = AllocationId.newInitializing(); final AllocationId trackingAllocationId = AllocationId.newInitializing(); + final GlobalCheckpointTracker tracker = newTracker(inSyncAllocationId); tracker.updateFromMaster(randomNonNegativeLong(), Collections.singleton(inSyncAllocationId.getId()), - routingTable(Collections.singleton(trackingAllocationId)), emptySet()); - tracker.activatePrimaryMode(inSyncAllocationId.getId(), globalCheckpoint); + routingTable(Collections.singleton(trackingAllocationId), inSyncAllocationId), emptySet()); + tracker.activatePrimaryMode(globalCheckpoint); final Thread thread = new Thread(() -> { try { // synchronize starting with the test thread @@ -379,10 +405,11 @@ public void testUpdateAllocationIdsFromMaster() throws Exception { randomActiveAndInitializingAllocationIds(numberOfActiveAllocationsIds, numberOfInitializingIds); final Set activeAllocationIds = activeAndInitializingAllocationIds.v1(); final Set initializingIds = activeAndInitializingAllocationIds.v2(); - IndexShardRoutingTable routingTable = routingTable(initializingIds); - tracker.updateFromMaster(initialClusterStateVersion, ids(activeAllocationIds), routingTable, emptySet()); AllocationId primaryId = activeAllocationIds.iterator().next(); - tracker.activatePrimaryMode(primaryId.getId(), NO_OPS_PERFORMED); + IndexShardRoutingTable routingTable = routingTable(initializingIds, primaryId); + final GlobalCheckpointTracker tracker = newTracker(primaryId); + tracker.updateFromMaster(initialClusterStateVersion, ids(activeAllocationIds), routingTable, emptySet()); + tracker.activatePrimaryMode(NO_OPS_PERFORMED); assertThat(tracker.getReplicationGroup().getInSyncAllocationIds(), equalTo(ids(activeAllocationIds))); assertThat(tracker.getReplicationGroup().getRoutingTable(), equalTo(routingTable)); @@ -393,25 +420,25 @@ public void testUpdateAllocationIdsFromMaster() throws Exception { .stream() .filter(a -> a.equals(primaryId) == false) .allMatch(a -> tracker.getTrackedLocalCheckpointForShard(a.getId()).getLocalCheckpoint() - == SequenceNumbersService.UNASSIGNED_SEQ_NO)); + == SequenceNumbers.UNASSIGNED_SEQ_NO)); assertTrue(initializingIds.stream().noneMatch(a -> tracker.getTrackedLocalCheckpointForShard(a.getId()).inSync)); assertTrue( initializingIds .stream() .filter(a -> a.equals(primaryId) == false) .allMatch(a -> tracker.getTrackedLocalCheckpointForShard(a.getId()).getLocalCheckpoint() - == SequenceNumbersService.UNASSIGNED_SEQ_NO)); + == SequenceNumbers.UNASSIGNED_SEQ_NO)); // now we will 
remove some allocation IDs from these and ensure that they propagate through final Set removingActiveAllocationIds = new HashSet<>(randomSubsetOf(activeAllocationIds)); + removingActiveAllocationIds.remove(primaryId); final Set newActiveAllocationIds = activeAllocationIds.stream().filter(a -> !removingActiveAllocationIds.contains(a)).collect(Collectors.toSet()); final List removingInitializingAllocationIds = randomSubsetOf(initializingIds); final Set newInitializingAllocationIds = initializingIds.stream().filter(a -> !removingInitializingAllocationIds.contains(a)).collect(Collectors.toSet()); - routingTable = routingTable(newInitializingAllocationIds); - tracker.updateFromMaster(initialClusterStateVersion + 1, ids(newActiveAllocationIds), routingTable, - emptySet()); + routingTable = routingTable(newInitializingAllocationIds, primaryId); + tracker.updateFromMaster(initialClusterStateVersion + 1, ids(newActiveAllocationIds), routingTable, emptySet()); assertTrue(newActiveAllocationIds.stream().allMatch(a -> tracker.getTrackedLocalCheckpointForShard(a.getId()).inSync)); assertTrue(removingActiveAllocationIds.stream().allMatch(a -> tracker.getTrackedLocalCheckpointForShard(a.getId()) == null)); assertTrue(newInitializingAllocationIds.stream().noneMatch(a -> tracker.getTrackedLocalCheckpointForShard(a.getId()).inSync)); @@ -425,21 +452,24 @@ public void testUpdateAllocationIdsFromMaster() throws Exception { * than we have been using above ensures that we can not collide with a previous allocation ID */ newInitializingAllocationIds.add(AllocationId.newInitializing()); - tracker.updateFromMaster(initialClusterStateVersion + 2, ids(newActiveAllocationIds), routingTable(newInitializingAllocationIds), - emptySet()); + tracker.updateFromMaster( + initialClusterStateVersion + 2, + ids(newActiveAllocationIds), + routingTable(newInitializingAllocationIds, primaryId), + emptySet()); assertTrue(newActiveAllocationIds.stream().allMatch(a -> tracker.getTrackedLocalCheckpointForShard(a.getId()).inSync)); assertTrue( newActiveAllocationIds .stream() .filter(a -> a.equals(primaryId) == false) .allMatch(a -> tracker.getTrackedLocalCheckpointForShard(a.getId()).getLocalCheckpoint() - == SequenceNumbersService.UNASSIGNED_SEQ_NO)); + == SequenceNumbers.UNASSIGNED_SEQ_NO)); assertTrue(newInitializingAllocationIds.stream().noneMatch(a -> tracker.getTrackedLocalCheckpointForShard(a.getId()).inSync)); assertTrue( newInitializingAllocationIds .stream() .allMatch(a -> tracker.getTrackedLocalCheckpointForShard(a.getId()).getLocalCheckpoint() - == SequenceNumbersService.UNASSIGNED_SEQ_NO)); + == SequenceNumbers.UNASSIGNED_SEQ_NO)); // the tracking allocation IDs should play no role in determining the global checkpoint final Map activeLocalCheckpoints = @@ -469,8 +499,11 @@ public void testUpdateAllocationIdsFromMaster() throws Exception { // using a different length than we have been using above ensures that we can not collide with a previous allocation ID final AllocationId newSyncingAllocationId = AllocationId.newInitializing(); newInitializingAllocationIds.add(newSyncingAllocationId); - tracker.updateFromMaster(initialClusterStateVersion + 3, ids(newActiveAllocationIds), routingTable(newInitializingAllocationIds), - emptySet()); + tracker.updateFromMaster( + initialClusterStateVersion + 3, + ids(newActiveAllocationIds), + routingTable(newInitializingAllocationIds, primaryId), + emptySet()); final CyclicBarrier barrier = new CyclicBarrier(2); final Thread thread = new Thread(() -> { try { @@ -504,8 +537,11 @@ public 
void testUpdateAllocationIdsFromMaster() throws Exception { * the in-sync set even if we receive a cluster state update that does not reflect this. * */ - tracker.updateFromMaster(initialClusterStateVersion + 4, ids(newActiveAllocationIds), routingTable(newInitializingAllocationIds), - emptySet()); + tracker.updateFromMaster( + initialClusterStateVersion + 4, + ids(newActiveAllocationIds), + routingTable(newInitializingAllocationIds, primaryId), + emptySet()); assertTrue(tracker.getTrackedLocalCheckpointForShard(newSyncingAllocationId.getId()).inSync); assertFalse(tracker.pendingInSync.contains(newSyncingAllocationId.getId())); } @@ -513,7 +549,7 @@ public void testUpdateAllocationIdsFromMaster() throws Exception { /** * If we do not update the global checkpoint in {@link GlobalCheckpointTracker#markAllocationIdAsInSync(String, long)} after adding the * allocation ID to the in-sync set and removing it from pending, the local checkpoint update that freed the thread waiting for the - * local checkpoint to advance could miss updating the global checkpoint in a race if the the waiting thread did not add the allocation + * local checkpoint to advance could miss updating the global checkpoint in a race if the waiting thread did not add the allocation * ID to the in-sync set and remove it from the pending set before the local checkpoint updating thread executed the global checkpoint * update. This test fails without an additional call to {@link GlobalCheckpointTracker#updateGlobalCheckpointOnPrimary()} after * removing the allocation ID from the pending set in {@link GlobalCheckpointTracker#markAllocationIdAsInSync(String, long)} (even if a @@ -529,9 +565,13 @@ public void testRaceUpdatingGlobalCheckpoint() throws InterruptedException, Brok final CyclicBarrier barrier = new CyclicBarrier(4); final int activeLocalCheckpoint = randomIntBetween(0, Integer.MAX_VALUE - 1); - tracker.updateFromMaster(randomNonNegativeLong(), Collections.singleton(active.getId()), - routingTable(Collections.singleton(initializing)), emptySet()); - tracker.activatePrimaryMode(active.getId(), activeLocalCheckpoint); + final GlobalCheckpointTracker tracker = newTracker(active); + tracker.updateFromMaster( + randomNonNegativeLong(), + Collections.singleton(active.getId()), + routingTable(Collections.singleton(initializing), active), + emptySet()); + tracker.activatePrimaryMode(activeLocalCheckpoint); final int nextActiveLocalCheckpoint = randomIntBetween(activeLocalCheckpoint + 1, Integer.MAX_VALUE); final Thread activeThread = new Thread(() -> { try { @@ -574,21 +614,27 @@ public void testRaceUpdatingGlobalCheckpoint() throws InterruptedException, Brok } public void testPrimaryContextHandoff() throws IOException { - GlobalCheckpointTracker oldPrimary = new GlobalCheckpointTracker(new ShardId("test", "_na_", 0), - IndexSettingsModule.newIndexSettings("test", Settings.EMPTY), UNASSIGNED_SEQ_NO); - GlobalCheckpointTracker newPrimary = new GlobalCheckpointTracker(new ShardId("test", "_na_", 0), - IndexSettingsModule.newIndexSettings("test", Settings.EMPTY), UNASSIGNED_SEQ_NO); + final IndexSettings indexSettings = IndexSettingsModule.newIndexSettings("test", Settings.EMPTY); + final ShardId shardId = new ShardId("test", "_na_", 0); FakeClusterState clusterState = initialState(); + final AllocationId primaryAllocationId = clusterState.routingTable.primaryShard().allocationId(); + GlobalCheckpointTracker oldPrimary = + new GlobalCheckpointTracker(shardId, primaryAllocationId.getId(), indexSettings, UNASSIGNED_SEQ_NO); + 
GlobalCheckpointTracker newPrimary = + new GlobalCheckpointTracker(shardId, primaryAllocationId.getRelocationId(), indexSettings, UNASSIGNED_SEQ_NO); + + Set allocationIds = new HashSet<>(Arrays.asList(oldPrimary.shardAllocationId, newPrimary.shardAllocationId)); + clusterState.apply(oldPrimary); clusterState.apply(newPrimary); - activatePrimary(clusterState, oldPrimary); + activatePrimary(oldPrimary); final int numUpdates = randomInt(10); for (int i = 0; i < numUpdates; i++) { if (rarely()) { - clusterState = randomUpdateClusterState(clusterState); + clusterState = randomUpdateClusterState(allocationIds, clusterState); clusterState.apply(oldPrimary); clusterState.apply(newPrimary); } @@ -600,12 +646,18 @@ public void testPrimaryContextHandoff() throws IOException { } } + // simulate transferring the global checkpoint to the new primary after finalizing recovery before the handoff + markAllocationIdAsInSyncQuietly( + oldPrimary, + newPrimary.shardAllocationId, + Math.max(SequenceNumbers.NO_OPS_PERFORMED, oldPrimary.getGlobalCheckpoint() + randomInt(5))); + oldPrimary.updateGlobalCheckpointForShard(newPrimary.shardAllocationId, oldPrimary.getGlobalCheckpoint()); GlobalCheckpointTracker.PrimaryContext primaryContext = oldPrimary.startRelocationHandoff(); if (randomBoolean()) { // cluster state update after primary context handoff if (randomBoolean()) { - clusterState = randomUpdateClusterState(clusterState); + clusterState = randomUpdateClusterState(allocationIds, clusterState); clusterState.apply(oldPrimary); clusterState.apply(newPrimary); } @@ -614,7 +666,7 @@ public void testPrimaryContextHandoff() throws IOException { oldPrimary.abortRelocationHandoff(); if (rarely()) { - clusterState = randomUpdateClusterState(clusterState); + clusterState = randomUpdateClusterState(allocationIds, clusterState); clusterState.apply(oldPrimary); clusterState.apply(newPrimary); } @@ -634,11 +686,10 @@ public void testPrimaryContextHandoff() throws IOException { primaryContext.writeTo(output); StreamInput streamInput = output.bytes().streamInput(); primaryContext = new GlobalCheckpointTracker.PrimaryContext(streamInput); - switch (randomInt(3)) { case 0: { // apply cluster state update on old primary while primary context is being transferred - clusterState = randomUpdateClusterState(clusterState); + clusterState = randomUpdateClusterState(allocationIds, clusterState); clusterState.apply(oldPrimary); // activate new primary newPrimary.activateWithPrimaryContext(primaryContext); @@ -648,7 +699,7 @@ public void testPrimaryContextHandoff() throws IOException { } case 1: { // apply cluster state update on new primary while primary context is being transferred - clusterState = randomUpdateClusterState(clusterState); + clusterState = randomUpdateClusterState(allocationIds, clusterState); clusterState.apply(newPrimary); // activate new primary newPrimary.activateWithPrimaryContext(primaryContext); @@ -658,7 +709,7 @@ public void testPrimaryContextHandoff() throws IOException { } case 2: { // apply cluster state update on both copies while primary context is being transferred - clusterState = randomUpdateClusterState(clusterState); + clusterState = randomUpdateClusterState(allocationIds, clusterState); clusterState.apply(oldPrimary); clusterState.apply(newPrimary); newPrimary.activateWithPrimaryContext(primaryContext); @@ -674,8 +725,32 @@ public void testPrimaryContextHandoff() throws IOException { assertTrue(oldPrimary.primaryMode); assertTrue(newPrimary.primaryMode); 
assertThat(newPrimary.appliedClusterStateVersion, equalTo(oldPrimary.appliedClusterStateVersion)); - assertThat(newPrimary.localCheckpoints, equalTo(oldPrimary.localCheckpoints)); - assertThat(newPrimary.globalCheckpoint, equalTo(oldPrimary.globalCheckpoint)); + /* + * We can not assert on shared knowledge of the global checkpoint between the old primary and the new primary as the new primary + * will update its global checkpoint state without the old primary learning of it, and the old primary could have updated its + * global checkpoint state after the primary context was transferred. + */ + Map oldPrimaryCheckpointsCopy = new HashMap<>(oldPrimary.checkpoints); + oldPrimaryCheckpointsCopy.remove(oldPrimary.shardAllocationId); + oldPrimaryCheckpointsCopy.remove(newPrimary.shardAllocationId); + Map newPrimaryCheckpointsCopy = new HashMap<>(newPrimary.checkpoints); + newPrimaryCheckpointsCopy.remove(oldPrimary.shardAllocationId); + newPrimaryCheckpointsCopy.remove(newPrimary.shardAllocationId); + assertThat(newPrimaryCheckpointsCopy, equalTo(oldPrimaryCheckpointsCopy)); + // we can however assert that shared knowledge of the local checkpoint and in-sync status is equal + assertThat( + oldPrimary.checkpoints.get(oldPrimary.shardAllocationId).localCheckpoint, + equalTo(newPrimary.checkpoints.get(oldPrimary.shardAllocationId).localCheckpoint)); + assertThat( + oldPrimary.checkpoints.get(newPrimary.shardAllocationId).localCheckpoint, + equalTo(newPrimary.checkpoints.get(newPrimary.shardAllocationId).localCheckpoint)); + assertThat( + oldPrimary.checkpoints.get(oldPrimary.shardAllocationId).inSync, + equalTo(newPrimary.checkpoints.get(oldPrimary.shardAllocationId).inSync)); + assertThat( + oldPrimary.checkpoints.get(newPrimary.shardAllocationId).inSync, + equalTo(newPrimary.checkpoints.get(newPrimary.shardAllocationId).inSync)); + assertThat(newPrimary.getGlobalCheckpoint(), equalTo(oldPrimary.getGlobalCheckpoint())); assertThat(newPrimary.routingTable, equalTo(oldPrimary.routingTable)); assertThat(newPrimary.replicationGroup, equalTo(oldPrimary.replicationGroup)); @@ -686,9 +761,10 @@ public void testPrimaryContextHandoff() throws IOException { public void testIllegalStateExceptionIfUnknownAllocationId() { final AllocationId active = AllocationId.newInitializing(); final AllocationId initializing = AllocationId.newInitializing(); + final GlobalCheckpointTracker tracker = newTracker(active); tracker.updateFromMaster(randomNonNegativeLong(), Collections.singleton(active.getId()), - routingTable(Collections.singleton(initializing)), emptySet()); - tracker.activatePrimaryMode(active.getId(), NO_OPS_PERFORMED); + routingTable(Collections.singleton(initializing), active), emptySet()); + tracker.activatePrimaryMode(NO_OPS_PERFORMED); expectThrows(IllegalStateException.class, () -> tracker.initiateTracking(randomAlphaOfLength(10))); expectThrows(IllegalStateException.class, () -> tracker.markAllocationIdAsInSync(randomAlphaOfLength(10), randomNonNegativeLong())); @@ -724,38 +800,58 @@ private static FakeClusterState initialState() { final int numberOfActiveAllocationsIds = randomIntBetween(1, 8); final int numberOfInitializingIds = randomIntBetween(0, 8); final Tuple, Set> activeAndInitializingAllocationIds = - randomActiveAndInitializingAllocationIds(numberOfActiveAllocationsIds, numberOfInitializingIds); + randomActiveAndInitializingAllocationIds(numberOfActiveAllocationsIds, numberOfInitializingIds); final Set activeAllocationIds = activeAndInitializingAllocationIds.v1(); final Set 
initializingAllocationIds = activeAndInitializingAllocationIds.v2(); - return new FakeClusterState(initialClusterStateVersion, activeAllocationIds, routingTable(initializingAllocationIds)); + final AllocationId primaryId = randomFrom(activeAllocationIds); + final AllocationId relocatingId = AllocationId.newRelocation(primaryId); + activeAllocationIds.remove(primaryId); + activeAllocationIds.add(relocatingId); + final ShardId shardId = new ShardId("test", "_na_", 0); + final ShardRouting primaryShard = + TestShardRouting.newShardRouting( + shardId, randomAlphaOfLength(10), randomAlphaOfLength(10), true, ShardRoutingState.RELOCATING, relocatingId); + + return new FakeClusterState( + initialClusterStateVersion, + activeAllocationIds, + routingTable(initializingAllocationIds, primaryShard)); } - private static void activatePrimary(FakeClusterState clusterState, GlobalCheckpointTracker gcp) { - gcp.activatePrimaryMode(randomFrom(ids(clusterState.inSyncIds)), randomIntBetween(Math.toIntExact(NO_OPS_PERFORMED), 10)); + private static void activatePrimary(GlobalCheckpointTracker gcp) { + gcp.activatePrimaryMode(randomIntBetween(Math.toIntExact(NO_OPS_PERFORMED), 10)); } private static void randomLocalCheckpointUpdate(GlobalCheckpointTracker gcp) { - String allocationId = randomFrom(gcp.localCheckpoints.keySet()); - long currentLocalCheckpoint = gcp.localCheckpoints.get(allocationId).getLocalCheckpoint(); - gcp.updateLocalCheckpoint(allocationId, Math.max(SequenceNumbersService.NO_OPS_PERFORMED, currentLocalCheckpoint + randomInt(5))); + String allocationId = randomFrom(gcp.checkpoints.keySet()); + long currentLocalCheckpoint = gcp.checkpoints.get(allocationId).getLocalCheckpoint(); + gcp.updateLocalCheckpoint(allocationId, Math.max(SequenceNumbers.NO_OPS_PERFORMED, currentLocalCheckpoint + randomInt(5))); } private static void randomMarkInSync(GlobalCheckpointTracker gcp) { - String allocationId = randomFrom(gcp.localCheckpoints.keySet()); + String allocationId = randomFrom(gcp.checkpoints.keySet()); long newLocalCheckpoint = Math.max(NO_OPS_PERFORMED, gcp.getGlobalCheckpoint() + randomInt(5)); markAllocationIdAsInSyncQuietly(gcp, allocationId, newLocalCheckpoint); } - private static FakeClusterState randomUpdateClusterState(FakeClusterState clusterState) { - final Set initializingIdsToAdd = randomAllocationIdsExcludingExistingIds(clusterState.allIds(), randomInt(2)); + private static FakeClusterState randomUpdateClusterState(Set allocationIds, FakeClusterState clusterState) { + final Set initializingIdsToAdd = + randomAllocationIdsExcludingExistingIds(exclude(clusterState.allIds(), allocationIds), randomInt(2)); final Set initializingIdsToRemove = new HashSet<>( - randomSubsetOf(randomInt(clusterState.initializingIds().size()), clusterState.initializingIds())); + exclude(randomSubsetOf(randomInt(clusterState.initializingIds().size()), clusterState.initializingIds()), allocationIds)); final Set inSyncIdsToRemove = new HashSet<>( - randomSubsetOf(randomInt(clusterState.inSyncIds.size()), clusterState.inSyncIds)); + exclude(randomSubsetOf(randomInt(clusterState.inSyncIds.size()), clusterState.inSyncIds), allocationIds)); final Set remainingInSyncIds = Sets.difference(clusterState.inSyncIds, inSyncIdsToRemove); - return new FakeClusterState(clusterState.version + randomIntBetween(1, 5), - remainingInSyncIds.isEmpty() ? 
clusterState.inSyncIds : remainingInSyncIds, - routingTable(Sets.difference(Sets.union(clusterState.initializingIds(), initializingIdsToAdd), initializingIdsToRemove))); + return new FakeClusterState( + clusterState.version + randomIntBetween(1, 5), + remainingInSyncIds.isEmpty() ? clusterState.inSyncIds : remainingInSyncIds, + routingTable( + Sets.difference(Sets.union(clusterState.initializingIds(), initializingIdsToAdd), initializingIdsToRemove), + clusterState.routingTable.primaryShard())); + } + + private static Set exclude(Collection allocationIds, Set excludeIds) { + return allocationIds.stream().filter(aId -> !excludeIds.contains(aId.getId())).collect(Collectors.toSet()); } private static Tuple, Set> randomActiveAndInitializingAllocationIds( diff --git a/core/src/test/java/org/elasticsearch/index/seqno/LocalCheckpointTrackerTests.java b/core/src/test/java/org/elasticsearch/index/seqno/LocalCheckpointTrackerTests.java index e2978ffc51d52..eb62391e0b0d7 100644 --- a/core/src/test/java/org/elasticsearch/index/seqno/LocalCheckpointTrackerTests.java +++ b/core/src/test/java/org/elasticsearch/index/seqno/LocalCheckpointTrackerTests.java @@ -19,12 +19,14 @@ package org.elasticsearch.index.seqno; +import com.carrotsearch.hppc.LongObjectHashMap; +import org.apache.lucene.util.FixedBitSet; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.common.Randomness; -import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.util.concurrent.AbstractRunnable; import org.elasticsearch.test.ESTestCase; -import org.elasticsearch.test.IndexSettingsModule; +import org.hamcrest.BaseMatcher; +import org.hamcrest.Description; import org.junit.Before; import java.util.ArrayList; @@ -38,7 +40,7 @@ import java.util.stream.Collectors; import java.util.stream.IntStream; -import static org.hamcrest.Matchers.empty; +import static org.elasticsearch.index.seqno.LocalCheckpointTracker.BIT_SET_SIZE; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.isOneOf; @@ -46,19 +48,8 @@ public class LocalCheckpointTrackerTests extends ESTestCase { private LocalCheckpointTracker tracker; - private static final int SMALL_CHUNK_SIZE = 4; - public static LocalCheckpointTracker createEmptyTracker() { - return new LocalCheckpointTracker( - IndexSettingsModule.newIndexSettings( - "test", - Settings - .builder() - .put(LocalCheckpointTracker.SETTINGS_BIT_ARRAYS_SIZE.getKey(), SMALL_CHUNK_SIZE) - .build()), - SequenceNumbersService.NO_OPS_PERFORMED, - SequenceNumbersService.NO_OPS_PERFORMED - ); + return new LocalCheckpointTracker(SequenceNumbers.NO_OPS_PERFORMED, SequenceNumbers.NO_OPS_PERFORMED); } @Override @@ -70,7 +61,7 @@ public void setUp() throws Exception { public void testSimplePrimary() { long seqNo1, seqNo2; - assertThat(tracker.getCheckpoint(), equalTo(SequenceNumbersService.NO_OPS_PERFORMED)); + assertThat(tracker.getCheckpoint(), equalTo(SequenceNumbers.NO_OPS_PERFORMED)); seqNo1 = tracker.generateSeqNo(); assertThat(seqNo1, equalTo(0L)); tracker.markSeqNoAsCompleted(seqNo1); @@ -86,7 +77,7 @@ public void testSimplePrimary() { } public void testSimpleReplica() { - assertThat(tracker.getCheckpoint(), equalTo(SequenceNumbersService.NO_OPS_PERFORMED)); + assertThat(tracker.getCheckpoint(), equalTo(SequenceNumbers.NO_OPS_PERFORMED)); tracker.markSeqNoAsCompleted(0L); assertThat(tracker.getCheckpoint(), equalTo(0L)); tracker.markSeqNoAsCompleted(2L); @@ -95,10 +86,19 @@ public void testSimpleReplica() { assertThat(tracker.getCheckpoint(), equalTo(2L)); 
} + public void testLazyInitialization() { + /* + * Previously this would allocate the entire chain of bit sets to the one for the sequence number being marked; for very large + * sequence numbers this could lead to excessive memory usage resulting in out of memory errors. + */ + tracker.markSeqNoAsCompleted(randomNonNegativeLong()); + assertThat(tracker.processedSeqNo.size(), equalTo(1)); + } + public void testSimpleOverFlow() { List seqNoList = new ArrayList<>(); final boolean aligned = randomBoolean(); - final int maxOps = SMALL_CHUNK_SIZE * randomIntBetween(1, 5) + (aligned ? 0 : randomIntBetween(1, SMALL_CHUNK_SIZE - 1)); + final int maxOps = BIT_SET_SIZE * randomIntBetween(1, 5) + (aligned ? 0 : randomIntBetween(1, BIT_SET_SIZE - 1)); for (int i = 0; i < maxOps; i++) { seqNoList.add(i); @@ -109,7 +109,9 @@ public void testSimpleOverFlow() { } assertThat(tracker.checkpoint, equalTo(maxOps - 1L)); assertThat(tracker.processedSeqNo.size(), equalTo(aligned ? 0 : 1)); - assertThat(tracker.firstProcessedSeqNo, equalTo(((long) maxOps / SMALL_CHUNK_SIZE) * SMALL_CHUNK_SIZE)); + if (aligned == false) { + assertThat(tracker.processedSeqNo.keys().iterator().next().value, equalTo(tracker.checkpoint / BIT_SET_SIZE)); + } } public void testConcurrentPrimary() throws InterruptedException { @@ -150,7 +152,9 @@ protected void doRun() throws Exception { tracker.markSeqNoAsCompleted(unFinishedSeq); assertThat(tracker.getCheckpoint(), equalTo(maxOps - 1L)); assertThat(tracker.processedSeqNo.size(), isOneOf(0, 1)); - assertThat(tracker.firstProcessedSeqNo, equalTo(((long) maxOps / SMALL_CHUNK_SIZE) * SMALL_CHUNK_SIZE)); + if (tracker.processedSeqNo.size() == 1) { + assertThat(tracker.processedSeqNo.keys().iterator().next().value, equalTo(tracker.checkpoint / BIT_SET_SIZE)); + } } public void testConcurrentReplica() throws InterruptedException { @@ -198,7 +202,10 @@ protected void doRun() throws Exception { assertThat(tracker.getCheckpoint(), equalTo(unFinishedSeq - 1L)); tracker.markSeqNoAsCompleted(unFinishedSeq); assertThat(tracker.getCheckpoint(), equalTo(maxOps - 1L)); - assertThat(tracker.firstProcessedSeqNo, equalTo(((long) maxOps / SMALL_CHUNK_SIZE) * SMALL_CHUNK_SIZE)); + assertThat(tracker.processedSeqNo.size(), isOneOf(0, 1)); + if (tracker.processedSeqNo.size() == 1) { + assertThat(tracker.processedSeqNo.keys().iterator().next().value, equalTo(tracker.checkpoint / BIT_SET_SIZE)); + } } public void testWaitForOpsToComplete() throws BrokenBarrierException, InterruptedException { @@ -240,7 +247,7 @@ public void testWaitForOpsToComplete() throws BrokenBarrierException, Interrupte public void testResetCheckpoint() { final int operations = 1024 - scaledRandomIntBetween(0, 1024); - int maxSeqNo = Math.toIntExact(SequenceNumbersService.NO_OPS_PERFORMED); + int maxSeqNo = Math.toIntExact(SequenceNumbers.NO_OPS_PERFORMED); for (int i = 0; i < operations; i++) { if (!rarely()) { tracker.markSeqNoAsCompleted(i); @@ -249,11 +256,21 @@ public void testResetCheckpoint() { } final int localCheckpoint = - randomIntBetween(Math.toIntExact(SequenceNumbersService.NO_OPS_PERFORMED), Math.toIntExact(tracker.getCheckpoint())); + randomIntBetween(Math.toIntExact(SequenceNumbers.NO_OPS_PERFORMED), Math.toIntExact(tracker.getCheckpoint())); tracker.resetCheckpoint(localCheckpoint); assertThat(tracker.getCheckpoint(), equalTo((long) localCheckpoint)); assertThat(tracker.getMaxSeqNo(), equalTo((long) maxSeqNo)); - assertThat(tracker.processedSeqNo, empty()); + assertThat(tracker.processedSeqNo, new BaseMatcher>() { + 
@Override + public boolean matches(Object item) { + return (item instanceof LongObjectHashMap && ((LongObjectHashMap) item).isEmpty()); + } + + @Override + public void describeTo(Description description) { + description.appendText("empty"); + } + }); assertThat(tracker.generateSeqNo(), equalTo((long) (maxSeqNo + 1))); } } diff --git a/core/src/test/java/org/elasticsearch/index/seqno/SequenceNumbersTests.java b/core/src/test/java/org/elasticsearch/index/seqno/SequenceNumbersTests.java index 23eac18377017..f835cff3f4656 100644 --- a/core/src/test/java/org/elasticsearch/index/seqno/SequenceNumbersTests.java +++ b/core/src/test/java/org/elasticsearch/index/seqno/SequenceNumbersTests.java @@ -29,31 +29,31 @@ public class SequenceNumbersTests extends ESTestCase { public void testMin() { final long seqNo = randomNonNegativeLong(); - assertThat(SequenceNumbers.min(SequenceNumbersService.NO_OPS_PERFORMED, seqNo), equalTo(seqNo)); + assertThat(SequenceNumbers.min(SequenceNumbers.NO_OPS_PERFORMED, seqNo), equalTo(seqNo)); assertThat( - SequenceNumbers.min(SequenceNumbersService.NO_OPS_PERFORMED, SequenceNumbersService.UNASSIGNED_SEQ_NO), - equalTo(SequenceNumbersService.UNASSIGNED_SEQ_NO)); - assertThat(SequenceNumbers.min(SequenceNumbersService.UNASSIGNED_SEQ_NO, seqNo), equalTo(seqNo)); + SequenceNumbers.min(SequenceNumbers.NO_OPS_PERFORMED, SequenceNumbers.UNASSIGNED_SEQ_NO), + equalTo(SequenceNumbers.UNASSIGNED_SEQ_NO)); + assertThat(SequenceNumbers.min(SequenceNumbers.UNASSIGNED_SEQ_NO, seqNo), equalTo(seqNo)); final long minSeqNo = randomNonNegativeLong(); assertThat(SequenceNumbers.min(minSeqNo, seqNo), equalTo(Math.min(minSeqNo, seqNo))); final IllegalArgumentException e = - expectThrows(IllegalArgumentException.class, () -> SequenceNumbers.min(minSeqNo, SequenceNumbersService.UNASSIGNED_SEQ_NO)); + expectThrows(IllegalArgumentException.class, () -> SequenceNumbers.min(minSeqNo, SequenceNumbers.UNASSIGNED_SEQ_NO)); assertThat(e, hasToString(containsString("sequence number must be assigned"))); } public void testMax() { final long seqNo = randomNonNegativeLong(); - assertThat(SequenceNumbers.max(SequenceNumbersService.NO_OPS_PERFORMED, seqNo), equalTo(seqNo)); + assertThat(SequenceNumbers.max(SequenceNumbers.NO_OPS_PERFORMED, seqNo), equalTo(seqNo)); assertThat( - SequenceNumbers.max(SequenceNumbersService.NO_OPS_PERFORMED, SequenceNumbersService.UNASSIGNED_SEQ_NO), - equalTo(SequenceNumbersService.UNASSIGNED_SEQ_NO)); - assertThat(SequenceNumbers.max(SequenceNumbersService.UNASSIGNED_SEQ_NO, seqNo), equalTo(seqNo)); + SequenceNumbers.max(SequenceNumbers.NO_OPS_PERFORMED, SequenceNumbers.UNASSIGNED_SEQ_NO), + equalTo(SequenceNumbers.UNASSIGNED_SEQ_NO)); + assertThat(SequenceNumbers.max(SequenceNumbers.UNASSIGNED_SEQ_NO, seqNo), equalTo(seqNo)); final long maxSeqNo = randomNonNegativeLong(); assertThat(SequenceNumbers.min(maxSeqNo, seqNo), equalTo(Math.min(maxSeqNo, seqNo))); final IllegalArgumentException e = - expectThrows(IllegalArgumentException.class, () -> SequenceNumbers.min(maxSeqNo, SequenceNumbersService.UNASSIGNED_SEQ_NO)); + expectThrows(IllegalArgumentException.class, () -> SequenceNumbers.min(maxSeqNo, SequenceNumbers.UNASSIGNED_SEQ_NO)); assertThat(e, hasToString(containsString("sequence number must be assigned"))); } diff --git a/core/src/test/java/org/elasticsearch/index/shard/DocsStatsTests.java b/core/src/test/java/org/elasticsearch/index/shard/DocsStatsTests.java new file mode 100644 index 0000000000000..85f6764941cb1 --- /dev/null +++ 
b/core/src/test/java/org/elasticsearch/index/shard/DocsStatsTests.java @@ -0,0 +1,59 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.index.shard; + +import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.io.stream.BytesStreamOutput; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.test.ESTestCase; + +import static org.hamcrest.Matchers.equalTo; + +public class DocsStatsTests extends ESTestCase { + + public void testCalculateAverageDocSize() throws Exception { + DocsStats stats = new DocsStats(10, 2, 120); + assertThat(stats.getAverageSizeInBytes(), equalTo(10L)); + + stats.add(new DocsStats(0, 0, 0)); + assertThat(stats.getAverageSizeInBytes(), equalTo(10L)); + + stats.add(new DocsStats(8, 30, 480)); + assertThat(stats.getCount(), equalTo(18L)); + assertThat(stats.getDeleted(), equalTo(32L)); + assertThat(stats.getTotalSizeInBytes(), equalTo(600L)); + assertThat(stats.getAverageSizeInBytes(), equalTo(12L)); + } + + public void testSerialize() throws Exception { + DocsStats originalStats = new DocsStats(randomNonNegativeLong(), randomNonNegativeLong(), randomNonNegativeLong()); + try (BytesStreamOutput out = new BytesStreamOutput()) { + originalStats.writeTo(out); + BytesReference bytes = out.bytes(); + try (StreamInput in = bytes.streamInput()) { + DocsStats cloneStats = new DocsStats(); + cloneStats.readFrom(in); + assertThat(cloneStats.getCount(), equalTo(originalStats.getCount())); + assertThat(cloneStats.getDeleted(), equalTo(originalStats.getDeleted())); + assertThat(cloneStats.getAverageSizeInBytes(), equalTo(originalStats.getAverageSizeInBytes())); + } + } + } +} diff --git a/core/src/test/java/org/elasticsearch/index/shard/ElasticsearchQueryCachingPolicyTests.java b/core/src/test/java/org/elasticsearch/index/shard/ElasticsearchQueryCachingPolicyTests.java deleted file mode 100644 index 0344a15810f3b..0000000000000 --- a/core/src/test/java/org/elasticsearch/index/shard/ElasticsearchQueryCachingPolicyTests.java +++ /dev/null @@ -1,61 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. 
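The arithmetic in `testCalculateAverageDocSize` above only works out if the average is the total size divided by all documents that still occupy space, live plus deleted, using integer division. That reading is an inference from the asserted values, sketched here with a hypothetical class rather than the production `DocsStats`.

-------------------------------------------------
// Hypothetical sketch of the averaging implied by the DocsStatsTests assertions.
class DocsStatsSketch {
    long count;
    long deleted;
    long totalSizeInBytes;

    DocsStatsSketch(long count, long deleted, long totalSizeInBytes) {
        this.count = count;
        this.deleted = deleted;
        this.totalSizeInBytes = totalSizeInBytes;
    }

    void add(DocsStatsSketch other) {
        count += other.count;
        deleted += other.deleted;
        totalSizeInBytes += other.totalSizeInBytes;
    }

    long averageSizeInBytes() {
        long docs = count + deleted;
        return docs == 0 ? 0 : totalSizeInBytes / docs; // integer division, as the test expects
    }

    public static void main(String[] args) {
        DocsStatsSketch stats = new DocsStatsSketch(10, 2, 120);
        System.out.println(stats.averageSizeInBytes()); // 120 / (10 + 2) = 10
        stats.add(new DocsStatsSketch(8, 30, 480));
        System.out.println(stats.averageSizeInBytes()); // 600 / (18 + 32) = 12
    }
}
-------------------------------------------------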
See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.index.shard; - -import org.apache.lucene.index.Term; -import org.apache.lucene.search.PhraseQuery; -import org.apache.lucene.search.Query; -import org.apache.lucene.search.QueryCachingPolicy; -import org.apache.lucene.search.TermQuery; -import org.elasticsearch.test.ESTestCase; - -import java.io.IOException; - -public class ElasticsearchQueryCachingPolicyTests extends ESTestCase { - - public void testDoesNotCacheTermQueries() throws IOException { - QueryCachingPolicy policy = QueryCachingPolicy.ALWAYS_CACHE; - assertTrue(policy.shouldCache(new TermQuery(new Term("foo", "bar")))); - assertTrue(policy.shouldCache(new PhraseQuery("foo", "bar", "baz"))); - policy = new ElasticsearchQueryCachingPolicy(policy); - assertFalse(policy.shouldCache(new TermQuery(new Term("foo", "bar")))); - assertTrue(policy.shouldCache(new PhraseQuery("foo", "bar", "baz"))); - } - - public void testDoesNotPutTermQueriesIntoTheHistory() { - boolean[] used = new boolean[1]; - QueryCachingPolicy policy = new QueryCachingPolicy() { - @Override - public boolean shouldCache(Query query) throws IOException { - throw new UnsupportedOperationException(); - } - @Override - public void onUse(Query query) { - used[0] = true; - } - }; - policy = new ElasticsearchQueryCachingPolicy(policy); - policy.onUse(new TermQuery(new Term("foo", "bar"))); - assertFalse(used[0]); - policy.onUse(new PhraseQuery("foo", "bar", "baz")); - assertTrue(used[0]); - } - -} diff --git a/core/src/test/java/org/elasticsearch/index/shard/IndexShardIT.java b/core/src/test/java/org/elasticsearch/index/shard/IndexShardIT.java index 2346ba290ae4e..b29ba2d9efcb5 100644 --- a/core/src/test/java/org/elasticsearch/index/shard/IndexShardIT.java +++ b/core/src/test/java/org/elasticsearch/index/shard/IndexShardIT.java @@ -151,7 +151,7 @@ public void testLockTryingToDelete() throws Exception { public void testMarkAsInactiveTriggersSyncedFlush() throws Exception { assertAcked(client().admin().indices().prepareCreate("test") - .setSettings(SETTING_NUMBER_OF_SHARDS, 1, SETTING_NUMBER_OF_REPLICAS, 0)); + .setSettings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 1).put(SETTING_NUMBER_OF_REPLICAS, 0))); client().prepareIndex("test", "test").setSource("{}", XContentType.JSON).get(); ensureGreen("test"); IndicesService indicesService = getInstanceFromNode(IndicesService.class); @@ -220,7 +220,7 @@ private void setDurability(IndexShard shard, Translog.Durability durability) { public void testUpdatePriority() { assertAcked(client().admin().indices().prepareCreate("test") - .setSettings(IndexMetaData.SETTING_PRIORITY, 200)); + .setSettings(Settings.builder().put(IndexMetaData.SETTING_PRIORITY, 200))); IndexService indexService = getInstanceFromNode(IndicesService.class).indexService(resolveIndex("test")); assertEquals(200, indexService.getIndexSettings().getSettings().getAsInt(IndexMetaData.SETTING_PRIORITY, 0).intValue()); client().admin().indices().prepareUpdateSettings("test").setSettings(Settings.builder().put(IndexMetaData.SETTING_PRIORITY, 400) @@ -247,7 +247,7 @@ public void testIndexDirIsDeletedWhenShardRemoved() throws Exception { public void testExpectedShardSizeIsPresent() throws InterruptedException { assertAcked(client().admin().indices().prepareCreate("test") - .setSettings(SETTING_NUMBER_OF_SHARDS, 1, SETTING_NUMBER_OF_REPLICAS, 0)); + .setSettings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 
1).put(SETTING_NUMBER_OF_REPLICAS, 0))); for (int i = 0; i < 50; i++) { client().prepareIndex("test", "test").setSource("{}", XContentType.JSON).get(); } @@ -382,7 +382,6 @@ public void testMaybeRollTranslogGeneration() throws Exception { .builder() .put("index.number_of_shards", 1) .put("index.translog.generation_threshold_size", generationThreshold + "b") - .put() .build(); createIndex("test", settings); ensureGreen("test"); @@ -539,7 +538,7 @@ public static final IndexShard newIndexShard(IndexService indexService, IndexSha IndexShard newShard = new IndexShard(initializingShardRouting, indexService.getIndexSettings(), shard.shardPath(), shard.store(), indexService.getIndexSortSupplier(), indexService.cache(), indexService.mapperService(), indexService.similarityService(), shard.getEngineFactory(), indexService.getIndexEventListener(), wrapper, - indexService.getThreadPool(), indexService.getBigArrays(), null, Collections.emptyList(), Arrays.asList(listeners)); + indexService.getThreadPool(), indexService.getBigArrays(), null, Collections.emptyList(), Arrays.asList(listeners), () -> {}); return newShard; } diff --git a/core/src/test/java/org/elasticsearch/index/shard/IndexShardTests.java b/core/src/test/java/org/elasticsearch/index/shard/IndexShardTests.java index 1332df658c434..89e2f8441741d 100644 --- a/core/src/test/java/org/elasticsearch/index/shard/IndexShardTests.java +++ b/core/src/test/java/org/elasticsearch/index/shard/IndexShardTests.java @@ -67,6 +67,7 @@ import org.elasticsearch.common.util.concurrent.ConcurrentCollections; import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.env.NodeEnvironment; import org.elasticsearch.index.VersionType; @@ -84,9 +85,10 @@ import org.elasticsearch.index.mapper.SeqNoFieldMapper; import org.elasticsearch.index.mapper.SourceToParse; import org.elasticsearch.index.mapper.Uid; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.index.snapshots.IndexShardSnapshotStatus; import org.elasticsearch.index.store.Store; +import org.elasticsearch.index.store.StoreStats; import org.elasticsearch.index.translog.Translog; import org.elasticsearch.index.translog.TranslogTests; import org.elasticsearch.indices.IndicesQueryCache; @@ -150,6 +152,8 @@ import static org.hamcrest.Matchers.hasSize; import static org.hamcrest.Matchers.hasToString; import static org.hamcrest.Matchers.instanceOf; +import static org.hamcrest.Matchers.lessThanOrEqualTo; +import static org.hamcrest.Matchers.notNullValue; import static org.hamcrest.Matchers.nullValue; /** @@ -282,7 +286,7 @@ public void testClosesPreventsNewOperations() throws InterruptedException, Execu // expected } try { - indexShard.acquireReplicaOperationPermit(indexShard.getPrimaryTerm(), SequenceNumbersService.UNASSIGNED_SEQ_NO, null, + indexShard.acquireReplicaOperationPermit(indexShard.getPrimaryTerm(), SequenceNumbers.UNASSIGNED_SEQ_NO, null, ThreadPool.Names.INDEX); fail("we should not be able to increment anymore"); } catch (IndexShardClosedException e) { @@ -294,7 +298,7 @@ public void testRejectOperationPermitWithHigherTermWhenNotStarted() throws IOExc IndexShard indexShard = newShard(false); expectThrows(IndexShardNotStartedException.class, () -> indexShard.acquireReplicaOperationPermit(indexShard.getPrimaryTerm() + 
randomIntBetween(1, 100), - SequenceNumbersService.UNASSIGNED_SEQ_NO, null, ThreadPool.Names.INDEX)); + SequenceNumbers.UNASSIGNED_SEQ_NO, null, ThreadPool.Names.INDEX)); closeShards(indexShard); } @@ -409,12 +413,58 @@ public void onFailure(Exception e) { closeShards(indexShard); } + /** + * This test makes sure that people can use the shard routing entry to check whether a shard was already promoted to + * a primary. Concretely this means, that when we publish the routing entry via {@link IndexShard#routingEntry()} the following + * should have happened + * 1) Internal state (ala GlobalCheckpointTracker) have been updated + * 2) Primary term is set to the new term + */ + public void testPublishingOrderOnPromotion() throws IOException, BrokenBarrierException, InterruptedException { + final IndexShard indexShard = newStartedShard(false); + final long promotedTerm = indexShard.getPrimaryTerm() + 1; + final CyclicBarrier barrier = new CyclicBarrier(2); + final AtomicBoolean stop = new AtomicBoolean(); + final Thread thread = new Thread(() -> { + try { + barrier.await(); + } catch (final BrokenBarrierException | InterruptedException e) { + throw new RuntimeException(e); + } + while(stop.get() == false) { + if (indexShard.routingEntry().primary()) { + assertThat(indexShard.getPrimaryTerm(), equalTo(promotedTerm)); + assertThat(indexShard.getEngine().seqNoService().getReplicationGroup(), notNullValue()); + } + } + }); + thread.start(); + + final ShardRouting replicaRouting = indexShard.routingEntry(); + final ShardRouting primaryRouting = newShardRouting(replicaRouting.shardId(), replicaRouting.currentNodeId(), null, true, + ShardRoutingState.STARTED, replicaRouting.allocationId()); + + + final Set inSyncAllocationIds = Collections.singleton(primaryRouting.allocationId().getId()); + final IndexShardRoutingTable routingTable = + new IndexShardRoutingTable.Builder(primaryRouting.shardId()).addShard(primaryRouting).build(); + barrier.await(); + // promote the replica + indexShard.updateShardState(primaryRouting, promotedTerm, (shard, listener) -> {}, 0L, inSyncAllocationIds, routingTable, + Collections.emptySet()); + + stop.set(true); + thread.join(); + closeShards(indexShard); + } + + public void testPrimaryFillsSeqNoGapsOnPromotion() throws Exception { final IndexShard indexShard = newStartedShard(false); // most of the time this is large enough that most of the time there will be at least one gap final int operations = 1024 - scaledRandomIntBetween(0, 1024); - final Result result = indexOnReplicaWithGaps(indexShard, operations, Math.toIntExact(SequenceNumbersService.NO_OPS_PERFORMED)); + final Result result = indexOnReplicaWithGaps(indexShard, operations, Math.toIntExact(SequenceNumbers.NO_OPS_PERFORMED)); final int maxSeqNo = result.maxSeqNo; final boolean gap = result.gap; @@ -458,6 +508,51 @@ public void onFailure(Exception e) { closeShards(indexShard); } + public void testPrimaryPromotionRollsGeneration() throws Exception { + final IndexShard indexShard = newStartedShard(false); + + final long currentTranslogGeneration = indexShard.getTranslog().getGeneration().translogFileGeneration; + + // promote the replica + final ShardRouting replicaRouting = indexShard.routingEntry(); + final ShardRouting primaryRouting = + newShardRouting( + replicaRouting.shardId(), + replicaRouting.currentNodeId(), + null, + true, + ShardRoutingState.STARTED, + replicaRouting.allocationId()); + indexShard.updateShardState(primaryRouting, indexShard.getPrimaryTerm() + 1, (shard, listener) -> {}, + 0L, 
Collections.singleton(primaryRouting.allocationId().getId()), + new IndexShardRoutingTable.Builder(primaryRouting.shardId()).addShard(primaryRouting).build(), Collections.emptySet()); + + /* + * This operation completing means that the delay operation executed as part of increasing the primary term has completed and the + * translog generation has rolled. + */ + final CountDownLatch latch = new CountDownLatch(1); + indexShard.acquirePrimaryOperationPermit( + new ActionListener() { + @Override + public void onResponse(Releasable releasable) { + releasable.close(); + latch.countDown(); + } + + @Override + public void onFailure(Exception e) { + throw new RuntimeException(e); + } + }, + ThreadPool.Names.GENERIC); + + latch.await(); + assertThat(indexShard.getTranslog().getGeneration().translogFileGeneration, equalTo(currentTranslogGeneration + 1)); + + closeShards(indexShard); + } + public void testOperationPermitsOnPrimaryShards() throws InterruptedException, ExecutionException, IOException { final ShardId shardId = new ShardId("test", "_na_", 0); final IndexShard indexShard; @@ -592,7 +687,7 @@ public void onFailure(Exception e) { } }; - indexShard.acquireReplicaOperationPermit(primaryTerm - 1, SequenceNumbersService.UNASSIGNED_SEQ_NO, onLockAcquired, + indexShard.acquireReplicaOperationPermit(primaryTerm - 1, SequenceNumbers.UNASSIGNED_SEQ_NO, onLockAcquired, ThreadPool.Names.INDEX); assertFalse(onResponse.get()); @@ -608,12 +703,12 @@ public void onFailure(Exception e) { final CyclicBarrier barrier = new CyclicBarrier(2); final long newPrimaryTerm = primaryTerm + 1 + randomInt(20); if (engineClosed == false) { - assertThat(indexShard.getLocalCheckpoint(), equalTo(SequenceNumbersService.NO_OPS_PERFORMED)); - assertThat(indexShard.getGlobalCheckpoint(), equalTo(SequenceNumbersService.NO_OPS_PERFORMED)); + assertThat(indexShard.getLocalCheckpoint(), equalTo(SequenceNumbers.NO_OPS_PERFORMED)); + assertThat(indexShard.getGlobalCheckpoint(), equalTo(SequenceNumbers.NO_OPS_PERFORMED)); } final long newGlobalCheckPoint; if (engineClosed || randomBoolean()) { - newGlobalCheckPoint = SequenceNumbersService.NO_OPS_PERFORMED; + newGlobalCheckPoint = SequenceNumbers.NO_OPS_PERFORMED; } else { long localCheckPoint = indexShard.getGlobalCheckpoint() + randomInt(100); // advance local checkpoint @@ -623,8 +718,8 @@ public void onFailure(Exception e) { newGlobalCheckPoint = randomIntBetween((int) indexShard.getGlobalCheckpoint(), (int) localCheckPoint); } final long expectedLocalCheckpoint; - if (newGlobalCheckPoint == SequenceNumbersService.UNASSIGNED_SEQ_NO) { - expectedLocalCheckpoint = SequenceNumbersService.NO_OPS_PERFORMED; + if (newGlobalCheckPoint == SequenceNumbers.UNASSIGNED_SEQ_NO) { + expectedLocalCheckpoint = SequenceNumbers.NO_OPS_PERFORMED; } else { expectedLocalCheckpoint = newGlobalCheckPoint; } @@ -711,21 +806,80 @@ private void finish() { closeShards(indexShard); } + public void testGlobalCheckpointSync() throws IOException { + // create the primary shard with a callback that sets a boolean when the global checkpoint sync is invoked + final ShardId shardId = new ShardId("index", "_na_", 0); + final ShardRouting shardRouting = + TestShardRouting.newShardRouting( + shardId, + randomAlphaOfLength(8), + true, + ShardRoutingState.INITIALIZING, + RecoverySource.StoreRecoverySource.EMPTY_STORE_INSTANCE); + final Settings settings = Settings.builder() + .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) + .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 2) + 
.put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1) + .build(); + final IndexMetaData.Builder indexMetadata = IndexMetaData.builder(shardRouting.getIndexName()).settings(settings).primaryTerm(0, 1); + final AtomicBoolean synced = new AtomicBoolean(); + final IndexShard primaryShard = newShard(shardRouting, indexMetadata.build(), null, null, () -> { synced.set(true); }); + // add a replica + recoverShardFromStore(primaryShard); + final IndexShard replicaShard = newShard(shardId, false); + recoverReplica(replicaShard, primaryShard); + final int maxSeqNo = randomIntBetween(0, 128); + for (int i = 0; i <= maxSeqNo; i++) { + primaryShard.getEngine().seqNoService().generateSeqNo(); + } + final long checkpoint = rarely() ? maxSeqNo - scaledRandomIntBetween(0, maxSeqNo) : maxSeqNo; + + // set up local checkpoints on the shard copies + primaryShard.updateLocalCheckpointForShard(shardRouting.allocationId().getId(), checkpoint); + final int replicaLocalCheckpoint = randomIntBetween(0, Math.toIntExact(checkpoint)); + final String replicaAllocationId = replicaShard.routingEntry().allocationId().getId(); + primaryShard.updateLocalCheckpointForShard(replicaAllocationId, replicaLocalCheckpoint); + + // initialize the local knowledge on the primary of the global checkpoint on the replica shard + final int replicaGlobalCheckpoint = + randomIntBetween(Math.toIntExact(SequenceNumbers.NO_OPS_PERFORMED), Math.toIntExact(primaryShard.getGlobalCheckpoint())); + primaryShard.updateGlobalCheckpointForShard(replicaAllocationId, replicaGlobalCheckpoint); + + // simulate a background maybe sync; it should only run if the knowledge on the replica of the global checkpoint lags the primary + primaryShard.maybeSyncGlobalCheckpoint("test"); + assertThat( + synced.get(), + equalTo(maxSeqNo == primaryShard.getGlobalCheckpoint() && (replicaGlobalCheckpoint < checkpoint))); + + // simulate that the background sync advanced the global checkpoint on the replica + primaryShard.updateGlobalCheckpointForShard(replicaAllocationId, primaryShard.getGlobalCheckpoint()); + + // reset our boolean so that we can assert after another simulated maybe sync + synced.set(false); + + primaryShard.maybeSyncGlobalCheckpoint("test"); + + // this time there should not be a sync since all the replica copies are caught up with the primary + assertFalse(synced.get()); + + closeShards(replicaShard, primaryShard); + } + public void testRestoreLocalCheckpointTrackerFromTranslogOnPromotion() throws IOException, InterruptedException { final IndexShard indexShard = newStartedShard(false); final int operations = 1024 - scaledRandomIntBetween(0, 1024); - indexOnReplicaWithGaps(indexShard, operations, Math.toIntExact(SequenceNumbersService.NO_OPS_PERFORMED)); + indexOnReplicaWithGaps(indexShard, operations, Math.toIntExact(SequenceNumbers.NO_OPS_PERFORMED)); final long maxSeqNo = indexShard.seqNoStats().getMaxSeqNo(); - final long globalCheckpointOnReplica = SequenceNumbersService.UNASSIGNED_SEQ_NO; + final long globalCheckpointOnReplica = SequenceNumbers.UNASSIGNED_SEQ_NO; randomIntBetween( - Math.toIntExact(SequenceNumbersService.UNASSIGNED_SEQ_NO), + Math.toIntExact(SequenceNumbers.UNASSIGNED_SEQ_NO), Math.toIntExact(indexShard.getLocalCheckpoint())); indexShard.updateGlobalCheckpointOnReplica(globalCheckpointOnReplica, "test"); final int globalCheckpoint = randomIntBetween( - Math.toIntExact(SequenceNumbersService.UNASSIGNED_SEQ_NO), + Math.toIntExact(SequenceNumbers.UNASSIGNED_SEQ_NO), Math.toIntExact(indexShard.getLocalCheckpoint())); final CountDownLatch 
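The comments in `testGlobalCheckpointSync` above reduce to one condition: a background sync is only worth running when at least one in-sync copy's knowledge of the global checkpoint lags what the primary has computed. A simplified, hypothetical check of that condition follows; the real `IndexShard` logic tracks this per allocation id with considerably more state.

-------------------------------------------------
// Simplified sketch: publish the global checkpoint only if some copy is behind.
class GlobalCheckpointSyncSketch {
    static boolean needsSync(long primaryGlobalCheckpoint, long[] checkpointsKnownOnCopies) {
        for (long copyCheckpoint : checkpointsKnownOnCopies) {
            if (copyCheckpoint < primaryGlobalCheckpoint) {
                return true; // a copy lags behind, sync to bring it up to date
            }
        }
        return false; // every copy is caught up, skip the sync
    }

    public static void main(String[] args) {
        System.out.println(needsSync(42, new long[] {42, 17})); // true: one copy lags
        System.out.println(needsSync(42, new long[] {42, 42})); // false: all caught up
    }
}
-------------------------------------------------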
latch = new CountDownLatch(1); @@ -770,17 +924,17 @@ public void testThrowBackLocalCheckpointOnReplica() throws IOException, Interrup // most of the time this is large enough that most of the time there will be at least one gap final int operations = 1024 - scaledRandomIntBetween(0, 1024); - indexOnReplicaWithGaps(indexShard, operations, Math.toIntExact(SequenceNumbersService.NO_OPS_PERFORMED)); + indexOnReplicaWithGaps(indexShard, operations, Math.toIntExact(SequenceNumbers.NO_OPS_PERFORMED)); final long globalCheckpointOnReplica = randomIntBetween( - Math.toIntExact(SequenceNumbersService.UNASSIGNED_SEQ_NO), + Math.toIntExact(SequenceNumbers.UNASSIGNED_SEQ_NO), Math.toIntExact(indexShard.getLocalCheckpoint())); indexShard.updateGlobalCheckpointOnReplica(globalCheckpointOnReplica, "test"); final int globalCheckpoint = randomIntBetween( - Math.toIntExact(SequenceNumbersService.UNASSIGNED_SEQ_NO), + Math.toIntExact(SequenceNumbers.UNASSIGNED_SEQ_NO), Math.toIntExact(indexShard.getLocalCheckpoint())); final CountDownLatch latch = new CountDownLatch(1); indexShard.acquireReplicaOperationPermit( @@ -801,9 +955,9 @@ public void onFailure(final Exception e) { ThreadPool.Names.SAME); latch.await(); - if (globalCheckpointOnReplica == SequenceNumbersService.UNASSIGNED_SEQ_NO - && globalCheckpoint == SequenceNumbersService.UNASSIGNED_SEQ_NO) { - assertThat(indexShard.getLocalCheckpoint(), equalTo(SequenceNumbersService.NO_OPS_PERFORMED)); + if (globalCheckpointOnReplica == SequenceNumbers.UNASSIGNED_SEQ_NO + && globalCheckpoint == SequenceNumbers.UNASSIGNED_SEQ_NO) { + assertThat(indexShard.getLocalCheckpoint(), equalTo(SequenceNumbers.NO_OPS_PERFORMED)); } else { assertThat(indexShard.getLocalCheckpoint(), equalTo(Math.max(globalCheckpoint, globalCheckpointOnReplica))); } @@ -1054,7 +1208,7 @@ public void testRefreshMetric() throws IOException { indexDoc(shard, "test", "test"); try (Engine.GetResult ignored = shard.get(new Engine.Get(true, "test", "test", new Term(IdFieldMapper.NAME, Uid.encodeId("test"))))) { - assertThat(shard.refreshStats().getTotal(), equalTo(refreshCount + 1)); + assertThat(shard.refreshStats().getTotal(), equalTo(refreshCount)); } closeShards(shard); } @@ -1613,8 +1767,8 @@ public void testRestoreShard() throws IOException { target.refresh("test"); assertDocs(target, "1"); flushShard(source); // only flush source - final ShardRouting origRouting = target.routingEntry(); - ShardRouting routing = ShardRoutingHelper.reinitPrimary(origRouting); + ShardRouting routing = ShardRoutingHelper.initWithSameId(target.routingEntry(), + RecoverySource.StoreRecoverySource.EXISTING_STORE_INSTANCE); final Snapshot snapshot = new Snapshot("foo", new SnapshotId("bar", UUIDs.randomBase64UUID())); routing = ShardRoutingHelper.newWithRestoreSource(routing, new RecoverySource.SnapshotRecoverySource(snapshot, Version.CURRENT, "test")); @@ -1676,8 +1830,9 @@ public IndexSearcher wrap(IndexSearcher searcher) throws EngineException { } }; closeShards(shard); - IndexShard newShard = newShard(ShardRoutingHelper.reinitPrimary(shard.routingEntry()), - shard.shardPath(), shard.indexSettings().getIndexMetaData(), wrapper, null); + IndexShard newShard = newShard( + ShardRoutingHelper.initWithSameId(shard.routingEntry(), RecoverySource.StoreRecoverySource.EXISTING_STORE_INSTANCE), + shard.shardPath(), shard.indexSettings().getIndexMetaData(), wrapper, null, () -> {}); recoverShardFromStore(newShard); @@ -1821,8 +1976,9 @@ public IndexSearcher wrap(IndexSearcher searcher) throws EngineException { }; 
closeShards(shard); - IndexShard newShard = newShard(ShardRoutingHelper.reinitPrimary(shard.routingEntry()), - shard.shardPath(), shard.indexSettings().getIndexMetaData(), wrapper, null); + IndexShard newShard = newShard( + ShardRoutingHelper.initWithSameId(shard.routingEntry(), RecoverySource.StoreRecoverySource.EXISTING_STORE_INSTANCE), + shard.shardPath(), shard.indexSettings().getIndexMetaData(), wrapper, null, () -> {}); recoverShardFromStore(newShard); @@ -2119,6 +2275,7 @@ public void testDocStats() throws IOException { final DocsStats docsStats = indexShard.docStats(); assertThat(docsStats.getCount(), equalTo(numDocs)); assertThat(docsStats.getDeleted(), equalTo(0L)); + assertThat(docsStats.getAverageSizeInBytes(), greaterThan(0L)); } final List ids = randomSubsetOf( @@ -2155,12 +2312,70 @@ public void testDocStats() throws IOException { final DocsStats docStats = indexShard.docStats(); assertThat(docStats.getCount(), equalTo(numDocs)); assertThat(docStats.getDeleted(), equalTo(0L)); + assertThat(docStats.getAverageSizeInBytes(), greaterThan(0L)); } } finally { closeShards(indexShard); } } + public void testEstimateTotalDocSize() throws Exception { + IndexShard indexShard = null; + try { + indexShard = newStartedShard(true); + + int numDoc = randomIntBetween(100, 200); + for (int i = 0; i < numDoc; i++) { + String doc = XContentFactory.jsonBuilder() + .startObject() + .field("count", randomInt()) + .field("point", randomFloat()) + .field("description", randomUnicodeOfCodepointLength(100)) + .endObject().string(); + indexDoc(indexShard, "doc", Integer.toString(i), doc); + } + + assertThat("Without flushing, segment sizes should be zero", + indexShard.docStats().getTotalSizeInBytes(), equalTo(0L)); + + indexShard.flush(new FlushRequest()); + indexShard.refresh("test"); + { + final DocsStats docsStats = indexShard.docStats(); + final StoreStats storeStats = indexShard.storeStats(); + assertThat(storeStats.sizeInBytes(), greaterThan(numDoc * 100L)); // A doc should be more than 100 bytes. + + assertThat("Estimated total document size is too small compared with the stored size", + docsStats.getTotalSizeInBytes(), greaterThanOrEqualTo(storeStats.sizeInBytes() * 80/100)); + assertThat("Estimated total document size is too large compared with the stored size", + docsStats.getTotalSizeInBytes(), lessThanOrEqualTo(storeStats.sizeInBytes() * 120/100)); + } + + // Do some updates and deletes, then recheck the correlation again. + for (int i = 0; i < numDoc / 2; i++) { + if (randomBoolean()) { + deleteDoc(indexShard, "doc", Integer.toString(i)); + } else { + indexDoc(indexShard, "doc", Integer.toString(i), "{\"foo\": \"bar\"}"); + } + } + + indexShard.flush(new FlushRequest()); + indexShard.refresh("test"); + { + final DocsStats docsStats = indexShard.docStats(); + final StoreStats storeStats = indexShard.storeStats(); + assertThat("Estimated total document size is too small compared with the stored size", + docsStats.getTotalSizeInBytes(), greaterThanOrEqualTo(storeStats.sizeInBytes() * 80/100)); + assertThat("Estimated total document size is too large compared with the stored size", + docsStats.getTotalSizeInBytes(), lessThanOrEqualTo(storeStats.sizeInBytes() * 120/100)); + } + + } finally { + closeShards(indexShard); + } + } + /** * here we are simulating the scenario that happens when we do async shard fetching from GatewaySerivce while we are finishing * a recovery and concurrently clean files. This should always be possible without any exception. 
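`testEstimateTotalDocSize` above accepts the estimate when it lands within roughly 80% to 120% of the on-disk store size, using the same multiply-then-divide integer arithmetic as the assertions. A tiny hypothetical helper makes that tolerance band explicit.

-------------------------------------------------
// Hypothetical tolerance check mirroring the 80%-120% band used in the test above.
class DocSizeToleranceSketch {
    static boolean withinTolerance(long estimatedTotalDocSize, long storeSizeInBytes) {
        long lowerBound = storeSizeInBytes * 80 / 100;  // multiply first so small sizes do not truncate to zero
        long upperBound = storeSizeInBytes * 120 / 100;
        return estimatedTotalDocSize >= lowerBound && estimatedTotalDocSize <= upperBound;
    }

    public static void main(String[] args) {
        System.out.println(withinTolerance(9_500, 10_000));  // true: inside the band
        System.out.println(withinTolerance(13_000, 10_000)); // false: above 120%
    }
}
-------------------------------------------------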
Yet there was a bug where IndexShard diff --git a/core/src/test/java/org/elasticsearch/index/shard/IndexingOperationListenerTests.java b/core/src/test/java/org/elasticsearch/index/shard/IndexingOperationListenerTests.java index 91ea9c6073a4e..f3bf76c57a550 100644 --- a/core/src/test/java/org/elasticsearch/index/shard/IndexingOperationListenerTests.java +++ b/core/src/test/java/org/elasticsearch/index/shard/IndexingOperationListenerTests.java @@ -24,7 +24,7 @@ import org.elasticsearch.index.engine.InternalEngineTests; import org.elasticsearch.index.mapper.ParsedDocument; import org.elasticsearch.index.mapper.Uid; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.test.ESTestCase; import java.util.ArrayList; @@ -138,7 +138,7 @@ public void postDelete(ShardId shardId, Engine.Delete delete, Exception ex) { ParsedDocument doc = InternalEngineTests.createParsedDoc("1", null); Engine.Delete delete = new Engine.Delete("test", "1", new Term("_uid", Uid.createUidAsBytes(doc.type(), doc.id()))); Engine.Index index = new Engine.Index(new Term("_uid", Uid.createUidAsBytes(doc.type(), doc.id())), doc); - compositeListener.postDelete(randomShardId, delete, new Engine.DeleteResult(1, SequenceNumbersService.UNASSIGNED_SEQ_NO, true)); + compositeListener.postDelete(randomShardId, delete, new Engine.DeleteResult(1, SequenceNumbers.UNASSIGNED_SEQ_NO, true)); assertEquals(0, preIndex.get()); assertEquals(0, postIndex.get()); assertEquals(0, postIndexException.get()); @@ -162,7 +162,7 @@ public void postDelete(ShardId shardId, Engine.Delete delete, Exception ex) { assertEquals(2, postDelete.get()); assertEquals(2, postDeleteException.get()); - compositeListener.postIndex(randomShardId, index, new Engine.IndexResult(0, SequenceNumbersService.UNASSIGNED_SEQ_NO, false)); + compositeListener.postIndex(randomShardId, index, new Engine.IndexResult(0, SequenceNumbers.UNASSIGNED_SEQ_NO, false)); assertEquals(0, preIndex.get()); assertEquals(2, postIndex.get()); assertEquals(0, postIndexException.get()); diff --git a/core/src/test/java/org/elasticsearch/index/shard/NewPathForShardTests.java b/core/src/test/java/org/elasticsearch/index/shard/NewPathForShardTests.java index fc8fc12e75d6a..4e6e3036f4c40 100644 --- a/core/src/test/java/org/elasticsearch/index/shard/NewPathForShardTests.java +++ b/core/src/test/java/org/elasticsearch/index/shard/NewPathForShardTests.java @@ -20,13 +20,14 @@ import org.apache.lucene.mockfile.FilterFileSystemProvider; +import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.io.PathUtils; import org.elasticsearch.common.io.PathUtilsForTesting; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; import org.elasticsearch.env.NodeEnvironment; import org.elasticsearch.env.NodeEnvironment.NodePath; -import org.elasticsearch.index.Index; +import org.elasticsearch.env.TestEnvironment; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.test.ESTestCase; import org.elasticsearch.test.IndexSettingsModule; @@ -36,6 +37,7 @@ import java.io.IOException; import java.nio.file.FileStore; import java.nio.file.FileSystem; +import java.nio.file.Files; import java.nio.file.Path; import java.nio.file.attribute.FileAttributeView; import java.nio.file.attribute.FileStoreAttributeView; @@ -46,6 +48,9 @@ import java.util.List; import java.util.Map; +import static org.hamcrest.Matchers.containsString; +import static 
org.hamcrest.Matchers.equalTo; + /** Separate test class from ShardPathTests because we need static (BeforeClass) setup to install mock filesystems... */ public class NewPathForShardTests extends ESTestCase { @@ -159,6 +164,10 @@ public Object getAttribute(String attribute) throws IOException { } } + static void createFakeShard(ShardPath path) throws IOException { + Files.createDirectories(path.resolveIndex().getParent()); + } + public void testSelectNewPathForShard() throws Exception { Path path = PathUtils.get(createTempDir().toString()); @@ -168,8 +177,8 @@ public void testSelectNewPathForShard() throws Exception { Settings settings = Settings.builder() .put(Environment.PATH_HOME_SETTING.getKey(), path) - .putArray(Environment.PATH_DATA_SETTING.getKey(), paths).build(); - NodeEnvironment nodeEnv = new NodeEnvironment(settings, new Environment(settings)); + .putList(Environment.PATH_DATA_SETTING.getKey(), paths).build(); + NodeEnvironment nodeEnv = new NodeEnvironment(settings, TestEnvironment.newEnvironment(settings)); // Make sure all our mocking above actually worked: NodePath[] nodePaths = nodeEnv.nodePaths(); @@ -200,8 +209,10 @@ public void testSelectNewPathForShard() throws Exception { Map dataPathToShardCount = new HashMap<>(); ShardPath result1 = ShardPath.selectNewPathForShard(nodeEnv, shardId, INDEX_SETTINGS, 100, dataPathToShardCount); + createFakeShard(result1); dataPathToShardCount.put(NodeEnvironment.shardStatePathToDataPath(result1.getDataPath()), 1); ShardPath result2 = ShardPath.selectNewPathForShard(nodeEnv, shardId, INDEX_SETTINGS, 100, dataPathToShardCount); + createFakeShard(result2); // #11122: this was the original failure: on a node with 2 disks that have nearly equal // free space, we would always allocate all N incoming shards to the one path that @@ -211,4 +222,153 @@ public void testSelectNewPathForShard() throws Exception { nodeEnv.close(); } + + public void testSelectNewPathForShardEvenly() throws Exception { + Path path = PathUtils.get(createTempDir().toString()); + + // Use 2 data paths: + String[] paths = new String[] {path.resolve("a").toString(), + path.resolve("b").toString()}; + + Settings settings = Settings.builder() + .put(Environment.PATH_HOME_SETTING.getKey(), path) + .putList(Environment.PATH_DATA_SETTING.getKey(), paths).build(); + NodeEnvironment nodeEnv = new NodeEnvironment(settings, TestEnvironment.newEnvironment(settings)); + + // Make sure all our mocking above actually worked: + NodePath[] nodePaths = nodeEnv.nodePaths(); + assertEquals(2, nodePaths.length); + + assertEquals("mocka", nodePaths[0].fileStore.name()); + assertEquals("mockb", nodePaths[1].fileStore.name()); + + // Path a has lots of free space, but b has little, so new shard should go to a: + aFileStore.usableSpace = 100000; + bFileStore.usableSpace = 10000; + + ShardId shardId = new ShardId("index", "uid1", 0); + ShardPath result = ShardPath.selectNewPathForShard(nodeEnv, shardId, INDEX_SETTINGS, 100, Collections.emptyMap()); + createFakeShard(result); + // First shard should go to a + assertThat(result.getDataPath().toString(), containsString(aPathPart)); + + shardId = new ShardId("index", "uid1", 1); + result = ShardPath.selectNewPathForShard(nodeEnv, shardId, INDEX_SETTINGS, 100, Collections.emptyMap()); + createFakeShard(result); + // Second shard should go to b + assertThat(result.getDataPath().toString(), containsString(bPathPart)); + + Map dataPathToShardCount = new HashMap<>(); + shardId = new ShardId("index2", "uid2", 0); + IndexSettings idxSettings = 
IndexSettingsModule.newIndexSettings("index2", + Settings.builder().put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 3).build()); + ShardPath result1 = ShardPath.selectNewPathForShard(nodeEnv, shardId, idxSettings, 100, dataPathToShardCount); + createFakeShard(result1); + dataPathToShardCount.put(NodeEnvironment.shardStatePathToDataPath(result1.getDataPath()), 1); + shardId = new ShardId("index2", "uid2", 1); + ShardPath result2 = ShardPath.selectNewPathForShard(nodeEnv, shardId, idxSettings, 100, dataPathToShardCount); + createFakeShard(result2); + dataPathToShardCount.put(NodeEnvironment.shardStatePathToDataPath(result2.getDataPath()), 1); + shardId = new ShardId("index2", "uid2", 2); + ShardPath result3 = ShardPath.selectNewPathForShard(nodeEnv, shardId, idxSettings, 100, dataPathToShardCount); + createFakeShard(result3); + // 2 shards go to 'a' and 1 to 'b' + assertThat(result1.getDataPath().toString(), containsString(aPathPart)); + assertThat(result2.getDataPath().toString(), containsString(bPathPart)); + assertThat(result3.getDataPath().toString(), containsString(aPathPart)); + + nodeEnv.close(); + } + + public void testGettingPathWithMostFreeSpace() throws Exception { + Path path = PathUtils.get(createTempDir().toString()); + + // Use 2 data paths: + String[] paths = new String[] {path.resolve("a").toString(), + path.resolve("b").toString()}; + + Settings settings = Settings.builder() + .put(Environment.PATH_HOME_SETTING.getKey(), path) + .putList(Environment.PATH_DATA_SETTING.getKey(), paths).build(); + NodeEnvironment nodeEnv = new NodeEnvironment(settings, TestEnvironment.newEnvironment(settings)); + + aFileStore.usableSpace = 100000; + bFileStore.usableSpace = 1000; + + assertThat(ShardPath.getPathWithMostFreeSpace(nodeEnv), equalTo(nodeEnv.nodePaths()[0])); + + aFileStore.usableSpace = 10000; + bFileStore.usableSpace = 20000; + + assertThat(ShardPath.getPathWithMostFreeSpace(nodeEnv), equalTo(nodeEnv.nodePaths()[1])); + + nodeEnv.close(); + } + + public void testTieBreakWithMostShards() throws Exception { + Path path = PathUtils.get(createTempDir().toString()); + + // Use 2 data paths: + String[] paths = new String[] {path.resolve("a").toString(), + path.resolve("b").toString()}; + + Settings settings = Settings.builder() + .put(Environment.PATH_HOME_SETTING.getKey(), path) + .putList(Environment.PATH_DATA_SETTING.getKey(), paths).build(); + NodeEnvironment nodeEnv = new NodeEnvironment(settings, TestEnvironment.newEnvironment(settings)); + + // Make sure all our mocking above actually worked: + NodePath[] nodePaths = nodeEnv.nodePaths(); + assertEquals(2, nodePaths.length); + + assertEquals("mocka", nodePaths[0].fileStore.name()); + assertEquals("mockb", nodePaths[1].fileStore.name()); + + // Path a has lots of free space, but b has little, so new shard should go to a: + aFileStore.usableSpace = 100000; + bFileStore.usableSpace = 10000; + + Map dataPathToShardCount = new HashMap<>(); + + ShardId shardId = new ShardId("index", "uid1", 0); + ShardPath result = ShardPath.selectNewPathForShard(nodeEnv, shardId, INDEX_SETTINGS, 100, dataPathToShardCount); + createFakeShard(result); + // First shard should go to a + assertThat(result.getDataPath().toString(), containsString(aPathPart)); + dataPathToShardCount.compute(NodeEnvironment.shardStatePathToDataPath(result.getDataPath()), (k, v) -> v == null ? 
1 : v + 1); + + shardId = new ShardId("index", "uid1", 1); + result = ShardPath.selectNewPathForShard(nodeEnv, shardId, INDEX_SETTINGS, 100, dataPathToShardCount); + createFakeShard(result); + // Second shard should go to b + assertThat(result.getDataPath().toString(), containsString(bPathPart)); + dataPathToShardCount.compute(NodeEnvironment.shardStatePathToDataPath(result.getDataPath()), (k, v) -> v == null ? 1 : v + 1); + + shardId = new ShardId("index2", "uid3", 0); + result = ShardPath.selectNewPathForShard(nodeEnv, shardId, INDEX_SETTINGS, 100, dataPathToShardCount); + createFakeShard(result); + // Shard for new index should go to a + assertThat(result.getDataPath().toString(), containsString(aPathPart)); + dataPathToShardCount.compute(NodeEnvironment.shardStatePathToDataPath(result.getDataPath()), (k, v) -> v == null ? 1 : v + 1); + + shardId = new ShardId("index2", "uid2", 0); + IndexSettings idxSettings = IndexSettingsModule.newIndexSettings("index2", + Settings.builder().put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 3).build()); + ShardPath result1 = ShardPath.selectNewPathForShard(nodeEnv, shardId, idxSettings, 100, dataPathToShardCount); + createFakeShard(result1); + dataPathToShardCount.compute(NodeEnvironment.shardStatePathToDataPath(result1.getDataPath()), (k, v) -> v == null ? 1 : v + 1); + shardId = new ShardId("index2", "uid2", 1); + ShardPath result2 = ShardPath.selectNewPathForShard(nodeEnv, shardId, idxSettings, 100, dataPathToShardCount); + createFakeShard(result2); + dataPathToShardCount.compute(NodeEnvironment.shardStatePathToDataPath(result2.getDataPath()), (k, v) -> v == null ? 1 : v + 1); + shardId = new ShardId("index2", "uid2", 2); + ShardPath result3 = ShardPath.selectNewPathForShard(nodeEnv, shardId, idxSettings, 100, dataPathToShardCount); + createFakeShard(result3); + // 2 shards go to 'b' and 1 to 'a' + assertThat(result1.getDataPath().toString(), containsString(bPathPart)); + assertThat(result2.getDataPath().toString(), containsString(aPathPart)); + assertThat(result3.getDataPath().toString(), containsString(bPathPart)); + + nodeEnv.close(); + } } diff --git a/core/src/test/java/org/elasticsearch/index/shard/RefreshListenersTests.java b/core/src/test/java/org/elasticsearch/index/shard/RefreshListenersTests.java index 6b5bd57aed9c2..fcc3c93fc37c3 100644 --- a/core/src/test/java/org/elasticsearch/index/shard/RefreshListenersTests.java +++ b/core/src/test/java/org/elasticsearch/index/shard/RefreshListenersTests.java @@ -28,6 +28,7 @@ import org.apache.lucene.store.Directory; import org.apache.lucene.util.IOUtils; import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.UUIDs; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.lucene.uid.Versions; @@ -55,7 +56,7 @@ import org.elasticsearch.test.junit.annotations.TestLogging; import org.elasticsearch.threadpool.TestThreadPool; import org.elasticsearch.threadpool.ThreadPool; -import org.elasticsearch.threadpool.ThreadPool.Cancellable; +import org.elasticsearch.threadpool.Scheduler.Cancellable; import org.elasticsearch.threadpool.ThreadPool.Names; import org.junit.After; import org.junit.Before; @@ -98,6 +99,7 @@ public void setupListeners() throws Exception { threadPool = new TestThreadPool(getTestName()); IndexSettings indexSettings = IndexSettingsModule.newIndexSettings("index", Settings.EMPTY); ShardId shardId = new ShardId(new Index("index", "_na_"), 1); + String allocationId = 
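One way to read the sequence of assertions in `testSelectNewPathForShardEvenly` and `testTieBreakWithMostShards` above: prefer the data path with the most usable space, but account for shards already placed during the same allocation round so consecutive shards spread across disks. The heuristic below is only illustrative and is not the production `ShardPath.selectNewPathForShard` algorithm.

-------------------------------------------------
import java.util.HashMap;
import java.util.Map;

// Illustrative heuristic only: most free space wins, discounted by shards already
// assigned to that path in this round (estimatedShardSize is a stand-in estimate).
class DataPathPickerSketch {
    static String pickPath(Map<String, Long> usableSpace, Map<String, Integer> newShardsPerPath, long estimatedShardSize) {
        String best = null;
        long bestScore = Long.MIN_VALUE;
        for (Map.Entry<String, Long> entry : usableSpace.entrySet()) {
            int assigned = newShardsPerPath.getOrDefault(entry.getKey(), 0);
            long score = entry.getValue() - assigned * estimatedShardSize;
            if (score > bestScore) {
                bestScore = score;
                best = entry.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, Long> space = new HashMap<>();
        space.put("a", 100_000L);
        space.put("b", 10_000L);
        Map<String, Integer> assigned = new HashMap<>();
        System.out.println(pickPath(space, assigned, 95_000L)); // "a": most free space
        assigned.put("a", 1);
        System.out.println(pickPath(space, assigned, 95_000L)); // "b": "a" discounted by one pending shard
    }
}
-------------------------------------------------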
UUIDs.randomBase64UUID(random()); Directory directory = newDirectory(); DirectoryService directoryService = new DirectoryService(shardId, indexSettings) { @Override @@ -115,10 +117,9 @@ public void onFailedEngine(String reason, @Nullable Exception e) { // we don't need to notify anybody in this test } }; - EngineConfig config = new EngineConfig(EngineConfig.OpenMode.CREATE_INDEX_AND_TRANSLOG, shardId, threadPool, indexSettings, null, - store, newMergePolicy(), iwc.getAnalyzer(), - iwc.getSimilarity(), new CodecService(null, logger), eventListener, - IndexSearcher.getDefaultQueryCache(), IndexSearcher.getDefaultQueryCachingPolicy(), translogConfig, + EngineConfig config = new EngineConfig(EngineConfig.OpenMode.CREATE_INDEX_AND_TRANSLOG, shardId, allocationId, threadPool, + indexSettings, null, store, newMergePolicy(), iwc.getAnalyzer(), iwc.getSimilarity(), new CodecService(null, logger), + eventListener, IndexSearcher.getDefaultQueryCache(), IndexSearcher.getDefaultQueryCachingPolicy(), false, translogConfig, TimeValue.timeValueMinutes(5), Collections.singletonList(listeners), null, null); engine = new InternalEngine(config); listeners.setTranslog(engine.getTranslog()); @@ -269,7 +270,6 @@ public void testConcurrentRefresh() throws Exception { * Uses a bunch of threads to index, wait for refresh, and non-realtime get documents to validate that they are visible after waiting * regardless of what crazy sequence of events causes the refresh listener to fire. */ - @TestLogging("_root:debug,org.elasticsearch.index.engine.Engine.DW:trace") public void testLotsOfThreads() throws Exception { int threadCount = between(3, 10); maxListeners = between(1, threadCount * 2); diff --git a/core/src/test/java/org/elasticsearch/index/shard/ShardSplittingQueryTests.java b/core/src/test/java/org/elasticsearch/index/shard/ShardSplittingQueryTests.java new file mode 100644 index 0000000000000..7351372620fc9 --- /dev/null +++ b/core/src/test/java/org/elasticsearch/index/shard/ShardSplittingQueryTests.java @@ -0,0 +1,193 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ +package org.elasticsearch.index.shard; + +import org.apache.lucene.document.Field; +import org.apache.lucene.document.SortedNumericDocValuesField; +import org.apache.lucene.document.StringField; +import org.apache.lucene.index.DirectoryReader; +import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.index.RandomIndexWriter; +import org.apache.lucene.index.SortedNumericDocValues; +import org.apache.lucene.search.DocIdSetIterator; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.Scorer; +import org.apache.lucene.search.Weight; +import org.apache.lucene.store.Directory; +import org.apache.lucene.util.BytesRef; +import org.elasticsearch.Version; +import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.cluster.routing.OperationRouting; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.mapper.IdFieldMapper; +import org.elasticsearch.index.mapper.RoutingFieldMapper; +import org.elasticsearch.index.mapper.Uid; +import org.elasticsearch.test.ESTestCase; + +import java.io.IOException; +import java.util.Arrays; +import java.util.List; + +public class ShardSplittingQueryTests extends ESTestCase { + + public void testSplitOnID() throws IOException { + Directory dir = newFSDirectory(createTempDir()); + final int numDocs = randomIntBetween(50, 100); + RandomIndexWriter writer = new RandomIndexWriter(random(), dir); + int numShards = randomIntBetween(2, 10); + IndexMetaData metaData = IndexMetaData.builder("test") + .settings(Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT)) + .numberOfShards(numShards) + .setRoutingNumShards(numShards * 1000000) + .numberOfReplicas(0).build(); + int targetShardId = randomIntBetween(0, numShards-1); + for (int j = 0; j < numDocs; j++) { + int shardId = OperationRouting.generateShardId(metaData, Integer.toString(j), null); + writer.addDocument(Arrays.asList( + new StringField(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(j)), Field.Store.YES), + new SortedNumericDocValuesField("shard_id", shardId) + )); + } + writer.commit(); + writer.close(); + + + assertSplit(dir, metaData, targetShardId); + dir.close(); + } + + public void testSplitOnRouting() throws IOException { + Directory dir = newFSDirectory(createTempDir()); + final int numDocs = randomIntBetween(50, 100); + RandomIndexWriter writer = new RandomIndexWriter(random(), dir); + int numShards = randomIntBetween(2, 10); + IndexMetaData metaData = IndexMetaData.builder("test") + .settings(Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT)) + .numberOfShards(numShards) + .setRoutingNumShards(numShards * 1000000) + .numberOfReplicas(0).build(); + int targetShardId = randomIntBetween(0, numShards-1); + for (int j = 0; j < numDocs; j++) { + String routing = randomRealisticUnicodeOfCodepointLengthBetween(1, 5); + final int shardId = OperationRouting.generateShardId(metaData, null, routing); + writer.addDocument(Arrays.asList( + new StringField(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(j)), Field.Store.YES), + new StringField(RoutingFieldMapper.NAME, routing, Field.Store.YES), + new SortedNumericDocValuesField("shard_id", shardId) + )); + } + writer.commit(); + writer.close(); + assertSplit(dir, metaData, targetShardId); + dir.close(); + } + + public void testSplitOnIdOrRouting() throws IOException { + Directory dir = newFSDirectory(createTempDir()); + final int numDocs = randomIntBetween(50, 
100); + RandomIndexWriter writer = new RandomIndexWriter(random(), dir); + int numShards = randomIntBetween(2, 10); + IndexMetaData metaData = IndexMetaData.builder("test") + .settings(Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT)) + .numberOfShards(numShards) + .setRoutingNumShards(numShards * 1000000) + .numberOfReplicas(0).build(); + int targetShardId = randomIntBetween(0, numShards-1); + for (int j = 0; j < numDocs; j++) { + if (randomBoolean()) { + String routing = randomRealisticUnicodeOfCodepointLengthBetween(1, 5); + final int shardId = OperationRouting.generateShardId(metaData, null, routing); + writer.addDocument(Arrays.asList( + new StringField(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(j)), Field.Store.YES), + new StringField(RoutingFieldMapper.NAME, routing, Field.Store.YES), + new SortedNumericDocValuesField("shard_id", shardId) + )); + } else { + int shardId = OperationRouting.generateShardId(metaData, Integer.toString(j), null); + writer.addDocument(Arrays.asList( + new StringField(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(j)), Field.Store.YES), + new SortedNumericDocValuesField("shard_id", shardId) + )); + } + } + writer.commit(); + writer.close(); + assertSplit(dir, metaData, targetShardId); + dir.close(); + } + + + public void testSplitOnRoutingPartitioned() throws IOException { + Directory dir = newFSDirectory(createTempDir()); + final int numDocs = randomIntBetween(50, 100); + RandomIndexWriter writer = new RandomIndexWriter(random(), dir); + int numShards = randomIntBetween(2, 10); + IndexMetaData metaData = IndexMetaData.builder("test") + .settings(Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT)) + .numberOfShards(numShards) + .setRoutingNumShards(numShards * 1000000) + .routingPartitionSize(randomIntBetween(1, 10)) + .numberOfReplicas(0).build(); + int targetShardId = randomIntBetween(0, numShards-1); + for (int j = 0; j < numDocs; j++) { + String routing = randomRealisticUnicodeOfCodepointLengthBetween(1, 5); + final int shardId = OperationRouting.generateShardId(metaData, Integer.toString(j), routing); + writer.addDocument(Arrays.asList( + new StringField(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(j)), Field.Store.YES), + new StringField(RoutingFieldMapper.NAME, routing, Field.Store.YES), + new SortedNumericDocValuesField("shard_id", shardId) + )); + } + writer.commit(); + writer.close(); + assertSplit(dir, metaData, targetShardId); + dir.close(); + } + + + + + void assertSplit(Directory dir, IndexMetaData metaData, int targetShardId) throws IOException { + try (IndexReader reader = DirectoryReader.open(dir)) { + IndexSearcher searcher = new IndexSearcher(reader); + searcher.setQueryCache(null); + final boolean needsScores = false; + final Weight splitWeight = searcher.createNormalizedWeight(new ShardSplittingQuery(metaData, targetShardId), needsScores); + final List leaves = reader.leaves(); + for (final LeafReaderContext ctx : leaves) { + Scorer scorer = splitWeight.scorer(ctx); + DocIdSetIterator iterator = scorer.iterator(); + SortedNumericDocValues shard_id = ctx.reader().getSortedNumericDocValues("shard_id"); + int doc; + while ((doc = iterator.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) { + while (shard_id.nextDoc() < doc) { + long shardID = shard_id.nextValue(); + assertEquals(shardID, targetShardId); + } + assertEquals(shard_id.docID(), doc); + long shardID = shard_id.nextValue(); + BytesRef id = reader.document(doc).getBinaryValue("_id"); + String actualId = 
Uid.decodeId(id.bytes, id.offset, id.length); + assertNotEquals(ctx.reader() + " docID: " + doc + " actualID: " + actualId, shardID, targetShardId); + } + } + } + } +} diff --git a/core/src/test/java/org/elasticsearch/index/shard/StoreRecoveryTests.java b/core/src/test/java/org/elasticsearch/index/shard/StoreRecoveryTests.java index 8d3ac8433d17d..05b092ff3a461 100644 --- a/core/src/test/java/org/elasticsearch/index/shard/StoreRecoveryTests.java +++ b/core/src/test/java/org/elasticsearch/index/shard/StoreRecoveryTests.java @@ -25,17 +25,28 @@ import org.apache.lucene.index.DirectoryReader; import org.apache.lucene.index.IndexWriter; import org.apache.lucene.index.IndexWriterConfig; +import org.apache.lucene.index.LeafReader; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.NoMergePolicy; import org.apache.lucene.index.SegmentCommitInfo; import org.apache.lucene.index.SegmentInfos; +import org.apache.lucene.index.Terms; +import org.apache.lucene.index.TermsEnum; import org.apache.lucene.search.Sort; import org.apache.lucene.search.SortField; import org.apache.lucene.search.SortedNumericSortField; import org.apache.lucene.store.Directory; import org.apache.lucene.store.IOContext; import org.apache.lucene.store.IndexOutput; +import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.IOUtils; +import org.elasticsearch.Version; +import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.cluster.routing.OperationRouting; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.engine.InternalEngine; +import org.elasticsearch.index.mapper.IdFieldMapper; +import org.elasticsearch.index.mapper.Uid; import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.indices.recovery.RecoveryState; import org.elasticsearch.test.ESTestCase; @@ -46,7 +57,9 @@ import java.nio.file.attribute.BasicFileAttributes; import java.security.AccessControlException; import java.util.Arrays; +import java.util.HashSet; import java.util.Map; +import java.util.Set; import java.util.function.Predicate; import static org.hamcrest.CoreMatchers.equalTo; @@ -87,7 +100,7 @@ public void testAddIndices() throws IOException { Directory target = newFSDirectory(createTempDir()); final long maxSeqNo = randomNonNegativeLong(); final long maxUnsafeAutoIdTimestamp = randomNonNegativeLong(); - storeRecovery.addIndices(indexStats, target, indexSort, dirs, maxSeqNo, maxUnsafeAutoIdTimestamp); + storeRecovery.addIndices(indexStats, target, indexSort, dirs, maxSeqNo, maxUnsafeAutoIdTimestamp, null, 0, false); int numFiles = 0; Predicate filesFilter = (f) -> f.startsWith("segments") == false && f.equals("write.lock") == false && f.startsWith("extra") == false; @@ -122,6 +135,99 @@ public void testAddIndices() throws IOException { IOUtils.close(dirs); } + public void testSplitShard() throws IOException { + Directory dir = newFSDirectory(createTempDir()); + final int numDocs = randomIntBetween(50, 100); + final Sort indexSort; + if (randomBoolean()) { + indexSort = new Sort(new SortedNumericSortField("num", SortField.Type.LONG, true)); + } else { + indexSort = null; + } + int id = 0; + IndexWriterConfig iwc = newIndexWriterConfig() + .setMergePolicy(NoMergePolicy.INSTANCE) + .setOpenMode(IndexWriterConfig.OpenMode.CREATE); + if (indexSort != null) { + iwc.setIndexSort(indexSort); + } + IndexWriter writer = new IndexWriter(dir, iwc); + for (int j = 0; j < numDocs; j++) { + writer.addDocument(Arrays.asList( + new 
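`assertSplit` above and `testSplitShard` below rest on the same invariant: each document routes to exactly one shard of the index, so a splitting query over the source must match precisely the documents that do not belong to the target shard. The sketch below uses a stand-in hash; it is not the real `OperationRouting.generateShardId` formula.

-------------------------------------------------
// Stand-in routing sketch; the production code hashes with murmur3 and applies a routing factor.
class ShardMembershipSketch {
    static int shardIdFor(String routingOrId, int numShards) {
        return Math.floorMod(routingOrId.hashCode(), numShards);
    }

    // A split keeps only documents whose routing maps to targetShardId; the splitting
    // query matches (and lets the recovery delete) everything this predicate rejects.
    static boolean belongsToTargetShard(String routingOrId, int numShards, int targetShardId) {
        return shardIdFor(routingOrId, numShards) == targetShardId;
    }

    public static void main(String[] args) {
        int numShards = 3;
        for (int i = 0; i < 5; i++) {
            String id = Integer.toString(i);
            System.out.println(id + " -> shard " + shardIdFor(id, numShards) + ", belongs to shard 0? "
                + belongsToTargetShard(id, numShards, 0));
        }
    }
}
-------------------------------------------------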
StringField(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(j)), Field.Store.YES), + new SortedNumericDocValuesField("num", randomLong()) + )); + } + + writer.commit(); + writer.close(); + StoreRecovery storeRecovery = new StoreRecovery(new ShardId("foo", "bar", 1), logger); + RecoveryState.Index indexStats = new RecoveryState.Index(); + Directory target = newFSDirectory(createTempDir()); + final long maxSeqNo = randomNonNegativeLong(); + final long maxUnsafeAutoIdTimestamp = randomNonNegativeLong(); + int numShards = randomIntBetween(2, 10); + int targetShardId = randomIntBetween(0, numShards-1); + IndexMetaData metaData = IndexMetaData.builder("test") + .settings(Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT)) + .numberOfShards(numShards) + .setRoutingNumShards(numShards * 1000000) + .numberOfReplicas(0).build(); + storeRecovery.addIndices(indexStats, target, indexSort, new Directory[] {dir}, maxSeqNo, maxUnsafeAutoIdTimestamp, metaData, + targetShardId, true); + + + SegmentInfos segmentCommitInfos = SegmentInfos.readLatestCommit(target); + final Map userData = segmentCommitInfos.getUserData(); + assertThat(userData.get(SequenceNumbers.MAX_SEQ_NO), equalTo(Long.toString(maxSeqNo))); + assertThat(userData.get(SequenceNumbers.LOCAL_CHECKPOINT_KEY), equalTo(Long.toString(maxSeqNo))); + assertThat(userData.get(InternalEngine.MAX_UNSAFE_AUTO_ID_TIMESTAMP_COMMIT_ID), equalTo(Long.toString(maxUnsafeAutoIdTimestamp))); + for (SegmentCommitInfo info : segmentCommitInfos) { // check that we didn't merge + assertEquals("all sources must be flush", + info.info.getDiagnostics().get("source"), "flush"); + if (indexSort != null) { + assertEquals(indexSort, info.info.getIndexSort()); + } + } + + iwc = newIndexWriterConfig() + .setMergePolicy(NoMergePolicy.INSTANCE) + .setOpenMode(IndexWriterConfig.OpenMode.CREATE); + if (indexSort != null) { + iwc.setIndexSort(indexSort); + } + writer = new IndexWriter(target, iwc); + writer.forceMerge(1, true); + writer.commit(); + writer.close(); + + DirectoryReader reader = DirectoryReader.open(target); + for (LeafReaderContext ctx : reader.leaves()) { + LeafReader leafReader = ctx.reader(); + Terms terms = leafReader.terms(IdFieldMapper.NAME); + TermsEnum iterator = terms.iterator(); + BytesRef ref; + while((ref = iterator.next()) != null) { + String value = ref.utf8ToString(); + assertEquals("value has wrong shards: " + value, targetShardId, OperationRouting.generateShardId(metaData, value, null)); + } + for (int i = 0; i < numDocs; i++) { + ref = new BytesRef(Integer.toString(i)); + int shardId = OperationRouting.generateShardId(metaData, ref.utf8ToString(), null); + if (shardId == targetShardId) { + assertTrue(ref.utf8ToString() + " is missing", terms.iterator().seekExact(ref)); + } else { + assertFalse(ref.utf8ToString() + " was found but shouldn't", terms.iterator().seekExact(ref)); + } + + } + } + + reader.close(); + target.close(); + IOUtils.close(dir); + } + public void testStatsDirWrapper() throws IOException { Directory dir = newDirectory(); Directory target = newDirectory(); diff --git a/core/src/test/java/org/elasticsearch/index/store/FsDirectoryServiceTests.java b/core/src/test/java/org/elasticsearch/index/store/FsDirectoryServiceTests.java index 0a72037b7d8c0..24ce9b487cc24 100644 --- a/core/src/test/java/org/elasticsearch/index/store/FsDirectoryServiceTests.java +++ b/core/src/test/java/org/elasticsearch/index/store/FsDirectoryServiceTests.java @@ -21,9 +21,7 @@ import org.apache.lucene.store.Directory; import 
org.apache.lucene.store.FileSwitchDirectory; import org.apache.lucene.store.MMapDirectory; -import org.apache.lucene.store.SimpleFSDirectory; import org.apache.lucene.store.SleepingLockWrapper; -import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.IndexModule; import org.elasticsearch.index.IndexSettings; @@ -48,7 +46,7 @@ public void testPreload() throws IOException { private void doTestPreload(String...preload) throws IOException { Settings build = Settings.builder() .put(IndexModule.INDEX_STORE_TYPE_SETTING.getKey(), "mmapfs") - .putArray(IndexModule.INDEX_STORE_PRE_LOAD_SETTING.getKey(), preload) + .putList(IndexModule.INDEX_STORE_PRE_LOAD_SETTING.getKey(), preload) .build(); IndexSettings settings = IndexSettingsModule.newIndexSettings("foo", build); IndexStore store = new IndexStore(settings); diff --git a/core/src/test/java/org/elasticsearch/index/translog/TranslogDeletionPolicyTests.java b/core/src/test/java/org/elasticsearch/index/translog/TranslogDeletionPolicyTests.java index b44a174ceaebf..f62d292730e43 100644 --- a/core/src/test/java/org/elasticsearch/index/translog/TranslogDeletionPolicyTests.java +++ b/core/src/test/java/org/elasticsearch/index/translog/TranslogDeletionPolicyTests.java @@ -21,6 +21,7 @@ import org.apache.lucene.store.ByteArrayDataOutput; import org.apache.lucene.util.IOUtils; +import org.elasticsearch.common.UUIDs; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.lease.Releasable; @@ -42,18 +43,6 @@ public class TranslogDeletionPolicyTests extends ESTestCase { - public static TranslogDeletionPolicy createTranslogDeletionPolicy() { - return new TranslogDeletionPolicy( - IndexSettings.INDEX_TRANSLOG_RETENTION_SIZE_SETTING.getDefault(Settings.EMPTY).getBytes(), - IndexSettings.INDEX_TRANSLOG_RETENTION_AGE_SETTING.getDefault(Settings.EMPTY).getMillis() - ); - } - - public static TranslogDeletionPolicy createTranslogDeletionPolicy(IndexSettings indexSettings) { - return new TranslogDeletionPolicy(indexSettings.getTranslogRetentionSize().getBytes(), - indexSettings.getTranslogRetentionAge().getMillis()); - } - public void testNoRetention() throws IOException { long now = System.currentTimeMillis(); Tuple, TranslogWriter> readersAndWriter = createReadersAndWriter(now); @@ -172,13 +161,14 @@ private Tuple, TranslogWriter> createReadersAndWriter(final TranslogWriter writer = null; List readers = new ArrayList<>(); final int numberOfReaders = randomIntBetween(0, 10); + final String translogUUID = UUIDs.randomBase64UUID(random()); for (long gen = 1; gen <= numberOfReaders + 1; gen++) { if (writer != null) { final TranslogReader reader = Mockito.spy(writer.closeIntoReader()); Mockito.doReturn(writer.getLastModifiedTime()).when(reader).getLastModifiedTime(); readers.add(reader); } - writer = TranslogWriter.create(new ShardId("index", "uuid", 0), "translog_uuid", gen, + writer = TranslogWriter.create(new ShardId("index", "uuid", 0), translogUUID, gen, tempDir.resolve(Translog.getFilename(gen)), FileChannel::open, TranslogConfig.DEFAULT_BUFFER_SIZE, () -> 1L, 1L, () -> 1L ); writer = Mockito.spy(writer); diff --git a/core/src/test/java/org/elasticsearch/index/translog/TranslogTests.java b/core/src/test/java/org/elasticsearch/index/translog/TranslogTests.java index 59e2dd8221853..78ed6697b22b4 100644 --- a/core/src/test/java/org/elasticsearch/index/translog/TranslogTests.java +++ 
b/core/src/test/java/org/elasticsearch/index/translog/TranslogTests.java @@ -34,7 +34,6 @@ import org.apache.lucene.util.IOUtils; import org.apache.lucene.util.LineFileDocs; import org.apache.lucene.util.LuceneTestCase; -import org.elasticsearch.Version; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.Randomness; import org.elasticsearch.common.bytes.BytesArray; @@ -64,7 +63,7 @@ import org.elasticsearch.index.mapper.UidFieldMapper; import org.elasticsearch.index.seqno.LocalCheckpointTracker; import org.elasticsearch.index.seqno.LocalCheckpointTrackerTests; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.index.translog.Translog.Location; import org.elasticsearch.test.ESTestCase; @@ -108,7 +107,7 @@ import static com.carrotsearch.randomizedtesting.RandomizedTest.randomLongBetween; import static org.elasticsearch.common.util.BigArrays.NON_RECYCLING_INSTANCE; -import static org.elasticsearch.index.translog.TranslogDeletionPolicyTests.createTranslogDeletionPolicy; +import static org.elasticsearch.index.translog.TranslogDeletionPolicies.createTranslogDeletionPolicy; import static org.hamcrest.Matchers.containsString; import static org.hamcrest.Matchers.empty; import static org.hamcrest.Matchers.equalTo; @@ -146,7 +145,7 @@ protected void afterIfSuccessful() throws Exception { protected Translog createTranslog(TranslogConfig config, String translogUUID) throws IOException { return new Translog(config, translogUUID, createTranslogDeletionPolicy(config.getIndexSettings()), - () -> SequenceNumbersService.UNASSIGNED_SEQ_NO); + () -> SequenceNumbers.UNASSIGNED_SEQ_NO); } private void markCurrentGenAsCommitted(Translog translog) throws IOException { @@ -188,7 +187,7 @@ public void tearDown() throws Exception { } private Translog create(Path path) throws IOException { - globalCheckpoint = new AtomicLong(SequenceNumbersService.UNASSIGNED_SEQ_NO); + globalCheckpoint = new AtomicLong(SequenceNumbers.UNASSIGNED_SEQ_NO); final TranslogConfig translogConfig = getTranslogConfig(path); final TranslogDeletionPolicy deletionPolicy = createTranslogDeletionPolicy(translogConfig.getIndexSettings()); return new Translog(translogConfig, null, deletionPolicy, () -> globalCheckpoint.get()); @@ -771,7 +770,7 @@ public void testConcurrentWriteViewsAndSnapshot() throws Throwable { final AtomicBoolean run = new AtomicBoolean(true); final Object flushMutex = new Object(); - final AtomicLong lastCommittedLocalCheckpoint = new AtomicLong(SequenceNumbersService.NO_OPS_PERFORMED); + final AtomicLong lastCommittedLocalCheckpoint = new AtomicLong(SequenceNumbers.NO_OPS_PERFORMED); final LocalCheckpointTracker tracker = LocalCheckpointTrackerTests.createEmptyTracker(); final TranslogDeletionPolicy deletionPolicy = translog.getDeletionPolicy(); // any errors on threads @@ -1102,10 +1101,10 @@ public void testTranslogWriter() throws IOException { out.writeInt(i); long seqNo; do { - seqNo = opsHaveValidSequenceNumbers ? randomNonNegativeLong() : SequenceNumbersService.UNASSIGNED_SEQ_NO; + seqNo = opsHaveValidSequenceNumbers ? 
randomNonNegativeLong() : SequenceNumbers.UNASSIGNED_SEQ_NO; opsHaveValidSequenceNumbers = opsHaveValidSequenceNumbers || !rarely(); } while (seenSeqNos.contains(seqNo)); - if (seqNo != SequenceNumbersService.UNASSIGNED_SEQ_NO) { + if (seqNo != SequenceNumbers.UNASSIGNED_SEQ_NO) { seenSeqNos.add(seqNo); } writer.add(new BytesArray(bytes), seqNo); @@ -1120,8 +1119,8 @@ public void testTranslogWriter() throws IOException { final int value = buffer.getInt(); assertEquals(i, value); } - final long minSeqNo = seenSeqNos.stream().min(Long::compareTo).orElse(SequenceNumbersService.NO_OPS_PERFORMED); - final long maxSeqNo = seenSeqNos.stream().max(Long::compareTo).orElse(SequenceNumbersService.NO_OPS_PERFORMED); + final long minSeqNo = seenSeqNos.stream().min(Long::compareTo).orElse(SequenceNumbers.NO_OPS_PERFORMED); + final long maxSeqNo = seenSeqNos.stream().max(Long::compareTo).orElse(SequenceNumbers.NO_OPS_PERFORMED); assertThat(reader.getCheckpoint().minSeqNo, equalTo(minSeqNo)); assertThat(reader.getCheckpoint().maxSeqNo, equalTo(maxSeqNo)); @@ -1211,7 +1210,7 @@ public void testBasicRecovery() throws IOException { assertNull(snapshot.next()); } } else { - translog = new Translog(config, translogGeneration.translogUUID, translog.getDeletionPolicy(), () -> SequenceNumbersService.UNASSIGNED_SEQ_NO); + translog = new Translog(config, translogGeneration.translogUUID, translog.getDeletionPolicy(), () -> SequenceNumbers.UNASSIGNED_SEQ_NO); assertEquals("lastCommitted must be 1 less than current", translogGeneration.translogFileGeneration + 1, translog.currentFileGeneration()); assertFalse(translog.syncNeeded()); try (Translog.Snapshot snapshot = translog.newSnapshotFromGen(translogGeneration.translogFileGeneration)) { @@ -1249,7 +1248,7 @@ public void testRecoveryUncommitted() throws IOException { TranslogConfig config = translog.getConfig(); final String translogUUID = translog.getTranslogUUID(); final TranslogDeletionPolicy deletionPolicy = translog.getDeletionPolicy(); - try (Translog translog = new Translog(config, translogUUID, deletionPolicy, () -> SequenceNumbersService.UNASSIGNED_SEQ_NO)) { + try (Translog translog = new Translog(config, translogUUID, deletionPolicy, () -> SequenceNumbers.UNASSIGNED_SEQ_NO)) { assertNotNull(translogGeneration); assertEquals("lastCommitted must be 2 less than current - we never finished the commit", translogGeneration.translogFileGeneration + 2, translog.currentFileGeneration()); assertFalse(translog.syncNeeded()); @@ -1263,7 +1262,7 @@ public void testRecoveryUncommitted() throws IOException { } } if (randomBoolean()) { // recover twice - try (Translog translog = new Translog(config, translogUUID, deletionPolicy, () -> SequenceNumbersService.UNASSIGNED_SEQ_NO)) { + try (Translog translog = new Translog(config, translogUUID, deletionPolicy, () -> SequenceNumbers.UNASSIGNED_SEQ_NO)) { assertNotNull(translogGeneration); assertEquals("lastCommitted must be 3 less than current - we never finished the commit and run recovery twice", translogGeneration.translogFileGeneration + 3, translog.currentFileGeneration()); assertFalse(translog.syncNeeded()); @@ -1307,7 +1306,7 @@ public void testRecoveryUncommittedFileExists() throws IOException { final String translogUUID = translog.getTranslogUUID(); final TranslogDeletionPolicy deletionPolicy = translog.getDeletionPolicy(); - try (Translog translog = new Translog(config, translogUUID, deletionPolicy, () -> SequenceNumbersService.UNASSIGNED_SEQ_NO)) { + try (Translog translog = new Translog(config, translogUUID, 
deletionPolicy, () -> SequenceNumbers.UNASSIGNED_SEQ_NO)) { assertNotNull(translogGeneration); assertEquals("lastCommitted must be 2 less than current - we never finished the commit", translogGeneration.translogFileGeneration + 2, translog.currentFileGeneration()); assertFalse(translog.syncNeeded()); @@ -1322,7 +1321,7 @@ public void testRecoveryUncommittedFileExists() throws IOException { } if (randomBoolean()) { // recover twice - try (Translog translog = new Translog(config, translogUUID, deletionPolicy, () -> SequenceNumbersService.UNASSIGNED_SEQ_NO)) { + try (Translog translog = new Translog(config, translogUUID, deletionPolicy, () -> SequenceNumbers.UNASSIGNED_SEQ_NO)) { assertNotNull(translogGeneration); assertEquals("lastCommitted must be 3 less than current - we never finished the commit and run recovery twice", translogGeneration.translogFileGeneration + 3, translog.currentFileGeneration()); assertFalse(translog.syncNeeded()); @@ -1359,11 +1358,11 @@ public void testRecoveryUncommittedCorruptedCheckpoint() throws IOException { TranslogConfig config = translog.getConfig(); Path ckp = config.getTranslogPath().resolve(Translog.CHECKPOINT_FILE_NAME); Checkpoint read = Checkpoint.read(ckp); - Checkpoint corrupted = Checkpoint.emptyTranslogCheckpoint(0, 0, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0); + Checkpoint corrupted = Checkpoint.emptyTranslogCheckpoint(0, 0, SequenceNumbers.UNASSIGNED_SEQ_NO, 0); Checkpoint.write(FileChannel::open, config.getTranslogPath().resolve(Translog.getCommitCheckpointFileName(read.generation)), corrupted, StandardOpenOption.WRITE, StandardOpenOption.CREATE_NEW); final String translogUUID = translog.getTranslogUUID(); final TranslogDeletionPolicy deletionPolicy = translog.getDeletionPolicy(); - try (Translog ignored = new Translog(config, translogUUID, deletionPolicy, () -> SequenceNumbersService.UNASSIGNED_SEQ_NO)) { + try (Translog ignored = new Translog(config, translogUUID, deletionPolicy, () -> SequenceNumbers.UNASSIGNED_SEQ_NO)) { fail("corrupted"); } catch (IllegalStateException ex) { assertEquals("Checkpoint file translog-2.ckp already exists but has corrupted content expected: Checkpoint{offset=3123, " + @@ -1371,7 +1370,7 @@ public void testRecoveryUncommittedCorruptedCheckpoint() throws IOException { "generation=0, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-2, minTranslogGeneration=0}", ex.getMessage()); } Checkpoint.write(FileChannel::open, config.getTranslogPath().resolve(Translog.getCommitCheckpointFileName(read.generation)), read, StandardOpenOption.WRITE, StandardOpenOption.TRUNCATE_EXISTING); - try (Translog translog = new Translog(config, translogUUID, deletionPolicy, () -> SequenceNumbersService.UNASSIGNED_SEQ_NO)) { + try (Translog translog = new Translog(config, translogUUID, deletionPolicy, () -> SequenceNumbers.UNASSIGNED_SEQ_NO)) { assertNotNull(translogGeneration); assertEquals("lastCommitted must be 2 less than current - we never finished the commit", translogGeneration.translogFileGeneration + 2, translog.currentFileGeneration()); assertFalse(translog.syncNeeded()); @@ -1448,12 +1447,12 @@ public void testOpenForeignTranslog() throws IOException { final String foreignTranslog = randomRealisticUnicodeOfCodepointLengthBetween(1, translogGeneration.translogUUID.length()); try { - new Translog(config, foreignTranslog, createTranslogDeletionPolicy(), () -> SequenceNumbersService.UNASSIGNED_SEQ_NO); + new Translog(config, foreignTranslog, createTranslogDeletionPolicy(), () -> SequenceNumbers.UNASSIGNED_SEQ_NO); fail("translog 
doesn't belong to this UUID"); } catch (TranslogCorruptedException ex) { } - this.translog = new Translog(config, translogUUID, deletionPolicy, () -> SequenceNumbersService.UNASSIGNED_SEQ_NO); + this.translog = new Translog(config, translogUUID, deletionPolicy, () -> SequenceNumbers.UNASSIGNED_SEQ_NO); try (Translog.Snapshot snapshot = this.translog.newSnapshotFromGen(translogGeneration.translogFileGeneration)) { for (int i = firstUncommitted; i < translogOperations; i++) { Translog.Operation next = snapshot.next(); @@ -1639,7 +1638,7 @@ public void testFailFlush() throws IOException { translog.close(); // we are closed final String translogUUID = translog.getTranslogUUID(); final TranslogDeletionPolicy deletionPolicy = translog.getDeletionPolicy(); - try (Translog tlog = new Translog(config, translogUUID, deletionPolicy, () -> SequenceNumbersService.UNASSIGNED_SEQ_NO)) { + try (Translog tlog = new Translog(config, translogUUID, deletionPolicy, () -> SequenceNumbers.UNASSIGNED_SEQ_NO)) { assertEquals("lastCommitted must be 1 less than current", translogGeneration.translogFileGeneration + 1, tlog.currentFileGeneration()); assertFalse(tlog.syncNeeded()); @@ -1775,7 +1774,7 @@ protected void afterAdd() throws IOException { } } try (Translog tlog = - new Translog(config, translogUUID, createTranslogDeletionPolicy(), () -> SequenceNumbersService.UNASSIGNED_SEQ_NO); + new Translog(config, translogUUID, createTranslogDeletionPolicy(), () -> SequenceNumbers.UNASSIGNED_SEQ_NO); Translog.Snapshot snapshot = tlog.newSnapshot()) { if (writtenOperations.size() != snapshot.totalOperations()) { for (int i = 0; i < threadCount; i++) { @@ -1820,7 +1819,7 @@ public void testRecoveryFromAFutureGenerationCleansUp() throws IOException { TranslogConfig config = translog.getConfig(); final TranslogDeletionPolicy deletionPolicy = new TranslogDeletionPolicy(-1, -1); deletionPolicy.setMinTranslogGenerationForRecovery(comittedGeneration); - translog = new Translog(config, translog.getTranslogUUID(), deletionPolicy, () -> SequenceNumbersService.UNASSIGNED_SEQ_NO); + translog = new Translog(config, translog.getTranslogUUID(), deletionPolicy, () -> SequenceNumbers.UNASSIGNED_SEQ_NO); assertThat(translog.getMinFileGeneration(), equalTo(1L)); // no trimming done yet, just recovered for (long gen = 1; gen < translog.currentFileGeneration(); gen++) { @@ -1874,7 +1873,7 @@ public void testRecoveryFromFailureOnTrimming() throws IOException { } final TranslogDeletionPolicy deletionPolicy = new TranslogDeletionPolicy(-1, -1); deletionPolicy.setMinTranslogGenerationForRecovery(comittedGeneration); - try (Translog translog = new Translog(config, translogUUID, deletionPolicy, () -> SequenceNumbersService.UNASSIGNED_SEQ_NO)) { + try (Translog translog = new Translog(config, translogUUID, deletionPolicy, () -> SequenceNumbers.UNASSIGNED_SEQ_NO)) { // we don't know when things broke exactly assertThat(translog.getMinFileGeneration(), greaterThanOrEqualTo(1L)); assertThat(translog.getMinFileGeneration(), lessThanOrEqualTo(comittedGeneration)); @@ -1922,7 +1921,7 @@ public void onceFailedFailAlways() { private Translog getFailableTranslog(final FailSwitch fail, final TranslogConfig config, final boolean partialWrites, final boolean throwUnknownException, String translogUUID, final TranslogDeletionPolicy deletionPolicy) throws IOException { - return new Translog(config, translogUUID, deletionPolicy, () -> SequenceNumbersService.UNASSIGNED_SEQ_NO) { + return new Translog(config, translogUUID, deletionPolicy, () -> 
SequenceNumbers.UNASSIGNED_SEQ_NO) { @Override ChannelFactory getChannelFactory() { final ChannelFactory factory = super.getChannelFactory(); @@ -2048,7 +2047,7 @@ public void testFailWhileCreateWriteWithRecoveredTLogs() throws IOException { translog.add(new Translog.Index("test", "boom", 0, "boom".getBytes(Charset.forName("UTF-8")))); translog.close(); try { - new Translog(config, translog.getTranslogUUID(), createTranslogDeletionPolicy(), () -> SequenceNumbersService.UNASSIGNED_SEQ_NO) { + new Translog(config, translog.getTranslogUUID(), createTranslogDeletionPolicy(), () -> SequenceNumbers.UNASSIGNED_SEQ_NO) { @Override protected TranslogWriter createWriter(long fileGeneration) throws IOException { throw new MockDirectoryWrapper.FakeIOException(); @@ -2101,7 +2100,7 @@ public void testRecoverWithUnbackedNextGenInIllegalState() throws IOException { Files.createFile(config.getTranslogPath().resolve("translog-" + (read.generation + 1) + ".tlog")); try { - Translog tlog = new Translog(config, translog.getTranslogUUID(), translog.getDeletionPolicy(), () -> SequenceNumbersService.UNASSIGNED_SEQ_NO); + Translog tlog = new Translog(config, translog.getTranslogUUID(), translog.getDeletionPolicy(), () -> SequenceNumbers.UNASSIGNED_SEQ_NO); fail("file already exists?"); } catch (TranslogException ex) { // all is well @@ -2123,7 +2122,7 @@ public void testRecoverWithUnbackedNextGenAndFutureFile() throws IOException { Files.createFile(config.getTranslogPath().resolve("translog-" + (read.generation + 1) + ".tlog")); // we add N+1 and N+2 to ensure we only delete the N+1 file and never jump ahead and wipe without the right condition Files.createFile(config.getTranslogPath().resolve("translog-" + (read.generation + 2) + ".tlog")); - try (Translog tlog = new Translog(config, translogUUID, deletionPolicy, () -> SequenceNumbersService.UNASSIGNED_SEQ_NO)) { + try (Translog tlog = new Translog(config, translogUUID, deletionPolicy, () -> SequenceNumbers.UNASSIGNED_SEQ_NO)) { assertFalse(tlog.syncNeeded()); try (Translog.Snapshot snapshot = tlog.newSnapshot()) { for (int i = 0; i < 1; i++) { @@ -2136,7 +2135,7 @@ public void testRecoverWithUnbackedNextGenAndFutureFile() throws IOException { } try { - Translog tlog = new Translog(config, translogUUID, deletionPolicy, () -> SequenceNumbersService.UNASSIGNED_SEQ_NO); + Translog tlog = new Translog(config, translogUUID, deletionPolicy, () -> SequenceNumbers.UNASSIGNED_SEQ_NO); fail("file already exists?"); } catch (TranslogException ex) { // all is well @@ -2239,8 +2238,8 @@ public void testWithRandomException() throws IOException { fail.failNever(); // we don't wanna fail here but we might since we write a new checkpoint and create a new tlog file TranslogDeletionPolicy deletionPolicy = createTranslogDeletionPolicy(); deletionPolicy.setMinTranslogGenerationForRecovery(minGenForRecovery); - try (Translog translog = new Translog(config, generationUUID, deletionPolicy, () -> SequenceNumbersService.UNASSIGNED_SEQ_NO); - Translog.Snapshot snapshot = translog.newSnapshotFromGen(minGenForRecovery)) { + try (Translog translog = new Translog(config, generationUUID, deletionPolicy, () -> SequenceNumbers.UNASSIGNED_SEQ_NO); + Translog.Snapshot snapshot = translog.newSnapshotFromGen(minGenForRecovery)) { assertEquals(syncedDocs.size(), snapshot.totalOperations()); for (int i = 0; i < syncedDocs.size(); i++) { Translog.Operation next = snapshot.next(); @@ -2304,14 +2303,14 @@ public void testPendingDelete() throws IOException { final String translogUUID = 
translog.getTranslogUUID(); final TranslogDeletionPolicy deletionPolicy = createTranslogDeletionPolicy(config.getIndexSettings()); translog.close(); - translog = new Translog(config, translogUUID, deletionPolicy, () -> SequenceNumbersService.UNASSIGNED_SEQ_NO); + translog = new Translog(config, translogUUID, deletionPolicy, () -> SequenceNumbers.UNASSIGNED_SEQ_NO); translog.add(new Translog.Index("test", "2", 1, new byte[]{2})); translog.rollGeneration(); Closeable lock = translog.acquireRetentionLock(); translog.add(new Translog.Index("test", "3", 2, new byte[]{3})); translog.close(); IOUtils.close(lock); - translog = new Translog(config, translogUUID, deletionPolicy, () -> SequenceNumbersService.UNASSIGNED_SEQ_NO); + translog = new Translog(config, translogUUID, deletionPolicy, () -> SequenceNumbers.UNASSIGNED_SEQ_NO); } public static Translog.Location randomTranslogLocation() { diff --git a/core/src/test/java/org/elasticsearch/index/translog/TranslogVersionTests.java b/core/src/test/java/org/elasticsearch/index/translog/TranslogVersionTests.java index f6aafe765f56f..d57373ebfe349 100644 --- a/core/src/test/java/org/elasticsearch/index/translog/TranslogVersionTests.java +++ b/core/src/test/java/org/elasticsearch/index/translog/TranslogVersionTests.java @@ -19,7 +19,7 @@ package org.elasticsearch.index.translog; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.test.ESTestCase; import java.io.IOException; @@ -86,10 +86,10 @@ public void testTruncatedTranslog() throws Exception { public TranslogReader openReader(final Path path, final long id) throws IOException { try (FileChannel channel = FileChannel.open(path, StandardOpenOption.READ)) { - final long minSeqNo = SequenceNumbersService.NO_OPS_PERFORMED; - final long maxSeqNo = SequenceNumbersService.NO_OPS_PERFORMED; + final long minSeqNo = SequenceNumbers.NO_OPS_PERFORMED; + final long maxSeqNo = SequenceNumbers.NO_OPS_PERFORMED; final Checkpoint checkpoint = - new Checkpoint(Files.size(path), 1, id, minSeqNo, maxSeqNo, SequenceNumbersService.UNASSIGNED_SEQ_NO, id); + new Checkpoint(Files.size(path), 1, id, minSeqNo, maxSeqNo, SequenceNumbers.UNASSIGNED_SEQ_NO, id); return TranslogReader.open(channel, path, checkpoint, null); } } diff --git a/core/src/test/java/org/elasticsearch/index/translog/TruncateTranslogIT.java b/core/src/test/java/org/elasticsearch/index/translog/TruncateTranslogIT.java index 60434d95e6209..b0d4c238679e8 100644 --- a/core/src/test/java/org/elasticsearch/index/translog/TruncateTranslogIT.java +++ b/core/src/test/java/org/elasticsearch/index/translog/TruncateTranslogIT.java @@ -33,6 +33,7 @@ import org.elasticsearch.action.admin.indices.recovery.RecoveryResponse; import org.elasticsearch.action.index.IndexRequestBuilder; import org.elasticsearch.action.search.SearchPhaseExecutionException; +import org.elasticsearch.action.search.SearchRequestBuilder; import org.elasticsearch.cli.MockTerminal; import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.routing.GroupShardsIterator; @@ -77,7 +78,6 @@ import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount; import static org.hamcrest.Matchers.containsString; -import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.greaterThan; import static org.hamcrest.Matchers.notNullValue; @@ -144,16 +144,12 @@ public void 
testCorruptTranslogTruncation() throws Exception { } } - final boolean expectSeqNoRecovery; if (randomBoolean() && numDocsToTruncate > 0) { // flush the replica, so it will have more docs than what the primary will have Index index = resolveIndex("test"); IndexShard replica = internalCluster().getInstance(IndicesService.class, replicaNode).getShardOrNull(new ShardId(index, 0)); replica.flush(new FlushRequest()); - expectSeqNoRecovery = false; - logger.info("--> ops based recovery disabled by flushing replica"); - } else { - expectSeqNoRecovery = true; + logger.info("--> performed extra flushing on replica"); } // shut down the replica node to be tested later @@ -215,12 +211,14 @@ public void testCorruptTranslogTruncation() throws Exception { logger.info("--> starting the replica node to test recovery"); internalCluster().startNode(); ensureGreen("test"); - assertHitCount(client().prepareSearch("test").setPreference("_replica").setQuery(matchAllQuery()).get(), numDocsToKeep); + for (String node : internalCluster().nodesInclude("test")) { + SearchRequestBuilder q = client().prepareSearch("test").setPreference("_only_nodes:" + node).setQuery(matchAllQuery()); + assertHitCount(q.get(), numDocsToKeep); + } final RecoveryResponse recoveryResponse = client().admin().indices().prepareRecoveries("test").setActiveOnly(false).get(); final RecoveryState replicaRecoveryState = recoveryResponse.shardRecoveryStates().get("test").stream() .filter(recoveryState -> recoveryState.getPrimary() == false).findFirst().get(); - assertThat(replicaRecoveryState.getIndex().toString(), replicaRecoveryState.getIndex().recoveredFileCount(), - expectSeqNoRecovery ? equalTo(0) : greaterThan(0)); + assertThat(replicaRecoveryState.getIndex().toString(), replicaRecoveryState.getIndex().recoveredFileCount(), greaterThan(0)); } public void testCorruptTranslogTruncationOfReplica() throws Exception { @@ -314,7 +312,9 @@ public void testCorruptTranslogTruncationOfReplica() throws Exception { logger.info("--> starting the replica node to test recovery"); internalCluster().startNode(); ensureGreen("test"); - assertHitCount(client().prepareSearch("test").setPreference("_replica").setQuery(matchAllQuery()).get(), totalDocs); + for (String node : internalCluster().nodesInclude("test")) { + assertHitCount(client().prepareSearch("test").setPreference("_only_nodes:" + node).setQuery(matchAllQuery()).get(), totalDocs); + } final RecoveryResponse recoveryResponse = client().admin().indices().prepareRecoveries("test").setActiveOnly(false).get(); final RecoveryState replicaRecoveryState = recoveryResponse.shardRecoveryStates().get("test").stream() diff --git a/core/src/test/java/org/elasticsearch/indexing/IndexActionIT.java b/core/src/test/java/org/elasticsearch/indexing/IndexActionIT.java index 63f889179a27c..7abb603b8eb2d 100644 --- a/core/src/test/java/org/elasticsearch/indexing/IndexActionIT.java +++ b/core/src/test/java/org/elasticsearch/indexing/IndexActionIT.java @@ -250,6 +250,6 @@ public void testDocumentWithBlankFieldName() { ); assertThat(e.getMessage(), containsString("failed to parse")); assertThat(e.getRootCause().getMessage(), - containsString("object field starting or ending with a [.] 
makes object resolution ambiguous: []")); + containsString("field name cannot be an empty string")); } } diff --git a/core/src/test/java/org/elasticsearch/indices/IndexingMemoryControllerTests.java b/core/src/test/java/org/elasticsearch/indices/IndexingMemoryControllerTests.java index d8367b0d6a6d4..2a490c1dcf973 100644 --- a/core/src/test/java/org/elasticsearch/indices/IndexingMemoryControllerTests.java +++ b/core/src/test/java/org/elasticsearch/indices/IndexingMemoryControllerTests.java @@ -35,7 +35,7 @@ import org.elasticsearch.indices.recovery.RecoveryState; import org.elasticsearch.test.ESSingleNodeTestCase; import org.elasticsearch.threadpool.ThreadPool; -import org.elasticsearch.threadpool.ThreadPool.Cancellable; +import org.elasticsearch.threadpool.Scheduler.Cancellable; import java.io.IOException; import java.util.ArrayList; diff --git a/core/src/test/java/org/elasticsearch/indices/IndicesLifecycleListenerIT.java b/core/src/test/java/org/elasticsearch/indices/IndicesLifecycleListenerIT.java index 4dbdd5e37f6c6..f36dd9a78b89b 100644 --- a/core/src/test/java/org/elasticsearch/indices/IndicesLifecycleListenerIT.java +++ b/core/src/test/java/org/elasticsearch/indices/IndicesLifecycleListenerIT.java @@ -101,13 +101,15 @@ public void beforeIndexCreated(Index index, Settings indexSettings) { internalCluster().getInstance(MockIndexEventListener.TestEventListener.class, node3).setNewDelegate(listener); client().admin().indices().prepareCreate("test") - .setSettings(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 3, IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1).get(); + .setSettings(Settings.builder().put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 3) + .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1)).get(); ensureGreen("test"); assertThat("beforeIndexAddedToCluster called only once", beforeAddedCount.get(), equalTo(1)); assertThat("beforeIndexCreated called on each data node", allCreatedCount.get(), greaterThanOrEqualTo(3)); try { - client().admin().indices().prepareCreate("failed").setSettings("index.fail", true).get(); + client().admin().indices().prepareCreate("failed") + .setSettings(Settings.builder().put("index.fail", true)).get(); fail("should have thrown an exception during creation"); } catch (Exception e) { assertTrue(e.getMessage().contains("failing on purpose")); @@ -122,7 +124,8 @@ public void beforeIndexCreated(Index index, Settings indexSettings) { */ public void testIndexShardFailedOnRelocation() throws Throwable { String node1 = internalCluster().startNode(); - client().admin().indices().prepareCreate("index1").setSettings(SETTING_NUMBER_OF_SHARDS, 1, SETTING_NUMBER_OF_REPLICAS, 0).get(); + client().admin().indices().prepareCreate("index1") + .setSettings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 1).put(SETTING_NUMBER_OF_REPLICAS, 0)).get(); ensureGreen("index1"); String node2 = internalCluster().startNode(); internalCluster().getInstance(MockIndexEventListener.TestEventListener.class, node2).setNewDelegate(new IndexShardStateChangeListener() { @@ -148,7 +151,8 @@ public void testIndexStateShardChanged() throws Throwable { //create an index that should fail try { - client().admin().indices().prepareCreate("failed").setSettings(SETTING_NUMBER_OF_SHARDS, 1, "index.fail", true).get(); + client().admin().indices().prepareCreate("failed") + .setSettings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 1).put("index.fail", true)).get(); fail("should have thrown an exception"); } catch (ElasticsearchException e) { assertTrue(e.getMessage().contains("failing on purpose")); @@ -159,7 
+163,7 @@ public void testIndexStateShardChanged() throws Throwable { //create an index assertAcked(client().admin().indices().prepareCreate("test") - .setSettings(SETTING_NUMBER_OF_SHARDS, 6, SETTING_NUMBER_OF_REPLICAS, 0)); + .setSettings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 6).put(SETTING_NUMBER_OF_REPLICAS, 0))); ensureGreen(); assertThat(stateChangeListenerNode1.creationSettings.getAsInt(SETTING_NUMBER_OF_SHARDS, -1), equalTo(6)); assertThat(stateChangeListenerNode1.creationSettings.getAsInt(SETTING_NUMBER_OF_REPLICAS, -1), equalTo(0)); diff --git a/core/src/test/java/org/elasticsearch/indices/IndicesLifecycleListenerSingleNodeTests.java b/core/src/test/java/org/elasticsearch/indices/IndicesLifecycleListenerSingleNodeTests.java index 0dc760d63bfe7..aa06f9e9b7dfe 100644 --- a/core/src/test/java/org/elasticsearch/indices/IndicesLifecycleListenerSingleNodeTests.java +++ b/core/src/test/java/org/elasticsearch/indices/IndicesLifecycleListenerSingleNodeTests.java @@ -52,7 +52,7 @@ public class IndicesLifecycleListenerSingleNodeTests extends ESSingleNodeTestCas public void testStartDeleteIndexEventCallback() throws Throwable { IndicesService indicesService = getInstanceFromNode(IndicesService.class); assertAcked(client().admin().indices().prepareCreate("test") - .setSettings(SETTING_NUMBER_OF_SHARDS, 1, SETTING_NUMBER_OF_REPLICAS, 0)); + .setSettings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 1).put(SETTING_NUMBER_OF_REPLICAS, 0))); ensureGreen(); Index idx = resolveIndex("test"); IndexMetaData metaData = indicesService.indexService(idx).getMetaData(); @@ -130,7 +130,7 @@ public void afterIndexRemoved(Index index, IndexSettings indexSettings, IndexRem newRouting = newRouting.moveToUnassigned(unassignedInfo) .updateUnassigned(unassignedInfo, RecoverySource.StoreRecoverySource.EMPTY_STORE_INSTANCE); newRouting = ShardRoutingHelper.initialize(newRouting, nodeId); - IndexShard shard = index.createShard(newRouting); + IndexShard shard = index.createShard(newRouting, s -> {}); IndexShardTestCase.updateRoutingEntry(shard, newRouting); assertEquals(5, counter.get()); final DiscoveryNode localNode = new DiscoveryNode("foo", buildNewFakeTransportAddress(), diff --git a/core/src/test/java/org/elasticsearch/indices/IndicesOptionsIntegrationIT.java b/core/src/test/java/org/elasticsearch/indices/IndicesOptionsIntegrationIT.java index 76d5be5ea1e8c..977719b5398dd 100644 --- a/core/src/test/java/org/elasticsearch/indices/IndicesOptionsIntegrationIT.java +++ b/core/src/test/java/org/elasticsearch/indices/IndicesOptionsIntegrationIT.java @@ -599,7 +599,8 @@ public void testPutMappingMultiType() throws Exception { verify(client().admin().indices().preparePutMapping("_all").setType("type1").setSource("field", "type=text"), true); for (String index : Arrays.asList("foo", "foobar", "bar", "barbaz")) { - assertAcked(prepareCreate(index).setSettings("index.version.created", Version.V_5_6_0.id)); // allows for multiple types + assertAcked(prepareCreate(index).setSettings(Settings.builder().put("index.version.created", Version.V_5_6_0.id))); + // allows for multiple types } verify(client().admin().indices().preparePutMapping("foo").setType("type1").setSource("field", "type=text"), false); diff --git a/core/src/test/java/org/elasticsearch/indices/IndicesRequestCacheIT.java b/core/src/test/java/org/elasticsearch/indices/IndicesRequestCacheIT.java index c672d7e5bc509..1884361a47f69 100644 --- a/core/src/test/java/org/elasticsearch/indices/IndicesRequestCacheIT.java +++ 
b/core/src/test/java/org/elasticsearch/indices/IndicesRequestCacheIT.java @@ -24,6 +24,7 @@ import org.elasticsearch.action.search.SearchType; import org.elasticsearch.client.Client; import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.query.QueryBuilders; import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramInterval; import org.elasticsearch.search.aggregations.bucket.histogram.Histogram; @@ -51,7 +52,7 @@ public void testCacheAggs() throws Exception { Client client = client(); assertAcked(client.admin().indices().prepareCreate("index") .addMapping("type", "f", "type=date") - .setSettings(IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED_SETTING.getKey(), true).get()); + .setSettings(Settings.builder().put(IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED_SETTING.getKey(), true)).get()); indexRandom(true, client.prepareIndex("index", "type").setSource("f", "2014-03-10T00:00:00.000Z"), client.prepareIndex("index", "type").setSource("f", "2014-05-13T00:00:00.000Z")); @@ -93,10 +94,8 @@ public void testCacheAggs() throws Exception { public void testQueryRewrite() throws Exception { Client client = client(); assertAcked(client.admin().indices().prepareCreate("index").addMapping("type", "s", "type=date") - .setSettings(IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED_SETTING.getKey(), true, - IndexMetaData.SETTING_NUMBER_OF_SHARDS, 5, - IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0) - .get()); + .setSettings(Settings.builder().put(IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED_SETTING.getKey(), true) + .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 5).put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)).get()); indexRandom(true, client.prepareIndex("index", "type", "1").setRouting("1").setSource("s", "2016-03-19"), client.prepareIndex("index", "type", "2").setRouting("1").setSource("s", "2016-03-20"), client.prepareIndex("index", "type", "3").setRouting("1").setSource("s", "2016-03-21"), @@ -147,9 +146,8 @@ public void testQueryRewrite() throws Exception { public void testQueryRewriteMissingValues() throws Exception { Client client = client(); assertAcked(client.admin().indices().prepareCreate("index").addMapping("type", "s", "type=date") - .setSettings(IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED_SETTING.getKey(), true, IndexMetaData.SETTING_NUMBER_OF_SHARDS, - 1, IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0) - .get()); + .setSettings(Settings.builder().put(IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED_SETTING.getKey(), true) + .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1).put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)).get()); indexRandom(true, client.prepareIndex("index", "type", "1").setSource("s", "2016-03-19"), client.prepareIndex("index", "type", "2").setSource("s", "2016-03-20"), client.prepareIndex("index", "type", "3").setSource("s", "2016-03-21"), @@ -197,10 +195,8 @@ public void testQueryRewriteMissingValues() throws Exception { public void testQueryRewriteDates() throws Exception { Client client = client(); assertAcked(client.admin().indices().prepareCreate("index").addMapping("type", "d", "type=date") - .setSettings(IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED_SETTING.getKey(), true, - IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1, - IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0) - .get()); + .setSettings(Settings.builder().put(IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED_SETTING.getKey(), true) + .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 
1).put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)).get()); indexRandom(true, client.prepareIndex("index", "type", "1").setSource("d", "2014-01-01T00:00:00"), client.prepareIndex("index", "type", "2").setSource("d", "2014-02-01T00:00:00"), client.prepareIndex("index", "type", "3").setSource("d", "2014-03-01T00:00:00"), @@ -250,18 +246,14 @@ public void testQueryRewriteDates() throws Exception { public void testQueryRewriteDatesWithNow() throws Exception { Client client = client(); + Settings settings = Settings.builder().put(IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED_SETTING.getKey(), true) + .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1).put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0).build(); assertAcked(client.admin().indices().prepareCreate("index-1").addMapping("type", "d", "type=date") - .setSettings(IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED_SETTING.getKey(), true, IndexMetaData.SETTING_NUMBER_OF_SHARDS, - 1, IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0) - .get()); + .setSettings(settings).get()); assertAcked(client.admin().indices().prepareCreate("index-2").addMapping("type", "d", "type=date") - .setSettings(IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED_SETTING.getKey(), true, IndexMetaData.SETTING_NUMBER_OF_SHARDS, - 1, IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0) - .get()); + .setSettings(settings).get()); assertAcked(client.admin().indices().prepareCreate("index-3").addMapping("type", "d", "type=date") - .setSettings(IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED_SETTING.getKey(), true, IndexMetaData.SETTING_NUMBER_OF_SHARDS, - 1, IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0) - .get()); + .setSettings(settings).get()); DateTime now = new DateTime(ISOChronology.getInstanceUTC()); indexRandom(true, client.prepareIndex("index-1", "type", "1").setSource("d", now), client.prepareIndex("index-1", "type", "2").setSource("d", now.minusDays(1)), @@ -369,9 +361,10 @@ public void testQueryRewriteDatesWithNow() throws Exception { public void testCanCache() throws Exception { Client client = client(); + Settings settings = Settings.builder().put(IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED_SETTING.getKey(), true) + .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 2).put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0).build(); assertAcked(client.admin().indices().prepareCreate("index").addMapping("type", "s", "type=date") - .setSettings(IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED_SETTING.getKey(), true, IndexMetaData.SETTING_NUMBER_OF_SHARDS, - 2, IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0) + .setSettings(settings) .get()); indexRandom(true, client.prepareIndex("index", "type", "1").setRouting("1").setSource("s", "2016-03-19"), client.prepareIndex("index", "type", "2").setRouting("1").setSource("s", "2016-03-20"), @@ -455,9 +448,10 @@ public void testCanCache() throws Exception { public void testCacheWithFilteredAlias() { Client client = client(); + Settings settings = Settings.builder().put(IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED_SETTING.getKey(), true) + .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1).put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0).build(); assertAcked(client.admin().indices().prepareCreate("index").addMapping("type", "created_at", "type=date") - .setSettings(IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED_SETTING.getKey(), true, IndexMetaData.SETTING_NUMBER_OF_SHARDS, - 1, IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0) + .setSettings(settings) .addAlias(new Alias("last_week").filter(QueryBuilders.rangeQuery("created_at").gte("now-7d/d"))) .get()); 
DateTime now = new DateTime(DateTimeZone.UTC); diff --git a/core/src/test/java/org/elasticsearch/indices/analysis/AnalysisModuleTests.java b/core/src/test/java/org/elasticsearch/indices/analysis/AnalysisModuleTests.java index 2b9f7c0c1b730..2bc98885f9096 100644 --- a/core/src/test/java/org/elasticsearch/indices/analysis/AnalysisModuleTests.java +++ b/core/src/test/java/org/elasticsearch/indices/analysis/AnalysisModuleTests.java @@ -34,6 +34,7 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.env.Environment; +import org.elasticsearch.env.TestEnvironment; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.analysis.Analysis; import org.elasticsearch.index.analysis.AnalysisRegistry; @@ -91,7 +92,7 @@ public IndexAnalyzers getIndexAnalyzers(AnalysisRegistry registry, Settings sett public AnalysisRegistry getNewRegistry(Settings settings) { try { - return new AnalysisModule(new Environment(settings), singletonList(new AnalysisPlugin() { + return new AnalysisModule(TestEnvironment.newEnvironment(settings), singletonList(new AnalysisPlugin() { @Override public Map> getTokenFilters() { return singletonMap("myfilter", MyFilterTokenFilterFactory::new); @@ -108,7 +109,7 @@ public Map> getCharFilters() { } private Settings loadFromClasspath(String path) throws IOException { - return Settings.builder().loadFromStream(path, getClass().getResourceAsStream(path)) + return Settings.builder().loadFromStream(path, getClass().getResourceAsStream(path), false) .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) .build(); @@ -141,7 +142,7 @@ public void testAnalyzerAliasNotAllowedPost5x() throws IOException { public void testVersionedAnalyzers() throws Exception { String yaml = "/org/elasticsearch/index/analysis/test1.yml"; Settings settings2 = Settings.builder() - .loadFromStream(yaml, getClass().getResourceAsStream(yaml)) + .loadFromStream(yaml, getClass().getResourceAsStream(yaml), false) .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) .put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_5_0_0) .build(); @@ -162,7 +163,8 @@ public void testVersionedAnalyzers() throws Exception { indexAnalyzers.get("thai").analyzer().getVersion()); assertThat(indexAnalyzers.get("custom7").analyzer(), is(instanceOf(StandardAnalyzer.class))); - assertEquals(org.apache.lucene.util.Version.fromBits(3,6,0), indexAnalyzers.get("custom7").analyzer().getVersion()); + assertEquals(org.apache.lucene.util.Version.fromBits(3,6,0), + indexAnalyzers.get("custom7").analyzer().getVersion()); } private void testSimpleConfiguration(Settings settings) throws IOException { @@ -194,14 +196,14 @@ public void testWordListPath() throws Exception { Settings settings = Settings.builder() .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) .build(); - Environment env = new Environment(settings); + Environment env = TestEnvironment.newEnvironment(settings); String[] words = new String[]{"donau", "dampf", "schiff", "spargel", "creme", "suppe"}; Path wordListFile = generateWordList(words); settings = Settings.builder().loadFromSource("index: \n word_list_path: " + wordListFile.toAbsolutePath(), XContentType.YAML) .build(); - Set wordList = Analysis.getWordSet(env, Version.CURRENT, settings, "index.word_list"); + Set wordList = Analysis.getWordSet(env, settings, "index.word_list"); 
MatcherAssert.assertThat(wordList.size(), equalTo(6)); // MatcherAssert.assertThat(wordList, hasItems(words)); Files.delete(wordListFile); @@ -241,7 +243,8 @@ public void testPluginPreConfiguredCharFilters() throws IOException { boolean noVersionSupportsMultiTerm = randomBoolean(); boolean luceneVersionSupportsMultiTerm = randomBoolean(); boolean elasticsearchVersionSupportsMultiTerm = randomBoolean(); - AnalysisRegistry registry = new AnalysisModule(new Environment(emptyNodeSettings), singletonList(new AnalysisPlugin() { + AnalysisRegistry registry = new AnalysisModule(TestEnvironment.newEnvironment(emptyNodeSettings), + singletonList(new AnalysisPlugin() { @Override public List getPreConfiguredCharFilters() { return Arrays.asList( @@ -285,7 +288,8 @@ public void testPluginPreConfiguredTokenFilters() throws IOException { boolean noVersionSupportsMultiTerm = randomBoolean(); boolean luceneVersionSupportsMultiTerm = randomBoolean(); boolean elasticsearchVersionSupportsMultiTerm = randomBoolean(); - AnalysisRegistry registry = new AnalysisModule(new Environment(emptyNodeSettings), singletonList(new AnalysisPlugin() { + AnalysisRegistry registry = new AnalysisModule(TestEnvironment.newEnvironment(emptyNodeSettings), + singletonList(new AnalysisPlugin() { @Override public List getPreConfiguredTokenFilters() { return Arrays.asList( @@ -359,7 +363,8 @@ public void reset() throws IOException { read = false; } } - AnalysisRegistry registry = new AnalysisModule(new Environment(emptyNodeSettings), singletonList(new AnalysisPlugin() { + AnalysisRegistry registry = new AnalysisModule(TestEnvironment.newEnvironment(emptyNodeSettings), + singletonList(new AnalysisPlugin() { @Override public List getPreConfiguredTokenizers() { return Arrays.asList( @@ -402,7 +407,7 @@ public void testRegisterHunspellDictionary() throws Exception { .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) .build(); - Environment environment = new Environment(settings); + Environment environment = TestEnvironment.newEnvironment(settings); InputStream aff = getClass().getResourceAsStream("/indices/analyze/conf_dir/hunspell/en_US/en_US.aff"); InputStream dic = getClass().getResourceAsStream("/indices/analyze/conf_dir/hunspell/en_US/en_US.dic"); Dictionary dictionary; diff --git a/core/src/test/java/org/elasticsearch/indices/analyze/AnalyzeActionIT.java b/core/src/test/java/org/elasticsearch/indices/analyze/AnalyzeActionIT.java index d53dba67e0dc4..9f214082d4b22 100644 --- a/core/src/test/java/org/elasticsearch/indices/analyze/AnalyzeActionIT.java +++ b/core/src/test/java/org/elasticsearch/indices/analyze/AnalyzeActionIT.java @@ -117,9 +117,9 @@ public void testAnalyzeWithNonDefaultPostionLength() throws Exception { assertAcked(prepareCreate("test").addAlias(new Alias("alias")) .setSettings(Settings.builder().put(indexSettings()) .put("index.analysis.filter.syns.type", "synonym") - .putArray("index.analysis.filter.syns.synonyms", "wtf, what the fudge") + .putList("index.analysis.filter.syns.synonyms", "wtf, what the fudge") .put("index.analysis.analyzer.custom_syns.tokenizer", "standard") - .putArray("index.analysis.analyzer.custom_syns.filter", "lowercase", "syns"))); + .putList("index.analysis.analyzer.custom_syns.filter", "lowercase", "syns"))); ensureGreen(); AnalyzeResponse analyzeResponse = client().admin().indices().prepareAnalyze("say what the fudge").setIndex("test").setAnalyzer("custom_syns").get(); @@ -446,7 +446,7 @@ public void 
testAnalyzeNormalizedKeywordField() throws IOException { assertAcked(prepareCreate("test").addAlias(new Alias("alias")) .setSettings(Settings.builder().put(indexSettings()) .put("index.analysis.normalizer.my_normalizer.type", "custom") - .putArray("index.analysis.normalizer.my_normalizer.filter", "lowercase")) + .putList("index.analysis.normalizer.my_normalizer.filter", "lowercase")) .addMapping("test", "keyword", "type=keyword,normalizer=my_normalizer")); ensureGreen("test"); diff --git a/core/src/test/java/org/elasticsearch/indices/cluster/AbstractIndicesClusterStateServiceTestCase.java b/core/src/test/java/org/elasticsearch/indices/cluster/AbstractIndicesClusterStateServiceTestCase.java index 208e7443c7daf..35bbc497838f2 100644 --- a/core/src/test/java/org/elasticsearch/indices/cluster/AbstractIndicesClusterStateServiceTestCase.java +++ b/core/src/test/java/org/elasticsearch/indices/cluster/AbstractIndicesClusterStateServiceTestCase.java @@ -226,7 +226,8 @@ public MockIndexShard createShard(ShardRouting shardRouting, RecoveryState recov PeerRecoveryTargetService recoveryTargetService, PeerRecoveryTargetService.RecoveryListener recoveryListener, RepositoriesService repositoriesService, - Consumer onShardFailure) throws IOException { + Consumer onShardFailure, + Consumer globalCheckpointSyncer) throws IOException { failRandomly(); MockIndexService indexService = indexService(recoveryState.getShardId().getIndex()); MockIndexShard indexShard = indexService.createShard(shardRouting); diff --git a/core/src/test/java/org/elasticsearch/indices/cluster/ClusterStateChanges.java b/core/src/test/java/org/elasticsearch/indices/cluster/ClusterStateChanges.java index feff696e94201..6e6eaf726a599 100644 --- a/core/src/test/java/org/elasticsearch/indices/cluster/ClusterStateChanges.java +++ b/core/src/test/java/org/elasticsearch/indices/cluster/ClusterStateChanges.java @@ -72,10 +72,10 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.env.Environment; +import org.elasticsearch.env.TestEnvironment; import org.elasticsearch.index.IndexService; import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.shard.IndexEventListener; -import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.indices.IndicesService; import org.elasticsearch.test.gateway.TestGatewayAllocator; import org.elasticsearch.threadpool.ThreadPool; @@ -87,7 +87,6 @@ import java.util.Collections; import java.util.HashSet; import java.util.List; -import java.util.function.Consumer; import java.util.stream.Collectors; import static com.carrotsearch.randomizedtesting.RandomizedTest.getRandom; @@ -132,7 +131,7 @@ public ClusterStateChanges(NamedXContentRegistry xContentRegistry, ThreadPool th ActionFilters actionFilters = new ActionFilters(Collections.emptySet()); IndexNameExpressionResolver indexNameExpressionResolver = new IndexNameExpressionResolver(settings); DestructiveOperations destructiveOperations = new DestructiveOperations(settings, clusterSettings); - Environment environment = new Environment(settings); + Environment environment = TestEnvironment.newEnvironment(settings); Transport transport = null; // it's not used // mocks @@ -170,7 +169,7 @@ public IndexMetaData upgradeIndexMetaData(IndexMetaData indexMetaData, Version m } }; MetaDataIndexStateService indexStateService = new MetaDataIndexStateService(settings, clusterService, allocationService, - metaDataIndexUpgradeService, indicesService); + 
metaDataIndexUpgradeService, indicesService, threadPool); MetaDataDeleteIndexService deleteIndexService = new MetaDataDeleteIndexService(settings, clusterService, allocationService); MetaDataUpdateSettingsService metaDataUpdateSettingsService = new MetaDataUpdateSettingsService(settings, clusterService, allocationService, IndexScopedSettings.DEFAULT_SCOPED_SETTINGS, indicesService, threadPool); diff --git a/core/src/test/java/org/elasticsearch/indices/cluster/IndicesClusterStateServiceRandomUpdatesTests.java b/core/src/test/java/org/elasticsearch/indices/cluster/IndicesClusterStateServiceRandomUpdatesTests.java index a356693213f35..bc5a5b95b958a 100644 --- a/core/src/test/java/org/elasticsearch/indices/cluster/IndicesClusterStateServiceRandomUpdatesTests.java +++ b/core/src/test/java/org/elasticsearch/indices/cluster/IndicesClusterStateServiceRandomUpdatesTests.java @@ -410,20 +410,20 @@ private IndicesClusterStateService createIndicesClusterStateService(DiscoveryNod final ShardStateAction shardStateAction = mock(ShardStateAction.class); final PrimaryReplicaSyncer primaryReplicaSyncer = mock(PrimaryReplicaSyncer.class); return new IndicesClusterStateService( - settings, - indicesService, - clusterService, - threadPool, - recoveryTargetService, - shardStateAction, - null, - repositoriesService, - null, - null, - null, - null, - null, - primaryReplicaSyncer); + settings, + indicesService, + clusterService, + threadPool, + recoveryTargetService, + shardStateAction, + null, + repositoriesService, + null, + null, + null, + null, + primaryReplicaSyncer, + s -> {}); } private class RecordingIndicesService extends MockIndicesService { diff --git a/core/src/test/java/org/elasticsearch/indices/exists/types/TypesExistsIT.java b/core/src/test/java/org/elasticsearch/indices/exists/types/TypesExistsIT.java index 506cdc812fc03..ae3ec36759eee 100644 --- a/core/src/test/java/org/elasticsearch/indices/exists/types/TypesExistsIT.java +++ b/core/src/test/java/org/elasticsearch/indices/exists/types/TypesExistsIT.java @@ -23,6 +23,7 @@ import org.elasticsearch.action.admin.indices.exists.types.TypesExistsResponse; import org.elasticsearch.client.Client; import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.IndexNotFoundException; import org.elasticsearch.plugins.Plugin; import org.elasticsearch.test.ESIntegTestCase; @@ -51,7 +52,7 @@ protected Collection> nodePlugins() { public void testSimple() throws Exception { Client client = client(); CreateIndexResponse response1 = client.admin().indices().prepareCreate("test1") - .setSettings("index.version.created", Version.V_5_6_0.id) + .setSettings(Settings.builder().put("index.version.created", Version.V_5_6_0.id)) .addMapping("type1", jsonBuilder().startObject().startObject("type1").endObject().endObject()) .addMapping("type2", jsonBuilder().startObject().startObject("type2").endObject().endObject()) .execute().actionGet(); diff --git a/core/src/test/java/org/elasticsearch/indices/flush/FlushIT.java b/core/src/test/java/org/elasticsearch/indices/flush/FlushIT.java index 0416dbb885547..e918f09e2946e 100644 --- a/core/src/test/java/org/elasticsearch/indices/flush/FlushIT.java +++ b/core/src/test/java/org/elasticsearch/indices/flush/FlushIT.java @@ -90,7 +90,7 @@ public void onFailure(Exception e) { public void testSyncedFlush() throws ExecutionException, InterruptedException, IOException { internalCluster().ensureAtLeastNumDataNodes(2); - 
prepareCreate("test").setSettings(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1).get(); + prepareCreate("test").setSettings(Settings.builder().put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)).get(); ensureGreen(); final Index index = client().admin().cluster().prepareState().get().getState().metaData().index("test").getIndex(); diff --git a/core/src/test/java/org/elasticsearch/indices/mapping/SimpleGetFieldMappingsIT.java b/core/src/test/java/org/elasticsearch/indices/mapping/SimpleGetFieldMappingsIT.java index a1faa4d5eeb71..4393749b722c6 100644 --- a/core/src/test/java/org/elasticsearch/indices/mapping/SimpleGetFieldMappingsIT.java +++ b/core/src/test/java/org/elasticsearch/indices/mapping/SimpleGetFieldMappingsIT.java @@ -21,6 +21,7 @@ import org.elasticsearch.Version; import org.elasticsearch.action.admin.indices.mapping.get.GetFieldMappingsResponse; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; @@ -128,11 +129,11 @@ public void testGetFieldMappings() throws Exception { public void testGetFieldMappingsMultiType() throws Exception { assertTrue("remove this multi type test", Version.CURRENT.before(Version.fromString("7.0.0"))); assertAcked(prepareCreate("indexa") - .setSettings("index.version.created", Version.V_5_6_0.id) + .setSettings(Settings.builder().put("index.version.created", Version.V_5_6_0.id)) .addMapping("typeA", getMappingForType("typeA")) .addMapping("typeB", getMappingForType("typeB"))); assertAcked(client().admin().indices().prepareCreate("indexb") - .setSettings("index.version.created", Version.V_5_6_0.id) + .setSettings(Settings.builder().put("index.version.created", Version.V_5_6_0.id)) .addMapping("typeA", getMappingForType("typeA")) .addMapping("typeB", getMappingForType("typeB"))); diff --git a/core/src/test/java/org/elasticsearch/indices/mapping/SimpleGetMappingsIT.java b/core/src/test/java/org/elasticsearch/indices/mapping/SimpleGetMappingsIT.java index 52a2add15c117..722f8d6f820c8 100644 --- a/core/src/test/java/org/elasticsearch/indices/mapping/SimpleGetMappingsIT.java +++ b/core/src/test/java/org/elasticsearch/indices/mapping/SimpleGetMappingsIT.java @@ -23,6 +23,7 @@ import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse; import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse; import org.elasticsearch.common.Priority; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.plugins.Plugin; import org.elasticsearch.test.ESIntegTestCase; @@ -66,14 +67,14 @@ private XContentBuilder getMappingForType(String type) throws IOException { public void testSimpleGetMappings() throws Exception { client().admin().indices().prepareCreate("indexa") - .setSettings("index.version.created", Version.V_5_6_0.id) + .setSettings(Settings.builder().put("index.version.created", Version.V_5_6_0.id)) .addMapping("typeA", getMappingForType("typeA")) .addMapping("typeB", getMappingForType("typeB")) .addMapping("Atype", getMappingForType("Atype")) .addMapping("Btype", getMappingForType("Btype")) .execute().actionGet(); client().admin().indices().prepareCreate("indexb") - .setSettings("index.version.created", Version.V_5_6_0.id) + .setSettings(Settings.builder().put("index.version.created", Version.V_5_6_0.id)) .addMapping("typeA", getMappingForType("typeA")) .addMapping("typeB", 
getMappingForType("typeB")) .addMapping("Atype", getMappingForType("Atype")) diff --git a/core/src/test/java/org/elasticsearch/indices/mapping/UpdateMappingIntegrationIT.java b/core/src/test/java/org/elasticsearch/indices/mapping/UpdateMappingIntegrationIT.java index d33b2e5cf7694..f65250a5666de 100644 --- a/core/src/test/java/org/elasticsearch/indices/mapping/UpdateMappingIntegrationIT.java +++ b/core/src/test/java/org/elasticsearch/indices/mapping/UpdateMappingIntegrationIT.java @@ -353,7 +353,7 @@ public void testPutMappingsWithBlocks() throws Exception { public void testUpdateMappingOnAllTypes() throws IOException { assertTrue("remove this multi type test", Version.CURRENT.before(Version.fromString("7.0.0"))); assertAcked(prepareCreate("index") - .setSettings("index.version.created", Version.V_5_6_0.id) + .setSettings(Settings.builder().put("index.version.created", Version.V_5_6_0.id)) .addMapping("type1", "f", "type=keyword").addMapping("type2", "f", "type=keyword")); assertAcked(client().admin().indices().preparePutMapping("index") diff --git a/core/src/test/java/org/elasticsearch/indices/memory/breaker/RandomExceptionCircuitBreakerIT.java b/core/src/test/java/org/elasticsearch/indices/memory/breaker/RandomExceptionCircuitBreakerIT.java index 6dd9126a51983..ed38ec8b05b96 100644 --- a/core/src/test/java/org/elasticsearch/indices/memory/breaker/RandomExceptionCircuitBreakerIT.java +++ b/core/src/test/java/org/elasticsearch/indices/memory/breaker/RandomExceptionCircuitBreakerIT.java @@ -123,7 +123,7 @@ public void testBreakerWithRandomExceptions() throws IOException, InterruptedExc .put(EXCEPTION_TOP_LEVEL_RATIO_KEY, topLevelRate) .put(EXCEPTION_LOW_LEVEL_RATIO_KEY, lowLevelRate) .put(MockEngineSupport.WRAP_READER_RATIO.getKey(), 1.0d); - logger.info("creating index: [test] using settings: [{}]", settings.build().getAsMap()); + logger.info("creating index: [test] using settings: [{}]", settings.build()); CreateIndexResponse response = client().admin().indices().prepareCreate("test") .setSettings(settings) .addMapping("type", mapping, XContentType.JSON).execute().actionGet(); diff --git a/core/src/test/java/org/elasticsearch/indices/recovery/PeerRecoverySourceServiceTests.java b/core/src/test/java/org/elasticsearch/indices/recovery/PeerRecoverySourceServiceTests.java index b69fa1321ed37..524795bfa2480 100644 --- a/core/src/test/java/org/elasticsearch/indices/recovery/PeerRecoverySourceServiceTests.java +++ b/core/src/test/java/org/elasticsearch/indices/recovery/PeerRecoverySourceServiceTests.java @@ -21,8 +21,10 @@ import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.shard.IndexShardTestCase; +import org.elasticsearch.index.store.Store; import org.elasticsearch.indices.IndicesService; import org.elasticsearch.transport.TransportService; @@ -39,7 +41,8 @@ public void testDuplicateRecoveries() throws IOException { mock(TransportService.class), mock(IndicesService.class), new RecoverySettings(Settings.EMPTY, new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS))); StartRecoveryRequest startRecoveryRequest = new StartRecoveryRequest(primary.shardId(), randomAlphaOfLength(10), - getFakeDiscoNode("source"), getFakeDiscoNode("target"), null, randomBoolean(), randomLong(), randomLong()); + getFakeDiscoNode("source"), getFakeDiscoNode("target"), Store.MetadataSnapshot.EMPTY, 
randomBoolean(), randomLong(), + SequenceNumbers.UNASSIGNED_SEQ_NO); RecoverySourceHandler handler = peerRecoverySourceService.ongoingRecoveries.addNewRecovery(startRecoveryRequest, primary); DelayRecoveryException delayRecoveryException = expectThrows(DelayRecoveryException.class, () -> peerRecoverySourceService.ongoingRecoveries.addNewRecovery(startRecoveryRequest, primary)); diff --git a/core/src/test/java/org/elasticsearch/indices/recovery/PeerRecoveryTargetServiceTests.java b/core/src/test/java/org/elasticsearch/indices/recovery/PeerRecoveryTargetServiceTests.java index 437d1eaa84150..835d16117ad60 100644 --- a/core/src/test/java/org/elasticsearch/indices/recovery/PeerRecoveryTargetServiceTests.java +++ b/core/src/test/java/org/elasticsearch/indices/recovery/PeerRecoveryTargetServiceTests.java @@ -25,7 +25,7 @@ import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.VersionType; import org.elasticsearch.index.mapper.SourceToParse; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.shard.IndexShardTestCase; import org.elasticsearch.index.shard.ShardId; @@ -73,11 +73,11 @@ Path translogLocation() { translogLocation.set(replica.getTranslog().location()); - assertThat(PeerRecoveryTargetService.getStartingSeqNo(recoveryTarget), equalTo(SequenceNumbersService.UNASSIGNED_SEQ_NO)); - final Translog translog = replica.getTranslog(); - translogLocation.set( - writeTranslog(replica.shardId(), translog.getTranslogUUID(), translog.currentFileGeneration(), maxSeqNo - 1)); + final String translogUUID = translog.getTranslogUUID(); + assertThat(PeerRecoveryTargetService.getStartingSeqNo(recoveryTarget), equalTo(SequenceNumbers.UNASSIGNED_SEQ_NO)); + + translogLocation.set(writeTranslog(replica.shardId(), translogUUID, translog.currentFileGeneration(), maxSeqNo - 1)); // commit is good, global checkpoint is at least max *committed* which is NO_OPS_PERFORMED assertThat(PeerRecoveryTargetService.getStartingSeqNo(recoveryTarget), equalTo(0L)); @@ -87,10 +87,9 @@ Path translogLocation() { translogLocation.set(replica.getTranslog().location()); // commit is not good, global checkpoint is below max - assertThat(PeerRecoveryTargetService.getStartingSeqNo(recoveryTarget), equalTo(SequenceNumbersService.UNASSIGNED_SEQ_NO)); + assertThat(PeerRecoveryTargetService.getStartingSeqNo(recoveryTarget), equalTo(SequenceNumbers.UNASSIGNED_SEQ_NO)); - translogLocation.set( - writeTranslog(replica.shardId(), translog.getTranslogUUID(), translog.currentFileGeneration(), maxSeqNo)); + translogLocation.set(writeTranslog(replica.shardId(), translogUUID, translog.currentFileGeneration(), maxSeqNo)); // commit is good, global checkpoint is above max assertThat(PeerRecoveryTargetService.getStartingSeqNo(recoveryTarget), equalTo(localCheckpoint + 1)); diff --git a/core/src/test/java/org/elasticsearch/indices/recovery/RecoverySourceHandlerTests.java b/core/src/test/java/org/elasticsearch/indices/recovery/RecoverySourceHandlerTests.java index 4f1a2364d184b..993cc84506498 100644 --- a/core/src/test/java/org/elasticsearch/indices/recovery/RecoverySourceHandlerTests.java +++ b/core/src/test/java/org/elasticsearch/indices/recovery/RecoverySourceHandlerTests.java @@ -38,6 +38,7 @@ import org.elasticsearch.action.ActionListener; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.node.DiscoveryNode; +import 
org.elasticsearch.common.UUIDs; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.FileSystemUtils; @@ -56,7 +57,7 @@ import org.elasticsearch.index.mapper.Uid; import org.elasticsearch.index.mapper.UidFieldMapper; import org.elasticsearch.index.seqno.SeqNoStats; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.shard.IndexShardRelocatedException; import org.elasticsearch.index.shard.IndexShardState; @@ -96,17 +97,9 @@ public class RecoverySourceHandlerTests extends ESTestCase { public void testSendFiles() throws Throwable { Settings settings = Settings.builder().put("indices.recovery.concurrent_streams", 1). - put("indices.recovery.concurrent_small_file_streams", 1).build(); + put("indices.recovery.concurrent_small_file_streams", 1).build(); final RecoverySettings recoverySettings = new RecoverySettings(settings, service); - final StartRecoveryRequest request = new StartRecoveryRequest( - shardId, - null, - new DiscoveryNode("b", buildNewFakeTransportAddress(), emptyMap(), emptySet(), Version.CURRENT), - new DiscoveryNode("b", buildNewFakeTransportAddress(), emptyMap(), emptySet(), Version.CURRENT), - null, - randomBoolean(), - randomNonNegativeLong(), - randomBoolean() ? SequenceNumbersService.UNASSIGNED_SEQ_NO : randomNonNegativeLong()); + final StartRecoveryRequest request = getStartRecoveryRequest(); Store store = newStore(createTempDir()); RecoverySourceHandler handler = new RecoverySourceHandler(null, null, request, recoverySettings.getChunkSize().bytesAsInt(), Settings.EMPTY); @@ -151,19 +144,26 @@ public void close() throws IOException { IOUtils.close(reader, store, targetStore); } - public void testSendSnapshotSendsOps() throws IOException { - final RecoverySettings recoverySettings = new RecoverySettings(Settings.EMPTY, service); - final int fileChunkSizeInBytes = recoverySettings.getChunkSize().bytesAsInt(); - final long startingSeqNo = randomBoolean() ? SequenceNumbersService.UNASSIGNED_SEQ_NO : randomIntBetween(0, 16); - final StartRecoveryRequest request = new StartRecoveryRequest( + public StartRecoveryRequest getStartRecoveryRequest() throws IOException { + Store.MetadataSnapshot metadataSnapshot = randomBoolean() ? Store.MetadataSnapshot.EMPTY : + new Store.MetadataSnapshot(Collections.emptyMap(), + Collections.singletonMap(Engine.HISTORY_UUID_KEY, UUIDs.randomBase64UUID()), randomIntBetween(0, 100)); + return new StartRecoveryRequest( shardId, null, new DiscoveryNode("b", buildNewFakeTransportAddress(), emptyMap(), emptySet(), Version.CURRENT), new DiscoveryNode("b", buildNewFakeTransportAddress(), emptyMap(), emptySet(), Version.CURRENT), - null, + metadataSnapshot, randomBoolean(), randomNonNegativeLong(), - randomBoolean() ? SequenceNumbersService.UNASSIGNED_SEQ_NO : randomNonNegativeLong()); + randomBoolean() || metadataSnapshot.getHistoryUUID() == null ? 
+ SequenceNumbers.UNASSIGNED_SEQ_NO : randomNonNegativeLong()); + } + + public void testSendSnapshotSendsOps() throws IOException { + final RecoverySettings recoverySettings = new RecoverySettings(Settings.EMPTY, service); + final int fileChunkSizeInBytes = recoverySettings.getChunkSize().bytesAsInt(); + final StartRecoveryRequest request = getStartRecoveryRequest(); final IndexShard shard = mock(IndexShard.class); when(shard.state()).thenReturn(IndexShardState.STARTED); final RecoveryTargetHandler recoveryTarget = mock(RecoveryTargetHandler.class); @@ -173,7 +173,7 @@ public void testSendSnapshotSendsOps() throws IOException { final int initialNumberOfDocs = randomIntBetween(16, 64); for (int i = 0; i < initialNumberOfDocs; i++) { final Engine.Index index = getIndex(Integer.toString(i)); - operations.add(new Translog.Index(index, new Engine.IndexResult(1, SequenceNumbersService.UNASSIGNED_SEQ_NO, true))); + operations.add(new Translog.Index(index, new Engine.IndexResult(1, SequenceNumbers.UNASSIGNED_SEQ_NO, true))); } final int numberOfDocsWithValidSequenceNumbers = randomIntBetween(16, 64); for (int i = initialNumberOfDocs; i < initialNumberOfDocs + numberOfDocsWithValidSequenceNumbers; i++) { @@ -181,6 +181,7 @@ public void testSendSnapshotSendsOps() throws IOException { operations.add(new Translog.Index(index, new Engine.IndexResult(1, i - initialNumberOfDocs, true))); } operations.add(null); + final long startingSeqNo = randomBoolean() ? SequenceNumbers.UNASSIGNED_SEQ_NO : randomIntBetween(0, 16); RecoverySourceHandler.SendSnapshotResult result = handler.sendSnapshot(startingSeqNo, new Translog.Snapshot() { @Override public void close() { @@ -199,7 +200,7 @@ public Translog.Operation next() throws IOException { return operations.get(counter++); } }); - if (startingSeqNo == SequenceNumbersService.UNASSIGNED_SEQ_NO) { + if (startingSeqNo == SequenceNumbers.UNASSIGNED_SEQ_NO) { assertThat(result.totalOperations, equalTo(initialNumberOfDocs + numberOfDocsWithValidSequenceNumbers)); } else { assertThat(result.totalOperations, equalTo(Math.toIntExact(numberOfDocsWithValidSequenceNumbers - startingSeqNo))); @@ -226,18 +227,9 @@ private Engine.Index getIndex(final String id) { public void testHandleCorruptedIndexOnSendSendFiles() throws Throwable { Settings settings = Settings.builder().put("indices.recovery.concurrent_streams", 1). - put("indices.recovery.concurrent_small_file_streams", 1).build(); + put("indices.recovery.concurrent_small_file_streams", 1).build(); final RecoverySettings recoverySettings = new RecoverySettings(settings, service); - final StartRecoveryRequest request = - new StartRecoveryRequest( - shardId, - null, - new DiscoveryNode("b", buildNewFakeTransportAddress(), emptyMap(), emptySet(), Version.CURRENT), - new DiscoveryNode("b", buildNewFakeTransportAddress(), emptyMap(), emptySet(), Version.CURRENT), - null, - randomBoolean(), - randomNonNegativeLong(), - randomBoolean() ? 
SequenceNumbersService.UNASSIGNED_SEQ_NO : 0L); + final StartRecoveryRequest request = getStartRecoveryRequest(); Path tempDir = createTempDir(); Store store = newStore(tempDir, false); AtomicBoolean failedEngine = new AtomicBoolean(false); @@ -268,8 +260,8 @@ protected void failEngine(IOException cause) { } CorruptionUtils.corruptFile(random(), FileSystemUtils.files(tempDir, (p) -> - (p.getFileName().toString().equals("write.lock") || - p.getFileName().toString().startsWith("extra")) == false)); + (p.getFileName().toString().equals("write.lock") || + p.getFileName().toString().startsWith("extra")) == false)); Store targetStore = newStore(createTempDir(), false); try { handler.sendFiles(store, metas.toArray(new StoreFileMetaData[0]), (md) -> { @@ -296,18 +288,9 @@ public void close() throws IOException { public void testHandleExceptinoOnSendSendFiles() throws Throwable { Settings settings = Settings.builder().put("indices.recovery.concurrent_streams", 1). - put("indices.recovery.concurrent_small_file_streams", 1).build(); + put("indices.recovery.concurrent_small_file_streams", 1).build(); final RecoverySettings recoverySettings = new RecoverySettings(settings, service); - final StartRecoveryRequest request = - new StartRecoveryRequest( - shardId, - null, - new DiscoveryNode("b", buildNewFakeTransportAddress(), emptyMap(), emptySet(), Version.CURRENT), - new DiscoveryNode("b", buildNewFakeTransportAddress(), emptyMap(), emptySet(), Version.CURRENT), - null, - randomBoolean(), - randomNonNegativeLong(), - randomBoolean() ? SequenceNumbersService.UNASSIGNED_SEQ_NO : 0L); + final StartRecoveryRequest request = getStartRecoveryRequest(); Path tempDir = createTempDir(); Store store = newStore(tempDir, false); AtomicBoolean failedEngine = new AtomicBoolean(false); @@ -363,17 +346,7 @@ protected void failEngine(IOException cause) { public void testThrowExceptionOnPrimaryRelocatedBeforePhase1Started() throws IOException { final RecoverySettings recoverySettings = new RecoverySettings(Settings.EMPTY, service); - final boolean attemptSequenceNumberBasedRecovery = randomBoolean(); - final StartRecoveryRequest request = - new StartRecoveryRequest( - shardId, - null, - new DiscoveryNode("b", buildNewFakeTransportAddress(), emptyMap(), emptySet(), Version.CURRENT), - new DiscoveryNode("b", buildNewFakeTransportAddress(), emptyMap(), emptySet(), Version.CURRENT), - null, - false, - randomNonNegativeLong(), - attemptSequenceNumberBasedRecovery ? 
randomNonNegativeLong() : SequenceNumbersService.UNASSIGNED_SEQ_NO); + final StartRecoveryRequest request = getStartRecoveryRequest(); final IndexShard shard = mock(IndexShard.class); when(shard.seqNoStats()).thenReturn(mock(SeqNoStats.class)); when(shard.segmentStats(anyBoolean())).thenReturn(mock(SegmentsStats.class)); @@ -412,7 +385,7 @@ void prepareTargetForTranslog(final int totalTranslogOps) throws IOException { @Override long phase2(long startingSeqNo, Translog.Snapshot snapshot) throws IOException { phase2Called.set(true); - return SequenceNumbersService.UNASSIGNED_SEQ_NO; + return SequenceNumbers.UNASSIGNED_SEQ_NO; } }; diff --git a/core/src/test/java/org/elasticsearch/indices/recovery/RecoveryTests.java b/core/src/test/java/org/elasticsearch/indices/recovery/RecoveryTests.java index 48f0c2f839feb..55b7e22eb8a38 100644 --- a/core/src/test/java/org/elasticsearch/indices/recovery/RecoveryTests.java +++ b/core/src/test/java/org/elasticsearch/indices/recovery/RecoveryTests.java @@ -19,22 +19,35 @@ package org.elasticsearch.indices.recovery; +import org.apache.lucene.index.IndexWriter; +import org.apache.lucene.index.IndexWriterConfig; +import org.apache.lucene.index.NoMergePolicy; import org.elasticsearch.action.index.IndexRequest; import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.common.UUIDs; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.VersionType; +import org.elasticsearch.index.engine.Engine; import org.elasticsearch.index.mapper.SourceToParse; import org.elasticsearch.index.replication.ESIndexLevelReplicationTestCase; import org.elasticsearch.index.replication.RecoveryDuringReplicationTests; import org.elasticsearch.index.shard.IndexShard; +import org.elasticsearch.index.translog.Translog; +import org.elasticsearch.index.translog.TranslogConfig; +import java.util.HashMap; +import java.util.Map; import java.util.concurrent.CountDownLatch; import java.util.concurrent.Future; +import static org.elasticsearch.index.translog.TranslogDeletionPolicies.createTranslogDeletionPolicy; +import static org.hamcrest.Matchers.empty; import static org.hamcrest.Matchers.equalTo; +import static org.hamcrest.Matchers.not; public class RecoveryTests extends ESIndexLevelReplicationTestCase { @@ -54,7 +67,6 @@ public void testTranslogHistoryTransferred() throws Exception { } } - public void testRetentionPolicyChangeDuringRecovery() throws Exception { try (ReplicationGroup shards = createGroup(0)) { shards.startPrimary(); @@ -132,4 +144,67 @@ public void testRecoveryWithOutOfOrderDelete() throws Exception { assertThat(newReplica.getTranslog().totalOperations(), equalTo(translogOps)); } } + + public void testDifferentHistoryUUIDDisablesOPsRecovery() throws Exception { + try (ReplicationGroup shards = createGroup(1)) { + shards.startAll(); + // index some shared docs + final int flushedDocs = 10; + final int nonFlushedDocs = randomIntBetween(0, 10); + final int numDocs = flushedDocs + nonFlushedDocs; + shards.indexDocs(flushedDocs); + shards.flush(); + shards.indexDocs(nonFlushedDocs); + + IndexShard replica = shards.getReplicas().get(0); + final String translogUUID = replica.getTranslog().getTranslogUUID(); + final String historyUUID = replica.getHistoryUUID(); + Translog.TranslogGeneration translogGeneration = 
replica.getTranslog().getGeneration(); + shards.removeReplica(replica); + replica.close("test", false); + IndexWriterConfig iwc = new IndexWriterConfig(null) + .setCommitOnClose(false) + // we don't want merges to happen here - we call maybe merge on the engine + // later once we stared it up otherwise we would need to wait for it here + // we also don't specify a codec here and merges should use the engines for this index + .setMergePolicy(NoMergePolicy.INSTANCE) + .setOpenMode(IndexWriterConfig.OpenMode.APPEND); + Map userData = new HashMap<>(replica.store().readLastCommittedSegmentsInfo().getUserData()); + final String translogUUIDtoUse; + final long translogGenToUse; + final String historyUUIDtoUse = UUIDs.randomBase64UUID(random()); + if (randomBoolean()) { + // create a new translog + final TranslogConfig translogConfig = + new TranslogConfig(replica.shardId(), replica.shardPath().resolveTranslog(), replica.indexSettings(), + BigArrays.NON_RECYCLING_INSTANCE); + try (Translog translog = new Translog(translogConfig, null, createTranslogDeletionPolicy(), () -> flushedDocs)) { + translogUUIDtoUse = translog.getTranslogUUID(); + translogGenToUse = translog.currentFileGeneration(); + } + } else { + translogUUIDtoUse = translogGeneration.translogUUID; + translogGenToUse = translogGeneration.translogFileGeneration; + } + try (IndexWriter writer = new IndexWriter(replica.store().directory(), iwc)) { + userData.put(Engine.HISTORY_UUID_KEY, historyUUIDtoUse); + userData.put(Translog.TRANSLOG_UUID_KEY, translogUUIDtoUse); + userData.put(Translog.TRANSLOG_GENERATION_KEY, Long.toString(translogGenToUse)); + writer.setLiveCommitData(userData.entrySet()); + writer.commit(); + } + replica.store().close(); + IndexShard newReplica = shards.addReplicaWithExistingPath(replica.shardPath(), replica.routingEntry().currentNodeId()); + shards.recoverReplica(newReplica); + // file based recovery should be made + assertThat(newReplica.recoveryState().getIndex().fileDetails(), not(empty())); + assertThat(newReplica.getTranslog().totalOperations(), equalTo(numDocs)); + + // history uuid was restored + assertThat(newReplica.getHistoryUUID(), equalTo(historyUUID)); + assertThat(newReplica.commitStats().getUserData().get(Engine.HISTORY_UUID_KEY), equalTo(historyUUID)); + + shards.assertAllEqual(numDocs); + } + } } diff --git a/core/src/test/java/org/elasticsearch/indices/recovery/StartRecoveryRequestTests.java b/core/src/test/java/org/elasticsearch/indices/recovery/StartRecoveryRequestTests.java index 85a9ee10208c7..bb1aac89f3e8f 100644 --- a/core/src/test/java/org/elasticsearch/indices/recovery/StartRecoveryRequestTests.java +++ b/core/src/test/java/org/elasticsearch/indices/recovery/StartRecoveryRequestTests.java @@ -24,13 +24,15 @@ import org.elasticsearch.common.UUIDs; import org.elasticsearch.common.io.stream.InputStreamStreamInput; import org.elasticsearch.common.io.stream.OutputStreamStreamOutput; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.engine.Engine; +import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.index.store.Store; import org.elasticsearch.test.ESTestCase; import java.io.ByteArrayInputStream; import java.io.ByteArrayOutputStream; +import java.util.Collections; import static java.util.Collections.emptyMap; import static java.util.Collections.emptySet; @@ -41,15 +43,19 @@ public class StartRecoveryRequestTests extends ESTestCase { public void testSerialization() throws Exception 
{ final Version targetNodeVersion = randomVersion(random()); + Store.MetadataSnapshot metadataSnapshot = randomBoolean() ? Store.MetadataSnapshot.EMPTY : + new Store.MetadataSnapshot(Collections.emptyMap(), + Collections.singletonMap(Engine.HISTORY_UUID_KEY, UUIDs.randomBase64UUID()), randomIntBetween(0, 100)); final StartRecoveryRequest outRequest = new StartRecoveryRequest( new ShardId("test", "_na_", 0), UUIDs.randomBase64UUID(), new DiscoveryNode("a", buildNewFakeTransportAddress(), emptyMap(), emptySet(), targetNodeVersion), new DiscoveryNode("b", buildNewFakeTransportAddress(), emptyMap(), emptySet(), targetNodeVersion), - Store.MetadataSnapshot.EMPTY, + metadataSnapshot, randomBoolean(), randomNonNegativeLong(), - randomBoolean() ? SequenceNumbersService.UNASSIGNED_SEQ_NO : randomNonNegativeLong()); + randomBoolean() || metadataSnapshot.getHistoryUUID() == null ? + SequenceNumbers.UNASSIGNED_SEQ_NO : randomNonNegativeLong()); final ByteArrayOutputStream outBuffer = new ByteArrayOutputStream(); final OutputStreamStreamOutput out = new OutputStreamStreamOutput(outBuffer); @@ -72,7 +78,7 @@ public void testSerialization() throws Exception { if (targetNodeVersion.onOrAfter(Version.V_6_0_0_alpha1)) { assertThat(outRequest.startingSeqNo(), equalTo(inRequest.startingSeqNo())); } else { - assertThat(SequenceNumbersService.UNASSIGNED_SEQ_NO, equalTo(inRequest.startingSeqNo())); + assertThat(SequenceNumbers.UNASSIGNED_SEQ_NO, equalTo(inRequest.startingSeqNo())); } } diff --git a/core/src/test/java/org/elasticsearch/indices/settings/GetSettingsBlocksIT.java b/core/src/test/java/org/elasticsearch/indices/settings/GetSettingsBlocksIT.java index cb45a639c07eb..3d9b2aab7ad16 100644 --- a/core/src/test/java/org/elasticsearch/indices/settings/GetSettingsBlocksIT.java +++ b/core/src/test/java/org/elasticsearch/indices/settings/GetSettingsBlocksIT.java @@ -21,6 +21,7 @@ import org.elasticsearch.action.admin.indices.settings.get.GetSettingsResponse; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.mapper.FieldMapper; import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.test.ESIntegTestCase; @@ -42,7 +43,7 @@ public void testGetSettingsWithBlocks() throws Exception { .setSettings(Settings.builder() .put("index.refresh_interval", -1) .put("index.merge.policy.expunge_deletes_allowed", "30") - .put(MapperService.INDEX_MAPPER_DYNAMIC_SETTING.getKey(), false))); + .put(FieldMapper.IGNORE_MALFORMED_SETTING.getKey(), false))); for (String block : Arrays.asList(SETTING_BLOCKS_READ, SETTING_BLOCKS_WRITE, SETTING_READ_ONLY, SETTING_READ_ONLY_ALLOW_DELETE)) { try { @@ -51,7 +52,7 @@ public void testGetSettingsWithBlocks() throws Exception { assertThat(response.getIndexToSettings().size(), greaterThanOrEqualTo(1)); assertThat(response.getSetting("test", "index.refresh_interval"), equalTo("-1")); assertThat(response.getSetting("test", "index.merge.policy.expunge_deletes_allowed"), equalTo("30")); - assertThat(response.getSetting("test", MapperService.INDEX_MAPPER_DYNAMIC_SETTING.getKey()), equalTo("false")); + assertThat(response.getSetting("test", FieldMapper.IGNORE_MALFORMED_SETTING.getKey()), equalTo("false")); } finally { disableIndexBlock("test", block); } diff --git a/core/src/test/java/org/elasticsearch/indices/state/OpenCloseIndexIT.java b/core/src/test/java/org/elasticsearch/indices/state/OpenCloseIndexIT.java index a867425f392a8..f0808f3574195 100644 --- a/core/src/test/java/org/elasticsearch/indices/state/OpenCloseIndexIT.java +++ 
b/core/src/test/java/org/elasticsearch/indices/state/OpenCloseIndexIT.java @@ -30,6 +30,7 @@ import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.client.Client; import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.IndexNotFoundException; @@ -67,6 +68,7 @@ public void testSimpleCloseOpen() { OpenIndexResponse openIndexResponse = client.admin().indices().prepareOpen("test1").execute().actionGet(); assertThat(openIndexResponse.isAcknowledged(), equalTo(true)); + assertThat(openIndexResponse.isShardsAcknowledged(), equalTo(true)); assertIndexIsOpened("test1"); } @@ -123,6 +125,7 @@ public void testOpenOneMissingIndexIgnoreMissing() { OpenIndexResponse openIndexResponse = client.admin().indices().prepareOpen("test1", "test2") .setIndicesOptions(IndicesOptions.lenientExpandOpen()).execute().actionGet(); assertThat(openIndexResponse.isAcknowledged(), equalTo(true)); + assertThat(openIndexResponse.isShardsAcknowledged(), equalTo(true)); assertIndexIsOpened("test1"); } @@ -141,8 +144,10 @@ public void testCloseOpenMultipleIndices() { OpenIndexResponse openIndexResponse1 = client.admin().indices().prepareOpen("test1").execute().actionGet(); assertThat(openIndexResponse1.isAcknowledged(), equalTo(true)); + assertThat(openIndexResponse1.isShardsAcknowledged(), equalTo(true)); OpenIndexResponse openIndexResponse2 = client.admin().indices().prepareOpen("test2").execute().actionGet(); assertThat(openIndexResponse2.isAcknowledged(), equalTo(true)); + assertThat(openIndexResponse2.isShardsAcknowledged(), equalTo(true)); assertIndexIsOpened("test1", "test2", "test3"); } @@ -159,6 +164,7 @@ public void testCloseOpenWildcard() { OpenIndexResponse openIndexResponse = client.admin().indices().prepareOpen("test*").execute().actionGet(); assertThat(openIndexResponse.isAcknowledged(), equalTo(true)); + assertThat(openIndexResponse.isShardsAcknowledged(), equalTo(true)); assertIndexIsOpened("test1", "test2", "a"); } @@ -174,6 +180,7 @@ public void testCloseOpenAll() { OpenIndexResponse openIndexResponse = client.admin().indices().prepareOpen("_all").execute().actionGet(); assertThat(openIndexResponse.isAcknowledged(), equalTo(true)); + assertThat(openIndexResponse.isShardsAcknowledged(), equalTo(true)); assertIndexIsOpened("test1", "test2", "test3"); } @@ -189,6 +196,7 @@ public void testCloseOpenAllWildcard() { OpenIndexResponse openIndexResponse = client.admin().indices().prepareOpen("*").execute().actionGet(); assertThat(openIndexResponse.isAcknowledged(), equalTo(true)); + assertThat(openIndexResponse.isShardsAcknowledged(), equalTo(true)); assertIndexIsOpened("test1", "test2", "test3"); } @@ -229,6 +237,7 @@ public void testOpenAlreadyOpenedIndex() { //no problem if we try to open an index that's already in open state OpenIndexResponse openIndexResponse1 = client.admin().indices().prepareOpen("test1").execute().actionGet(); assertThat(openIndexResponse1.isAcknowledged(), equalTo(true)); + assertThat(openIndexResponse1.isShardsAcknowledged(), equalTo(true)); assertIndexIsOpened("test1"); } @@ -264,6 +273,7 @@ public void testSimpleCloseOpenAlias() { OpenIndexResponse openIndexResponse = client.admin().indices().prepareOpen("test1-alias").execute().actionGet(); assertThat(openIndexResponse.isAcknowledged(), equalTo(true)); + assertThat(openIndexResponse.isShardsAcknowledged(), 
equalTo(true)); assertIndexIsOpened("test1"); } @@ -284,9 +294,24 @@ public void testCloseOpenAliasMultipleIndices() { OpenIndexResponse openIndexResponse = client.admin().indices().prepareOpen("test-alias").execute().actionGet(); assertThat(openIndexResponse.isAcknowledged(), equalTo(true)); + assertThat(openIndexResponse.isShardsAcknowledged(), equalTo(true)); assertIndexIsOpened("test1", "test2"); } + public void testOpenWaitingForActiveShardsFailed() { + Client client = client(); + Settings settings = Settings.builder() + .put(IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.getKey(), 1) + .put(IndexMetaData.INDEX_NUMBER_OF_REPLICAS_SETTING.getKey(), 0) + .build(); + assertAcked(client.admin().indices().prepareCreate("test").setSettings(settings).get()); + assertAcked(client.admin().indices().prepareClose("test").get()); + + OpenIndexResponse response = client.admin().indices().prepareOpen("test").setTimeout("100ms").setWaitForActiveShards(2).get(); + assertAcked(response); + assertThat(response.isShardsAcknowledged(), equalTo(false)); + } + private void assertIndexIsOpened(String... indices) { checkIndexState(IndexMetaData.State.OPEN, indices); } @@ -359,6 +384,7 @@ public void testOpenCloseIndexWithBlocks() { // Opening an index is not blocked OpenIndexResponse openIndexResponse = client().admin().indices().prepareOpen("test").execute().actionGet(); assertAcked(openIndexResponse); + assertThat(openIndexResponse.isShardsAcknowledged(), equalTo(true)); assertIndexIsOpened("test"); } finally { disableIndexBlock("test", blockSetting); diff --git a/core/src/test/java/org/elasticsearch/indices/state/RareClusterStateIT.java b/core/src/test/java/org/elasticsearch/indices/state/RareClusterStateIT.java index 5aa9ddc453323..bf213b51475fb 100644 --- a/core/src/test/java/org/elasticsearch/indices/state/RareClusterStateIT.java +++ b/core/src/test/java/org/elasticsearch/indices/state/RareClusterStateIT.java @@ -107,14 +107,15 @@ public void testUnassignedShardAndEmptyNodesInRoutingTable() throws Exception { .nodes(DiscoveryNodes.EMPTY_NODES) .build(), false ); - RoutingAllocation routingAllocation = new RoutingAllocation(allocationDeciders, routingNodes, current, ClusterInfo.EMPTY, System.nanoTime(), false); + RoutingAllocation routingAllocation = new RoutingAllocation(allocationDeciders, routingNodes, current, ClusterInfo.EMPTY, System.nanoTime()); allocator.allocateUnassigned(routingAllocation); } public void testAssignmentWithJustAddedNodes() throws Exception { internalCluster().startNode(); final String index = "index"; - prepareCreate(index).setSettings(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1, IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0).get(); + prepareCreate(index).setSettings(Settings.builder().put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1) + .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)).get(); ensureGreen(index); // close to have some unassigned started shards shards.. 
@@ -176,7 +177,7 @@ public void testDeleteCreateInOneBulk() throws Exception { internalCluster().startMasterOnlyNode(); String dataNode = internalCluster().startDataOnlyNode(); assertFalse(client().admin().cluster().prepareHealth().setWaitForNodes("2").get().isTimedOut()); - prepareCreate("test").setSettings(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0).addMapping("type").get(); + prepareCreate("test").setSettings(Settings.builder().put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)).addMapping("type").get(); ensureGreen("test"); // now that the cluster is stable, remove publishing timeout @@ -193,8 +194,8 @@ public void testDeleteCreateInOneBulk() throws Exception { disruption.startDisrupting(); logger.info("--> delete index and recreate it"); assertFalse(client().admin().indices().prepareDelete("test").setTimeout("200ms").get().isAcknowledged()); - assertFalse(prepareCreate("test").setTimeout("200ms").setSettings(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0, - IndexMetaData.SETTING_WAIT_FOR_ACTIVE_SHARDS.getKey(), "0").get().isAcknowledged()); + assertFalse(prepareCreate("test").setTimeout("200ms").setSettings(Settings.builder().put(IndexMetaData + .SETTING_NUMBER_OF_REPLICAS, 0).put(IndexMetaData.SETTING_WAIT_FOR_ACTIVE_SHARDS.getKey(), "0")).get().isAcknowledged()); logger.info("--> letting cluster proceed"); disruption.stopDisrupting(); ensureGreen(TimeValue.timeValueMinutes(30), "test"); @@ -405,8 +406,7 @@ public void onFailure(Exception e) { } }); - // Wait for document to be indexed on primary - assertBusy(() -> assertTrue(client().prepareGet("index", "type", "1").setPreference("_primary").get().isExists())); + assertBusy(() -> assertTrue(client().prepareGet("index", "type", "1").get().isExists())); // The mappings have not been propagated to the replica yet as a consequence the document count not be indexed // We wait on purpose to make sure that the document is not indexed because the shard operation is stalled diff --git a/core/src/test/java/org/elasticsearch/indices/stats/IndexStatsIT.java b/core/src/test/java/org/elasticsearch/indices/stats/IndexStatsIT.java index 8a24579051ca0..9dce712130650 100644 --- a/core/src/test/java/org/elasticsearch/indices/stats/IndexStatsIT.java +++ b/core/src/test/java/org/elasticsearch/indices/stats/IndexStatsIT.java @@ -226,7 +226,8 @@ public void testClearAllCaches() throws Exception { } public void testQueryCache() throws Exception { - assertAcked(client().admin().indices().prepareCreate("idx").setSettings(IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED_SETTING.getKey(), true).get()); + assertAcked(client().admin().indices().prepareCreate("idx") + .setSettings(Settings.builder().put(IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED_SETTING.getKey(), true)).get()); ensureGreen(); // index docs until we have at least one doc on each shard, otherwise, our tests will not work @@ -390,7 +391,8 @@ public void testThrottleStats() throws Exception { public void testSimpleStats() throws Exception { // this test has some type stats tests that can be removed in 7.0 - assertAcked(prepareCreate("test1").setSettings("index.version.created", Version.V_5_6_0.id)); // allows for multiple types + assertAcked(prepareCreate("test1") + .setSettings(Settings.builder().put("index.version.created", Version.V_5_6_0.id))); // allows for multiple types createIndex("test2"); ensureGreen(); @@ -556,7 +558,7 @@ public void testMergeStats() { public void testSegmentsStats() { assertAcked(prepareCreate("test_index") - .setSettings(SETTING_NUMBER_OF_REPLICAS, between(0, 1))); + 
.setSettings(Settings.builder().put(SETTING_NUMBER_OF_REPLICAS, between(0, 1)))); ensureGreen(); NumShards test1 = getNumShards("test_index"); @@ -571,6 +573,7 @@ public void testSegmentsStats() { client().admin().indices().prepareFlush().get(); client().admin().indices().prepareForceMerge().setMaxNumSegments(1).execute().actionGet(); + client().admin().indices().prepareRefresh().get(); stats = client().admin().indices().prepareStats().setSegments(true).get(); assertThat(stats.getTotal().getSegments(), notNullValue()); @@ -742,7 +745,7 @@ public void testMultiIndex() throws Exception { public void testFieldDataFieldsParam() throws Exception { assertAcked(client().admin().indices().prepareCreate("test1") - .setSettings("index.version.created", Version.V_5_6_0.id) + .setSettings(Settings.builder().put("index.version.created", Version.V_5_6_0.id)) .addMapping("doc", "bar", "type=text,fielddata=true", "baz", "type=text,fielddata=true").get()); diff --git a/core/src/test/java/org/elasticsearch/indices/template/SimpleIndexTemplateIT.java b/core/src/test/java/org/elasticsearch/indices/template/SimpleIndexTemplateIT.java index 901e2b37bf830..e81538b9057ba 100644 --- a/core/src/test/java/org/elasticsearch/indices/template/SimpleIndexTemplateIT.java +++ b/core/src/test/java/org/elasticsearch/indices/template/SimpleIndexTemplateIT.java @@ -376,7 +376,7 @@ public void testInvalidSettings() throws Exception { createIndex("test"); GetSettingsResponse getSettingsResponse = client().admin().indices().prepareGetSettings("test").get(); - assertNull(getSettingsResponse.getIndexToSettings().get("test").getAsMap().get("index.does_not_exist")); + assertNull(getSettingsResponse.getIndexToSettings().get("test").get("index.does_not_exist")); } public void testIndexTemplateWithAliases() throws Exception { @@ -392,7 +392,7 @@ public void testIndexTemplateWithAliases() throws Exception { .get(); assertAcked(prepareCreate("test_index") - .setSettings("index.version.created", Version.V_5_6_0.id) // allow for multiple version + .setSettings(Settings.builder().put("index.version.created", Version.V_5_6_0.id)) // allow for multiple version .addMapping("type1").addMapping("type2").addMapping("typeX").addMapping("typeY").addMapping("typeZ")); ensureGreen(); @@ -852,6 +852,6 @@ public void testPartitionedTemplate() throws Exception { .get(); GetSettingsResponse getSettingsResponse = client().admin().indices().prepareGetSettings("test_good").get(); - assertEquals("6", getSettingsResponse.getIndexToSettings().get("test_good").getAsMap().get("index.routing_partition_size")); + assertEquals("6", getSettingsResponse.getIndexToSettings().get("test_good").get("index.routing_partition_size")); } } diff --git a/core/src/test/java/org/elasticsearch/ingest/ConfigurationUtilsTests.java b/core/src/test/java/org/elasticsearch/ingest/ConfigurationUtilsTests.java index 6817455f7a695..af863410f9f35 100644 --- a/core/src/test/java/org/elasticsearch/ingest/ConfigurationUtilsTests.java +++ b/core/src/test/java/org/elasticsearch/ingest/ConfigurationUtilsTests.java @@ -115,7 +115,7 @@ public void testReadProcessors() throws Exception { Map registry = Collections.singletonMap("test_processor", (factories, tag, config) -> processor); - List>> config = new ArrayList<>(); + List> config = new ArrayList<>(); Map emptyConfig = Collections.emptyMap(); config.add(Collections.singletonMap("test_processor", emptyConfig)); config.add(Collections.singletonMap("test_processor", emptyConfig)); @@ -135,7 +135,7 @@ public void testReadProcessors() throws 
Exception { assertThat(e.getHeader("processor_type"), equalTo(Collections.singletonList("unknown_processor"))); assertThat(e.getHeader("property_name"), is(nullValue())); - List>> config2 = new ArrayList<>(); + List> config2 = new ArrayList<>(); unknownTaggedConfig = new HashMap<>(); unknownTaggedConfig.put("tag", "my_unknown"); config2.add(Collections.singletonMap("unknown_processor", unknownTaggedConfig)); @@ -157,4 +157,27 @@ public void testReadProcessors() throws Exception { assertThat(e2.getHeader("property_name"), is(nullValue())); } + public void testReadProcessorFromObjectOrMap() throws Exception { + Processor processor = mock(Processor.class); + Map registry = + Collections.singletonMap("script", (processorFactories, tag, config) -> { + config.clear(); + return processor; + }); + + Object emptyConfig = Collections.emptyMap(); + Processor processor1 = ConfigurationUtils.readProcessor(registry, "script", emptyConfig); + assertThat(processor1, sameInstance(processor)); + + Object inlineScript = "test_script"; + Processor processor2 = ConfigurationUtils.readProcessor(registry, "script", inlineScript); + assertThat(processor2, sameInstance(processor)); + + Object invalidConfig = 12L; + + ElasticsearchParseException ex = expectThrows(ElasticsearchParseException.class, + () -> ConfigurationUtils.readProcessor(registry, "unknown_processor", invalidConfig)); + assertThat(ex.getMessage(), equalTo("property isn't a map, but of type [" + invalidConfig.getClass().getName() + "]")); + } + } diff --git a/core/src/test/java/org/elasticsearch/ingest/IngestClientIT.java b/core/src/test/java/org/elasticsearch/ingest/IngestClientIT.java index 2b59f0d421cb9..654927b19f2fb 100644 --- a/core/src/test/java/org/elasticsearch/ingest/IngestClientIT.java +++ b/core/src/test/java/org/elasticsearch/ingest/IngestClientIT.java @@ -36,11 +36,16 @@ import org.elasticsearch.action.ingest.SimulatePipelineRequest; import org.elasticsearch.action.ingest.SimulatePipelineResponse; import org.elasticsearch.action.ingest.WritePipelineResponse; +import org.elasticsearch.action.support.replication.TransportReplicationActionTests; +import org.elasticsearch.action.update.UpdateRequest; import org.elasticsearch.client.Requests; +import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.plugins.Plugin; +import org.elasticsearch.script.Script; +import org.elasticsearch.script.ScriptService; import org.elasticsearch.test.ESIntegTestCase; import java.util.Arrays; @@ -169,6 +174,43 @@ public void testBulkWithIngestFailures() throws Exception { } } + public void testBulkWithUpsert() throws Exception { + createIndex("index"); + + BytesReference source = jsonBuilder().startObject() + .field("description", "my_pipeline") + .startArray("processors") + .startObject() + .startObject("test") + .endObject() + .endObject() + .endArray() + .endObject().bytes(); + PutPipelineRequest putPipelineRequest = new PutPipelineRequest("_id", source, XContentType.JSON); + client().admin().cluster().putPipeline(putPipelineRequest).get(); + + BulkRequest bulkRequest = new BulkRequest(); + IndexRequest indexRequest = new IndexRequest("index", "type", "1").setPipeline("_id"); + indexRequest.source(Requests.INDEX_CONTENT_TYPE, "field1", "val1"); + bulkRequest.add(indexRequest); + UpdateRequest updateRequest = new UpdateRequest("index", "type", "2"); + updateRequest.doc("{}", 
Requests.INDEX_CONTENT_TYPE); + updateRequest.upsert("{\"field1\":\"upserted_val\"}", XContentType.JSON).upsertRequest().setPipeline("_id"); + bulkRequest.add(updateRequest); + + BulkResponse response = client().bulk(bulkRequest).actionGet(); + + assertThat(response.getItems().length, equalTo(bulkRequest.requests().size())); + Map inserted = client().prepareGet("index", "type", "1") + .get().getSourceAsMap(); + assertThat(inserted.get("field1"), equalTo("val1")); + assertThat(inserted.get("processed"), equalTo(true)); + Map upserted = client().prepareGet("index", "type", "2") + .get().getSourceAsMap(); + assertThat(upserted.get("field1"), equalTo("upserted_val")); + assertThat(upserted.get("processed"), equalTo(true)); + } + public void test() throws Exception { BytesReference source = jsonBuilder().startObject() .field("description", "my_pipeline") diff --git a/core/src/test/java/org/elasticsearch/monitor/jvm/JvmGcMonitorServiceSettingsTests.java b/core/src/test/java/org/elasticsearch/monitor/jvm/JvmGcMonitorServiceSettingsTests.java index 48817e52d562e..f3e86c532d590 100644 --- a/core/src/test/java/org/elasticsearch/monitor/jvm/JvmGcMonitorServiceSettingsTests.java +++ b/core/src/test/java/org/elasticsearch/monitor/jvm/JvmGcMonitorServiceSettingsTests.java @@ -22,9 +22,9 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.test.ESTestCase; +import org.elasticsearch.threadpool.Scheduler.Cancellable; import org.elasticsearch.threadpool.TestThreadPool; import org.elasticsearch.threadpool.ThreadPool; -import org.elasticsearch.threadpool.ThreadPool.Cancellable; import java.util.AbstractMap; import java.util.HashSet; diff --git a/core/src/test/java/org/elasticsearch/monitor/os/OsProbeTests.java b/core/src/test/java/org/elasticsearch/monitor/os/OsProbeTests.java index 71305c41f56d7..21e9c488d9966 100644 --- a/core/src/test/java/org/elasticsearch/monitor/os/OsProbeTests.java +++ b/core/src/test/java/org/elasticsearch/monitor/os/OsProbeTests.java @@ -22,6 +22,7 @@ import org.apache.lucene.util.Constants; import org.elasticsearch.test.ESTestCase; +import java.math.BigInteger; import java.util.Arrays; import java.util.List; @@ -117,6 +118,12 @@ public void testOsStats() { assertThat(stats.getCgroup().getCpuStat().getNumberOfElapsedPeriods(), greaterThanOrEqualTo(0L)); assertThat(stats.getCgroup().getCpuStat().getNumberOfTimesThrottled(), greaterThanOrEqualTo(0L)); assertThat(stats.getCgroup().getCpuStat().getTimeThrottledNanos(), greaterThanOrEqualTo(0L)); + // These could be null if transported from a node running an older version, but shouldn't be null on the current node + assertThat(stats.getCgroup().getMemoryControlGroup(), notNullValue()); + assertThat(stats.getCgroup().getMemoryLimitInBytes(), notNullValue()); + assertThat(new BigInteger(stats.getCgroup().getMemoryLimitInBytes()), greaterThan(BigInteger.ZERO)); + assertThat(stats.getCgroup().getMemoryUsageInBytes(), notNullValue()); + assertThat(new BigInteger(stats.getCgroup().getMemoryUsageInBytes()), greaterThan(BigInteger.ZERO)); } } else { assertNull(stats.getCgroup()); @@ -159,7 +166,7 @@ List readProcSelfCgroup() { "9:net_cls,net_prio:/", "8:pids:/", "7:blkio:/", - "6:memory:/", + "6:memory:/" + hierarchy, "5:devices:/user.slice", "4:hugetlb:/", "3:perf_event:/", @@ -194,6 +201,19 @@ List readSysFsCgroupCpuAcctCpuStat(String controlGroup) { "throttled_time 139298645489"); } + @Override + String readSysFsCgroupMemoryLimitInBytes(String controlGroup) { + 
assertThat(controlGroup, equalTo("/" + hierarchy)); + // This is the highest value that can be stored in an unsigned 64 bit number, hence too big for long + return "18446744073709551615"; + } + + @Override + String readSysFsCgroupMemoryUsageInBytes(String controlGroup) { + assertThat(controlGroup, equalTo("/" + hierarchy)); + return "4796416"; + } + @Override boolean areCgroupStatsAvailable() { return areCgroupStatsAvailable; @@ -213,6 +233,8 @@ boolean areCgroupStatsAvailable() { assertThat(cgroup.getCpuStat().getNumberOfElapsedPeriods(), equalTo(17992L)); assertThat(cgroup.getCpuStat().getNumberOfTimesThrottled(), equalTo(1311L)); assertThat(cgroup.getCpuStat().getTimeThrottledNanos(), equalTo(139298645489L)); + assertThat(cgroup.getMemoryLimitInBytes(), equalTo("18446744073709551615")); + assertThat(cgroup.getMemoryUsageInBytes(), equalTo("4796416")); } else { assertNull(cgroup); } diff --git a/core/src/test/java/org/elasticsearch/monitor/os/OsStatsTests.java b/core/src/test/java/org/elasticsearch/monitor/os/OsStatsTests.java index f1e2371db5cb7..0f05e62358976 100644 --- a/core/src/test/java/org/elasticsearch/monitor/os/OsStatsTests.java +++ b/core/src/test/java/org/elasticsearch/monitor/os/OsStatsTests.java @@ -42,7 +42,10 @@ public void testSerialization() throws IOException { randomAlphaOfLength(8), randomNonNegativeLong(), randomNonNegativeLong(), - new OsStats.Cgroup.CpuStat(randomNonNegativeLong(), randomNonNegativeLong(), randomNonNegativeLong())); + new OsStats.Cgroup.CpuStat(randomNonNegativeLong(), randomNonNegativeLong(), randomNonNegativeLong()), + randomAlphaOfLength(8), + Long.toString(randomNonNegativeLong()), + Long.toString(randomNonNegativeLong())); OsStats osStats = new OsStats(System.currentTimeMillis(), cpu, mem, swap, cgroup); try (BytesStreamOutput out = new BytesStreamOutput()) { @@ -70,6 +73,8 @@ public void testSerialization() throws IOException { assertEquals( osStats.getCgroup().getCpuStat().getTimeThrottledNanos(), deserializedOsStats.getCgroup().getCpuStat().getTimeThrottledNanos()); + assertEquals(osStats.getCgroup().getMemoryLimitInBytes(), deserializedOsStats.getCgroup().getMemoryLimitInBytes()); + assertEquals(osStats.getCgroup().getMemoryUsageInBytes(), deserializedOsStats.getCgroup().getMemoryUsageInBytes()); } } } diff --git a/core/src/test/java/org/elasticsearch/node/NodeTests.java b/core/src/test/java/org/elasticsearch/node/NodeTests.java index ec806799e71f6..f1c8177b5a61c 100644 --- a/core/src/test/java/org/elasticsearch/node/NodeTests.java +++ b/core/src/test/java/org/elasticsearch/node/NodeTests.java @@ -22,6 +22,7 @@ import org.apache.lucene.util.LuceneTestCase; import org.elasticsearch.Version; import org.elasticsearch.bootstrap.BootstrapCheck; +import org.elasticsearch.bootstrap.BootstrapContext; import org.elasticsearch.cluster.ClusterName; import org.elasticsearch.common.network.NetworkModule; import org.elasticsearch.common.settings.Settings; @@ -64,17 +65,8 @@ public void testNodeName() throws IOException { } public static class CheckPlugin extends Plugin { - public static final BootstrapCheck CHECK = new BootstrapCheck() { - @Override - public boolean check() { - return false; - } + public static final BootstrapCheck CHECK = context -> BootstrapCheck.BootstrapCheckResult.success(); - @Override - public String errorMessage() { - return "boom"; - } - }; @Override public List getBootstrapChecks() { return Collections.singletonList(CHECK); @@ -90,7 +82,7 @@ public void testLoadPluginBootstrapChecks() throws IOException { AtomicBoolean 
executed = new AtomicBoolean(false); try (Node node = new MockNode(settings.build(), Arrays.asList(getTestTransportPlugin(), CheckPlugin.class)) { @Override - protected void validateNodeBeforeAcceptingRequests(Settings settings, BoundTransportAddress boundTransportAddress, + protected void validateNodeBeforeAcceptingRequests(BootstrapContext context, BoundTransportAddress boundTransportAddress, List bootstrapChecks) throws NodeValidationException { assertEquals(1, bootstrapChecks.size()); assertSame(CheckPlugin.CHECK, bootstrapChecks.get(0)); @@ -132,7 +124,7 @@ public void testNodeAttributes() throws IOException { Settings.Builder settings = baseSettings().put(Node.NODE_ATTRIBUTES.getKey() + "test_attr", attr); try (Node node = new MockNode(settings.build(), Collections.singleton(getTestTransportPlugin()))) { final Settings nodeSettings = randomBoolean() ? node.settings() : node.getEnvironment().settings(); - assertEquals(attr, Node.NODE_ATTRIBUTES.get(nodeSettings).getAsMap().get("test_attr")); + assertEquals(attr, Node.NODE_ATTRIBUTES.getAsMap(nodeSettings).get("test_attr")); } // leading whitespace not allowed diff --git a/core/src/test/java/org/elasticsearch/node/ResponseCollectorServiceTests.java b/core/src/test/java/org/elasticsearch/node/ResponseCollectorServiceTests.java index d620007d2cd02..d86d7b46cc7a1 100644 --- a/core/src/test/java/org/elasticsearch/node/ResponseCollectorServiceTests.java +++ b/core/src/test/java/org/elasticsearch/node/ResponseCollectorServiceTests.java @@ -67,7 +67,7 @@ public void testNodeStats() throws Exception { collector.addNodeStatistics("node1", 1, 100, 10); Map nodeStats = collector.getAllNodeStatistics(); assertTrue(nodeStats.containsKey("node1")); - assertThat(nodeStats.get("node1").queueSize, equalTo(1.0)); + assertThat(nodeStats.get("node1").queueSize, equalTo(1)); assertThat(nodeStats.get("node1").responseTime, equalTo(100.0)); assertThat(nodeStats.get("node1").serviceTime, equalTo(10.0)); } @@ -113,7 +113,7 @@ public void testConcurrentAddingAndRemoving() throws Exception { logger.info("--> got stats: {}", nodeStats); for (String nodeId : nodes) { if (nodeStats.containsKey(nodeId)) { - assertThat(nodeStats.get(nodeId).queueSize, greaterThan(0.0)); + assertThat(nodeStats.get(nodeId).queueSize, greaterThan(0)); assertThat(nodeStats.get(nodeId).responseTime, greaterThan(0.0)); assertThat(nodeStats.get(nodeId).serviceTime, greaterThan(0.0)); } diff --git a/core/src/test/java/org/elasticsearch/nodesinfo/NodeInfoStreamingTests.java b/core/src/test/java/org/elasticsearch/nodesinfo/NodeInfoStreamingTests.java index f830e8b62727a..665c0430207e6 100644 --- a/core/src/test/java/org/elasticsearch/nodesinfo/NodeInfoStreamingTests.java +++ b/core/src/test/java/org/elasticsearch/nodesinfo/NodeInfoStreamingTests.java @@ -143,13 +143,13 @@ private static NodeInfo createNodeInfo() { List plugins = new ArrayList<>(); for (int i = 0; i < numPlugins; i++) { plugins.add(new PluginInfo(randomAlphaOfLengthBetween(3, 10), randomAlphaOfLengthBetween(3, 10), - randomAlphaOfLengthBetween(3, 10), randomAlphaOfLengthBetween(3, 10), randomBoolean())); + randomAlphaOfLengthBetween(3, 10), randomAlphaOfLengthBetween(3, 10), randomBoolean(), randomBoolean())); } int numModules = randomIntBetween(0, 5); List modules = new ArrayList<>(); for (int i = 0; i < numModules; i++) { modules.add(new PluginInfo(randomAlphaOfLengthBetween(3, 10), randomAlphaOfLengthBetween(3, 10), - randomAlphaOfLengthBetween(3, 10), randomAlphaOfLengthBetween(3, 10), randomBoolean())); + 
randomAlphaOfLengthBetween(3, 10), randomAlphaOfLengthBetween(3, 10), randomBoolean(), randomBoolean())); } pluginsAndModules = new PluginsAndModules(plugins, modules); } diff --git a/core/src/test/java/org/elasticsearch/plugins/PluginInfoTests.java b/core/src/test/java/org/elasticsearch/plugins/PluginInfoTests.java index d8d1fa6ccabaa..1e6cdfc722018 100644 --- a/core/src/test/java/org/elasticsearch/plugins/PluginInfoTests.java +++ b/core/src/test/java/org/elasticsearch/plugins/PluginInfoTests.java @@ -209,11 +209,11 @@ public void testReadFromPropertiesJvmMissingClassname() throws Exception { public void testPluginListSorted() { List plugins = new ArrayList<>(); - plugins.add(new PluginInfo("c", "foo", "dummy", "dummyclass", randomBoolean())); - plugins.add(new PluginInfo("b", "foo", "dummy", "dummyclass", randomBoolean())); - plugins.add(new PluginInfo("e", "foo", "dummy", "dummyclass", randomBoolean())); - plugins.add(new PluginInfo("a", "foo", "dummy", "dummyclass", randomBoolean())); - plugins.add(new PluginInfo("d", "foo", "dummy", "dummyclass", randomBoolean())); + plugins.add(new PluginInfo("c", "foo", "dummy", "dummyclass", randomBoolean(), randomBoolean())); + plugins.add(new PluginInfo("b", "foo", "dummy", "dummyclass", randomBoolean(), randomBoolean())); + plugins.add(new PluginInfo("e", "foo", "dummy", "dummyclass", randomBoolean(), randomBoolean())); + plugins.add(new PluginInfo("a", "foo", "dummy", "dummyclass", randomBoolean(), randomBoolean())); + plugins.add(new PluginInfo("d", "foo", "dummy", "dummyclass", randomBoolean(), randomBoolean())); PluginsAndModules pluginsInfo = new PluginsAndModules(plugins, Collections.emptyList()); diff --git a/core/src/test/java/org/elasticsearch/plugins/PluginsServiceTests.java b/core/src/test/java/org/elasticsearch/plugins/PluginsServiceTests.java index c3fd0b19f73e2..3bd31097dcae6 100644 --- a/core/src/test/java/org/elasticsearch/plugins/PluginsServiceTests.java +++ b/core/src/test/java/org/elasticsearch/plugins/PluginsServiceTests.java @@ -19,15 +19,19 @@ package org.elasticsearch.plugins; +import org.apache.lucene.util.Constants; import org.apache.lucene.util.LuceneTestCase; import org.elasticsearch.Version; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; +import org.elasticsearch.env.TestEnvironment; import org.elasticsearch.index.IndexModule; import org.elasticsearch.test.ESTestCase; import java.io.IOException; +import java.nio.file.FileSystemException; import java.nio.file.Files; +import java.nio.file.NoSuchFileException; import java.nio.file.Path; import java.util.Arrays; import java.util.Collection; @@ -36,6 +40,7 @@ import static org.hamcrest.Matchers.containsString; import static org.hamcrest.Matchers.hasToString; +import static org.hamcrest.Matchers.instanceOf; @LuceneTestCase.SuppressFileSystems(value = "ExtrasFS") public class PluginsServiceTests extends ESTestCase { @@ -55,7 +60,7 @@ public Settings additionalSettings() { public static class FilterablePlugin extends Plugin implements ScriptPlugin {} static PluginsService newPluginsService(Settings settings, Class... 
classpathPlugins) { - return new PluginsService(settings, null, null, new Environment(settings).pluginsFile(), Arrays.asList(classpathPlugins)); + return new PluginsService(settings, null, null, TestEnvironment.newEnvironment(settings).pluginsFile(), Arrays.asList(classpathPlugins)); } public void testAdditionalSettings() { @@ -124,6 +129,32 @@ public void testHiddenFiles() throws IOException { assertThat(e, hasToString(containsString(expected))); } + public void testDesktopServicesStoreFiles() throws IOException { + final Path home = createTempDir(); + final Settings settings = + Settings.builder() + .put(Environment.PATH_HOME_SETTING.getKey(), home) + .build(); + final Path plugins = home.resolve("plugins"); + Files.createDirectories(plugins); + final Path desktopServicesStore = plugins.resolve(".DS_Store"); + Files.createFile(desktopServicesStore); + if (Constants.MAC_OS_X) { + @SuppressWarnings("unchecked") final PluginsService pluginsService = newPluginsService(settings); + assertNotNull(pluginsService); + } else { + final IllegalStateException e = expectThrows(IllegalStateException.class, () -> newPluginsService(settings)); + assertThat(e, hasToString(containsString("Could not load plugin descriptor for existing plugin [.DS_Store]"))); + assertNotNull(e.getCause()); + assertThat(e.getCause(), instanceOf(FileSystemException.class)); + if (Constants.WINDOWS) { + assertThat(e.getCause(), instanceOf(NoSuchFileException.class)); + } else { + assertThat(e.getCause(), hasToString(containsString("Not a directory"))); + } + } + } + public void testStartupWithRemovingMarker() throws IOException { final Path home = createTempDir(); final Settings settings = diff --git a/core/src/test/java/org/elasticsearch/recovery/RelocationIT.java b/core/src/test/java/org/elasticsearch/recovery/RelocationIT.java index 48f6fdeaedbbd..738cd00a43d2e 100644 --- a/core/src/test/java/org/elasticsearch/recovery/RelocationIT.java +++ b/core/src/test/java/org/elasticsearch/recovery/RelocationIT.java @@ -24,10 +24,6 @@ import org.apache.lucene.index.IndexFileNames; import org.apache.lucene.util.English; import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse; -import org.elasticsearch.action.admin.indices.stats.IndexShardStats; -import org.elasticsearch.action.admin.indices.stats.IndexStats; -import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse; -import org.elasticsearch.action.admin.indices.stats.ShardStats; import org.elasticsearch.action.index.IndexRequestBuilder; import org.elasticsearch.action.search.SearchResponse; import org.elasticsearch.client.Client; @@ -44,8 +40,7 @@ import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.env.NodeEnvironment; -import org.elasticsearch.index.seqno.SeqNoStats; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.IndexService; import org.elasticsearch.index.shard.IndexEventListener; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.shard.IndexShardState; @@ -59,6 +54,7 @@ import org.elasticsearch.test.ESIntegTestCase; import org.elasticsearch.test.ESIntegTestCase.ClusterScope; import org.elasticsearch.test.ESIntegTestCase.Scope; +import org.elasticsearch.test.InternalSettingsPlugin; import org.elasticsearch.test.MockIndexEventListener; import org.elasticsearch.test.junit.annotations.TestLogging; import org.elasticsearch.test.transport.MockTransportService; @@ -77,7 +73,6 @@ import java.util.Arrays; 
import java.util.Collection; import java.util.List; -import java.util.Optional; import java.util.concurrent.CountDownLatch; import java.util.concurrent.ExecutionException; import java.util.concurrent.Semaphore; @@ -100,38 +95,13 @@ public class RelocationIT extends ESIntegTestCase { @Override protected Collection> nodePlugins() { - return Arrays.asList(MockTransportService.TestPlugin.class, MockIndexEventListener.TestPlugin.class); + return Arrays.asList(InternalSettingsPlugin.class, MockTransportService.TestPlugin.class, MockIndexEventListener.TestPlugin.class); } @Override protected void beforeIndexDeletion() throws Exception { super.beforeIndexDeletion(); - assertBusy(() -> { - IndicesStatsResponse stats = client().admin().indices().prepareStats().clear().get(); - for (IndexStats indexStats : stats.getIndices().values()) { - for (IndexShardStats indexShardStats : indexStats.getIndexShards().values()) { - Optional maybePrimary = Stream.of(indexShardStats.getShards()) - .filter(s -> s.getShardRouting().active() && s.getShardRouting().primary()) - .findFirst(); - if (maybePrimary.isPresent() == false) { - continue; - } - ShardStats primary = maybePrimary.get(); - final SeqNoStats primarySeqNoStats = primary.getSeqNoStats(); - assertThat(primary.getShardRouting() + " should have set the global checkpoint", - primarySeqNoStats.getGlobalCheckpoint(), not(equalTo(SequenceNumbersService.UNASSIGNED_SEQ_NO))); - for (ShardStats shardStats : indexShardStats) { - final SeqNoStats seqNoStats = shardStats.getSeqNoStats(); - assertThat(shardStats.getShardRouting() + " local checkpoint mismatch", - seqNoStats.getLocalCheckpoint(), equalTo(primarySeqNoStats.getLocalCheckpoint())); - assertThat(shardStats.getShardRouting() + " global checkpoint mismatch", - seqNoStats.getGlobalCheckpoint(), equalTo(primarySeqNoStats.getGlobalCheckpoint())); - assertThat(shardStats.getShardRouting() + " max seq no mismatch", - seqNoStats.getMaxSeqNo(), equalTo(primarySeqNoStats.getMaxSeqNo())); - } - } - } - }); + assertSeqNos(); } public void testSimpleRelocationNoIndexing() { @@ -287,11 +257,14 @@ public void testRelocationWhileRefreshing() throws Exception { nodes[0] = internalCluster().startNode(); logger.info("--> creating test index ..."); - prepareCreate("test", Settings.builder() - .put("index.number_of_shards", 1) - .put("index.number_of_replicas", numberOfReplicas) - .put("index.refresh_interval", -1) // we want to control refreshes c - ).get(); + prepareCreate( + "test", + Settings.builder() + .put("index.number_of_shards", 1) + .put("index.number_of_replicas", numberOfReplicas) + .put("index.refresh_interval", -1) // we want to control refreshes + .put(IndexService.GLOBAL_CHECKPOINT_SYNC_INTERVAL_SETTING.getKey(), "100ms")) + .get(); for (int i = 1; i < numberOfNodes; i++) { logger.info("--> starting [node_{}] ...", i); @@ -369,9 +342,6 @@ public void indexShardStateChanged(IndexShard indexShard, @Nullable IndexShardSt } - // refresh is a replication action so this forces a global checkpoint sync which is needed as these are asserted on in tear down - client().admin().indices().prepareRefresh("test").get(); - } public void testCancellationCleansTempFiles() throws Exception { @@ -380,7 +350,7 @@ public void testCancellationCleansTempFiles() throws Exception { final String p_node = internalCluster().startNode(); prepareCreate(indexName, Settings.builder() - .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1, IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0) + .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 
1).put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0) ).get(); internalCluster().startNode(); @@ -467,11 +437,12 @@ public void testIndexAndRelocateConcurrently() throws ExecutionException, Interr logger.info("red nodes: {}", (Object)redNodes); ensureStableCluster(halfNodes * 2); - assertAcked(prepareCreate("test", Settings.builder() - .put("index.routing.allocation.exclude.color", "blue") - .put(indexSettings()) - .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, randomInt(halfNodes - 1)) - )); + final Settings.Builder settings = Settings.builder() + .put("index.routing.allocation.exclude.color", "blue") + .put(indexSettings()) + .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, randomInt(halfNodes - 1)) + .put(IndexService.GLOBAL_CHECKPOINT_SYNC_INTERVAL_SETTING.getKey(), "100ms"); + assertAcked(prepareCreate("test", settings)); assertAllShardsOnNodes("test", redNodes); int numDocs = randomIntBetween(100, 150); ArrayList ids = new ArrayList<>(); @@ -512,8 +483,6 @@ public void testIndexAndRelocateConcurrently() throws ExecutionException, Interr assertSearchHits(afterRelocation, ids.toArray(new String[ids.size()])); } - // refresh is a replication action so this forces a global checkpoint sync which is needed as these are asserted on in tear down - client().admin().indices().prepareRefresh("test").get(); } class RecoveryCorruption extends MockTransportService.DelegateTransport { diff --git a/core/src/test/java/org/elasticsearch/rest/action/RestActionsTests.java b/core/src/test/java/org/elasticsearch/rest/action/RestActionsTests.java index 7272243e3cf93..401cc79b02092 100644 --- a/core/src/test/java/org/elasticsearch/rest/action/RestActionsTests.java +++ b/core/src/test/java/org/elasticsearch/rest/action/RestActionsTests.java @@ -19,6 +19,8 @@ package org.elasticsearch.rest.action; +import com.fasterxml.jackson.core.io.JsonEOFException; +import java.util.Arrays; import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.NamedXContentRegistry; @@ -59,10 +61,32 @@ public void testParseTopLevelBuilder() throws IOException { } public void testParseTopLevelBuilderEmptyObject() throws IOException { - String requestBody = "{}"; - try (XContentParser parser = createParser(JsonXContent.jsonXContent, requestBody)) { - QueryBuilder query = RestActions.getQueryContent(parser); - assertNull(query); + for (String requestBody : Arrays.asList("{}", "")) { + try (XContentParser parser = createParser(JsonXContent.jsonXContent, requestBody)) { + QueryBuilder query = RestActions.getQueryContent(parser); + assertNull(query); + } + } + } + + public void testParseTopLevelBuilderMalformedJson() throws IOException { + for (String requestBody : Arrays.asList("\"\"", "\"someString\"", "\"{\"")) { + try (XContentParser parser = createParser(JsonXContent.jsonXContent, requestBody)) { + ParsingException exception = + expectThrows(ParsingException.class, () -> RestActions.getQueryContent(parser)); + assertEquals("Expected [START_OBJECT] but found [VALUE_STRING]", exception.getMessage()); + } + } + } + + public void testParseTopLevelBuilderIncompleteJson() throws IOException { + for (String requestBody : Arrays.asList("{", "{ \"query\" :")) { + try (XContentParser parser = createParser(JsonXContent.jsonXContent, requestBody)) { + ParsingException exception = + expectThrows(ParsingException.class, () -> RestActions.getQueryContent(parser)); + assertEquals("Failed to parse", exception.getMessage()); + assertEquals(JsonEOFException.class, 
exception.getRootCause().getClass()); + } } } diff --git a/core/src/test/java/org/elasticsearch/rest/action/cat/RestIndicesActionTests.java b/core/src/test/java/org/elasticsearch/rest/action/cat/RestIndicesActionTests.java index 998020cbd2659..cd592c9ed1e9c 100644 --- a/core/src/test/java/org/elasticsearch/rest/action/cat/RestIndicesActionTests.java +++ b/core/src/test/java/org/elasticsearch/rest/action/cat/RestIndicesActionTests.java @@ -136,11 +136,14 @@ public void testBuildTable() { private IndicesStatsResponse randomIndicesStatsResponse(final Index[] indices) { List shardStats = new ArrayList<>(); for (final Index index : indices) { - for (int i = 0; i < 2; i++) { + int numShards = randomInt(5); + int primaryIdx = randomIntBetween(-1, numShards - 1); // -1 means there is no primary shard. + for (int i = 0; i < numShards; i++) { ShardId shardId = new ShardId(index, i); + boolean primary = (i == primaryIdx); Path path = createTempDir().resolve("indices").resolve(index.getUUID()).resolve(String.valueOf(i)); - ShardRouting shardRouting = ShardRouting.newUnassigned(shardId, i == 0, - i == 0 ? StoreRecoverySource.EMPTY_STORE_INSTANCE : PeerRecoverySource.INSTANCE, + ShardRouting shardRouting = ShardRouting.newUnassigned(shardId, primary, + primary ? StoreRecoverySource.EMPTY_STORE_INSTANCE : PeerRecoverySource.INSTANCE, new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, null) ); shardRouting = shardRouting.initialize("node-0", null, ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE); diff --git a/core/src/test/java/org/elasticsearch/rest/action/document/RestBulkActionTests.java b/core/src/test/java/org/elasticsearch/rest/action/document/RestBulkActionTests.java new file mode 100644 index 0000000000000..5011946914a50 --- /dev/null +++ b/core/src/test/java/org/elasticsearch/rest/action/document/RestBulkActionTests.java @@ -0,0 +1,76 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.rest.action.document; + +import java.util.HashMap; +import java.util.Map; +import org.elasticsearch.Version; +import org.elasticsearch.action.bulk.BulkRequest; +import org.elasticsearch.action.update.UpdateRequest; +import org.elasticsearch.client.node.NodeClient; +import org.elasticsearch.common.bytes.BytesArray; +import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.rest.RestChannel; +import org.elasticsearch.rest.RestController; +import org.elasticsearch.rest.RestRequest; +import org.elasticsearch.test.ESTestCase; +import org.elasticsearch.test.rest.FakeRestRequest; +import org.hamcrest.CustomMatcher; +import org.mockito.Mockito; + +import static org.mockito.Matchers.any; +import static org.mockito.Matchers.argThat; +import static org.mockito.Mockito.mock; + +/** + * Tests for {@link RestBulkAction}. 
+ */ +public class RestBulkActionTests extends ESTestCase { + + public void testBulkPipelineUpsert() throws Exception { + final NodeClient mockClient = mock(NodeClient.class); + final Map params = new HashMap<>(); + params.put("pipeline", "timestamps"); + new RestBulkAction(settings(Version.CURRENT).build(), mock(RestController.class)) + .handleRequest( + new FakeRestRequest.Builder( + xContentRegistry()).withPath("my_index/my_type/_bulk").withParams(params) + .withContent( + new BytesArray( + "{\"index\":{\"_id\":\"1\"}}\n" + + "{\"field1\":\"val1\"}\n" + + "{\"update\":{\"_id\":\"2\"}}\n" + + "{\"script\":{\"source\":\"ctx._source.counter++;\"},\"upsert\":{\"field1\":\"upserted_val\"}}\n" + ), + XContentType.JSON + ).withMethod(RestRequest.Method.POST).build(), + mock(RestChannel.class), mockClient + ); + Mockito.verify(mockClient) + .bulk(argThat(new CustomMatcher("Pipeline in upsert request") { + @Override + public boolean matches(final Object item) { + BulkRequest request = (BulkRequest) item; + UpdateRequest update = (UpdateRequest) request.requests().get(1); + return "timestamps".equals(update.upsertRequest().getPipeline()); + } + }), any()); + } +} diff --git a/core/src/test/java/org/elasticsearch/routing/PartitionedRoutingIT.java b/core/src/test/java/org/elasticsearch/routing/PartitionedRoutingIT.java index b23ce6a9286bb..07a73a09f4ab4 100644 --- a/core/src/test/java/org/elasticsearch/routing/PartitionedRoutingIT.java +++ b/core/src/test/java/org/elasticsearch/routing/PartitionedRoutingIT.java @@ -105,7 +105,7 @@ public void testShrinking() throws Exception { index = "index_" + currentShards; logger.info("--> shrinking index [" + previousIndex + "] to [" + index + "]"); - client().admin().indices().prepareShrinkIndex(previousIndex, index) + client().admin().indices().prepareResizeIndex(previousIndex, index) .setSettings(Settings.builder() .put("index.number_of_shards", currentShards) .put("index.number_of_replicas", numberOfReplicas()) diff --git a/core/src/test/java/org/elasticsearch/script/ScriptServiceTests.java b/core/src/test/java/org/elasticsearch/script/ScriptServiceTests.java index 206d0d6390a41..42a4c2f6abb1a 100644 --- a/core/src/test/java/org/elasticsearch/script/ScriptServiceTests.java +++ b/core/src/test/java/org/elasticsearch/script/ScriptServiceTests.java @@ -18,6 +18,7 @@ */ package org.elasticsearch.script; +import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.ResourceNotFoundException; import org.elasticsearch.action.admin.cluster.storedscripts.GetStoredScriptRequest; import org.elasticsearch.cluster.ClusterName; @@ -26,7 +27,9 @@ import org.elasticsearch.common.breaker.CircuitBreakingException; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.env.Environment; @@ -39,7 +42,9 @@ import java.util.Map; import java.util.function.Function; +import static org.elasticsearch.script.ScriptService.MAX_COMPILATION_RATE_FUNCTION; import static org.hamcrest.CoreMatchers.containsString; +import static org.hamcrest.Matchers.is; import static org.hamcrest.Matchers.notNullValue; import static org.hamcrest.Matchers.sameInstance; @@ -55,7 +60,7 @@ public class ScriptServiceTests extends ESTestCase { public void setup() 
throws IOException { baseSettings = Settings.builder() .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) - .put(ScriptService.SCRIPT_MAX_COMPILATIONS_PER_MINUTE.getKey(), 10000) + .put(ScriptService.SCRIPT_MAX_COMPILATIONS_RATE.getKey(), "10000/1m") .build(); Map, Object>> scripts = new HashMap<>(); for (int i = 0; i < 20; ++i) { @@ -82,30 +87,58 @@ StoredScriptSource getScriptFromClusterState(String id) { }; } + // even though circuit breaking is allowed to be configured per minute, we actually weigh this over five minutes + // simply by multiplying by five, so even setting it to one, requires five compilations to break public void testCompilationCircuitBreaking() throws Exception { buildScriptService(Settings.EMPTY); - scriptService.setMaxCompilationsPerMinute(1); + scriptService.setMaxCompilationRate(Tuple.tuple(1, TimeValue.timeValueMinutes(1))); scriptService.checkCompilationLimit(); // should pass expectThrows(CircuitBreakingException.class, () -> scriptService.checkCompilationLimit()); - scriptService.setMaxCompilationsPerMinute(2); + scriptService.setMaxCompilationRate(Tuple.tuple(2, TimeValue.timeValueMinutes(1))); scriptService.checkCompilationLimit(); // should pass scriptService.checkCompilationLimit(); // should pass expectThrows(CircuitBreakingException.class, () -> scriptService.checkCompilationLimit()); int count = randomIntBetween(5, 50); - scriptService.setMaxCompilationsPerMinute(count); + scriptService.setMaxCompilationRate(Tuple.tuple(count, TimeValue.timeValueMinutes(1))); for (int i = 0; i < count; i++) { scriptService.checkCompilationLimit(); // should pass } expectThrows(CircuitBreakingException.class, () -> scriptService.checkCompilationLimit()); - scriptService.setMaxCompilationsPerMinute(0); + scriptService.setMaxCompilationRate(Tuple.tuple(0, TimeValue.timeValueMinutes(1))); expectThrows(CircuitBreakingException.class, () -> scriptService.checkCompilationLimit()); - scriptService.setMaxCompilationsPerMinute(Integer.MAX_VALUE); + scriptService.setMaxCompilationRate(Tuple.tuple(Integer.MAX_VALUE, TimeValue.timeValueMinutes(1))); int largeLimit = randomIntBetween(1000, 10000); for (int i = 0; i < largeLimit; i++) { scriptService.checkCompilationLimit(); } } + public void testMaxCompilationRateSetting() throws Exception { + assertThat(MAX_COMPILATION_RATE_FUNCTION.apply("10/1m"), is(Tuple.tuple(10, TimeValue.timeValueMinutes(1)))); + assertThat(MAX_COMPILATION_RATE_FUNCTION.apply("10/60s"), is(Tuple.tuple(10, TimeValue.timeValueMinutes(1)))); + assertException("10/m", ElasticsearchParseException.class, "failed to parse [m]"); + assertException("6/1.6m", ElasticsearchParseException.class, "failed to parse [1.6m], fractional time values are not supported"); + assertException("foo/bar", IllegalArgumentException.class, "could not parse [foo] as integer in value [foo/bar]"); + assertException("6.0/1m", IllegalArgumentException.class, "could not parse [6.0] as integer in value [6.0/1m]"); + assertException("6/-1m", IllegalArgumentException.class, "time value [-1m] must be positive"); + assertException("6/0m", IllegalArgumentException.class, "time value [0m] must be positive"); + assertException("10", IllegalArgumentException.class, + "parameter must contain a positive integer and a timevalue, i.e. 10/1m, but was [10]"); + assertException("anything", IllegalArgumentException.class, + "parameter must contain a positive integer and a timevalue, i.e. 
10/1m, but was [anything]"); + assertException("/1m", IllegalArgumentException.class, + "parameter must contain a positive integer and a timevalue, i.e. 10/1m, but was [/1m]"); + assertException("10/", IllegalArgumentException.class, + "parameter must contain a positive integer and a timevalue, i.e. 10/1m, but was [10/]"); + assertException("-1/1m", IllegalArgumentException.class, "rate [-1] must be positive"); + assertException("10/5s", IllegalArgumentException.class, "time value [5s] must be at least on a one minute resolution"); + } + + private void assertException(String rate, Class clazz, String message) { + Exception e = expectThrows(clazz, () -> MAX_COMPILATION_RATE_FUNCTION.apply(rate)); + assertThat(e.getMessage(), is(message)); + } + public void testNotSupportedDisableDynamicSetting() throws IOException { try { buildScriptService(Settings.builder().put(ScriptService.DISABLE_DYNAMIC_SCRIPTING_SETTING, randomUnicodeOfLength(randomIntBetween(1, 10))).build()); diff --git a/core/src/test/java/org/elasticsearch/script/ScriptTests.java b/core/src/test/java/org/elasticsearch/script/ScriptTests.java index 9584bf01a5c83..0459be255e57f 100644 --- a/core/src/test/java/org/elasticsearch/script/ScriptTests.java +++ b/core/src/test/java/org/elasticsearch/script/ScriptTests.java @@ -21,6 +21,7 @@ import org.elasticsearch.common.io.stream.InputStreamStreamInput; import org.elasticsearch.common.io.stream.OutputStreamStreamOutput; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; @@ -83,5 +84,13 @@ private Script createScript() throws IOException { ); } - + public void testParse() throws IOException { + Script expectedScript = createScript(); + try (XContentBuilder builder = XContentFactory.contentBuilder(randomFrom(XContentType.values()))) { + expectedScript.toXContent(builder, ToXContent.EMPTY_PARAMS); + Settings settings = Settings.fromXContent(createParser(builder)); + Script actualScript = Script.parse(settings); + assertThat(actualScript, equalTo(expectedScript)); + } + } } diff --git a/core/src/test/java/org/elasticsearch/search/AbstractSearchTestCase.java b/core/src/test/java/org/elasticsearch/search/AbstractSearchTestCase.java index 5fe9d9d75fbab..2144ce057a53a 100644 --- a/core/src/test/java/org/elasticsearch/search/AbstractSearchTestCase.java +++ b/core/src/test/java/org/elasticsearch/search/AbstractSearchTestCase.java @@ -35,7 +35,7 @@ import org.elasticsearch.search.builder.SearchSourceBuilder; import org.elasticsearch.search.collapse.CollapseBuilderTests; import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilderTests; -import org.elasticsearch.search.rescore.QueryRescoreBuilderTests; +import org.elasticsearch.search.rescore.QueryRescorerBuilderTests; import org.elasticsearch.search.suggest.SuggestBuilderTests; import org.elasticsearch.test.ESTestCase; @@ -90,7 +90,7 @@ protected SearchSourceBuilder createSearchSourceBuilder() { return RandomSearchRequestGenerator.randomSearchSourceBuilder( HighlightBuilderTests::randomHighlighterBuilder, SuggestBuilderTests::randomSuggestBuilder, - QueryRescoreBuilderTests::randomRescoreBuilder, + QueryRescorerBuilderTests::randomRescoreBuilder, randomExtBuilders, CollapseBuilderTests::randomCollapseBuilder); } diff --git a/core/src/test/java/org/elasticsearch/search/DefaultSearchContextTests.java 
b/core/src/test/java/org/elasticsearch/search/DefaultSearchContextTests.java new file mode 100644 index 0000000000000..c20724b8a92c7 --- /dev/null +++ b/core/src/test/java/org/elasticsearch/search/DefaultSearchContextTests.java @@ -0,0 +1,178 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.search; + +import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.RandomIndexWriter; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.QueryCachingPolicy; +import org.apache.lucene.search.Sort; +import org.apache.lucene.store.Directory; +import org.elasticsearch.Version; +import org.elasticsearch.action.search.SearchType; +import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.util.BigArrays; +import org.elasticsearch.common.util.MockBigArrays; +import org.elasticsearch.index.IndexService; +import org.elasticsearch.index.IndexSettings; +import org.elasticsearch.index.cache.IndexCache; +import org.elasticsearch.index.cache.query.QueryCache; +import org.elasticsearch.index.engine.Engine; +import org.elasticsearch.index.mapper.MapperService; +import org.elasticsearch.index.query.AbstractQueryBuilder; +import org.elasticsearch.index.query.ParsedQuery; +import org.elasticsearch.index.query.QueryShardContext; +import org.elasticsearch.index.shard.IndexShard; +import org.elasticsearch.index.shard.ShardId; +import org.elasticsearch.indices.breaker.NoneCircuitBreakerService; +import org.elasticsearch.search.internal.AliasFilter; +import org.elasticsearch.search.internal.ScrollContext; +import org.elasticsearch.search.internal.ShardSearchRequest; +import org.elasticsearch.search.rescore.RescoreContext; +import org.elasticsearch.search.slice.SliceBuilder; +import org.elasticsearch.search.sort.SortAndFormats; +import org.elasticsearch.test.ESTestCase; + +import java.util.UUID; + +import static org.hamcrest.Matchers.equalTo; +import static org.mockito.Matchers.anyObject; +import static org.mockito.Matchers.anyString; +import static org.mockito.Matchers.eq; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + + +public class DefaultSearchContextTests extends ESTestCase { + + public void testPreProcess() throws Exception { + TimeValue timeout = new TimeValue(randomIntBetween(1, 100)); + ShardSearchRequest shardSearchRequest = mock(ShardSearchRequest.class); + when(shardSearchRequest.searchType()).thenReturn(SearchType.DEFAULT); + ShardId shardId = new ShardId("index", UUID.randomUUID().toString(), 1); + when(shardSearchRequest.shardId()).thenReturn(shardId); + when(shardSearchRequest.types()).thenReturn(new String[]{}); + + 
IndexShard indexShard = mock(IndexShard.class); + QueryCachingPolicy queryCachingPolicy = mock(QueryCachingPolicy.class); + when(indexShard.getQueryCachingPolicy()).thenReturn(queryCachingPolicy); + + int maxResultWindow = randomIntBetween(50, 100); + int maxRescoreWindow = randomIntBetween(50, 100); + int maxSlicesPerScroll = randomIntBetween(50, 100); + Settings settings = Settings.builder() + .put("index.max_result_window", maxResultWindow) + .put("index.max_slices_per_scroll", maxSlicesPerScroll) + .put("index.max_rescore_window", maxRescoreWindow) + .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) + .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1) + .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 2) + .build(); + + IndexService indexService = mock(IndexService.class); + IndexCache indexCache = mock(IndexCache.class); + QueryCache queryCache = mock(QueryCache.class); + when(indexCache.query()).thenReturn(queryCache); + when(indexService.cache()).thenReturn(indexCache); + QueryShardContext queryShardContext = mock(QueryShardContext.class); + when(indexService.newQueryShardContext(eq(shardId.id()), anyObject(), anyObject(), anyString())).thenReturn(queryShardContext); + MapperService mapperService = mock(MapperService.class); + when(mapperService.hasNested()).thenReturn(randomBoolean()); + when(indexService.mapperService()).thenReturn(mapperService); + + IndexMetaData indexMetaData = IndexMetaData.builder("index").settings(settings).build(); + IndexSettings indexSettings = new IndexSettings(indexMetaData, Settings.EMPTY); + when(indexService.getIndexSettings()).thenReturn(indexSettings); + + BigArrays bigArrays = new MockBigArrays(Settings.EMPTY, new NoneCircuitBreakerService()); + + try (Directory dir = newDirectory(); + RandomIndexWriter w = new RandomIndexWriter(random(), dir); + IndexReader reader = w.getReader(); + Engine.Searcher searcher = new Engine.Searcher("test", new IndexSearcher(reader))) { + + DefaultSearchContext context1 = new DefaultSearchContext(1L, shardSearchRequest, null, searcher, indexService, + indexShard, bigArrays, null, timeout, null, null); + context1.from(300); + + // resultWindow greater than maxResultWindow and scrollContext is null + IllegalArgumentException exception = expectThrows(IllegalArgumentException.class, () -> context1.preProcess(false)); + assertThat(exception.getMessage(), equalTo("Result window is too large, from + size must be less than or equal to:" + + " [" + maxResultWindow + "] but was [310]. See the scroll api for a more efficient way to request large data sets. " + + "This limit can be set by changing the [" + IndexSettings.MAX_RESULT_WINDOW_SETTING.getKey() + + "] index level setting.")); + + // resultWindow greater than maxResultWindow and scrollContext isn't null + context1.scrollContext(new ScrollContext()); + exception = expectThrows(IllegalArgumentException.class, () -> context1.preProcess(false)); + assertThat(exception.getMessage(), equalTo("Batch size is too large, size must be less than or equal to: [" + + maxResultWindow + "] but was [310]. 
Scroll batch sizes cost as much memory as result windows so they are " + + "controlled by the [" + IndexSettings.MAX_RESULT_WINDOW_SETTING.getKey() + "] index level setting.")); + + // resultWindow not greater than maxResultWindow and both rescore and sort are not null + context1.from(0); + DocValueFormat docValueFormat = mock(DocValueFormat.class); + SortAndFormats sortAndFormats = new SortAndFormats(new Sort(), new DocValueFormat[]{docValueFormat}); + context1.sort(sortAndFormats); + + RescoreContext rescoreContext = mock(RescoreContext.class); + when(rescoreContext.getWindowSize()).thenReturn(500); + context1.addRescore(rescoreContext); + + exception = expectThrows(IllegalArgumentException.class, () -> context1.preProcess(false)); + assertThat(exception.getMessage(), equalTo("Cannot use [sort] option in conjunction with [rescore].")); + + // rescore is null but sort is not null and rescoreContext.getWindowSize() exceeds maxResultWindow + context1.sort(null); + exception = expectThrows(IllegalArgumentException.class, () -> context1.preProcess(false)); + + assertThat(exception.getMessage(), equalTo("Rescore window [" + rescoreContext.getWindowSize() + "] is too large. " + + "It must be less than [" + maxRescoreWindow + "]. This prevents allocating massive heaps for storing the results " + + "to be rescored. This limit can be set by changing the [" + IndexSettings.MAX_RESCORE_WINDOW_SETTING.getKey() + + "] index level setting.")); + + // rescore is null but sliceBuilder is not null + DefaultSearchContext context2 = new DefaultSearchContext(2L, shardSearchRequest, null, searcher, indexService, + indexShard, bigArrays, null, timeout, null, null); + + SliceBuilder sliceBuilder = mock(SliceBuilder.class); + int numSlices = maxSlicesPerScroll + randomIntBetween(1, 100); + when(sliceBuilder.getMax()).thenReturn(numSlices); + context2.sliceBuilder(sliceBuilder); + + exception = expectThrows(IllegalArgumentException.class, () -> context2.preProcess(false)); + assertThat(exception.getMessage(), equalTo("The number of slices [" + numSlices + "] is too large. It must " + + "be less than [" + maxSlicesPerScroll + "]. 
This limit can be set by changing the [" + + IndexSettings.MAX_SLICES_PER_SCROLL.getKey() + "] index level setting.")); + + // No exceptions should be thrown + when(shardSearchRequest.getAliasFilter()).thenReturn(AliasFilter.EMPTY); + when(shardSearchRequest.indexBoost()).thenReturn(AbstractQueryBuilder.DEFAULT_BOOST); + + DefaultSearchContext context3 = new DefaultSearchContext(3L, shardSearchRequest, null, searcher, indexService, + indexShard, bigArrays, null, timeout, null, null); + ParsedQuery parsedQuery = ParsedQuery.parsedMatchAllQuery(); + context3.sliceBuilder(null).parsedQuery(parsedQuery).preProcess(false); + assertEquals(context3.query(), context3.buildFilteredQuery(parsedQuery.query())); + } + } +} diff --git a/core/src/test/java/org/elasticsearch/search/MultiValueModeTests.java b/core/src/test/java/org/elasticsearch/search/MultiValueModeTests.java index 1a357c55eb056..d9eb45013263d 100644 --- a/core/src/test/java/org/elasticsearch/search/MultiValueModeTests.java +++ b/core/src/test/java/org/elasticsearch/search/MultiValueModeTests.java @@ -19,8 +19,6 @@ package org.elasticsearch.search; -import com.carrotsearch.randomizedtesting.generators.RandomStrings; - import org.apache.lucene.index.BinaryDocValues; import org.apache.lucene.index.DocValues; import org.apache.lucene.index.NumericDocValues; @@ -41,7 +39,6 @@ import org.elasticsearch.index.fielddata.NumericDoubleValues; import org.elasticsearch.index.fielddata.SortedBinaryDocValues; import org.elasticsearch.index.fielddata.SortedNumericDoubleValues; -import org.elasticsearch.search.MultiValueMode.UnsortedNumericDoubleValues; import org.elasticsearch.test.ESTestCase; import java.io.IOException; @@ -92,7 +89,7 @@ public void testSingleValuedLongs() throws Exception { docsWithValue.set(i); } } - + final Supplier multiValues = () -> DocValues.singleton(new AbstractNumericDocValues() { int docId = -1; @Override @@ -161,6 +158,8 @@ private void verifySortedNumeric(Supplier supplier, int for (int i = 0; i < maxDoc; ++i) { assertTrue(selected.advanceExact(i)); final long actual = selected.longValue(); + verifyLongValueCanCalledMoreThanOnce(selected, actual); + long expected = 0; if (values.advanceExact(i) == false) { expected = missingValue; @@ -204,6 +203,12 @@ private void verifySortedNumeric(Supplier supplier, int } } + private void verifyLongValueCanCalledMoreThanOnce(NumericDocValues values, long expected) throws IOException { + for (int j = 0, numCall = randomIntBetween(1, 10); j < numCall; j++) { + assertEquals(expected, values.longValue()); + } + } + private void verifySortedNumeric(Supplier supplier, int maxDoc, FixedBitSet rootDocs, FixedBitSet innerDocs) throws IOException { for (long missingValue : new long[] { 0, randomLong() }) { for (MultiValueMode mode : new MultiValueMode[] {MultiValueMode.MIN, MultiValueMode.MAX, MultiValueMode.SUM, MultiValueMode.AVG}) { @@ -213,6 +218,8 @@ private void verifySortedNumeric(Supplier supplier, int for (int root = rootDocs.nextSetBit(0); root != -1; root = root + 1 < maxDoc ? 
rootDocs.nextSetBit(root + 1) : -1) { assertTrue(selected.advanceExact(root)); final long actual = selected.longValue(); + verifyLongValueCanCalledMoreThanOnce(selected, actual); + long expected = 0; if (mode == MultiValueMode.MAX) { expected = Long.MIN_VALUE; @@ -321,14 +328,13 @@ public int docValueCount() { private void verifySortedNumericDouble(Supplier supplier, int maxDoc) throws IOException { for (long missingValue : new long[] { 0, randomLong() }) { for (MultiValueMode mode : MultiValueMode.values()) { - if (MultiValueMode.MEDIAN.equals(mode)) { - continue; - } SortedNumericDoubleValues values = supplier.get(); final NumericDoubleValues selected = mode.select(values, missingValue); for (int i = 0; i < maxDoc; ++i) { assertTrue(selected.advanceExact(i)); final double actual = selected.doubleValue(); + verifyDoubleValueCanCalledMoreThanOnce(selected, actual); + double expected = 0.0; if (values.advanceExact(i) == false) { expected = missingValue; @@ -372,6 +378,12 @@ private void verifySortedNumericDouble(Supplier suppl } } + private void verifyDoubleValueCanCalledMoreThanOnce(NumericDoubleValues values, double expected) throws IOException { + for (int j = 0, numCall = randomIntBetween(1, 10); j < numCall; j++) { + assertTrue(Double.compare(values.doubleValue(), expected) == 0); + } + } + private void verifySortedNumericDouble(Supplier supplier, int maxDoc, FixedBitSet rootDocs, FixedBitSet innerDocs) throws IOException { for (long missingValue : new long[] { 0, randomLong() }) { for (MultiValueMode mode : new MultiValueMode[] {MultiValueMode.MIN, MultiValueMode.MAX, MultiValueMode.SUM, MultiValueMode.AVG}) { @@ -380,7 +392,9 @@ private void verifySortedNumericDouble(Supplier suppl int prevRoot = -1; for (int root = rootDocs.nextSetBit(0); root != -1; root = root + 1 < maxDoc ? rootDocs.nextSetBit(root + 1) : -1) { assertTrue(selected.advanceExact(root)); - final double actual = selected.doubleValue();; + final double actual = selected.doubleValue(); + verifyDoubleValueCanCalledMoreThanOnce(selected, actual); + double expected = 0.0; if (mode == MultiValueMode.MAX) { expected = Long.MIN_VALUE; @@ -422,7 +436,7 @@ public void testSingleValuedStrings() throws Exception { final FixedBitSet docsWithValue = randomBoolean() ? 
null : new FixedBitSet(numDocs); for (int i = 0; i < array.length; ++i) { if (randomBoolean()) { - array[i] = new BytesRef(RandomStrings.randomAsciiOfLength(random(), 8)); + array[i] = new BytesRef(randomAlphaOfLengthBetween(8, 8)); if (docsWithValue != null) { docsWithValue.set(i); } @@ -457,7 +471,7 @@ public void testMultiValuedStrings() throws Exception { for (int i = 0; i < numDocs; ++i) { final BytesRef[] values = new BytesRef[randomInt(4)]; for (int j = 0; j < values.length; ++j) { - values[j] = new BytesRef(RandomStrings.randomAsciiOfLength(random(), 8)); + values[j] = new BytesRef(randomAlphaOfLengthBetween(8, 8)); } Arrays.sort(values); array[i] = values; @@ -490,13 +504,15 @@ public int docValueCount() { } private void verifySortedBinary(Supplier supplier, int maxDoc) throws IOException { - for (BytesRef missingValue : new BytesRef[] { new BytesRef(), new BytesRef(RandomStrings.randomAsciiOfLength(random(), 8)) }) { + for (BytesRef missingValue : new BytesRef[] { new BytesRef(), new BytesRef(randomAlphaOfLengthBetween(8, 8)) }) { for (MultiValueMode mode : new MultiValueMode[] {MultiValueMode.MIN, MultiValueMode.MAX}) { SortedBinaryDocValues values = supplier.get(); final BinaryDocValues selected = mode.select(values, missingValue); for (int i = 0; i < maxDoc; ++i) { assertTrue(selected.advanceExact(i)); final BytesRef actual = selected.binaryValue(); + verifyBinaryValueCanCalledMoreThanOnce(selected, actual); + BytesRef expected = null; if (values.advanceExact(i) == false) { expected = missingValue; @@ -525,8 +541,14 @@ private void verifySortedBinary(Supplier supplier, int ma } } + private void verifyBinaryValueCanCalledMoreThanOnce(BinaryDocValues values, BytesRef expected) throws IOException { + for (int j = 0, numCall = randomIntBetween(1, 10); j < numCall; j++) { + assertEquals(values.binaryValue(), expected); + } + } + private void verifySortedBinary(Supplier supplier, int maxDoc, FixedBitSet rootDocs, FixedBitSet innerDocs) throws IOException { - for (BytesRef missingValue : new BytesRef[] { new BytesRef(), new BytesRef(RandomStrings.randomAsciiOfLength(random(), 8)) }) { + for (BytesRef missingValue : new BytesRef[] { new BytesRef(), new BytesRef(randomAlphaOfLengthBetween(8, 8)) }) { for (MultiValueMode mode : new MultiValueMode[] {MultiValueMode.MIN, MultiValueMode.MAX}) { SortedBinaryDocValues values = supplier.get(); final BinaryDocValues selected = mode.select(values, missingValue, rootDocs, new BitSetIterator(innerDocs, 0L), maxDoc); @@ -534,6 +556,8 @@ private void verifySortedBinary(Supplier supplier, int ma for (int root = rootDocs.nextSetBit(0); root != -1; root = root + 1 < maxDoc ? rootDocs.nextSetBit(root + 1) : -1) { assertTrue(selected.advanceExact(root)); final BytesRef actual = selected.binaryValue(); + verifyBinaryValueCanCalledMoreThanOnce(selected, actual); + BytesRef expected = null; for (int child = innerDocs.nextSetBit(prevRoot + 1); child != -1 && child < root; child = innerDocs.nextSetBit(child + 1)) { if (values.advanceExact(child)) { @@ -659,7 +683,11 @@ private void verifySortedSet(Supplier supplier, int maxDoc) SortedSetDocValues values = supplier.get(); final SortedDocValues selected = mode.select(values); for (int i = 0; i < maxDoc; ++i) { - final long actual = selected.advanceExact(i) ? 
selected.ordValue() : -1; + long actual = -1; + if (selected.advanceExact(i)) { + actual = selected.ordValue(); + verifyOrdValueCanCalledMoreThanOnce(selected, selected.ordValue()); + } int expected = -1; if (values.advanceExact(i)) { for (long ord = values.nextOrd(); ord != SortedSetDocValues.NO_MORE_ORDS; ord = values.nextOrd()) { @@ -680,13 +708,23 @@ private void verifySortedSet(Supplier supplier, int maxDoc) } } + private void verifyOrdValueCanCalledMoreThanOnce(SortedDocValues values, long expected) throws IOException { + for (int j = 0, numCall = randomIntBetween(1, 10); j < numCall; j++) { + assertEquals(values.ordValue(), expected); + } + } + private void verifySortedSet(Supplier supplier, int maxDoc, FixedBitSet rootDocs, FixedBitSet innerDocs) throws IOException { for (MultiValueMode mode : new MultiValueMode[] {MultiValueMode.MIN, MultiValueMode.MAX}) { SortedSetDocValues values = supplier.get(); final SortedDocValues selected = mode.select(values, rootDocs, new BitSetIterator(innerDocs, 0L)); int prevRoot = -1; for (int root = rootDocs.nextSetBit(0); root != -1; root = root + 1 < maxDoc ? rootDocs.nextSetBit(root + 1) : -1) { - final int actual = selected.advanceExact(root) ? selected.ordValue() : -1; + int actual = -1; + if (selected.advanceExact(root)) { + actual = selected.ordValue(); + verifyOrdValueCanCalledMoreThanOnce(selected, actual); + } int expected = -1; for (int child = innerDocs.nextSetBit(prevRoot + 1); child != -1 && child < root; child = innerDocs.nextSetBit(child + 1)) { if (values.advanceExact(child)) { @@ -711,126 +749,6 @@ private void verifySortedSet(Supplier supplier, int maxDoc, } } - public void testUnsortedSingleValuedDoubles() throws Exception { - final int numDocs = scaledRandomIntBetween(1, 100); - final double[] array = new double[numDocs]; - final FixedBitSet docsWithValue = randomBoolean() ? 
null : new FixedBitSet(numDocs); - for (int i = 0; i < array.length; ++i) { - if (randomBoolean()) { - array[i] = randomDouble(); - if (docsWithValue != null) { - docsWithValue.set(i); - } - } else if (docsWithValue != null && randomBoolean()) { - docsWithValue.set(i); - } - } - final NumericDoubleValues singleValues = new NumericDoubleValues() { - private int docID; - @Override - public boolean advanceExact(int doc) throws IOException { - docID = doc; - return docsWithValue == null || docsWithValue.get(docID); - } - @Override - public double doubleValue() { - return array[docID]; - } - }; - final SortedNumericDoubleValues singletonValues = FieldData.singleton(singleValues); - final MultiValueMode.UnsortedNumericDoubleValues multiValues = new MultiValueMode.UnsortedNumericDoubleValues() { - - @Override - public int docValueCount() { - return singletonValues.docValueCount(); - } - - @Override - public boolean advanceExact(int doc) throws IOException { - return singletonValues.advanceExact(doc); - } - - @Override - public double nextValue() throws IOException { - return Math.cos(singletonValues.nextValue()); - } - }; - verifyUnsortedNumeric(() -> multiValues, numDocs); - } - - public void testUnsortedMultiValuedDoubles() throws Exception { - final int numDocs = scaledRandomIntBetween(1, 100); - final double[][] array = new double[numDocs][]; - for (int i = 0; i < numDocs; ++i) { - final double[] values = new double[randomInt(4)]; - for (int j = 0; j < values.length; ++j) { - values[j] = randomDouble(); - } - Arrays.sort(values); - array[i] = values; - } - final MultiValueMode.UnsortedNumericDoubleValues multiValues = new MultiValueMode.UnsortedNumericDoubleValues() { - int doc; - int i; - - @Override - public int docValueCount() { - return array[doc].length; - } - - @Override - public boolean advanceExact(int doc) { - this.doc = doc; - i = 0; - return array[doc].length > 0; - } - - @Override - public double nextValue() { - return Math.sin(array[doc][i++]); - } - }; - verifyUnsortedNumeric(() -> multiValues, numDocs); - } - - private void verifyUnsortedNumeric(Supplier supplier, int maxDoc) throws IOException { - for (double missingValue : new double[] { 0, randomDouble() }) { - for (MultiValueMode mode : new MultiValueMode[] {MultiValueMode.MIN, MultiValueMode.MAX, MultiValueMode.SUM, MultiValueMode.AVG}) { - UnsortedNumericDoubleValues values = supplier.get(); - final NumericDoubleValues selected = mode.select(values, missingValue); - for (int i = 0; i < maxDoc; ++i) { - assertTrue(selected.advanceExact(i)); - final double actual = selected.doubleValue(); - double expected = 0.0; - if (values.advanceExact(i) == false) { - expected = missingValue; - } else { - int numValues = values.docValueCount(); - if (mode == MultiValueMode.MAX) { - expected = Long.MIN_VALUE; - } else if (mode == MultiValueMode.MIN) { - expected = Long.MAX_VALUE; - } - for (int j = 0; j < numValues; ++j) { - if (mode == MultiValueMode.SUM || mode == MultiValueMode.AVG) { - expected += values.nextValue(); - } else if (mode == MultiValueMode.MIN) { - expected = Math.min(expected, values.nextValue()); - } else if (mode == MultiValueMode.MAX) { - expected = Math.max(expected, values.nextValue()); - } - } - if (mode == MultiValueMode.AVG) { - expected = expected/numValues; - } - } - - assertEquals(mode.toString() + " docId=" + i, expected, actual, 0.1); - } - } - } - } - public void testValidOrdinals() { assertThat(MultiValueMode.SUM.ordinal(), equalTo(0)); assertThat(MultiValueMode.AVG.ordinal(), equalTo(1)); diff --git 
a/core/src/test/java/org/elasticsearch/search/SearchHitTests.java b/core/src/test/java/org/elasticsearch/search/SearchHitTests.java index 4605582603f4b..818a61fa32b64 100644 --- a/core/src/test/java/org/elasticsearch/search/SearchHitTests.java +++ b/core/src/test/java/org/elasticsearch/search/SearchHitTests.java @@ -61,7 +61,7 @@ public class SearchHitTests extends ESTestCase { - private static Set META_FIELDS = Sets.newHashSet("_uid", "_all", "_parent", "_routing", "_size", "_timestamp", "_ttl"); + private static Set META_FIELDS = Sets.newHashSet("_uid", "_parent", "_routing", "_size", "_timestamp", "_ttl"); public static SearchHit createTestItem(boolean withOptionalInnerHits) { int internalId = randomInt(); diff --git a/core/src/test/java/org/elasticsearch/search/SearchModuleTests.java b/core/src/test/java/org/elasticsearch/search/SearchModuleTests.java index 64c2a287b3755..fccec4ed468b8 100644 --- a/core/src/test/java/org/elasticsearch/search/SearchModuleTests.java +++ b/core/src/test/java/org/elasticsearch/search/SearchModuleTests.java @@ -26,6 +26,8 @@ import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.query.QueryBuilder; +import org.elasticsearch.index.query.QueryRewriteContext; +import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.index.query.TermQueryBuilder; import org.elasticsearch.index.query.functionscore.GaussDecayFunctionBuilder; import org.elasticsearch.plugins.SearchPlugin; @@ -58,6 +60,9 @@ import org.elasticsearch.search.fetch.subphase.highlight.PlainHighlighter; import org.elasticsearch.search.fetch.subphase.highlight.UnifiedHighlighter; import org.elasticsearch.search.internal.SearchContext; +import org.elasticsearch.search.rescore.QueryRescorerBuilder; +import org.elasticsearch.search.rescore.RescoreContext; +import org.elasticsearch.search.rescore.RescorerBuilder; import org.elasticsearch.search.suggest.CustomSuggesterSearchIT.CustomSuggestionBuilder; import org.elasticsearch.search.suggest.SuggestionBuilder; import org.elasticsearch.search.suggest.term.TermSuggestionBuilder; @@ -87,8 +92,7 @@ public Map getHighlighters() { return singletonMap("plain", new PlainHighlighter()); } }; - expectThrows(IllegalArgumentException.class, - () -> new SearchModule(Settings.EMPTY, false, singletonList(registersDupeHighlighter))); + expectThrows(IllegalArgumentException.class, registryForPlugin(registersDupeHighlighter)); SearchPlugin registersDupeSuggester = new SearchPlugin() { @Override @@ -96,8 +100,7 @@ public List> getSuggesters() { return singletonList(new SuggesterSpec<>("term", TermSuggestionBuilder::new, TermSuggestionBuilder::fromXContent)); } }; - expectThrows(IllegalArgumentException.class, () -> new NamedXContentRegistry( - new SearchModule(Settings.EMPTY, false, singletonList(registersDupeSuggester)).getNamedXContents())); + expectThrows(IllegalArgumentException.class, registryForPlugin(registersDupeSuggester)); SearchPlugin registersDupeScoreFunction = new SearchPlugin() { @Override @@ -106,8 +109,7 @@ public List> getScoreFunctions() { GaussDecayFunctionBuilder.PARSER)); } }; - expectThrows(IllegalArgumentException.class, () -> new NamedXContentRegistry( - new SearchModule(Settings.EMPTY, false, singletonList(registersDupeScoreFunction)).getNamedXContents())); + expectThrows(IllegalArgumentException.class, registryForPlugin(registersDupeScoreFunction)); SearchPlugin registersDupeSignificanceHeuristic = new SearchPlugin() { @Override @@ -115,8 
+117,7 @@ public List(ChiSquare.NAME, ChiSquare::new, ChiSquare.PARSER)); } }; - expectThrows(IllegalArgumentException.class, () -> new SearchModule(Settings.EMPTY, false, - singletonList(registersDupeSignificanceHeuristic))); + expectThrows(IllegalArgumentException.class, registryForPlugin(registersDupeSignificanceHeuristic)); SearchPlugin registersDupeMovAvgModel = new SearchPlugin() { @Override @@ -124,8 +125,7 @@ public List> g return singletonList(new SearchExtensionSpec<>(SimpleModel.NAME, SimpleModel::new, SimpleModel.PARSER)); } }; - expectThrows(IllegalArgumentException.class, () -> new SearchModule(Settings.EMPTY, false, - singletonList(registersDupeMovAvgModel))); + expectThrows(IllegalArgumentException.class, registryForPlugin(registersDupeMovAvgModel)); SearchPlugin registersDupeFetchSubPhase = new SearchPlugin() { @Override @@ -133,8 +133,7 @@ public List getFetchSubPhases(FetchPhaseConstructionContext conte return singletonList(new ExplainFetchSubPhase()); } }; - expectThrows(IllegalArgumentException.class, () -> new SearchModule(Settings.EMPTY, false, - singletonList(registersDupeFetchSubPhase))); + expectThrows(IllegalArgumentException.class, registryForPlugin(registersDupeFetchSubPhase)); SearchPlugin registersDupeQuery = new SearchPlugin() { @Override @@ -142,8 +141,7 @@ public List> getQueries() { return singletonList(new QuerySpec<>(TermQueryBuilder.NAME, TermQueryBuilder::new, TermQueryBuilder::fromXContent)); } }; - expectThrows(IllegalArgumentException.class, () -> new NamedXContentRegistry( - new SearchModule(Settings.EMPTY, false, singletonList(registersDupeQuery)).getNamedXContents())); + expectThrows(IllegalArgumentException.class, registryForPlugin(registersDupeQuery)); SearchPlugin registersDupeAggregation = new SearchPlugin() { @Override @@ -152,8 +150,7 @@ public List getAggregations() { TermsAggregationBuilder::parse)); } }; - expectThrows(IllegalArgumentException.class, () -> new NamedXContentRegistry(new SearchModule(Settings.EMPTY, false, - singletonList(registersDupeAggregation)).getNamedXContents())); + expectThrows(IllegalArgumentException.class, registryForPlugin(registersDupeAggregation)); SearchPlugin registersDupePipelineAggregation = new SearchPlugin() { @Override @@ -166,8 +163,19 @@ public List getPipelineAggregations() { .addResultReader(InternalDerivative::new)); } }; - expectThrows(IllegalArgumentException.class, () -> new NamedXContentRegistry(new SearchModule(Settings.EMPTY, false, - singletonList(registersDupePipelineAggregation)).getNamedXContents())); + expectThrows(IllegalArgumentException.class, registryForPlugin(registersDupePipelineAggregation)); + + SearchPlugin registersDupeRescorer = new SearchPlugin() { + public List> getRescorers() { + return singletonList( + new RescorerSpec<>(QueryRescorerBuilder.NAME, QueryRescorerBuilder::new, QueryRescorerBuilder::fromXContent)); + } + }; + expectThrows(IllegalArgumentException.class, registryForPlugin(registersDupeRescorer)); + } + + private ThrowingRunnable registryForPlugin(SearchPlugin plugin) { + return () -> new NamedXContentRegistry(new SearchModule(Settings.EMPTY, false, singletonList(plugin)).getNamedXContents()); } public void testRegisterSuggester() { @@ -262,6 +270,20 @@ public List getPipelineAggregations() { hasSize(1)); } + public void testRegisterRescorer() { + SearchModule module = new SearchModule(Settings.EMPTY, false, singletonList(new SearchPlugin() { + @Override + public List> getRescorers() { + return singletonList(new RescorerSpec<>("test", TestRescorerBuilder::new, 
TestRescorerBuilder::fromXContent)); + } + })); + assertThat( + module.getNamedXContents().stream() + .filter(entry -> entry.categoryClass.equals(RescorerBuilder.class) && entry.name.match("test")) + .collect(toList()), + hasSize(1)); + } + private static final String[] NON_DEPRECATED_QUERIES = new String[] { "bool", "boosting", @@ -301,6 +323,7 @@ public List getPipelineAggregations() { "span_within", "term", "terms", + "terms_set", "type", "wildcard", "wrapper" @@ -424,4 +447,37 @@ public InternalAggregation reduce(InternalAggregation aggregation, ReduceContext return null; } } + + private static class TestRescorerBuilder extends RescorerBuilder { + public static TestRescorerBuilder fromXContent(XContentParser parser) { + return null; + } + + TestRescorerBuilder(StreamInput in) throws IOException { + super(in); + } + + @Override + public String getWriteableName() { + return "test"; + } + + @Override + public RescorerBuilder rewrite(QueryRewriteContext ctx) throws IOException { + return this; + } + + @Override + protected void doWriteTo(StreamOutput out) throws IOException { + } + + @Override + protected void doXContent(XContentBuilder builder, Params params) throws IOException { + } + + @Override + public RescoreContext innerBuildContext(int windowSize, QueryShardContext context) throws IOException { + return null; + } + } } diff --git a/core/src/test/java/org/elasticsearch/search/SearchRequestTests.java b/core/src/test/java/org/elasticsearch/search/SearchRequestTests.java index fb83bcfaa4543..d37b8b4b13392 100644 --- a/core/src/test/java/org/elasticsearch/search/SearchRequestTests.java +++ b/core/src/test/java/org/elasticsearch/search/SearchRequestTests.java @@ -19,6 +19,7 @@ package org.elasticsearch.search; +import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.search.SearchRequest; import org.elasticsearch.action.search.SearchType; import org.elasticsearch.action.support.IndicesOptions; @@ -27,6 +28,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.util.ArrayUtils; +import org.elasticsearch.search.builder.SearchSourceBuilder; import java.io.IOException; import java.util.ArrayList; @@ -41,8 +43,7 @@ public void testSerialization() throws Exception { try (BytesStreamOutput output = new BytesStreamOutput()) { searchRequest.writeTo(output); try (StreamInput in = new NamedWriteableAwareStreamInput(output.bytes().streamInput(), namedWriteableRegistry)) { - SearchRequest deserializedRequest = new SearchRequest(); - deserializedRequest.readFrom(in); + SearchRequest deserializedRequest = new SearchRequest(in); assertEquals(deserializedRequest, searchRequest); assertEquals(deserializedRequest.hashCode(), searchRequest.hashCode()); assertNotSame(deserializedRequest, searchRequest); @@ -80,6 +81,37 @@ public void testIllegalArguments() { assertEquals("keepAlive must not be null", e.getMessage()); } + public void testValidate() throws IOException { + + { + // if scroll isn't set, validate should never add errors + SearchRequest searchRequest = createSearchRequest().source(new SearchSourceBuilder()); + searchRequest.scroll((Scroll) null); + ActionRequestValidationException validationErrors = searchRequest.validate(); + assertNull(validationErrors); + } + { + // disabeling `track_total_hits` isn't valid in scroll context + SearchRequest searchRequest = createSearchRequest().source(new SearchSourceBuilder()); + searchRequest.scroll(new TimeValue(1000)); + 
searchRequest.source().trackTotalHits(false); + ActionRequestValidationException validationErrors = searchRequest.validate(); + assertNotNull(validationErrors); + assertEquals(1, validationErrors.validationErrors().size()); + assertEquals("disabling [track_total_hits] is not allowed in a scroll context", validationErrors.validationErrors().get(0)); + } + { + // scroll and `from` isn't valid + SearchRequest searchRequest = createSearchRequest().source(new SearchSourceBuilder()); + searchRequest.scroll(new TimeValue(1000)); + searchRequest.source().from(10); + ActionRequestValidationException validationErrors = searchRequest.validate(); + assertNotNull(validationErrors); + assertEquals(1, validationErrors.validationErrors().size()); + assertEquals("using [from] is not allowed in a scroll context", validationErrors.validationErrors().get(0)); + } + } + public void testEqualsAndHashcode() throws IOException { checkEqualsAndHashCode(createSearchRequest(), SearchRequestTests::copyRequest, this::mutate); } diff --git a/core/src/test/java/org/elasticsearch/search/SearchServiceTests.java b/core/src/test/java/org/elasticsearch/search/SearchServiceTests.java index 57ae81156ea59..92f018f282a43 100644 --- a/core/src/test/java/org/elasticsearch/search/SearchServiceTests.java +++ b/core/src/test/java/org/elasticsearch/search/SearchServiceTests.java @@ -25,7 +25,6 @@ import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.index.IndexResponse; import org.elasticsearch.action.search.SearchPhaseExecutionException; -import org.elasticsearch.action.search.SearchRequest; import org.elasticsearch.action.search.SearchResponse; import org.elasticsearch.action.search.SearchTask; import org.elasticsearch.action.search.SearchType; @@ -48,6 +47,10 @@ import org.elasticsearch.indices.IndicesService; import org.elasticsearch.plugins.Plugin; import org.elasticsearch.plugins.SearchPlugin; +import org.elasticsearch.script.MockScriptEngine; +import org.elasticsearch.script.MockScriptPlugin; +import org.elasticsearch.script.Script; +import org.elasticsearch.script.ScriptType; import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregationBuilder; import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder; import org.elasticsearch.search.aggregations.support.ValueType; @@ -61,11 +64,14 @@ import java.io.IOException; import java.util.Collection; +import java.util.Collections; import java.util.List; +import java.util.Map; import java.util.concurrent.CountDownLatch; import java.util.concurrent.ExecutionException; import java.util.concurrent.Semaphore; import java.util.concurrent.atomic.AtomicBoolean; +import java.util.function.Function; import static java.util.Collections.singletonList; import static org.elasticsearch.action.support.WriteRequest.RefreshPolicy.IMMEDIATE; @@ -84,7 +90,19 @@ protected boolean resetNodeAfterTest() { @Override protected Collection> getPlugins() { - return pluginList(FailOnRewriteQueryPlugin.class); + return pluginList(FailOnRewriteQueryPlugin.class, CustomScriptPlugin.class); + } + + public static class CustomScriptPlugin extends MockScriptPlugin { + + static final String DUMMY_SCRIPT = "dummyScript"; + + @Override + protected Map, Object>> pluginScripts() { + return Collections.singletonMap(DUMMY_SCRIPT, vars -> { + return "dummy"; + }); + } } @Override @@ -230,8 +248,8 @@ public void testTimeout() throws IOException { new String[0], false, new AliasFilter(null, Strings.EMPTY_ARRAY), - 1.0f), - null); + 1.0f) + ); try { // the search 
context should inherit the default timeout assertThat(contextWithDefaultTimeout.timeout(), equalTo(TimeValue.timeValueSeconds(5))); @@ -250,8 +268,8 @@ public void testTimeout() throws IOException { new String[0], false, new AliasFilter(null, Strings.EMPTY_ARRAY), - 1.0f), - null); + 1.0f) + ); try { // the search context should inherit the query timeout assertThat(context.timeout(), equalTo(TimeValue.timeValueSeconds(seconds))); @@ -262,6 +280,68 @@ public void testTimeout() throws IOException { } + /** + * test that getting more than the allowed number of docvalue_fields throws an exception + */ + public void testMaxDocvalueFieldsSearch() throws IOException { + createIndex("index"); + final SearchService service = getInstanceFromNode(SearchService.class); + final IndicesService indicesService = getInstanceFromNode(IndicesService.class); + final IndexService indexService = indicesService.indexServiceSafe(resolveIndex("index")); + final IndexShard indexShard = indexService.getShard(0); + + SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); + // adding the maximum allowed number of docvalue_fields to retrieve + for (int i = 0; i < indexService.getIndexSettings().getMaxDocvalueFields(); i++) { + searchSourceBuilder.docValueField("field" + i); + } + try (SearchContext context = service.createContext(new ShardSearchLocalRequest(indexShard.shardId(), 1, SearchType.DEFAULT, + searchSourceBuilder, new String[0], false, new AliasFilter(null, Strings.EMPTY_ARRAY), 1.0f))) { + assertNotNull(context); + searchSourceBuilder.docValueField("one_field_too_much"); + IllegalArgumentException ex = expectThrows(IllegalArgumentException.class, + () -> service.createContext(new ShardSearchLocalRequest(indexShard.shardId(), 1, SearchType.DEFAULT, + searchSourceBuilder, new String[0], false, new AliasFilter(null, Strings.EMPTY_ARRAY), 1.0f))); + assertEquals( + "Trying to retrieve too many docvalue_fields. Must be less than or equal to: [100] but was [101]. 
" + + "This limit can be set by changing the [index.max_docvalue_fields_search] index level setting.", + ex.getMessage()); + } + } + + /** + * test that getting more than the allowed number of script_fields throws an exception + */ + public void testMaxScriptFieldsSearch() throws IOException { + createIndex("index"); + final SearchService service = getInstanceFromNode(SearchService.class); + final IndicesService indicesService = getInstanceFromNode(IndicesService.class); + final IndexService indexService = indicesService.indexServiceSafe(resolveIndex("index")); + final IndexShard indexShard = indexService.getShard(0); + + SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); + // adding the maximum allowed number of script_fields to retrieve + int maxScriptFields = indexService.getIndexSettings().getMaxScriptFields(); + for (int i = 0; i < maxScriptFields; i++) { + searchSourceBuilder.scriptField("field" + i, + new Script(ScriptType.INLINE, MockScriptEngine.NAME, CustomScriptPlugin.DUMMY_SCRIPT, Collections.emptyMap())); + } + try (SearchContext context = service.createContext(new ShardSearchLocalRequest(indexShard.shardId(), 1, SearchType.DEFAULT, + searchSourceBuilder, new String[0], false, new AliasFilter(null, Strings.EMPTY_ARRAY), 1.0f))) { + assertNotNull(context); + searchSourceBuilder.scriptField("anotherScriptField", + new Script(ScriptType.INLINE, MockScriptEngine.NAME, CustomScriptPlugin.DUMMY_SCRIPT, Collections.emptyMap())); + IllegalArgumentException ex = expectThrows(IllegalArgumentException.class, + () -> service.createContext(new ShardSearchLocalRequest(indexShard.shardId(), 1, SearchType.DEFAULT, + searchSourceBuilder, new String[0], false, new AliasFilter(null, Strings.EMPTY_ARRAY), 1.0f))); + assertEquals( + "Trying to retrieve too many script_fields. Must be less than or equal to: [" + maxScriptFields + "] but was [" + + (maxScriptFields + 1) + + "]. 
This limit can be set by changing the [index.max_script_fields] index level setting.", + ex.getMessage()); + } + } + public static class FailOnRewriteQueryPlugin extends Plugin implements SearchPlugin { @Override public List> getQueries() { diff --git a/core/src/test/java/org/elasticsearch/search/aggregations/EquivalenceIT.java b/core/src/test/java/org/elasticsearch/search/aggregations/EquivalenceIT.java index 6c08c1697244f..02b3632d2e8ca 100644 --- a/core/src/test/java/org/elasticsearch/search/aggregations/EquivalenceIT.java +++ b/core/src/test/java/org/elasticsearch/search/aggregations/EquivalenceIT.java @@ -266,10 +266,6 @@ public void testDuelTerms() throws Exception { .setIndicesOptions(IndicesOptions.lenientExpandOpen()) .execute().get()); - TermsAggregatorFactory.ExecutionMode[] globalOrdinalModes = new TermsAggregatorFactory.ExecutionMode[] { - TermsAggregatorFactory.ExecutionMode.GLOBAL_ORDINALS_HASH, TermsAggregatorFactory.ExecutionMode.GLOBAL_ORDINALS - }; - SearchResponse resp = client().prepareSearch("idx") .addAggregation( terms("long") @@ -294,14 +290,14 @@ public void testDuelTerms() throws Exception { terms("string_global_ordinals") .field("string_values") .collectMode(randomFrom(SubAggCollectionMode.values())) - .executionHint(globalOrdinalModes[randomInt(globalOrdinalModes.length - 1)].toString()) + .executionHint(TermsAggregatorFactory.ExecutionMode.GLOBAL_ORDINALS.toString()) .size(maxNumTerms) .subAggregation(extendedStats("stats").field("num"))) .addAggregation( terms("string_global_ordinals_doc_values") .field("string_values.doc_values") .collectMode(randomFrom(SubAggCollectionMode.values())) - .executionHint(globalOrdinalModes[randomInt(globalOrdinalModes.length - 1)].toString()) + .executionHint(TermsAggregatorFactory.ExecutionMode.GLOBAL_ORDINALS.toString()) .size(maxNumTerms) .subAggregation(extendedStats("stats").field("num"))) .execute().actionGet(); diff --git a/core/src/test/java/org/elasticsearch/search/aggregations/MultiBucketAggregatorWrapperTests.java b/core/src/test/java/org/elasticsearch/search/aggregations/MultiBucketAggregatorWrapperTests.java new file mode 100644 index 0000000000000..0a83b2ec5c794 --- /dev/null +++ b/core/src/test/java/org/elasticsearch/search/aggregations/MultiBucketAggregatorWrapperTests.java @@ -0,0 +1,93 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ +package org.elasticsearch.search.aggregations; + +import org.apache.lucene.analysis.MockAnalyzer; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.index.memory.MemoryIndex; +import org.apache.lucene.search.Scorer; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.BigArrays; +import org.elasticsearch.common.util.MockBigArrays; +import org.elasticsearch.indices.breaker.NoneCircuitBreakerService; +import org.elasticsearch.search.internal.SearchContext; +import org.elasticsearch.test.ESTestCase; + +import java.io.IOException; +import java.util.Collections; +import java.util.List; +import java.util.Map; + +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.reset; +import static org.mockito.Mockito.same; +import static org.mockito.Mockito.verifyNoMoreInteractions; + +public class MultiBucketAggregatorWrapperTests extends ESTestCase { + + public void testNoNullScorerIsDelegated() throws Exception { + LeafReaderContext leafReaderContext = MemoryIndex.fromDocument(Collections.emptyList(), new MockAnalyzer(random())) + .createSearcher().getIndexReader().leaves().get(0); + BigArrays bigArrays = new MockBigArrays(Settings.EMPTY, new NoneCircuitBreakerService()); + SearchContext searchContext = mock(SearchContext.class); + when(searchContext.bigArrays()).thenReturn(bigArrays); + + Aggregator aggregator = mock(Aggregator.class); + AggregatorFactory aggregatorFactory = new TestAggregatorFactory(searchContext, aggregator); + LeafBucketCollector wrappedCollector = mock(LeafBucketCollector.class); + when(aggregator.getLeafCollector(leafReaderContext)).thenReturn(wrappedCollector); + Aggregator wrapper = AggregatorFactory.asMultiBucketAggregator(aggregatorFactory, searchContext, null); + + LeafBucketCollector collector = wrapper.getLeafCollector(leafReaderContext); + + collector.collect(0, 0); + // setScorer should not be invoked as it has not been set + // Only collect should be invoked: + verify(wrappedCollector).collect(0, 0); + verifyNoMoreInteractions(wrappedCollector); + + reset(wrappedCollector); + Scorer scorer = mock(Scorer.class); + collector.setScorer(scorer); + collector.collect(0, 1); + verify(wrappedCollector).setScorer(same(scorer)); + verify(wrappedCollector).collect(0, 0); + verifyNoMoreInteractions(wrappedCollector); + wrapper.close(); + } + + static class TestAggregatorFactory extends AggregatorFactory { + + private final Aggregator aggregator; + + TestAggregatorFactory(SearchContext context, Aggregator aggregator) throws IOException { + super("_name", context, null, new AggregatorFactories.Builder(), Collections.emptyMap()); + this.aggregator = aggregator; + } + + @Override + protected Aggregator createInternal(Aggregator parent, boolean collectsFromSingleBucket, List list, + Map metaData) throws IOException { + return aggregator; + } + } + +} diff --git a/core/src/test/java/org/elasticsearch/search/aggregations/bucket/DiversifiedSamplerIT.java b/core/src/test/java/org/elasticsearch/search/aggregations/bucket/DiversifiedSamplerIT.java index 6710bcdb23168..a8bc97682f0db 100644 --- a/core/src/test/java/org/elasticsearch/search/aggregations/bucket/DiversifiedSamplerIT.java +++ b/core/src/test/java/org/elasticsearch/search/aggregations/bucket/DiversifiedSamplerIT.java @@ -21,6 +21,7 @@ import org.elasticsearch.action.admin.indices.refresh.RefreshRequest; import org.elasticsearch.action.search.SearchResponse; 
import org.elasticsearch.action.search.SearchType; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.query.TermQueryBuilder; import org.elasticsearch.search.aggregations.bucket.sampler.DiversifiedAggregationBuilder; import org.elasticsearch.search.aggregations.bucket.sampler.Sampler; @@ -62,12 +63,14 @@ public String randomExecutionHint() { @Override public void setupSuiteScopeCluster() throws Exception { - assertAcked(prepareCreate("test").setSettings(SETTING_NUMBER_OF_SHARDS, NUM_SHARDS, SETTING_NUMBER_OF_REPLICAS, 0).addMapping( + assertAcked(prepareCreate("test").setSettings( + Settings.builder().put(SETTING_NUMBER_OF_SHARDS, NUM_SHARDS).put(SETTING_NUMBER_OF_REPLICAS, 0)).addMapping( "book", "author", "type=keyword", "name", "type=keyword", "genre", "type=keyword", "price", "type=float")); createIndex("idx_unmapped"); // idx_unmapped_author is same as main index but missing author field - assertAcked(prepareCreate("idx_unmapped_author").setSettings(SETTING_NUMBER_OF_SHARDS, NUM_SHARDS, SETTING_NUMBER_OF_REPLICAS, 0) + assertAcked(prepareCreate("idx_unmapped_author").setSettings( + Settings.builder().put(SETTING_NUMBER_OF_SHARDS, NUM_SHARDS).put(SETTING_NUMBER_OF_REPLICAS, 0)) .addMapping("book", "name", "type=keyword", "genre", "type=keyword", "price", "type=float")); diff --git a/core/src/test/java/org/elasticsearch/search/aggregations/bucket/FilterTests.java b/core/src/test/java/org/elasticsearch/search/aggregations/bucket/FilterTests.java index 5b195aaf4eee1..13b2c36e7ead4 100644 --- a/core/src/test/java/org/elasticsearch/search/aggregations/bucket/FilterTests.java +++ b/core/src/test/java/org/elasticsearch/search/aggregations/bucket/FilterTests.java @@ -27,12 +27,8 @@ public class FilterTests extends BaseAggregationTestCase indexBuilders = new ArrayList<>(); @@ -116,7 +117,8 @@ public void testShardMinDocCountTermsTest() throws Exception { if (termtype.equals("text")) { termMappings += ",fielddata=true"; } - assertAcked(prepareCreate(index).setSettings(SETTING_NUMBER_OF_SHARDS, 1, SETTING_NUMBER_OF_REPLICAS, 0).addMapping(type, "text", termMappings)); + assertAcked(prepareCreate(index).setSettings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 1).put(SETTING_NUMBER_OF_REPLICAS, 0)) + .addMapping(type, "text", termMappings)); List indexBuilders = new ArrayList<>(); addTermsDocs("1", 1, indexBuilders);//low doc freq but high score diff --git a/core/src/test/java/org/elasticsearch/search/aggregations/bucket/adjacency/AdjacencyMatrixAggregationBuilderTests.java b/core/src/test/java/org/elasticsearch/search/aggregations/bucket/adjacency/AdjacencyMatrixAggregationBuilderTests.java new file mode 100644 index 0000000000000..643344bb3bb03 --- /dev/null +++ b/core/src/test/java/org/elasticsearch/search/aggregations/bucket/adjacency/AdjacencyMatrixAggregationBuilderTests.java @@ -0,0 +1,83 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.search.aggregations.bucket.adjacency; + +import org.elasticsearch.Version; +import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.IndexSettings; +import org.elasticsearch.index.query.QueryBuilder; +import org.elasticsearch.index.query.QueryShardContext; +import org.elasticsearch.index.shard.IndexShard; +import org.elasticsearch.search.aggregations.AggregatorFactories; +import org.elasticsearch.search.aggregations.AggregatorFactory; +import org.elasticsearch.search.internal.SearchContext; +import org.elasticsearch.test.ESTestCase; +import org.elasticsearch.test.TestSearchContext; + +import java.util.Collections; +import java.util.HashMap; +import java.util.Map; + +import static org.hamcrest.Matchers.equalTo; +import static org.hamcrest.Matchers.is; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + +public class AdjacencyMatrixAggregationBuilderTests extends ESTestCase { + + public void testFilterSizeLimitation() throws Exception { + // using more filters than the allowed maximum should throw an exception + QueryShardContext queryShardContext = mock(QueryShardContext.class); + IndexShard indexShard = mock(IndexShard.class); + Settings settings = Settings.builder() + .put("index.max_adjacency_matrix_filters", 2) + .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) + .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1) + .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 2) + .build(); + IndexMetaData indexMetaData = IndexMetaData.builder("index").settings(settings).build(); + IndexSettings indexSettings = new IndexSettings(indexMetaData, Settings.EMPTY); + when(indexShard.indexSettings()).thenReturn(indexSettings); + SearchContext context = new TestSearchContext(queryShardContext, indexShard); + + Map<String, QueryBuilder> filters = new HashMap<>(3); + for (int i = 0; i < 3; i++) { + QueryBuilder queryBuilder = mock(QueryBuilder.class); + // return builder itself to skip rewrite + when(queryBuilder.rewrite(queryShardContext)).thenReturn(queryBuilder); + filters.put("filter" + i, queryBuilder); + } + AdjacencyMatrixAggregationBuilder builder = new AdjacencyMatrixAggregationBuilder("dummy", filters); + IllegalArgumentException ex + = expectThrows(IllegalArgumentException.class, () -> builder.doBuild(context, null, new AggregatorFactories.Builder())); + assertThat(ex.getMessage(), equalTo("Number of filters is too large, must be less than or equal to: [2] but was [3]." + + "This limit can be set by changing the [" + IndexSettings.MAX_ADJACENCY_MATRIX_FILTERS_SETTING.getKey() + + "] index level setting.")); + + // a filter count within the allowed maximum should return an instance of AdjacencyMatrixAggregatorFactory + Map<String, QueryBuilder> emptyFilters = Collections.emptyMap(); + + AdjacencyMatrixAggregationBuilder aggregationBuilder = new AdjacencyMatrixAggregationBuilder("dummy", emptyFilters); + AggregatorFactory factory = aggregationBuilder.doBuild(context, null, new AggregatorFactories.Builder()); + assertThat(factory instanceof AdjacencyMatrixAggregatorFactory, is(true)); + assertThat(factory.name(), equalTo("dummy")); + } +} diff --git a/core/src/test/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregatorTests.java b/core/src/test/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregatorTests.java index fb615e66dfb57..f3d057d8e8cd0 100644 --- a/core/src/test/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregatorTests.java +++ b/core/src/test/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregatorTests.java @@ -36,9 +36,6 @@ import org.elasticsearch.index.query.QueryBuilders; import org.elasticsearch.search.aggregations.AggregatorFactory; import org.elasticsearch.search.aggregations.AggregatorTestCase; -import org.elasticsearch.search.aggregations.bucket.filter.FilterAggregationBuilder; -import org.elasticsearch.search.aggregations.bucket.filter.FilterAggregatorFactory; -import org.elasticsearch.search.aggregations.bucket.filter.InternalFilter; import org.hamcrest.Matchers; import org.junit.Before; @@ -121,7 +118,7 @@ public void testParsedAsFilter() throws IOException { AggregatorFactory factory = createAggregatorFactory(builder, indexSearcher, fieldType); assertThat(factory, Matchers.instanceOf(FilterAggregatorFactory.class)); FilterAggregatorFactory filterFactory = (FilterAggregatorFactory) factory; - Query parsedQuery = filterFactory.weight.getQuery(); + Query parsedQuery = filterFactory.getWeight().getQuery(); assertThat(parsedQuery, Matchers.instanceOf(BooleanQuery.class)); assertEquals(2, ((BooleanQuery) parsedQuery).clauses().size()); // means the bool query has been parsed as a filter, if it was a query minShouldMatch would diff --git a/core/src/test/java/org/elasticsearch/search/aggregations/bucket/filter/FiltersAggregatorTests.java b/core/src/test/java/org/elasticsearch/search/aggregations/bucket/filter/FiltersAggregatorTests.java index 0420e9d5b9b76..6fdf207249f43 100644 --- a/core/src/test/java/org/elasticsearch/search/aggregations/bucket/filter/FiltersAggregatorTests.java +++ b/core/src/test/java/org/elasticsearch/search/aggregations/bucket/filter/FiltersAggregatorTests.java @@ -214,7 +214,7 @@ public void testParsedAsFilter() throws IOException { AggregatorFactory factory = createAggregatorFactory(builder, indexSearcher, fieldType); assertThat(factory, Matchers.instanceOf(FiltersAggregatorFactory.class)); FiltersAggregatorFactory filtersFactory = (FiltersAggregatorFactory) factory; - Query parsedQuery = filtersFactory.weights[0].getQuery(); + Query parsedQuery = filtersFactory.getWeights()[0].getQuery(); assertThat(parsedQuery, Matchers.instanceOf(BooleanQuery.class)); assertEquals(2, ((BooleanQuery) parsedQuery).clauses().size()); // means the bool query has been parsed as a filter, if it was a query minShouldMatch would diff --git a/core/src/test/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridParserTests.java
b/core/src/test/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridParserTests.java index 682dc1777b044..7f46cb9e551a8 100644 --- a/core/src/test/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridParserTests.java +++ b/core/src/test/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridParserTests.java @@ -19,11 +19,15 @@ package org.elasticsearch.search.aggregations.bucket.geogrid; import org.elasticsearch.common.ParsingException; +import org.elasticsearch.common.unit.DistanceUnit; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.json.JsonXContent; import org.elasticsearch.test.ESTestCase; +import static org.hamcrest.Matchers.containsString; +import static org.hamcrest.Matchers.greaterThanOrEqualTo; import static org.hamcrest.Matchers.instanceOf; +import static org.hamcrest.Matchers.lessThanOrEqualTo; public class GeoHashGridParserTests extends ESTestCase { public void testParseValidFromInts() throws Exception { @@ -46,16 +50,52 @@ public void testParseValidFromStrings() throws Exception { assertNotNull(GeoGridAggregationBuilder.parse("geohash_grid", stParser)); } + public void testParseDistanceUnitPrecision() throws Exception { + double distance = randomDoubleBetween(10.0, 100.00, true); + DistanceUnit unit = randomFrom(DistanceUnit.values()); + if (unit.equals(DistanceUnit.MILLIMETERS)) { + distance = 5600 + randomDouble(); // 5.6cm is approx. smallest distance represented by precision 12 + } + String distanceString = distance + unit.toString(); + XContentParser stParser = createParser(JsonXContent.jsonXContent, + "{\"field\":\"my_loc\", \"precision\": \"" + distanceString + "\", \"size\": \"500\", \"shard_size\": \"550\"}"); + XContentParser.Token token = stParser.nextToken(); + assertSame(XContentParser.Token.START_OBJECT, token); + // can create a factory + GeoGridAggregationBuilder builder = GeoGridAggregationBuilder.parse("geohash_grid", stParser); + assertNotNull(builder); + assertThat(builder.precision(), greaterThanOrEqualTo(0)); + assertThat(builder.precision(), lessThanOrEqualTo(12)); + } + + public void testParseInvalidUnitPrecision() throws Exception { + XContentParser stParser = createParser(JsonXContent.jsonXContent, + "{\"field\":\"my_loc\", \"precision\": \"10kg\", \"size\": \"500\", \"shard_size\": \"550\"}"); + XContentParser.Token token = stParser.nextToken(); + assertSame(XContentParser.Token.START_OBJECT, token); + ParsingException ex = expectThrows(ParsingException.class, () -> GeoGridAggregationBuilder.parse("geohash_grid", stParser)); + assertEquals("[geohash_grid] failed to parse field [precision]", ex.getMessage()); + assertThat(ex.getCause(), instanceOf(NumberFormatException.class)); + assertEquals("For input string: \"10kg\"", ex.getCause().getMessage()); + } + + public void testParseDistanceUnitPrecisionTooSmall() throws Exception { + XContentParser stParser = createParser(JsonXContent.jsonXContent, + "{\"field\":\"my_loc\", \"precision\": \"1cm\", \"size\": \"500\", \"shard_size\": \"550\"}"); + XContentParser.Token token = stParser.nextToken(); + assertSame(XContentParser.Token.START_OBJECT, token); + ParsingException ex = expectThrows(ParsingException.class, () -> GeoGridAggregationBuilder.parse("geohash_grid", stParser)); + assertEquals("[geohash_grid] failed to parse field [precision]", ex.getMessage()); + assertThat(ex.getCause(), instanceOf(IllegalArgumentException.class)); + assertEquals("precision too high [1cm]", ex.getCause().getMessage()); + } + 
public void testParseErrorOnBooleanPrecision() throws Exception { XContentParser stParser = createParser(JsonXContent.jsonXContent, "{\"field\":\"my_loc\", \"precision\":false}"); XContentParser.Token token = stParser.nextToken(); assertSame(XContentParser.Token.START_OBJECT, token); - try { - GeoGridAggregationBuilder.parse("geohash_grid", stParser); - fail(); - } catch (IllegalArgumentException ex) { - assertEquals("[geohash_grid] precision doesn't support values of type: VALUE_BOOLEAN", ex.getMessage()); - } + Exception e = expectThrows(ParsingException.class, () -> GeoGridAggregationBuilder.parse("geohash_grid", stParser)); + assertThat(e.getMessage(), containsString("[geohash_grid] precision doesn't support values of type: VALUE_BOOLEAN")); } public void testParseErrorOnPrecisionOutOfRange() throws Exception { diff --git a/core/src/test/java/org/elasticsearch/search/aggregations/bucket/histogram/InternalHistogramTests.java b/core/src/test/java/org/elasticsearch/search/aggregations/bucket/histogram/InternalHistogramTests.java index af826a7d7900e..8c383e799fee5 100644 --- a/core/src/test/java/org/elasticsearch/search/aggregations/bucket/histogram/InternalHistogramTests.java +++ b/core/src/test/java/org/elasticsearch/search/aggregations/bucket/histogram/InternalHistogramTests.java @@ -23,12 +23,15 @@ import org.elasticsearch.common.io.stream.Writeable.Reader; import org.elasticsearch.search.DocValueFormat; import org.elasticsearch.search.aggregations.BucketOrder; +import org.elasticsearch.search.aggregations.InternalAggregation; +import org.elasticsearch.search.aggregations.InternalAggregation.ReduceContext; import org.elasticsearch.search.aggregations.InternalAggregations; import org.elasticsearch.search.aggregations.InternalMultiBucketAggregationTestCase; import org.elasticsearch.search.aggregations.ParsedMultiBucketAggregation; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import java.util.ArrayList; +import java.util.Arrays; import java.util.HashMap; import java.util.List; import java.util.Map; @@ -63,6 +66,27 @@ protected InternalHistogram createTestInstance(String name, return new InternalHistogram(name, buckets, order, 1, null, format, keyed, pipelineAggregators, metaData); } + // issue 26787 + public void testHandlesNaN() { + InternalHistogram histogram = createTestInstance(); + InternalHistogram histogram2 = createTestInstance(); + List buckets = histogram.getBuckets(); + if (buckets == null || buckets.isEmpty()) { + return; + } + + // Set the key of one bucket to NaN. Must be the last bucket because NaN is greater than everything else. 
+ List newBuckets = new ArrayList<>(buckets.size()); + if (buckets.size() > 1) { + newBuckets.addAll(buckets.subList(0, buckets.size() - 1)); + } + InternalHistogram.Bucket b = buckets.get(buckets.size() - 1); + newBuckets.add(new InternalHistogram.Bucket(Double.NaN, b.docCount, keyed, b.format, b.aggregations)); + + InternalHistogram newHistogram = histogram.create(newBuckets); + newHistogram.doReduce(Arrays.asList(newHistogram, histogram2), new InternalAggregation.ReduceContext(null, null, false)); + } + @Override protected void assertReduced(InternalHistogram reduced, List inputs) { Map expectedCounts = new TreeMap<>(); diff --git a/core/src/test/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregatorTests.java b/core/src/test/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregatorTests.java index 7000924001f20..f6e18f828045f 100644 --- a/core/src/test/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregatorTests.java +++ b/core/src/test/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregatorTests.java @@ -22,6 +22,7 @@ import org.apache.lucene.document.Document; import org.apache.lucene.document.Field; import org.apache.lucene.document.SortedNumericDocValuesField; +import org.apache.lucene.document.SortedSetDocValuesField; import org.apache.lucene.index.DirectoryReader; import org.apache.lucene.index.IndexReader; import org.apache.lucene.index.IndexWriterConfig; @@ -34,21 +35,33 @@ import org.apache.lucene.search.MatchAllDocsQuery; import org.apache.lucene.search.TermQuery; import org.apache.lucene.store.Directory; +import org.apache.lucene.util.BytesRef; +import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.lucene.search.Queries; +import org.elasticsearch.index.mapper.KeywordFieldMapper; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.mapper.NumberFieldMapper; import org.elasticsearch.index.mapper.TypeFieldMapper; import org.elasticsearch.index.mapper.UidFieldMapper; import org.elasticsearch.search.aggregations.AggregatorTestCase; +import org.elasticsearch.search.aggregations.BucketOrder; import org.elasticsearch.search.aggregations.InternalAggregation; +import org.elasticsearch.search.aggregations.bucket.terms.Terms; +import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder; import org.elasticsearch.search.aggregations.metrics.max.InternalMax; +import org.elasticsearch.search.aggregations.metrics.max.Max; import org.elasticsearch.search.aggregations.metrics.max.MaxAggregationBuilder; +import org.elasticsearch.search.aggregations.metrics.min.Min; +import org.elasticsearch.search.aggregations.metrics.min.MinAggregationBuilder; import org.elasticsearch.search.aggregations.metrics.sum.InternalSum; import org.elasticsearch.search.aggregations.metrics.sum.SumAggregationBuilder; +import org.elasticsearch.search.aggregations.support.ValueType; import java.io.IOException; import java.util.ArrayList; +import java.util.Arrays; import java.util.List; +import java.util.Locale; import java.util.stream.DoubleStream; public class NestedAggregatorTests extends AggregatorTestCase { @@ -314,6 +327,189 @@ public void testResetRootDocId() throws Exception { } } + public void testNestedOrdering() throws IOException { + try (Directory directory = newDirectory()) { + try (RandomIndexWriter iw = new RandomIndexWriter(random(), directory)) { + iw.addDocuments(generateBook("1", new String[]{"a"}, new int[]{12, 13, 14})); + 
iw.addDocuments(generateBook("2", new String[]{"b"}, new int[]{5, 50})); + iw.addDocuments(generateBook("3", new String[]{"c"}, new int[]{39, 19})); + iw.addDocuments(generateBook("4", new String[]{"d"}, new int[]{2, 1, 3})); + iw.addDocuments(generateBook("5", new String[]{"a"}, new int[]{70, 10})); + iw.addDocuments(generateBook("6", new String[]{"e"}, new int[]{23, 21})); + iw.addDocuments(generateBook("7", new String[]{"e", "a"}, new int[]{8, 8})); + iw.addDocuments(generateBook("8", new String[]{"f"}, new int[]{12, 14})); + iw.addDocuments(generateBook("9", new String[]{"g", "c", "e"}, new int[]{18, 8})); + } + try (IndexReader indexReader = wrap(DirectoryReader.open(directory))) { + MappedFieldType fieldType1 = new NumberFieldMapper.NumberFieldType(NumberFieldMapper.NumberType.LONG); + fieldType1.setName("num_pages"); + MappedFieldType fieldType2 = new KeywordFieldMapper.KeywordFieldType(); + fieldType2.setHasDocValues(true); + fieldType2.setName("author"); + + TermsAggregationBuilder termsBuilder = new TermsAggregationBuilder("authors", ValueType.STRING) + .field("author").order(BucketOrder.aggregation("chapters>num_pages.value", true)); + NestedAggregationBuilder nestedBuilder = new NestedAggregationBuilder("chapters", "nested_chapters"); + MaxAggregationBuilder maxAgg = new MaxAggregationBuilder("num_pages").field("num_pages"); + nestedBuilder.subAggregation(maxAgg); + termsBuilder.subAggregation(nestedBuilder); + + Terms terms = search(newSearcher(indexReader, false, true), + new MatchAllDocsQuery(), termsBuilder, fieldType1, fieldType2); + + assertEquals(7, terms.getBuckets().size()); + assertEquals("authors", terms.getName()); + + Terms.Bucket bucket = terms.getBuckets().get(0); + assertEquals("d", bucket.getKeyAsString()); + Max numPages = ((Nested) bucket.getAggregations().get("chapters")).getAggregations().get("num_pages"); + assertEquals(3, (int) numPages.getValue()); + + bucket = terms.getBuckets().get(1); + assertEquals("f", bucket.getKeyAsString()); + numPages = ((Nested) bucket.getAggregations().get("chapters")).getAggregations().get("num_pages"); + assertEquals(14, (int) numPages.getValue()); + + bucket = terms.getBuckets().get(2); + assertEquals("g", bucket.getKeyAsString()); + numPages = ((Nested) bucket.getAggregations().get("chapters")).getAggregations().get("num_pages"); + assertEquals(18, (int) numPages.getValue()); + + bucket = terms.getBuckets().get(3); + assertEquals("e", bucket.getKeyAsString()); + numPages = ((Nested) bucket.getAggregations().get("chapters")).getAggregations().get("num_pages"); + assertEquals(23, (int) numPages.getValue()); + + bucket = terms.getBuckets().get(4); + assertEquals("c", bucket.getKeyAsString()); + numPages = ((Nested) bucket.getAggregations().get("chapters")).getAggregations().get("num_pages"); + assertEquals(39, (int) numPages.getValue()); + + bucket = terms.getBuckets().get(5); + assertEquals("b", bucket.getKeyAsString()); + numPages = ((Nested) bucket.getAggregations().get("chapters")).getAggregations().get("num_pages"); + assertEquals(50, (int) numPages.getValue()); + + bucket = terms.getBuckets().get(6); + assertEquals("a", bucket.getKeyAsString()); + numPages = ((Nested) bucket.getAggregations().get("chapters")).getAggregations().get("num_pages"); + assertEquals(70, (int) numPages.getValue()); + + // reverse order: + termsBuilder = new TermsAggregationBuilder("authors", ValueType.STRING) + .field("author").order(BucketOrder.aggregation("chapters>num_pages.value", false)); + nestedBuilder = new 
NestedAggregationBuilder("chapters", "nested_chapters"); + maxAgg = new MaxAggregationBuilder("num_pages").field("num_pages"); + nestedBuilder.subAggregation(maxAgg); + termsBuilder.subAggregation(nestedBuilder); + + terms = search(newSearcher(indexReader, false, true), new MatchAllDocsQuery(), termsBuilder, fieldType1, fieldType2); + + assertEquals(7, terms.getBuckets().size()); + assertEquals("authors", terms.getName()); + + bucket = terms.getBuckets().get(0); + assertEquals("a", bucket.getKeyAsString()); + numPages = ((Nested) bucket.getAggregations().get("chapters")).getAggregations().get("num_pages"); + assertEquals(70, (int) numPages.getValue()); + + bucket = terms.getBuckets().get(1); + assertEquals("b", bucket.getKeyAsString()); + numPages = ((Nested) bucket.getAggregations().get("chapters")).getAggregations().get("num_pages"); + assertEquals(50, (int) numPages.getValue()); + + bucket = terms.getBuckets().get(2); + assertEquals("c", bucket.getKeyAsString()); + numPages = ((Nested) bucket.getAggregations().get("chapters")).getAggregations().get("num_pages"); + assertEquals(39, (int) numPages.getValue()); + + bucket = terms.getBuckets().get(3); + assertEquals("e", bucket.getKeyAsString()); + numPages = ((Nested) bucket.getAggregations().get("chapters")).getAggregations().get("num_pages"); + assertEquals(23, (int) numPages.getValue()); + + bucket = terms.getBuckets().get(4); + assertEquals("g", bucket.getKeyAsString()); + numPages = ((Nested) bucket.getAggregations().get("chapters")).getAggregations().get("num_pages"); + assertEquals(18, (int) numPages.getValue()); + + bucket = terms.getBuckets().get(5); + assertEquals("f", bucket.getKeyAsString()); + numPages = ((Nested) bucket.getAggregations().get("chapters")).getAggregations().get("num_pages"); + assertEquals(14, (int) numPages.getValue()); + + bucket = terms.getBuckets().get(6); + assertEquals("d", bucket.getKeyAsString()); + numPages = ((Nested) bucket.getAggregations().get("chapters")).getAggregations().get("num_pages"); + assertEquals(3, (int) numPages.getValue()); + } + } + } + + public void testNestedOrdering_random() throws IOException { + int numBooks = randomIntBetween(32, 512); + List> books = new ArrayList<>(); + for (int i = 0; i < numBooks; i++) { + int numChapters = randomIntBetween(1, 8); + int[] chapters = new int[numChapters]; + for (int j = 0; j < numChapters; j++) { + chapters[j] = randomIntBetween(2, 64); + } + books.add(Tuple.tuple(String.format(Locale.ROOT, "%03d", i), chapters)); + } + try (Directory directory = newDirectory()) { + try (RandomIndexWriter iw = new RandomIndexWriter(random(), directory)) { + int id = 0; + for (Tuple book : books) { + iw.addDocuments(generateBook( + String.format(Locale.ROOT, "%03d", id), new String[]{book.v1()}, book.v2()) + ); + id++; + } + } + for (Tuple book : books) { + Arrays.sort(book.v2()); + } + books.sort((o1, o2) -> { + int cmp = Integer.compare(o1.v2()[0], o2.v2()[0]); + if (cmp == 0) { + return o1.v1().compareTo(o2.v1()); + } else { + return cmp; + } + }); + try (IndexReader indexReader = wrap(DirectoryReader.open(directory))) { + MappedFieldType fieldType1 = new NumberFieldMapper.NumberFieldType(NumberFieldMapper.NumberType.LONG); + fieldType1.setName("num_pages"); + MappedFieldType fieldType2 = new KeywordFieldMapper.KeywordFieldType(); + fieldType2.setHasDocValues(true); + fieldType2.setName("author"); + + TermsAggregationBuilder termsBuilder = new TermsAggregationBuilder("authors", ValueType.STRING) + .size(books.size()).field("author") + 
.order(BucketOrder.compound(BucketOrder.aggregation("chapters>num_pages.value", true), BucketOrder.key(true))); + NestedAggregationBuilder nestedBuilder = new NestedAggregationBuilder("chapters", "nested_chapters"); + MinAggregationBuilder minAgg = new MinAggregationBuilder("num_pages").field("num_pages"); + nestedBuilder.subAggregation(minAgg); + termsBuilder.subAggregation(nestedBuilder); + + Terms terms = search(newSearcher(indexReader, false, true), + new MatchAllDocsQuery(), termsBuilder, fieldType1, fieldType2); + + assertEquals(books.size(), terms.getBuckets().size()); + assertEquals("authors", terms.getName()); + + for (int i = 0; i < books.size(); i++) { + Tuple book = books.get(i); + Terms.Bucket bucket = terms.getBuckets().get(i); + assertEquals(book.v1(), bucket.getKeyAsString()); + Min numPages = ((Nested) bucket.getAggregations().get("chapters")).getAggregations().get("num_pages"); + assertEquals(book.v2()[0], (int) numPages.getValue()); + } + } + } + } + private double generateMaxDocs(List documents, int numNestedDocs, int id, String path, String fieldName) { return DoubleStream.of(generateDocuments(documents, numNestedDocs, id, path, fieldName)) .max().orElse(Double.NEGATIVE_INFINITY); @@ -340,4 +536,26 @@ private double[] generateDocuments(List documents, int numNestedDocs, return values; } + private List generateBook(String id, String[] authors, int[] numPages) { + List documents = new ArrayList<>(); + + for (int numPage : numPages) { + Document document = new Document(); + document.add(new Field(UidFieldMapper.NAME, "book#" + id, UidFieldMapper.Defaults.NESTED_FIELD_TYPE)); + document.add(new Field(TypeFieldMapper.NAME, "__nested_chapters", TypeFieldMapper.Defaults.FIELD_TYPE)); + document.add(new SortedNumericDocValuesField("num_pages", numPage)); + documents.add(document); + } + + Document document = new Document(); + document.add(new Field(UidFieldMapper.NAME, "book#" + id, UidFieldMapper.Defaults.FIELD_TYPE)); + document.add(new Field(TypeFieldMapper.NAME, "book", TypeFieldMapper.Defaults.FIELD_TYPE)); + for (String author : authors) { + document.add(new SortedSetDocValuesField("author", new BytesRef(author))); + } + documents.add(document); + + return documents; + } + } diff --git a/core/src/test/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregatorTests.java b/core/src/test/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregatorTests.java index 7d6ad32d97ac9..266b1a6e50f09 100644 --- a/core/src/test/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregatorTests.java +++ b/core/src/test/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregatorTests.java @@ -102,13 +102,6 @@ public void testGlobalOrdinalsExecutionHint() throws Exception { globalAgg = (GlobalOrdinalsStringTermsAggregator) aggregator; assertTrue(globalAgg.remapGlobalOrds()); - aggregationBuilder = new TermsAggregationBuilder("_name", ValueType.STRING) - .field("string") - .executionHint("global_ordinals_hash"); - aggregator = createAggregator(aggregationBuilder, indexSearcher, fieldType); - assertThat(aggregator, instanceOf(GlobalOrdinalsStringTermsAggregator.class)); - globalAgg = (GlobalOrdinalsStringTermsAggregator) aggregator; - assertTrue(globalAgg.remapGlobalOrds()); indexReader.close(); directory.close(); } diff --git a/core/src/test/java/org/elasticsearch/search/aggregations/metrics/InternalStatsTests.java b/core/src/test/java/org/elasticsearch/search/aggregations/metrics/InternalStatsTests.java index 2e3437d2093e5..4ce29e4e0ed83 
100644 --- a/core/src/test/java/org/elasticsearch/search/aggregations/metrics/InternalStatsTests.java +++ b/core/src/test/java/org/elasticsearch/search/aggregations/metrics/InternalStatsTests.java @@ -19,6 +19,9 @@ package org.elasticsearch.search.aggregations.metrics; import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.json.JsonXContent; import org.elasticsearch.search.DocValueFormat; import org.elasticsearch.search.aggregations.ParsedAggregation; import org.elasticsearch.search.aggregations.metrics.stats.InternalStats; @@ -26,6 +29,8 @@ import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import org.elasticsearch.test.InternalAggregationTestCase; +import java.io.IOException; +import java.util.Collections; import java.util.HashMap; import java.util.List; import java.util.Map; @@ -80,7 +85,7 @@ static void assertStats(InternalStats aggregation, ParsedStats parsed) { long count = aggregation.getCount(); assertEquals(count, parsed.getCount()); // for count == 0, fields are rendered as `null`, so we test that we parse to default values used also in the reduce phase - assertEquals(count > 0 ? aggregation.getMin() : Double.POSITIVE_INFINITY , parsed.getMin(), 0); + assertEquals(count > 0 ? aggregation.getMin() : Double.POSITIVE_INFINITY, parsed.getMin(), 0); assertEquals(count > 0 ? aggregation.getMax() : Double.NEGATIVE_INFINITY, parsed.getMax(), 0); assertEquals(count > 0 ? aggregation.getSum() : 0, parsed.getSum(), 0); assertEquals(count > 0 ? aggregation.getAvg() : 0, parsed.getAvg(), 0); @@ -153,5 +158,55 @@ protected InternalStats mutateInstance(InternalStats instance) { } return new InternalStats(name, count, sum, min, max, formatter, pipelineAggregators, metaData); } + + public void testDoXContentBody() throws IOException { + // count is greater than zero + double min = randomDoubleBetween(-1000000, 1000000, true); + double max = randomDoubleBetween(-1000000, 1000000, true); + double sum = randomDoubleBetween(-1000000, 1000000, true); + int count = randomIntBetween(1, 10); + DocValueFormat format = randomNumericDocValueFormat(); + InternalStats internalStats = createInstance("stats", count, sum, min, max, format, Collections.emptyList(), null); + XContentBuilder builder = JsonXContent.contentBuilder().prettyPrint(); + builder.startObject(); + internalStats.doXContentBody(builder, ToXContent.EMPTY_PARAMS); + builder.endObject(); + + String expected = "{\n" + + " \"count\" : " + count + ",\n" + + " \"min\" : " + min + ",\n" + + " \"max\" : " + max + ",\n" + + " \"avg\" : " + internalStats.getAvg() + ",\n" + + " \"sum\" : " + sum; + if (format != DocValueFormat.RAW) { + expected += ",\n"+ + " \"min_as_string\" : \"" + format.format(internalStats.getMin()) + "\",\n" + + " \"max_as_string\" : \"" + format.format(internalStats.getMax()) + "\",\n" + + " \"avg_as_string\" : \"" + format.format(internalStats.getAvg()) + "\",\n" + + " \"sum_as_string\" : \"" + format.format(internalStats.getSum()) + "\""; + } + expected += "\n}"; + assertEquals(expected, builder.string()); + + // count is zero + format = randomNumericDocValueFormat(); + min = 0.0; + max = 0.0; + sum = 0.0; + count = 0; + internalStats = createInstance("stats", count, sum, min, max, format, Collections.emptyList(), null); + builder = JsonXContent.contentBuilder().prettyPrint(); + builder.startObject(); + internalStats.doXContentBody(builder, 
ToXContent.EMPTY_PARAMS); + builder.endObject(); + + assertEquals("{\n" + + " \"count\" : 0,\n" + + " \"min\" : null,\n" + + " \"max\" : null,\n" + + " \"avg\" : null,\n" + + " \"sum\" : 0.0\n" + + "}", builder.string()); + } } diff --git a/core/src/test/java/org/elasticsearch/search/aggregations/metrics/ScriptedMetricIT.java b/core/src/test/java/org/elasticsearch/search/aggregations/metrics/ScriptedMetricIT.java index dab523b7c348d..24d94d5a4643c 100644 --- a/core/src/test/java/org/elasticsearch/search/aggregations/metrics/ScriptedMetricIT.java +++ b/core/src/test/java/org/elasticsearch/search/aggregations/metrics/ScriptedMetricIT.java @@ -94,6 +94,10 @@ protected Map, Object>> pluginScripts() { scripts.put("_agg.add(1)", vars -> aggScript(vars, agg -> ((List) agg).add(1))); + scripts.put("_agg[param1] = param2", vars -> + aggScript(vars, agg -> ((Map) agg).put(XContentMapValues.extractValue("params.param1", vars), + XContentMapValues.extractValue("params.param2", vars)))); + scripts.put("vars.multiplier = 3", vars -> ((Map) vars.get("vars")).put("multiplier", 3)); @@ -356,6 +360,52 @@ public void testMapWithParams() { assertThat(totalCount, equalTo(numDocs)); } + public void testMapWithParamsAndImplicitAggMap() { + Map params = new HashMap<>(); + // don't put any _agg map in params + params.put("param1", "12"); + params.put("param2", 1); + + // The _agg hashmap will be available even if not declared in the params map + Script mapScript = new Script(ScriptType.INLINE, CustomScriptPlugin.NAME, "_agg[param1] = param2", params); + + SearchResponse response = client().prepareSearch("idx") + .setQuery(matchAllQuery()) + .addAggregation(scriptedMetric("scripted").params(params).mapScript(mapScript)) + .get(); + assertSearchResponse(response); + assertThat(response.getHits().getTotalHits(), equalTo(numDocs)); + + Aggregation aggregation = response.getAggregations().get("scripted"); + assertThat(aggregation, notNullValue()); + assertThat(aggregation, instanceOf(ScriptedMetric.class)); + ScriptedMetric scriptedMetricAggregation = (ScriptedMetric) aggregation; + assertThat(scriptedMetricAggregation.getName(), equalTo("scripted")); + assertThat(scriptedMetricAggregation.aggregation(), notNullValue()); + assertThat(scriptedMetricAggregation.aggregation(), instanceOf(ArrayList.class)); + List aggregationList = (List) scriptedMetricAggregation.aggregation(); + assertThat(aggregationList.size(), equalTo(getNumShards("idx").numPrimaries)); + int numShardsRun = 0; + for (Object object : aggregationList) { + assertThat(object, notNullValue()); + assertThat(object, instanceOf(Map.class)); + Map map = (Map) object; + for (Map.Entry entry : map.entrySet()) { + assertThat(entry, notNullValue()); + assertThat(entry.getKey(), notNullValue()); + assertThat(entry.getKey(), instanceOf(String.class)); + assertThat(entry.getValue(), notNullValue()); + assertThat(entry.getValue(), instanceOf(Number.class)); + String stringValue = (String) entry.getKey(); + assertThat(stringValue, equalTo("12")); + Number numberValue = (Number) entry.getValue(); + assertThat(numberValue, equalTo((Number) 1)); + numShardsRun++; + } + } + assertThat(numShardsRun, greaterThan(0)); + } + public void testInitMapWithParams() { Map varsMap = new HashMap<>(); varsMap.put("multiplier", 1); diff --git a/core/src/test/java/org/elasticsearch/search/aggregations/metrics/TopHitsIT.java b/core/src/test/java/org/elasticsearch/search/aggregations/metrics/TopHitsIT.java index d1283c06ed273..4f8493c0b001f 100644 --- 
a/core/src/test/java/org/elasticsearch/search/aggregations/metrics/TopHitsIT.java +++ b/core/src/test/java/org/elasticsearch/search/aggregations/metrics/TopHitsIT.java @@ -28,6 +28,7 @@ import org.elasticsearch.common.document.DocumentField; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.query.MatchAllQueryBuilder; import org.elasticsearch.index.query.QueryBuilders; import org.elasticsearch.plugins.Plugin; @@ -49,7 +50,7 @@ import org.elasticsearch.search.aggregations.metrics.tophits.TopHits; import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder; import org.elasticsearch.search.fetch.subphase.highlight.HighlightField; -import org.elasticsearch.search.rescore.RescoreBuilder; +import org.elasticsearch.search.rescore.QueryRescorerBuilder; import org.elasticsearch.search.sort.ScriptSortBuilder.ScriptSortType; import org.elasticsearch.search.sort.SortBuilders; import org.elasticsearch.search.sort.SortOrder; @@ -728,7 +729,7 @@ public void testTopHitsInNestedSimple() throws Exception { assertThat(searchHits.getTotalHits(), equalTo(1L)); assertThat(searchHits.getAt(0).getNestedIdentity().getField().string(), equalTo("comments")); assertThat(searchHits.getAt(0).getNestedIdentity().getOffset(), equalTo(0)); - assertThat(extractValue("comments.date", searchHits.getAt(0).getSourceAsMap()), equalTo(1)); + assertThat(extractValue("date", searchHits.getAt(0).getSourceAsMap()), equalTo(1)); bucket = terms.getBucketByKey("b"); assertThat(bucket.getDocCount(), equalTo(2L)); @@ -737,10 +738,10 @@ public void testTopHitsInNestedSimple() throws Exception { assertThat(searchHits.getTotalHits(), equalTo(2L)); assertThat(searchHits.getAt(0).getNestedIdentity().getField().string(), equalTo("comments")); assertThat(searchHits.getAt(0).getNestedIdentity().getOffset(), equalTo(1)); - assertThat(extractValue("comments.date", searchHits.getAt(0).getSourceAsMap()), equalTo(2)); + assertThat(extractValue("date", searchHits.getAt(0).getSourceAsMap()), equalTo(2)); assertThat(searchHits.getAt(1).getNestedIdentity().getField().string(), equalTo("comments")); assertThat(searchHits.getAt(1).getNestedIdentity().getOffset(), equalTo(0)); - assertThat(extractValue("comments.date", searchHits.getAt(1).getSourceAsMap()), equalTo(3)); + assertThat(extractValue("date", searchHits.getAt(1).getSourceAsMap()), equalTo(3)); bucket = terms.getBucketByKey("c"); assertThat(bucket.getDocCount(), equalTo(1L)); @@ -749,7 +750,7 @@ public void testTopHitsInNestedSimple() throws Exception { assertThat(searchHits.getTotalHits(), equalTo(1L)); assertThat(searchHits.getAt(0).getNestedIdentity().getField().string(), equalTo("comments")); assertThat(searchHits.getAt(0).getNestedIdentity().getOffset(), equalTo(1)); - assertThat(extractValue("comments.date", searchHits.getAt(0).getSourceAsMap()), equalTo(4)); + assertThat(extractValue("date", searchHits.getAt(0).getSourceAsMap()), equalTo(4)); } public void testTopHitsInSecondLayerNested() throws Exception { @@ -802,49 +803,49 @@ public void testTopHitsInSecondLayerNested() throws Exception { assertThat(topReviewers.getHits().getHits().length, equalTo(7)); assertThat(topReviewers.getHits().getAt(0).getId(), equalTo("1")); - assertThat(extractValue("comments.reviewers.name", topReviewers.getHits().getAt(0).getSourceAsMap()), equalTo("user a")); + assertThat(extractValue("name", topReviewers.getHits().getAt(0).getSourceAsMap()), equalTo("user 
a")); assertThat(topReviewers.getHits().getAt(0).getNestedIdentity().getField().string(), equalTo("comments")); assertThat(topReviewers.getHits().getAt(0).getNestedIdentity().getOffset(), equalTo(0)); assertThat(topReviewers.getHits().getAt(0).getNestedIdentity().getChild().getField().string(), equalTo("reviewers")); assertThat(topReviewers.getHits().getAt(0).getNestedIdentity().getChild().getOffset(), equalTo(0)); assertThat(topReviewers.getHits().getAt(1).getId(), equalTo("1")); - assertThat(extractValue("comments.reviewers.name", topReviewers.getHits().getAt(1).getSourceAsMap()), equalTo("user b")); + assertThat(extractValue("name", topReviewers.getHits().getAt(1).getSourceAsMap()), equalTo("user b")); assertThat(topReviewers.getHits().getAt(1).getNestedIdentity().getField().string(), equalTo("comments")); assertThat(topReviewers.getHits().getAt(1).getNestedIdentity().getOffset(), equalTo(0)); assertThat(topReviewers.getHits().getAt(1).getNestedIdentity().getChild().getField().string(), equalTo("reviewers")); assertThat(topReviewers.getHits().getAt(1).getNestedIdentity().getChild().getOffset(), equalTo(1)); assertThat(topReviewers.getHits().getAt(2).getId(), equalTo("1")); - assertThat(extractValue("comments.reviewers.name", topReviewers.getHits().getAt(2).getSourceAsMap()), equalTo("user c")); + assertThat(extractValue("name", topReviewers.getHits().getAt(2).getSourceAsMap()), equalTo("user c")); assertThat(topReviewers.getHits().getAt(2).getNestedIdentity().getField().string(), equalTo("comments")); assertThat(topReviewers.getHits().getAt(2).getNestedIdentity().getOffset(), equalTo(0)); assertThat(topReviewers.getHits().getAt(2).getNestedIdentity().getChild().getField().string(), equalTo("reviewers")); assertThat(topReviewers.getHits().getAt(2).getNestedIdentity().getChild().getOffset(), equalTo(2)); assertThat(topReviewers.getHits().getAt(3).getId(), equalTo("1")); - assertThat(extractValue("comments.reviewers.name", topReviewers.getHits().getAt(3).getSourceAsMap()), equalTo("user c")); + assertThat(extractValue("name", topReviewers.getHits().getAt(3).getSourceAsMap()), equalTo("user c")); assertThat(topReviewers.getHits().getAt(3).getNestedIdentity().getField().string(), equalTo("comments")); assertThat(topReviewers.getHits().getAt(3).getNestedIdentity().getOffset(), equalTo(1)); assertThat(topReviewers.getHits().getAt(3).getNestedIdentity().getChild().getField().string(), equalTo("reviewers")); assertThat(topReviewers.getHits().getAt(3).getNestedIdentity().getChild().getOffset(), equalTo(0)); assertThat(topReviewers.getHits().getAt(4).getId(), equalTo("1")); - assertThat(extractValue("comments.reviewers.name", topReviewers.getHits().getAt(4).getSourceAsMap()), equalTo("user d")); + assertThat(extractValue("name", topReviewers.getHits().getAt(4).getSourceAsMap()), equalTo("user d")); assertThat(topReviewers.getHits().getAt(4).getNestedIdentity().getField().string(), equalTo("comments")); assertThat(topReviewers.getHits().getAt(4).getNestedIdentity().getOffset(), equalTo(1)); assertThat(topReviewers.getHits().getAt(4).getNestedIdentity().getChild().getField().string(), equalTo("reviewers")); assertThat(topReviewers.getHits().getAt(4).getNestedIdentity().getChild().getOffset(), equalTo(1)); assertThat(topReviewers.getHits().getAt(5).getId(), equalTo("1")); - assertThat(extractValue("comments.reviewers.name", topReviewers.getHits().getAt(5).getSourceAsMap()), equalTo("user e")); + assertThat(extractValue("name", topReviewers.getHits().getAt(5).getSourceAsMap()), equalTo("user e")); 
assertThat(topReviewers.getHits().getAt(5).getNestedIdentity().getField().string(), equalTo("comments")); assertThat(topReviewers.getHits().getAt(5).getNestedIdentity().getOffset(), equalTo(1)); assertThat(topReviewers.getHits().getAt(5).getNestedIdentity().getChild().getField().string(), equalTo("reviewers")); assertThat(topReviewers.getHits().getAt(5).getNestedIdentity().getChild().getOffset(), equalTo(2)); assertThat(topReviewers.getHits().getAt(6).getId(), equalTo("2")); - assertThat(extractValue("comments.reviewers.name", topReviewers.getHits().getAt(6).getSourceAsMap()), equalTo("user f")); + assertThat(extractValue("name", topReviewers.getHits().getAt(6).getSourceAsMap()), equalTo("user f")); assertThat(topReviewers.getHits().getAt(0).getNestedIdentity().getField().string(), equalTo("comments")); assertThat(topReviewers.getHits().getAt(0).getNestedIdentity().getOffset(), equalTo(0)); assertThat(topReviewers.getHits().getAt(0).getNestedIdentity().getChild().getField().string(), equalTo("reviewers")); @@ -900,7 +901,7 @@ public void testNestedFetchFeatures() { assertThat(field.getValue().toString(), equalTo("5")); assertThat(searchHit.getSourceAsMap().size(), equalTo(1)); - assertThat(extractValue("comments.message", searchHit.getSourceAsMap()), equalTo("some comment")); + assertThat(extractValue("message", searchHit.getSourceAsMap()), equalTo("some comment")); } public void testTopHitsInNested() throws Exception { @@ -933,7 +934,7 @@ public void testTopHitsInNested() throws Exception { for (int j = 0; j < 3; j++) { assertThat(searchHits.getAt(j).getNestedIdentity().getField().string(), equalTo("comments")); assertThat(searchHits.getAt(j).getNestedIdentity().getOffset(), equalTo(0)); - assertThat(extractValue("comments.id", searchHits.getAt(j).getSourceAsMap()), equalTo(0)); + assertThat(extractValue("id", searchHits.getAt(j).getSourceAsMap()), equalTo(0)); HighlightField highlightField = searchHits.getAt(j).getHighlightFields().get("comments.message"); assertThat(highlightField.getFragments().length, equalTo(1)); @@ -942,7 +943,10 @@ public void testTopHitsInNested() throws Exception { } } - public void testDontExplode() throws Exception { + public void testUseMaxDocInsteadOfSize() throws Exception { + client().admin().indices().prepareUpdateSettings("idx") + .setSettings(Collections.singletonMap(IndexSettings.MAX_INNER_RESULT_WINDOW_SETTING.getKey(), ArrayUtil.MAX_ARRAY_LENGTH)) + .get(); SearchResponse response = client() .prepareSearch("idx") .addAggregation(terms("terms") @@ -954,6 +958,67 @@ public void testDontExplode() throws Exception { ) .get(); assertNoFailures(response); + client().admin().indices().prepareUpdateSettings("idx") + .setSettings(Collections.singletonMap(IndexSettings.MAX_INNER_RESULT_WINDOW_SETTING.getKey(), null)) + .get(); + } + + public void testTooHighResultWindow() throws Exception { + SearchResponse response = client() + .prepareSearch("idx") + .addAggregation(terms("terms") + .executionHint(randomExecutionHint()) + .field(TERMS_AGGS_FIELD) + .subAggregation( + topHits("hits").from(50).size(10).sort(SortBuilders.fieldSort(SORT_FIELD).order(SortOrder.DESC)) + ) + ) + .get(); + assertNoFailures(response); + + Exception e = expectThrows(SearchPhaseExecutionException.class, () -> client().prepareSearch("idx") + .addAggregation(terms("terms") + .executionHint(randomExecutionHint()) + .field(TERMS_AGGS_FIELD) + .subAggregation( + topHits("hits").from(100).size(10).sort(SortBuilders.fieldSort(SORT_FIELD).order(SortOrder.DESC)) + ) + ).get()); + 
assertThat(e.getCause().getMessage(), + containsString("the top hits aggregator [hits]'s from + size must be less than or equal to: [100] but was [110]")); + e = expectThrows(SearchPhaseExecutionException.class, () -> client().prepareSearch("idx") + .addAggregation(terms("terms") + .executionHint(randomExecutionHint()) + .field(TERMS_AGGS_FIELD) + .subAggregation( + topHits("hits").from(10).size(100).sort(SortBuilders.fieldSort(SORT_FIELD).order(SortOrder.DESC)) + ) + ).get()); + assertThat(e.getCause().getMessage(), + containsString("the top hits aggregator [hits]'s from + size must be less than or equal to: [100] but was [110]")); + + client().admin().indices().prepareUpdateSettings("idx") + .setSettings(Collections.singletonMap(IndexSettings.MAX_INNER_RESULT_WINDOW_SETTING.getKey(), 110)) + .get(); + response = client().prepareSearch("idx") + .addAggregation(terms("terms") + .executionHint(randomExecutionHint()) + .field(TERMS_AGGS_FIELD) + .subAggregation( + topHits("hits").from(100).size(10).sort(SortBuilders.fieldSort(SORT_FIELD).order(SortOrder.DESC)) + )).get(); + assertNoFailures(response); + response = client().prepareSearch("idx") + .addAggregation(terms("terms") + .executionHint(randomExecutionHint()) + .field(TERMS_AGGS_FIELD) + .subAggregation( + topHits("hits").from(10).size(100).sort(SortBuilders.fieldSort(SORT_FIELD).order(SortOrder.DESC)) + )).get(); + assertNoFailures(response); + client().admin().indices().prepareUpdateSettings("idx") + .setSettings(Collections.singletonMap(IndexSettings.MAX_INNER_RESULT_WINDOW_SETTING.getKey(), null)) + .get(); } public void testNoStoredFields() throws Exception { @@ -1054,7 +1119,7 @@ public void testWithRescore() { SearchResponse response = client() .prepareSearch("idx") .addRescorer( - RescoreBuilder.queryRescorer(new MatchAllQueryBuilder().boost(3.0f)) + new QueryRescorerBuilder(new MatchAllQueryBuilder().boost(3.0f)) ) .addAggregation(terms("terms") .field(TERMS_AGGS_FIELD) @@ -1076,7 +1141,7 @@ public void testWithRescore() { SearchResponse response = client() .prepareSearch("idx") .addRescorer( - RescoreBuilder.queryRescorer(new MatchAllQueryBuilder().boost(3.0f)) + new QueryRescorerBuilder(new MatchAllQueryBuilder().boost(3.0f)) ) .addAggregation(terms("terms") .field(TERMS_AGGS_FIELD) @@ -1099,7 +1164,7 @@ public void testWithRescore() { SearchResponse response = client() .prepareSearch("idx") .addRescorer( - RescoreBuilder.queryRescorer(new MatchAllQueryBuilder().boost(3.0f)) + new QueryRescorerBuilder(new MatchAllQueryBuilder().boost(3.0f)) ) .addAggregation(terms("terms") .field(TERMS_AGGS_FIELD) @@ -1121,7 +1186,7 @@ public void testWithRescore() { SearchResponse response = client() .prepareSearch("idx") .addRescorer( - RescoreBuilder.queryRescorer(new MatchAllQueryBuilder().boost(3.0f)) + new QueryRescorerBuilder(new MatchAllQueryBuilder().boost(3.0f)) ) .addAggregation(terms("terms") .field(TERMS_AGGS_FIELD) diff --git a/core/src/test/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregatorTests.java b/core/src/test/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregatorTests.java index 43686a1465e60..5555e987ec402 100644 --- a/core/src/test/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregatorTests.java +++ b/core/src/test/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregatorTests.java @@ -21,15 +21,22 @@ import org.apache.lucene.analysis.core.KeywordAnalyzer; import org.apache.lucene.document.Document; import 
org.apache.lucene.document.Field; +import org.apache.lucene.document.Field.Store; import org.apache.lucene.document.SortedSetDocValuesField; +import org.apache.lucene.document.StringField; import org.apache.lucene.index.DirectoryReader; import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.IndexWriter; import org.apache.lucene.index.RandomIndexWriter; +import org.apache.lucene.index.Term; import org.apache.lucene.queryparser.classic.QueryParser; +import org.apache.lucene.search.BooleanClause.Occur; +import org.apache.lucene.search.BooleanQuery; import org.apache.lucene.search.IndexSearcher; import org.apache.lucene.search.MatchAllDocsQuery; import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermQuery; import org.apache.lucene.store.Directory; import org.apache.lucene.util.BytesRef; import org.elasticsearch.index.mapper.KeywordFieldMapper; @@ -39,6 +46,7 @@ import org.elasticsearch.search.SearchHits; import org.elasticsearch.search.aggregations.Aggregation; import org.elasticsearch.search.aggregations.AggregationBuilder; +import org.elasticsearch.search.aggregations.AggregationBuilders; import org.elasticsearch.search.aggregations.AggregatorTestCase; import org.elasticsearch.search.aggregations.bucket.terms.Terms; import org.elasticsearch.search.sort.SortOrder; @@ -148,4 +156,47 @@ private Document document(String id, String... stringValues) { } return document; } + + public void testSetScorer() throws Exception { + Directory directory = newDirectory(); + IndexWriter w = new IndexWriter(directory, newIndexWriterConfig() + // only merge adjacent segments + .setMergePolicy(newLogMergePolicy())); + // first window (see BooleanScorer) has matches on one clause only + for (int i = 0; i < 2048; ++i) { + Document doc = new Document(); + doc.add(new StringField("_id", Uid.encodeId(Integer.toString(i)), Store.YES)); + if (i == 1000) { // any doc in 0..2048 + doc.add(new StringField("string", "bar", Store.NO)); + } + w.addDocument(doc); + } + // second window has matches in two clauses + for (int i = 0; i < 2048; ++i) { + Document doc = new Document(); + doc.add(new StringField("_id", Uid.encodeId(Integer.toString(2048 + i)), Store.YES)); + if (i == 500) { // any doc in 0..2048 + doc.add(new StringField("string", "baz", Store.NO)); + } else if (i == 1500) { + doc.add(new StringField("string", "bar", Store.NO)); + } + w.addDocument(doc); + } + + w.forceMerge(1); // we need all docs to be in the same segment + + IndexReader reader = DirectoryReader.open(w); + w.close(); + + IndexSearcher searcher = new IndexSearcher(reader); + Query query = new BooleanQuery.Builder() + .add(new TermQuery(new Term("string", "bar")), Occur.SHOULD) + .add(new TermQuery(new Term("string", "baz")), Occur.SHOULD) + .build(); + AggregationBuilder agg = AggregationBuilders.topHits("top_hits"); + TopHits result = searchAndReduce(searcher, query, agg, STRING_FIELD_TYPE); + assertEquals(3, result.getHits().totalHits); + reader.close(); + directory.close(); + } } diff --git a/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/AvgBucketTests.java b/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/AvgBucketTests.java index 5ae06f671f164..223cbb231ea8d 100644 --- a/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/AvgBucketTests.java +++ b/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/AvgBucketTests.java @@ -19,7 +19,16 
@@ package org.elasticsearch.search.aggregations.pipeline.bucketmetrics; +import org.elasticsearch.search.aggregations.AggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder; +import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.avg.AvgBucketPipelineAggregationBuilder; +import org.elasticsearch.search.aggregations.support.ValueType; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; public class AvgBucketTests extends AbstractBucketMetricsTestCase { @@ -28,5 +37,31 @@ protected AvgBucketPipelineAggregationBuilder doCreateTestAggregatorFactory(Stri return new AvgBucketPipelineAggregationBuilder(name, bucketsPath); } + public void testValidate() { + AggregationBuilder singleBucketAgg = new GlobalAggregationBuilder("global"); + AggregationBuilder multiBucketAgg = new TermsAggregationBuilder("terms", ValueType.STRING); + final List aggBuilders = new ArrayList<>(); + aggBuilders.add(singleBucketAgg); + aggBuilders.add(multiBucketAgg); + + // First try to point to a non-existent agg + final AvgBucketPipelineAggregationBuilder builder = new AvgBucketPipelineAggregationBuilder("name", "invalid_agg>metric"); + IllegalArgumentException ex = expectThrows(IllegalArgumentException.class, + () -> builder.validate(null, aggBuilders, Collections.emptyList())); + assertEquals(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName() + + " aggregation does not exist for aggregation [name]: invalid_agg>metric", ex.getMessage()); + + // Now try to point to a single bucket agg + AvgBucketPipelineAggregationBuilder builder2 = new AvgBucketPipelineAggregationBuilder("name", "global>metric"); + ex = expectThrows(IllegalArgumentException.class, () -> builder2.validate(null, aggBuilders, Collections.emptyList())); + assertEquals("The first aggregation in " + PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName() + + " must be a multi-bucket aggregation for aggregation [name] found :" + GlobalAggregationBuilder.class.getName() + + " for buckets path: global>metric", ex.getMessage()); + + // Now try to point to a valid multi-bucket agg (no exception should be thrown) + AvgBucketPipelineAggregationBuilder builder3 = new AvgBucketPipelineAggregationBuilder("name", "terms>metric"); + builder3.validate(null, aggBuilders, Collections.emptyList()); + + } } diff --git a/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/ExtendedStatsBucketTests.java b/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/ExtendedStatsBucketTests.java index b77c0bdd972cb..d177568700022 100644 --- a/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/ExtendedStatsBucketTests.java +++ b/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/ExtendedStatsBucketTests.java @@ -21,7 +21,16 @@ import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.search.aggregations.AggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder; +import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import 
org.elasticsearch.search.aggregations.pipeline.bucketmetrics.stats.extended.ExtendedStatsBucketPipelineAggregationBuilder; +import org.elasticsearch.search.aggregations.support.ValueType; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; import static org.hamcrest.Matchers.equalTo; @@ -52,4 +61,32 @@ public void testSigmaFromInt() throws Exception { assertThat(builder.sigma(), equalTo(5.0)); } + + public void testValidate() { + AggregationBuilder singleBucketAgg = new GlobalAggregationBuilder("global"); + AggregationBuilder multiBucketAgg = new TermsAggregationBuilder("terms", ValueType.STRING); + final List aggBuilders = new ArrayList<>(); + aggBuilders.add(singleBucketAgg); + aggBuilders.add(multiBucketAgg); + + // First try to point to a non-existent agg + final ExtendedStatsBucketPipelineAggregationBuilder builder = new ExtendedStatsBucketPipelineAggregationBuilder("name", + "invalid_agg>metric"); + IllegalArgumentException ex = expectThrows(IllegalArgumentException.class, + () -> builder.validate(null, aggBuilders, Collections.emptyList())); + assertEquals(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName() + + " aggregation does not exist for aggregation [name]: invalid_agg>metric", ex.getMessage()); + + // Now try to point to a single bucket agg + ExtendedStatsBucketPipelineAggregationBuilder builder2 = new ExtendedStatsBucketPipelineAggregationBuilder("name", "global>metric"); + ex = expectThrows(IllegalArgumentException.class, () -> builder2.validate(null, aggBuilders, Collections.emptyList())); + assertEquals("The first aggregation in " + PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName() + + " must be a multi-bucket aggregation for aggregation [name] found :" + GlobalAggregationBuilder.class.getName() + + " for buckets path: global>metric", ex.getMessage()); + + // Now try to point to a valid multi-bucket agg (no exception should be + // thrown) + ExtendedStatsBucketPipelineAggregationBuilder builder3 = new ExtendedStatsBucketPipelineAggregationBuilder("name", "terms>metric"); + builder3.validate(null, aggBuilders, Collections.emptyList()); + } } diff --git a/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/MaxBucketTests.java b/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/MaxBucketTests.java index 43115c668a0ac..a8e78a31f954e 100644 --- a/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/MaxBucketTests.java +++ b/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/MaxBucketTests.java @@ -19,7 +19,16 @@ package org.elasticsearch.search.aggregations.pipeline.bucketmetrics; +import org.elasticsearch.search.aggregations.AggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder; +import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.max.MaxBucketPipelineAggregationBuilder; +import org.elasticsearch.search.aggregations.support.ValueType; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; public class MaxBucketTests extends AbstractBucketMetricsTestCase { @@ -28,5 +37,31 @@ protected MaxBucketPipelineAggregationBuilder doCreateTestAggregatorFactory(Stri return new MaxBucketPipelineAggregationBuilder(name, bucketsPath); } + public void testValidate() { + 
AggregationBuilder singleBucketAgg = new GlobalAggregationBuilder("global"); + AggregationBuilder multiBucketAgg = new TermsAggregationBuilder("terms", ValueType.STRING); + final List aggBuilders = new ArrayList<>(); + aggBuilders.add(singleBucketAgg); + aggBuilders.add(multiBucketAgg); + + // First try to point to a non-existent agg + final MaxBucketPipelineAggregationBuilder builder = new MaxBucketPipelineAggregationBuilder("name", "invalid_agg>metric"); + IllegalArgumentException ex = expectThrows(IllegalArgumentException.class, + () -> builder.validate(null, aggBuilders, Collections.emptyList())); + assertEquals(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName() + + " aggregation does not exist for aggregation [name]: invalid_agg>metric", ex.getMessage()); + + // Now try to point to a single bucket agg + MaxBucketPipelineAggregationBuilder builder2 = new MaxBucketPipelineAggregationBuilder("name", "global>metric"); + ex = expectThrows(IllegalArgumentException.class, () -> builder2.validate(null, aggBuilders, Collections.emptyList())); + assertEquals("The first aggregation in " + PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName() + + " must be a multi-bucket aggregation for aggregation [name] found :" + GlobalAggregationBuilder.class.getName() + + " for buckets path: global>metric", ex.getMessage()); + + // Now try to point to a valid multi-bucket agg (no exception should be + // thrown) + MaxBucketPipelineAggregationBuilder builder3 = new MaxBucketPipelineAggregationBuilder("name", "terms>metric"); + builder3.validate(null, aggBuilders, Collections.emptyList()); + } } diff --git a/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/MinBucketTests.java b/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/MinBucketTests.java index e577e733b4ee2..21efed4a5cff9 100644 --- a/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/MinBucketTests.java +++ b/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/MinBucketTests.java @@ -19,7 +19,16 @@ package org.elasticsearch.search.aggregations.pipeline.bucketmetrics; +import org.elasticsearch.search.aggregations.AggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder; +import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.min.MinBucketPipelineAggregationBuilder; +import org.elasticsearch.search.aggregations.support.ValueType; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; public class MinBucketTests extends AbstractBucketMetricsTestCase { @@ -28,5 +37,31 @@ protected MinBucketPipelineAggregationBuilder doCreateTestAggregatorFactory(Stri return new MinBucketPipelineAggregationBuilder(name, bucketsPath); } + public void testValidate() { + AggregationBuilder singleBucketAgg = new GlobalAggregationBuilder("global"); + AggregationBuilder multiBucketAgg = new TermsAggregationBuilder("terms", ValueType.STRING); + final List aggBuilders = new ArrayList<>(); + aggBuilders.add(singleBucketAgg); + aggBuilders.add(multiBucketAgg); + + // First try to point to a non-existent agg + final MinBucketPipelineAggregationBuilder builder = new MinBucketPipelineAggregationBuilder("name", "invalid_agg>metric"); + IllegalArgumentException ex = expectThrows(IllegalArgumentException.class, + () 
-> builder.validate(null, aggBuilders, Collections.emptyList())); + assertEquals(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName() + + " aggregation does not exist for aggregation [name]: invalid_agg>metric", ex.getMessage()); + + // Now try to point to a single bucket agg + MinBucketPipelineAggregationBuilder builder2 = new MinBucketPipelineAggregationBuilder("name", "global>metric"); + ex = expectThrows(IllegalArgumentException.class, () -> builder2.validate(null, aggBuilders, Collections.emptyList())); + assertEquals("The first aggregation in " + PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName() + + " must be a multi-bucket aggregation for aggregation [name] found :" + GlobalAggregationBuilder.class.getName() + + " for buckets path: global>metric", ex.getMessage()); + + // Now try to point to a valid multi-bucket agg (no exception should be + // thrown) + MinBucketPipelineAggregationBuilder builder3 = new MinBucketPipelineAggregationBuilder("name", "terms>metric"); + builder3.validate(null, aggBuilders, Collections.emptyList()); + } } diff --git a/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/PercentilesBucketTests.java b/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/PercentilesBucketTests.java index b04b5743c68cb..4851c96972231 100644 --- a/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/PercentilesBucketTests.java +++ b/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/PercentilesBucketTests.java @@ -21,7 +21,16 @@ import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.search.aggregations.AggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder; +import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.percentile.PercentilesBucketPipelineAggregationBuilder; +import org.elasticsearch.search.aggregations.support.ValueType; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; import static org.hamcrest.Matchers.equalTo; @@ -56,4 +65,32 @@ public void testPercentsFromMixedArray() throws Exception { assertThat(builder.percents(), equalTo(new double[]{0.0, 20.0, 50.0, 75.99})); } + + public void testValidate() { + AggregationBuilder singleBucketAgg = new GlobalAggregationBuilder("global"); + AggregationBuilder multiBucketAgg = new TermsAggregationBuilder("terms", ValueType.STRING); + final List aggBuilders = new ArrayList<>(); + aggBuilders.add(singleBucketAgg); + aggBuilders.add(multiBucketAgg); + + // First try to point to a non-existent agg + final PercentilesBucketPipelineAggregationBuilder builder = new PercentilesBucketPipelineAggregationBuilder("name", + "invalid_agg>metric"); + IllegalArgumentException ex = expectThrows(IllegalArgumentException.class, + () -> builder.validate(null, aggBuilders, Collections.emptyList())); + assertEquals(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName() + + " aggregation does not exist for aggregation [name]: invalid_agg>metric", ex.getMessage()); + + // Now try to point to a single bucket agg + PercentilesBucketPipelineAggregationBuilder builder2 = new PercentilesBucketPipelineAggregationBuilder("name", "global>metric"); + ex = 
expectThrows(IllegalArgumentException.class, () -> builder2.validate(null, aggBuilders, Collections.emptyList())); + assertEquals("The first aggregation in " + PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName() + + " must be a multi-bucket aggregation for aggregation [name] found :" + GlobalAggregationBuilder.class.getName() + + " for buckets path: global>metric", ex.getMessage()); + + // Now try to point to a valid multi-bucket agg (no exception should be + // thrown) + PercentilesBucketPipelineAggregationBuilder builder3 = new PercentilesBucketPipelineAggregationBuilder("name", "terms>metric"); + builder3.validate(null, aggBuilders, Collections.emptyList()); + } } diff --git a/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/StatsBucketTests.java b/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/StatsBucketTests.java index 2be01b3fd6817..7611d7b07b3ae 100644 --- a/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/StatsBucketTests.java +++ b/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/StatsBucketTests.java @@ -19,7 +19,16 @@ package org.elasticsearch.search.aggregations.pipeline.bucketmetrics; +import org.elasticsearch.search.aggregations.AggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder; +import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.stats.StatsBucketPipelineAggregationBuilder; +import org.elasticsearch.search.aggregations.support.ValueType; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; public class StatsBucketTests extends AbstractBucketMetricsTestCase { @@ -29,5 +38,31 @@ protected StatsBucketPipelineAggregationBuilder doCreateTestAggregatorFactory(St return new StatsBucketPipelineAggregationBuilder(name, bucketsPath); } + public void testValidate() { + AggregationBuilder singleBucketAgg = new GlobalAggregationBuilder("global"); + AggregationBuilder multiBucketAgg = new TermsAggregationBuilder("terms", ValueType.STRING); + final List aggBuilders = new ArrayList<>(); + aggBuilders.add(singleBucketAgg); + aggBuilders.add(multiBucketAgg); + + // First try to point to a non-existent agg + final StatsBucketPipelineAggregationBuilder builder = new StatsBucketPipelineAggregationBuilder("name", "invalid_agg>metric"); + IllegalArgumentException ex = expectThrows(IllegalArgumentException.class, + () -> builder.validate(null, aggBuilders, Collections.emptyList())); + assertEquals(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName() + + " aggregation does not exist for aggregation [name]: invalid_agg>metric", ex.getMessage()); + + // Now try to point to a single bucket agg + StatsBucketPipelineAggregationBuilder builder2 = new StatsBucketPipelineAggregationBuilder("name", "global>metric"); + ex = expectThrows(IllegalArgumentException.class, () -> builder2.validate(null, aggBuilders, Collections.emptyList())); + assertEquals("The first aggregation in " + PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName() + + " must be a multi-bucket aggregation for aggregation [name] found :" + GlobalAggregationBuilder.class.getName() + + " for buckets path: global>metric", ex.getMessage()); + + // Now try to point to a valid multi-bucket agg (no exception should be + // thrown) + 
StatsBucketPipelineAggregationBuilder builder3 = new StatsBucketPipelineAggregationBuilder("name", "terms>metric"); + builder3.validate(null, aggBuilders, Collections.emptyList()); + } } diff --git a/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/SumBucketTests.java b/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/SumBucketTests.java index 01ce22770580b..62fc1f977970f 100644 --- a/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/SumBucketTests.java +++ b/core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/SumBucketTests.java @@ -19,7 +19,16 @@ package org.elasticsearch.search.aggregations.pipeline.bucketmetrics; +import org.elasticsearch.search.aggregations.AggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder; +import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.sum.SumBucketPipelineAggregationBuilder; +import org.elasticsearch.search.aggregations.support.ValueType; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; public class SumBucketTests extends AbstractBucketMetricsTestCase { @@ -28,5 +37,31 @@ protected SumBucketPipelineAggregationBuilder doCreateTestAggregatorFactory(Stri return new SumBucketPipelineAggregationBuilder(name, bucketsPath); } + public void testValidate() { + AggregationBuilder singleBucketAgg = new GlobalAggregationBuilder("global"); + AggregationBuilder multiBucketAgg = new TermsAggregationBuilder("terms", ValueType.STRING); + final List aggBuilders = new ArrayList<>(); + aggBuilders.add(singleBucketAgg); + aggBuilders.add(multiBucketAgg); + + // First try to point to a non-existent agg + final SumBucketPipelineAggregationBuilder builder = new SumBucketPipelineAggregationBuilder("name", "invalid_agg>metric"); + IllegalArgumentException ex = expectThrows(IllegalArgumentException.class, + () -> builder.validate(null, aggBuilders, Collections.emptyList())); + assertEquals(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName() + + " aggregation does not exist for aggregation [name]: invalid_agg>metric", ex.getMessage()); + + // Now try to point to a single bucket agg + SumBucketPipelineAggregationBuilder builder2 = new SumBucketPipelineAggregationBuilder("name", "global>metric"); + ex = expectThrows(IllegalArgumentException.class, () -> builder2.validate(null, aggBuilders, Collections.emptyList())); + assertEquals("The first aggregation in " + PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName() + + " must be a multi-bucket aggregation for aggregation [name] found :" + GlobalAggregationBuilder.class.getName() + + " for buckets path: global>metric", ex.getMessage()); + + // Now try to point to a valid multi-bucket agg (no exception should be + // thrown) + SumBucketPipelineAggregationBuilder builder3 = new SumBucketPipelineAggregationBuilder("name", "terms>metric"); + builder3.validate(null, aggBuilders, Collections.emptyList()); + } } diff --git a/core/src/test/java/org/elasticsearch/search/basic/SearchWhileCreatingIndexIT.java b/core/src/test/java/org/elasticsearch/search/basic/SearchWhileCreatingIndexIT.java index c83b3ccc9e505..4748b6292c417 100644 --- a/core/src/test/java/org/elasticsearch/search/basic/SearchWhileCreatingIndexIT.java +++ 
b/core/src/test/java/org/elasticsearch/search/basic/SearchWhileCreatingIndexIT.java @@ -73,10 +73,10 @@ private void searchWhileCreatingIndex(boolean createIndex, int numberOfReplicas) logger.info("using preference {}", preference); // we want to make sure that while recovery happens, and a replica gets recovered, its properly refreshed - ClusterHealthStatus status = ClusterHealthStatus.RED; + ClusterHealthStatus status = client().admin().cluster().prepareHealth("test").get().getStatus();; while (status != ClusterHealthStatus.GREEN) { - // first, verify that search on the primary search works - SearchResponse searchResponse = client().prepareSearch("test").setPreference("_primary").setQuery(QueryBuilders.termQuery("field", "test")).execute().actionGet(); + // first, verify that search normal search works + SearchResponse searchResponse = client().prepareSearch("test").setQuery(QueryBuilders.termQuery("field", "test")).execute().actionGet(); assertHitCount(searchResponse, 1); Client client = client(); searchResponse = client.prepareSearch("test").setPreference(preference + Integer.toString(counter++)).setQuery(QueryBuilders.termQuery("field", "test")).execute().actionGet(); diff --git a/core/src/test/java/org/elasticsearch/search/basic/SearchWithRandomExceptionsIT.java b/core/src/test/java/org/elasticsearch/search/basic/SearchWithRandomExceptionsIT.java index 5b68d3cd1f1f6..339240c15b94c 100644 --- a/core/src/test/java/org/elasticsearch/search/basic/SearchWithRandomExceptionsIT.java +++ b/core/src/test/java/org/elasticsearch/search/basic/SearchWithRandomExceptionsIT.java @@ -107,7 +107,7 @@ public void testRandomExceptions() throws IOException, InterruptedException, Exe .put(EXCEPTION_TOP_LEVEL_RATIO_KEY, topLevelRate) .put(EXCEPTION_LOW_LEVEL_RATIO_KEY, lowLevelRate) .put(MockEngineSupport.WRAP_READER_RATIO.getKey(), 1.0d); - logger.info("creating index: [test] using settings: [{}]", settings.build().getAsMap()); + logger.info("creating index: [test] using settings: [{}]", settings.build()); assertAcked(prepareCreate("test") .setSettings(settings) .addMapping("type", mapping, XContentType.JSON)); diff --git a/core/src/test/java/org/elasticsearch/search/basic/SearchWithRandomIOExceptionsIT.java b/core/src/test/java/org/elasticsearch/search/basic/SearchWithRandomIOExceptionsIT.java index dbba84f86ff08..835b980d6653e 100644 --- a/core/src/test/java/org/elasticsearch/search/basic/SearchWithRandomIOExceptionsIT.java +++ b/core/src/test/java/org/elasticsearch/search/basic/SearchWithRandomIOExceptionsIT.java @@ -90,7 +90,7 @@ public void testRandomDirectoryIOExceptions() throws IOException, InterruptedExc if (createIndexWithoutErrors) { Settings.Builder settings = Settings.builder() .put("index.number_of_replicas", numberOfReplicas()); - logger.info("creating index: [test] using settings: [{}]", settings.build().getAsMap()); + logger.info("creating index: [test] using settings: [{}]", settings.build()); client().admin().indices().prepareCreate("test") .setSettings(settings) .addMapping("type", mapping, XContentType.JSON).execute().actionGet(); @@ -112,7 +112,7 @@ public void testRandomDirectoryIOExceptions() throws IOException, InterruptedExc .put(MockFSIndexStore.INDEX_CHECK_INDEX_ON_CLOSE_SETTING.getKey(), false) .put(MockFSDirectoryService.RANDOM_IO_EXCEPTION_RATE_SETTING.getKey(), exceptionRate) .put(MockFSDirectoryService.RANDOM_IO_EXCEPTION_RATE_ON_OPEN_SETTING.getKey(), exceptionOnOpenRate); // we cannot expect that the index will be valid - logger.info("creating index: [test] using 
settings: [{}]", settings.build().getAsMap()); + logger.info("creating index: [test] using settings: [{}]", settings.build()); client().admin().indices().prepareCreate("test") .setSettings(settings) .addMapping("type", mapping, XContentType.JSON).execute().actionGet(); diff --git a/core/src/test/java/org/elasticsearch/search/collapse/CollapseBuilderTests.java b/core/src/test/java/org/elasticsearch/search/collapse/CollapseBuilderTests.java index f31eb0b490f6e..b4a840426687f 100644 --- a/core/src/test/java/org/elasticsearch/search/collapse/CollapseBuilderTests.java +++ b/core/src/test/java/org/elasticsearch/search/collapse/CollapseBuilderTests.java @@ -210,6 +210,10 @@ public String typeName() { public Query termQuery(Object value, QueryShardContext context) { return null; } + + public Query existsQuery(QueryShardContext context) { + return null; + } }; fieldType.setName("field"); fieldType.setHasDocValues(true); diff --git a/core/src/test/java/org/elasticsearch/search/fetch/subphase/InnerHitsIT.java b/core/src/test/java/org/elasticsearch/search/fetch/subphase/InnerHitsIT.java index 55a424754d51f..b41ba7a85f710 100644 --- a/core/src/test/java/org/elasticsearch/search/fetch/subphase/InnerHitsIT.java +++ b/core/src/test/java/org/elasticsearch/search/fetch/subphase/InnerHitsIT.java @@ -23,10 +23,13 @@ import org.apache.lucene.util.ArrayUtil; import org.elasticsearch.Version; import org.elasticsearch.action.index.IndexRequestBuilder; +import org.elasticsearch.action.search.SearchPhaseExecutionException; import org.elasticsearch.action.search.SearchResponse; import org.elasticsearch.cluster.health.ClusterHealthStatus; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.query.BoolQueryBuilder; import org.elasticsearch.index.query.InnerHitBuilder; import org.elasticsearch.index.query.QueryBuilder; @@ -384,6 +387,9 @@ public void testNestedDefinedAsObject() throws Exception { public void testInnerHitsWithObjectFieldThatHasANestedField() throws Exception { assertAcked(prepareCreate("articles") + // number_of_shards = 1, because then we catch the expected exception in the same way. + // (See expectThrows(...) 
below) + .setSettings(Settings.builder().put("index.number_of_shards", 1)) .addMapping("article", jsonBuilder().startObject() .startObject("properties") .startObject("comments") @@ -400,32 +406,54 @@ public void testInnerHitsWithObjectFieldThatHasANestedField() throws Exception { List requests = new ArrayList<>(); requests.add(client().prepareIndex("articles", "article", "1").setSource(jsonBuilder().startObject() .field("title", "quick brown fox") - .startObject("comments") - .startArray("messages") - .startObject().field("message", "fox eat quick").endObject() - .startObject().field("message", "bear eat quick").endObject() + .startArray("comments") + .startObject() + .startArray("messages") + .startObject().field("message", "fox eat quick").endObject() + .startObject().field("message", "bear eat quick").endObject() + .endArray() + .endObject() + .startObject() + .startArray("messages") + .startObject().field("message", "no fox").endObject() + .endArray() + .endObject() .endArray() - .endObject() .endObject())); indexRandom(true, requests); + Exception e = expectThrows(Exception.class, () -> client().prepareSearch("articles").setQuery(nestedQuery("comments.messages", + matchQuery("comments.messages.message", "fox"), ScoreMode.Avg).innerHit(new InnerHitBuilder())).get()); + assertEquals("Cannot execute inner hits. One or more parent object fields of nested field [comments.messages] are " + + "not nested. All parent fields need to be nested fields too", e.getCause().getCause().getMessage()); + + e = expectThrows(Exception.class, () -> client().prepareSearch("articles").setQuery(nestedQuery("comments.messages", + matchQuery("comments.messages.message", "fox"), ScoreMode.Avg).innerHit(new InnerHitBuilder() + .setFetchSourceContext(new FetchSourceContext(true)))).get()); + assertEquals("Cannot execute inner hits. One or more parent object fields of nested field [comments.messages] are " + + "not nested. 
All parent fields need to be nested fields too", e.getCause().getCause().getMessage()); + SearchResponse response = client().prepareSearch("articles") .setQuery(nestedQuery("comments.messages", matchQuery("comments.messages.message", "fox"), ScoreMode.Avg) - .innerHit(new InnerHitBuilder())).get(); + .innerHit(new InnerHitBuilder().setFetchSourceContext(new FetchSourceContext(false)))).get(); assertNoFailures(response); assertHitCount(response, 1); SearchHit hit = response.getHits().getAt(0); assertThat(hit.getId(), equalTo("1")); SearchHits messages = hit.getInnerHits().get("comments.messages"); - assertThat(messages.getTotalHits(), equalTo(1L)); + assertThat(messages.getTotalHits(), equalTo(2L)); assertThat(messages.getAt(0).getId(), equalTo("1")); assertThat(messages.getAt(0).getNestedIdentity().getField().string(), equalTo("comments.messages")); - assertThat(messages.getAt(0).getNestedIdentity().getOffset(), equalTo(0)); + assertThat(messages.getAt(0).getNestedIdentity().getOffset(), equalTo(2)); assertThat(messages.getAt(0).getNestedIdentity().getChild(), nullValue()); + assertThat(messages.getAt(1).getId(), equalTo("1")); + assertThat(messages.getAt(1).getNestedIdentity().getField().string(), equalTo("comments.messages")); + assertThat(messages.getAt(1).getNestedIdentity().getOffset(), equalTo(0)); + assertThat(messages.getAt(1).getNestedIdentity().getChild(), nullValue()); response = client().prepareSearch("articles") .setQuery(nestedQuery("comments.messages", matchQuery("comments.messages.message", "bear"), ScoreMode.Avg) - .innerHit(new InnerHitBuilder())).get(); + .innerHit(new InnerHitBuilder().setFetchSourceContext(new FetchSourceContext(false)))).get(); assertNoFailures(response); assertHitCount(response, 1); hit = response.getHits().getAt(0); @@ -446,7 +474,7 @@ public void testInnerHitsWithObjectFieldThatHasANestedField() throws Exception { indexRandom(true, requests); response = client().prepareSearch("articles") .setQuery(nestedQuery("comments.messages", matchQuery("comments.messages.message", "fox"), ScoreMode.Avg) - .innerHit(new InnerHitBuilder())).get(); + .innerHit(new InnerHitBuilder().setFetchSourceContext(new FetchSourceContext(false)))).get(); assertNoFailures(response); assertHitCount(response, 1); hit = response.getHits().getAt(0);; @@ -568,9 +596,9 @@ public void testNestedSource() throws Exception { client().prepareIndex("index1", "message", "1").setSource(jsonBuilder().startObject() .field("message", "quick brown fox") .startArray("comments") - .startObject().field("message", "fox eat quick").endObject() - .startObject().field("message", "fox ate rabbit x y z").endObject() - .startObject().field("message", "rabbit got away").endObject() + .startObject().field("message", "fox eat quick").field("x", "y").endObject() + .startObject().field("message", "fox ate rabbit x y z").field("x", "y").endObject() + .startObject().field("message", "rabbit got away").field("x", "y").endObject() .endArray() .endObject()).get(); refresh(); @@ -586,9 +614,11 @@ public void testNestedSource() throws Exception { assertHitCount(response, 1); assertThat(response.getHits().getAt(0).getInnerHits().get("comments").getTotalHits(), equalTo(2L)); - assertThat(extractValue("comments.message", response.getHits().getAt(0).getInnerHits().get("comments").getAt(0).getSourceAsMap()), + assertThat(response.getHits().getAt(0).getInnerHits().get("comments").getAt(0).getSourceAsMap().size(), equalTo(1)); + 
assertThat(response.getHits().getAt(0).getInnerHits().get("comments").getAt(0).getSourceAsMap().get("message"), equalTo("fox eat quick")); - assertThat(extractValue("comments.message", response.getHits().getAt(0).getInnerHits().get("comments").getAt(1).getSourceAsMap()), + assertThat(response.getHits().getAt(0).getInnerHits().get("comments").getAt(1).getSourceAsMap().size(), equalTo(1)); + assertThat(response.getHits().getAt(0).getInnerHits().get("comments").getAt(1).getSourceAsMap().get("message"), equalTo("fox ate rabbit x y z")); response = client().prepareSearch() @@ -599,15 +629,17 @@ public void testNestedSource() throws Exception { assertHitCount(response, 1); assertThat(response.getHits().getAt(0).getInnerHits().get("comments").getTotalHits(), equalTo(2L)); - assertThat(extractValue("comments.message", response.getHits().getAt(0).getInnerHits().get("comments").getAt(0).getSourceAsMap()), + assertThat(response.getHits().getAt(0).getInnerHits().get("comments").getAt(0).getSourceAsMap().size(), equalTo(2)); + assertThat(response.getHits().getAt(0).getInnerHits().get("comments").getAt(0).getSourceAsMap().get("message"), equalTo("fox eat quick")); - assertThat(extractValue("comments.message", response.getHits().getAt(0).getInnerHits().get("comments").getAt(1).getSourceAsMap()), + assertThat(response.getHits().getAt(0).getInnerHits().get("comments").getAt(0).getSourceAsMap().size(), equalTo(2)); + assertThat(response.getHits().getAt(0).getInnerHits().get("comments").getAt(1).getSourceAsMap().get("message"), equalTo("fox ate rabbit x y z")); } public void testInnerHitsWithIgnoreUnmapped() throws Exception { assertAcked(prepareCreate("index1") - .setSettings("index.version.created", Version.V_5_6_0.id) + .setSettings(Settings.builder().put("index.version.created", Version.V_5_6_0.id)) .addMapping("parent_type", "nested_type", "type=nested") .addMapping("child_type", "_parent", "type=parent_type") ); @@ -629,8 +661,11 @@ public void testInnerHitsWithIgnoreUnmapped() throws Exception { assertSearchHits(response, "1", "3"); } - public void testDontExplode() throws Exception { + public void testUseMaxDocInsteadOfSize() throws Exception { assertAcked(prepareCreate("index2").addMapping("type", "nested", "type=nested")); + client().admin().indices().prepareUpdateSettings("index2") + .setSettings(Collections.singletonMap(IndexSettings.MAX_INNER_RESULT_WINDOW_SETTING.getKey(), ArrayUtil.MAX_ARRAY_LENGTH)) + .get(); client().prepareIndex("index2", "type", "1").setSource(jsonBuilder().startObject() .startArray("nested") .startObject() @@ -650,4 +685,50 @@ public void testDontExplode() throws Exception { assertHitCount(response, 1); } + public void testTooHighResultWindow() throws Exception { + assertAcked(prepareCreate("index2").addMapping("type", "nested", "type=nested")); + client().prepareIndex("index2", "type", "1").setSource(jsonBuilder().startObject() + .startArray("nested") + .startObject() + .field("field", "value1") + .endObject() + .endArray() + .endObject()) + .setRefreshPolicy(IMMEDIATE) + .get(); + SearchResponse response = client().prepareSearch("index2") + .setQuery(nestedQuery("nested", matchQuery("nested.field", "value1"), ScoreMode.Avg) + .innerHit(new InnerHitBuilder().setFrom(50).setSize(10).setName("_name"))) + .get(); + assertNoFailures(response); + assertHitCount(response, 1); + + Exception e = expectThrows(SearchPhaseExecutionException.class, () -> client().prepareSearch("index2") + .setQuery(nestedQuery("nested", matchQuery("nested.field", "value1"), ScoreMode.Avg) + 
.innerHit(new InnerHitBuilder().setFrom(100).setSize(10).setName("_name"))) + .get()); + assertThat(e.getCause().getMessage(), + containsString("the inner hit definition's [_name]'s from + size must be less than or equal to: [100] but was [110]")); + e = expectThrows(SearchPhaseExecutionException.class, () -> client().prepareSearch("index2") + .setQuery(nestedQuery("nested", matchQuery("nested.field", "value1"), ScoreMode.Avg) + .innerHit(new InnerHitBuilder().setFrom(10).setSize(100).setName("_name"))) + .get()); + assertThat(e.getCause().getMessage(), + containsString("the inner hit definition's [_name]'s from + size must be less than or equal to: [100] but was [110]")); + + client().admin().indices().prepareUpdateSettings("index2") + .setSettings(Collections.singletonMap(IndexSettings.MAX_INNER_RESULT_WINDOW_SETTING.getKey(), 110)) + .get(); + response = client().prepareSearch("index2") + .setQuery(nestedQuery("nested", matchQuery("nested.field", "value1"), ScoreMode.Avg) + .innerHit(new InnerHitBuilder().setFrom(100).setSize(10).setName("_name"))) + .get(); + assertNoFailures(response); + response = client().prepareSearch("index2") + .setQuery(nestedQuery("nested", matchQuery("nested.field", "value1"), ScoreMode.Avg) + .innerHit(new InnerHitBuilder().setFrom(10).setSize(100).setName("_name"))) + .get(); + assertNoFailures(response); + } + } diff --git a/core/src/test/java/org/elasticsearch/search/fetch/subphase/MatchedQueriesIT.java b/core/src/test/java/org/elasticsearch/search/fetch/subphase/MatchedQueriesIT.java index 763518804e277..761f9798d7286 100644 --- a/core/src/test/java/org/elasticsearch/search/fetch/subphase/MatchedQueriesIT.java +++ b/core/src/test/java/org/elasticsearch/search/fetch/subphase/MatchedQueriesIT.java @@ -301,7 +301,6 @@ public void testMatchedWithShould() throws Exception { .should(queryStringQuery("dolor").queryName("dolor")) .should(queryStringQuery("elit").queryName("elit")) ) - .setPreference("_primary") .get(); assertHitCount(searchResponse, 2L); diff --git a/core/src/test/java/org/elasticsearch/search/fetch/subphase/highlight/HighlighterSearchIT.java b/core/src/test/java/org/elasticsearch/search/fetch/subphase/highlight/HighlighterSearchIT.java index eb6a5a6b3c857..faf1f65f34bda 100644 --- a/core/src/test/java/org/elasticsearch/search/fetch/subphase/highlight/HighlighterSearchIT.java +++ b/core/src/test/java/org/elasticsearch/search/fetch/subphase/highlight/HighlighterSearchIT.java @@ -19,8 +19,8 @@ package org.elasticsearch.search.fetch.subphase.highlight; import com.carrotsearch.randomizedtesting.generators.RandomPicks; + import org.apache.lucene.search.join.ScoreMode; -import org.elasticsearch.Version; import org.elasticsearch.action.index.IndexRequestBuilder; import org.elasticsearch.action.search.SearchRequestBuilder; import org.elasticsearch.action.search.SearchResponse; @@ -40,7 +40,6 @@ import org.elasticsearch.index.query.QueryBuilders; import org.elasticsearch.index.query.functionscore.FunctionScoreQueryBuilder; import org.elasticsearch.index.query.functionscore.RandomScoreFunctionBuilder; -import org.elasticsearch.index.search.MatchQuery; import org.elasticsearch.plugins.Plugin; import org.elasticsearch.rest.RestStatus; import org.elasticsearch.search.SearchHit; @@ -80,7 +79,6 @@ import static org.elasticsearch.index.query.QueryBuilders.rangeQuery; import static org.elasticsearch.index.query.QueryBuilders.regexpQuery; import static org.elasticsearch.index.query.QueryBuilders.termQuery; -import static 
org.elasticsearch.index.query.QueryBuilders.typeQuery; import static org.elasticsearch.index.query.QueryBuilders.wildcardQuery; import static org.elasticsearch.search.builder.SearchSourceBuilder.highlight; import static org.elasticsearch.search.builder.SearchSourceBuilder.searchSource; @@ -1360,9 +1358,9 @@ public void testPhrasePrefix() throws IOException { Builder builder = Settings.builder() .put(indexSettings()) .put("index.analysis.analyzer.synonym.tokenizer", "whitespace") - .putArray("index.analysis.analyzer.synonym.filter", "synonym", "lowercase") + .putList("index.analysis.analyzer.synonym.filter", "synonym", "lowercase") .put("index.analysis.filter.synonym.type", "synonym") - .putArray("index.analysis.filter.synonym.synonyms", "quick => fast"); + .putList("index.analysis.filter.synonym.synonyms", "quick => fast"); assertAcked(prepareCreate("first_test_index").setSettings(builder.build()).addMapping("type1", type1TermVectorMapping())); @@ -1475,7 +1473,7 @@ public void testPlainHighlightDifferentFragmenter() throws Exception { refresh(); SearchResponse response = client().prepareSearch("test") - .setQuery(QueryBuilders.matchQuery("tags", "long tag").type(MatchQuery.Type.PHRASE)) + .setQuery(QueryBuilders.matchPhraseQuery("tags", "long tag")) .highlighter( new HighlightBuilder().field(new HighlightBuilder.Field("tags") .highlighterType("plain").fragmentSize(-1).numOfFragments(2).fragmenter("simple"))) @@ -1486,7 +1484,7 @@ public void testPlainHighlightDifferentFragmenter() throws Exception { equalTo("here is another one that is very long tag and has the tag token near the end")); response = client().prepareSearch("test") - .setQuery(QueryBuilders.matchQuery("tags", "long tag").type(MatchQuery.Type.PHRASE)) + .setQuery(QueryBuilders.matchPhraseQuery("tags", "long tag")) .highlighter( new HighlightBuilder().field(new Field("tags").highlighterType("plain").fragmentSize(-1).numOfFragments(2) .fragmenter("span"))).get(); @@ -1497,7 +1495,7 @@ public void testPlainHighlightDifferentFragmenter() throws Exception { equalTo("here is another one that is very long tag and has the tag token near the end")); assertFailures(client().prepareSearch("test") - .setQuery(QueryBuilders.matchQuery("tags", "long tag").type(MatchQuery.Type.PHRASE)) + .setQuery(QueryBuilders.matchPhraseQuery("tags", "long tag")) .highlighter( new HighlightBuilder().field(new Field("tags").highlighterType("plain").fragmentSize(-1).numOfFragments(2) .fragmenter("invalid"))), @@ -1555,7 +1553,7 @@ public void testMissingStoredField() throws Exception { // This query used to fail when the field to highlight was absent SearchResponse response = client().prepareSearch("test") - .setQuery(QueryBuilders.matchQuery("field", "highlight").type(MatchQuery.Type.BOOLEAN)) + .setQuery(QueryBuilders.matchQuery("field", "highlight")) .highlighter( new HighlightBuilder().field(new HighlightBuilder.Field("highlight_field").fragmentSize(-1).numOfFragments(1) .fragmenter("simple"))).get(); @@ -1580,7 +1578,7 @@ public void testNumericHighlighting() throws Exception { refresh(); SearchResponse response = client().prepareSearch("test") - .setQuery(QueryBuilders.matchQuery("text", "test").type(MatchQuery.Type.BOOLEAN)) + .setQuery(QueryBuilders.matchQuery("text", "test")) .highlighter( new HighlightBuilder().field("text").field("byte").field("short").field("int").field("long").field("float") .field("double")) @@ -1605,7 +1603,7 @@ public void testResetTwice() throws Exception { refresh(); SearchResponse response = client().prepareSearch("test") 
- .setQuery(QueryBuilders.matchQuery("text", "test").type(MatchQuery.Type.BOOLEAN)) + .setQuery(QueryBuilders.matchQuery("text", "test")) .highlighter(new HighlightBuilder().field("text")).execute().actionGet(); // PatternAnalyzer will throw an exception if it is resetted twice assertHitCount(response, 1L); @@ -2775,9 +2773,9 @@ public void testSynonyms() throws IOException { Builder builder = Settings.builder() .put(indexSettings()) .put("index.analysis.analyzer.synonym.tokenizer", "whitespace") - .putArray("index.analysis.analyzer.synonym.filter", "synonym", "lowercase") + .putList("index.analysis.analyzer.synonym.filter", "synonym", "lowercase") .put("index.analysis.filter.synonym.type", "synonym") - .putArray("index.analysis.filter.synonym.synonyms", "fast,quick"); + .putList("index.analysis.filter.synonym.synonyms", "fast,quick"); assertAcked(prepareCreate("test").setSettings(builder.build()) .addMapping("type1", "field1", @@ -2815,7 +2813,7 @@ public void testSynonyms() throws IOException { public void testHighlightQueryRewriteDatesWithNow() throws Exception { assertAcked(client().admin().indices().prepareCreate("index-1").addMapping("type", "d", "type=date", "field", "type=text,store=true,term_vector=with_positions_offsets") - .setSettings("index.number_of_replicas", 0, "index.number_of_shards", 2) + .setSettings(Settings.builder().put("index.number_of_replicas", 0).put("index.number_of_shards", 2)) .get()); DateTime now = new DateTime(ISOChronology.getInstanceUTC()); indexRandom(true, client().prepareIndex("index-1", "type", "1").setSource("d", now, "field", "hello world"), @@ -2841,4 +2839,80 @@ public void testHighlightQueryRewriteDatesWithNow() throws Exception { equalTo("hello world")); } } + + public void testWithNestedQuery() throws Exception { + String mapping = jsonBuilder().startObject().startObject("type").startObject("properties") + .startObject("text") + .field("type", "text") + .field("index_options", "offsets") + .field("term_vector", "with_positions_offsets") + .endObject() + .startObject("foo") + .field("type", "nested") + .startObject("properties") + .startObject("text") + .field("type", "text") + .endObject() + .endObject() + .endObject() + .endObject().endObject().endObject().string(); + prepareCreate("test").addMapping("type", mapping, XContentType.JSON).get(); + + client().prepareIndex("test", "type", "1").setSource(jsonBuilder().startObject() + .startArray("foo") + .startObject().field("text", "brown").endObject() + .startObject().field("text", "cow").endObject() + .endArray() + .field("text", "brown") + .endObject()).setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE) + .get(); + + for (String type : new String[] {"unified", "plain"}) { + SearchResponse searchResponse = client().prepareSearch() + .setQuery(nestedQuery("foo", matchQuery("foo.text", "brown cow"), ScoreMode.None)) + .highlighter(new HighlightBuilder() + .field(new Field("foo.text").highlighterType(type))) + .get(); + assertHitCount(searchResponse, 1); + HighlightField field = searchResponse.getHits().getAt(0).getHighlightFields().get("foo.text"); + assertThat(field.getFragments().length, equalTo(2)); + assertThat(field.getFragments()[0].string(), equalTo("brown")); + assertThat(field.getFragments()[1].string(), equalTo("cow")); + + searchResponse = client().prepareSearch() + .setQuery(nestedQuery("foo", prefixQuery("foo.text", "bro"), ScoreMode.None)) + .highlighter(new HighlightBuilder() + .field(new Field("foo.text").highlighterType(type))) + .get(); + assertHitCount(searchResponse, 1); + 
field = searchResponse.getHits().getAt(0).getHighlightFields().get("foo.text"); + assertThat(field.getFragments().length, equalTo(1)); + assertThat(field.getFragments()[0].string(), equalTo("brown")); + + searchResponse = client().prepareSearch() + .setQuery(nestedQuery("foo", prefixQuery("foo.text", "bro"), ScoreMode.None)) + .highlighter(new HighlightBuilder() + .field(new Field("foo.text").highlighterType("plain"))) + .get(); + assertHitCount(searchResponse, 1); + field = searchResponse.getHits().getAt(0).getHighlightFields().get("foo.text"); + assertThat(field.getFragments().length, equalTo(1)); + assertThat(field.getFragments()[0].string(), equalTo("brown")); + } + + // For unified and fvh highlighters we just check that the nested query is correctly extracted + // but we highlight the root text field since nested documents cannot be highlighted with postings nor term vectors + // directly. + for (String type : ALL_TYPES) { + SearchResponse searchResponse = client().prepareSearch() + .setQuery(nestedQuery("foo", prefixQuery("foo.text", "bro"), ScoreMode.None)) + .highlighter(new HighlightBuilder() + .field(new Field("text").highlighterType(type).requireFieldMatch(false))) + .get(); + assertHitCount(searchResponse, 1); + HighlightField field = searchResponse.getHits().getAt(0).getHighlightFields().get("text"); + assertThat(field.getFragments().length, equalTo(1)); + assertThat(field.getFragments()[0].string(), equalTo("brown")); + } + } } diff --git a/core/src/test/java/org/elasticsearch/search/fields/SearchFieldsIT.java b/core/src/test/java/org/elasticsearch/search/fields/SearchFieldsIT.java index 8c25c64874f5c..71db66c7fb208 100644 --- a/core/src/test/java/org/elasticsearch/search/fields/SearchFieldsIT.java +++ b/core/src/test/java/org/elasticsearch/search/fields/SearchFieldsIT.java @@ -28,6 +28,7 @@ import org.elasticsearch.common.collect.MapBuilder; import org.elasticsearch.common.document.DocumentField; import org.elasticsearch.common.joda.Joda; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.common.xcontent.support.XContentMapValues; @@ -640,7 +641,7 @@ public void testSearchFieldsNonLeafField() throws Exception { public void testGetFieldsComplexField() throws Exception { client().admin().indices().prepareCreate("my-index") - .setSettings("index.refresh_interval", -1) + .setSettings(Settings.builder().put("index.refresh_interval", -1)) .addMapping("doc", jsonBuilder() .startObject() .startObject("doc") diff --git a/core/src/test/java/org/elasticsearch/search/functionscore/QueryRescorerIT.java b/core/src/test/java/org/elasticsearch/search/functionscore/QueryRescorerIT.java index 2b188adeb7087..58565b5f264b7 100644 --- a/core/src/test/java/org/elasticsearch/search/functionscore/QueryRescorerIT.java +++ b/core/src/test/java/org/elasticsearch/search/functionscore/QueryRescorerIT.java @@ -38,7 +38,7 @@ import org.elasticsearch.search.SearchHits; import org.elasticsearch.search.rescore.QueryRescoreMode; import org.elasticsearch.search.rescore.QueryRescorerBuilder; -import org.elasticsearch.search.rescore.RescoreBuilder; +import org.elasticsearch.search.sort.SortBuilders; import org.elasticsearch.test.ESIntegTestCase; import java.util.Arrays; @@ -51,10 +51,12 @@ import static org.elasticsearch.index.query.QueryBuilders.boolQuery; import static org.elasticsearch.index.query.QueryBuilders.constantScoreQuery; import static 
org.elasticsearch.index.query.QueryBuilders.functionScoreQuery; +import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery; import static org.elasticsearch.index.query.QueryBuilders.matchPhraseQuery; +import static org.elasticsearch.index.query.QueryBuilders.matchQuery; +import static org.elasticsearch.index.query.QueryBuilders.queryStringQuery; import static org.elasticsearch.index.query.QueryBuilders.termQuery; import static org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders.weightFactorFunction; -import static org.elasticsearch.search.rescore.RescoreBuilder.queryRescorer; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertFirstHit; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertFourthHit; @@ -65,6 +67,7 @@ import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertThirdHit; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.hasId; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.hasScore; +import static org.hamcrest.Matchers.containsString; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.greaterThan; import static org.hamcrest.Matchers.lessThanOrEqualTo; @@ -84,8 +87,8 @@ public void testEnforceWindowSize() { for (int j = 0 ; j < iters; j++) { SearchResponse searchResponse = client().prepareSearch() .setQuery(QueryBuilders.matchAllQuery()) - .setRescorer(queryRescorer( - QueryBuilders.functionScoreQuery(QueryBuilders.matchAllQuery(), + .setRescorer(new QueryRescorerBuilder( + functionScoreQuery(matchAllQuery(), ScoreFunctionBuilders.weightFactorFunction(100)).boostMode(CombineFunction.REPLACE)) .setQueryWeight(0.0f).setRescoreQueryWeight(1.0f), 1).setSize(randomIntBetween(2, 10)).execute() .actionGet(); @@ -120,7 +123,7 @@ public void testRescorePhrase() throws Exception { SearchResponse searchResponse = client().prepareSearch() .setQuery(QueryBuilders.matchQuery("field1", "the quick brown").operator(Operator.OR)) .setRescorer( - queryRescorer(QueryBuilders.matchPhraseQuery("field1", "quick brown").slop(2).boost(4.0f)) + new QueryRescorerBuilder(matchPhraseQuery("field1", "quick brown").slop(2).boost(4.0f)) .setRescoreQueryWeight(2), 5).execute().actionGet(); assertThat(searchResponse.getHits().getTotalHits(), equalTo(3L)); @@ -131,7 +134,7 @@ public void testRescorePhrase() throws Exception { searchResponse = client().prepareSearch() .setQuery(QueryBuilders.matchQuery("field1", "the quick brown").operator(Operator.OR)) - .setRescorer(queryRescorer(QueryBuilders.matchPhraseQuery("field1", "the quick brown").slop(3)), 5) + .setRescorer(new QueryRescorerBuilder(matchPhraseQuery("field1", "the quick brown").slop(3)), 5) .execute().actionGet(); assertHitCount(searchResponse, 3); @@ -141,7 +144,7 @@ public void testRescorePhrase() throws Exception { searchResponse = client().prepareSearch() .setQuery(QueryBuilders.matchQuery("field1", "the quick brown").operator(Operator.OR)) - .setRescorer(queryRescorer((QueryBuilders.matchPhraseQuery("field1", "the quick brown"))), 5).execute() + .setRescorer(new QueryRescorerBuilder(matchPhraseQuery("field1", "the quick brown")), 5).execute() .actionGet(); assertHitCount(searchResponse, 3); @@ -154,9 +157,9 @@ public void testRescorePhrase() throws Exception { public void testMoreDocs() throws Exception { Builder builder = Settings.builder(); builder.put("index.analysis.analyzer.synonym.tokenizer", 
"whitespace"); - builder.putArray("index.analysis.analyzer.synonym.filter", "synonym", "lowercase"); + builder.putList("index.analysis.analyzer.synonym.filter", "synonym", "lowercase"); builder.put("index.analysis.filter.synonym.type", "synonym"); - builder.putArray("index.analysis.filter.synonym.synonyms", "ave => ave, avenue", "street => str, street"); + builder.putList("index.analysis.filter.synonym.synonyms", "ave => ave, avenue", "street => str, street"); XContentBuilder mapping = XContentFactory.jsonBuilder().startObject().startObject("type1").startObject("properties") .startObject("field1").field("type", "text").field("analyzer", "whitespace").field("search_analyzer", "synonym") @@ -187,7 +190,7 @@ public void testMoreDocs() throws Exception { .setQuery(QueryBuilders.matchQuery("field1", "lexington avenue massachusetts").operator(Operator.OR)) .setFrom(0) .setSize(5) - .setRescorer(queryRescorer(QueryBuilders.matchPhraseQuery("field1", "lexington avenue massachusetts").slop(3)) + .setRescorer(new QueryRescorerBuilder(matchPhraseQuery("field1", "lexington avenue massachusetts").slop(3)) .setQueryWeight(0.6f).setRescoreQueryWeight(2.0f), 20).execute().actionGet(); assertThat(searchResponse.getHits().getHits().length, equalTo(5)); @@ -202,7 +205,7 @@ public void testMoreDocs() throws Exception { .setFrom(0) .setSize(5) .setSearchType(SearchType.DFS_QUERY_THEN_FETCH) - .setRescorer(queryRescorer(QueryBuilders.matchPhraseQuery("field1", "lexington avenue massachusetts").slop(3)) + .setRescorer(new QueryRescorerBuilder(matchPhraseQuery("field1", "lexington avenue massachusetts").slop(3)) .setQueryWeight(0.6f).setRescoreQueryWeight(2.0f), 20).execute().actionGet(); assertThat(searchResponse.getHits().getHits().length, equalTo(5)); @@ -219,7 +222,7 @@ public void testMoreDocs() throws Exception { .setFrom(2) .setSize(5) .setSearchType(SearchType.DFS_QUERY_THEN_FETCH) - .setRescorer(queryRescorer(QueryBuilders.matchPhraseQuery("field1", "lexington avenue massachusetts").slop(3)) + .setRescorer(new QueryRescorerBuilder(matchPhraseQuery("field1", "lexington avenue massachusetts").slop(3)) .setQueryWeight(0.6f).setRescoreQueryWeight(2.0f), 20).execute().actionGet(); assertThat(searchResponse.getHits().getHits().length, equalTo(5)); @@ -232,9 +235,9 @@ public void testMoreDocs() throws Exception { public void testSmallRescoreWindow() throws Exception { Builder builder = Settings.builder(); builder.put("index.analysis.analyzer.synonym.tokenizer", "whitespace"); - builder.putArray("index.analysis.analyzer.synonym.filter", "synonym", "lowercase"); + builder.putList("index.analysis.analyzer.synonym.filter", "synonym", "lowercase"); builder.put("index.analysis.filter.synonym.type", "synonym"); - builder.putArray("index.analysis.filter.synonym.synonyms", "ave => ave, avenue", "street => str, street"); + builder.putList("index.analysis.filter.synonym.synonyms", "ave => ave, avenue", "street => str, street"); XContentBuilder mapping = XContentFactory.jsonBuilder().startObject().startObject("type1").startObject("properties") .startObject("field1").field("type", "text").field("analyzer", "whitespace").field("search_analyzer", "synonym") @@ -270,7 +273,7 @@ public void testSmallRescoreWindow() throws Exception { .setQuery(QueryBuilders.matchQuery("field1", "massachusetts")) .setFrom(0) .setSize(5) - .setRescorer(queryRescorer(QueryBuilders.matchPhraseQuery("field1", "lexington avenue massachusetts").slop(3)) + .setRescorer(new QueryRescorerBuilder(matchPhraseQuery("field1", "lexington avenue 
massachusetts").slop(3)) .setQueryWeight(0.6f).setRescoreQueryWeight(2.0f), 2).execute().actionGet(); // Only top 2 hits were re-ordered: assertThat(searchResponse.getHits().getHits().length, equalTo(4)); @@ -287,7 +290,7 @@ public void testSmallRescoreWindow() throws Exception { .setQuery(QueryBuilders.matchQuery("field1", "massachusetts")) .setFrom(0) .setSize(5) - .setRescorer(queryRescorer(QueryBuilders.matchPhraseQuery("field1", "lexington avenue massachusetts").slop(3)) + .setRescorer(new QueryRescorerBuilder(matchPhraseQuery("field1", "lexington avenue massachusetts").slop(3)) .setQueryWeight(0.6f).setRescoreQueryWeight(2.0f), 3).execute().actionGet(); // Only top 3 hits were re-ordered: @@ -304,9 +307,9 @@ public void testSmallRescoreWindow() throws Exception { public void testRescorerMadeScoresWorse() throws Exception { Builder builder = Settings.builder(); builder.put("index.analysis.analyzer.synonym.tokenizer", "whitespace"); - builder.putArray("index.analysis.analyzer.synonym.filter", "synonym", "lowercase"); + builder.putList("index.analysis.analyzer.synonym.filter", "synonym", "lowercase"); builder.put("index.analysis.filter.synonym.type", "synonym"); - builder.putArray("index.analysis.filter.synonym.synonyms", "ave => ave, avenue", "street => str, street"); + builder.putList("index.analysis.filter.synonym.synonyms", "ave => ave, avenue", "street => str, street"); XContentBuilder mapping = XContentFactory.jsonBuilder().startObject().startObject("type1").startObject("properties") .startObject("field1").field("type", "text").field("analyzer", "whitespace").field("search_analyzer", "synonym") @@ -342,7 +345,7 @@ public void testRescorerMadeScoresWorse() throws Exception { .setQuery(QueryBuilders.matchQuery("field1", "massachusetts").operator(Operator.OR)) .setFrom(0) .setSize(5) - .setRescorer(queryRescorer(QueryBuilders.matchPhraseQuery("field1", "lexington avenue massachusetts").slop(3)) + .setRescorer(new QueryRescorerBuilder(matchPhraseQuery("field1", "lexington avenue massachusetts").slop(3)) .setQueryWeight(1.0f).setRescoreQueryWeight(-1f), 3).execute().actionGet(); // 6 and 1 got worse, and then the hit (2) outside the rescore window were sorted ahead: @@ -410,7 +413,7 @@ public void testEquivalence() throws Exception { .setQuery(QueryBuilders.matchQuery("field1", query).operator(Operator.OR)) .setFrom(0) .setSize(resultSize) - .setRescorer(queryRescorer(constantScoreQuery(QueryBuilders.matchPhraseQuery("field1", intToEnglish).slop(3))) + .setRescorer(new QueryRescorerBuilder(constantScoreQuery(matchPhraseQuery("field1", intToEnglish).slop(3))) .setQueryWeight(1.0f) // no weight - so we basically use the same score as the actual query .setRescoreQueryWeight(0.0f), rescoreWindow) @@ -432,7 +435,7 @@ public void testEquivalence() throws Exception { .setQuery(QueryBuilders.matchQuery("field1", query).operator(Operator.OR)) .setFrom(0) .setSize(resultSize) - .setRescorer(queryRescorer(constantScoreQuery(matchPhraseQuery("field1", "not in the index").slop(3))) + .setRescorer(new QueryRescorerBuilder(constantScoreQuery(matchPhraseQuery("field1", "not in the index").slop(3))) .setQueryWeight(1.0f) .setRescoreQueryWeight(1.0f), rescoreWindow).execute() .actionGet(); @@ -462,7 +465,7 @@ public void testExplain() throws Exception { .prepareSearch() .setSearchType(SearchType.DFS_QUERY_THEN_FETCH) .setQuery(QueryBuilders.matchQuery("field1", "the quick brown").operator(Operator.OR)) - .setRescorer(queryRescorer(matchPhraseQuery("field1", "the quick brown").slop(2).boost(4.0f)) + 
.setRescorer(new QueryRescorerBuilder(matchPhraseQuery("field1", "the quick brown").slop(2).boost(4.0f)) .setQueryWeight(0.5f).setRescoreQueryWeight(0.4f), 5).setExplain(true).execute() .actionGet(); assertHitCount(searchResponse, 3); @@ -490,7 +493,7 @@ public void testExplain() throws Exception { String[] scoreModes = new String[]{ "max", "min", "avg", "total", "multiply", "" }; String[] descriptionModes = new String[]{ "max of:", "min of:", "avg of:", "sum of:", "product of:", "sum of:" }; for (int innerMode = 0; innerMode < scoreModes.length; innerMode++) { - QueryRescorerBuilder innerRescoreQuery = queryRescorer(QueryBuilders.matchQuery("field1", "the quick brown").boost(4.0f)) + QueryRescorerBuilder innerRescoreQuery = new QueryRescorerBuilder(matchQuery("field1", "the quick brown").boost(4.0f)) .setQueryWeight(0.5f).setRescoreQueryWeight(0.4f); if (!"".equals(scoreModes[innerMode])) { @@ -513,7 +516,7 @@ public void testExplain() throws Exception { } for (int outerMode = 0; outerMode < scoreModes.length; outerMode++) { - QueryRescorerBuilder outerRescoreQuery = queryRescorer(QueryBuilders.matchQuery("field1", "the quick brown").boost(4.0f)) + QueryRescorerBuilder outerRescoreQuery = new QueryRescorerBuilder(matchQuery("field1", "the quick brown").boost(4.0f)) .setQueryWeight(0.5f).setRescoreQueryWeight(0.4f); if (!"".equals(scoreModes[outerMode])) { @@ -557,7 +560,7 @@ public void testScoring() throws Exception { .should(functionScoreQuery(termQuery("field1", intToEnglish[1]), weightFactorFunction(3.0f)).boostMode(REPLACE)) .should(functionScoreQuery(termQuery("field1", intToEnglish[2]), weightFactorFunction(5.0f)).boostMode(REPLACE)) .should(functionScoreQuery(termQuery("field1", intToEnglish[3]), weightFactorFunction(0.2f)).boostMode(REPLACE)); - QueryRescorerBuilder rescoreQuery = queryRescorer(boolQuery() + QueryRescorerBuilder rescoreQuery = new QueryRescorerBuilder(boolQuery() .should(functionScoreQuery(termQuery("field1", intToEnglish[0]), weightFactorFunction(5.0f)).boostMode(REPLACE)) .should(functionScoreQuery(termQuery("field1", intToEnglish[1]), weightFactorFunction(7.0f)).boostMode(REPLACE)) .should(functionScoreQuery(termQuery("field1", intToEnglish[3]), weightFactorFunction(0.0f)).boostMode(REPLACE))); @@ -621,12 +624,12 @@ public void testScoring() throws Exception { public void testMultipleRescores() throws Exception { int numDocs = indexRandomNumbers("keyword", 1, true); - QueryRescorerBuilder eightIsGreat = RescoreBuilder - .queryRescorer(QueryBuilders.functionScoreQuery(QueryBuilders.termQuery("field1", English.intToEnglish(8)), + QueryRescorerBuilder eightIsGreat = new QueryRescorerBuilder(functionScoreQuery( + termQuery("field1", English.intToEnglish(8)), ScoreFunctionBuilders.weightFactorFunction(1000.0f)).boostMode(CombineFunction.REPLACE)) .setScoreMode(QueryRescoreMode.Total); - QueryRescorerBuilder sevenIsBetter = RescoreBuilder - .queryRescorer(QueryBuilders.functionScoreQuery(QueryBuilders.termQuery("field1", English.intToEnglish(7)), + QueryRescorerBuilder sevenIsBetter = new QueryRescorerBuilder(functionScoreQuery( + termQuery("field1", English.intToEnglish(7)), ScoreFunctionBuilders.weightFactorFunction(10000.0f)).boostMode(CombineFunction.REPLACE)) .setScoreMode(QueryRescoreMode.Total); @@ -643,11 +646,11 @@ public void testMultipleRescores() throws Exception { // We have no idea what the second hit will be because we didn't get a chance to look for seven // Now use one rescore to drag the number we're looking for into the window of another - 
QueryRescorerBuilder ninetyIsGood = RescoreBuilder.queryRescorer(QueryBuilders - .functionScoreQuery(QueryBuilders.queryStringQuery("*ninety*"), ScoreFunctionBuilders.weightFactorFunction(1000.0f)) + QueryRescorerBuilder ninetyIsGood = new QueryRescorerBuilder(functionScoreQuery( + queryStringQuery("*ninety*"), ScoreFunctionBuilders.weightFactorFunction(1000.0f)) .boostMode(CombineFunction.REPLACE)).setScoreMode(QueryRescoreMode.Total); - QueryRescorerBuilder oneToo = RescoreBuilder.queryRescorer(QueryBuilders - .functionScoreQuery(QueryBuilders.queryStringQuery("*one*"), ScoreFunctionBuilders.weightFactorFunction(1000.0f)) + QueryRescorerBuilder oneToo = new QueryRescorerBuilder(functionScoreQuery( + queryStringQuery("*one*"), ScoreFunctionBuilders.weightFactorFunction(1000.0f)) .boostMode(CombineFunction.REPLACE)).setScoreMode(QueryRescoreMode.Total); request.clearRescorers().addRescorer(ninetyIsGood, numDocs).addRescorer(oneToo, 10); response = request.setSize(2).get(); @@ -700,8 +703,49 @@ public void testFromSize() throws Exception { request.setQuery(QueryBuilders.termQuery("text", "hello")); request.setFrom(1); request.setSize(4); - request.addRescorer(RescoreBuilder.queryRescorer(QueryBuilders.matchAllQuery()), 50); + request.addRescorer(new QueryRescorerBuilder(matchAllQuery()), 50); assertEquals(4, request.get().getHits().getHits().length); } + + public void testRescorePhaseWithInvalidSort() throws Exception { + assertAcked(prepareCreate("test")); + for(int i=0;i<5;i++) { + client().prepareIndex("test", "type", ""+i).setSource("number", 0).get(); + } + refresh(); + + Exception exc = expectThrows(Exception.class, + () -> client().prepareSearch() + .addSort(SortBuilders.fieldSort("number")) + .setTrackScores(true) + .addRescorer(new QueryRescorerBuilder(matchAllQuery()), 50) + .get() + ); + assertNotNull(exc.getCause()); + assertThat(exc.getCause().getMessage(), + containsString("Cannot use [sort] option in conjunction with [rescore].")); + + exc = expectThrows(Exception.class, + () -> client().prepareSearch() + .addSort(SortBuilders.fieldSort("number")) + .addSort(SortBuilders.scoreSort()) + .setTrackScores(true) + .addRescorer(new QueryRescorerBuilder(matchAllQuery()), 50) + .get() + ); + assertNotNull(exc.getCause()); + assertThat(exc.getCause().getMessage(), + containsString("Cannot use [sort] option in conjunction with [rescore].")); + + SearchResponse resp = client().prepareSearch().addSort(SortBuilders.scoreSort()) + .setTrackScores(true) + .addRescorer(new QueryRescorerBuilder(matchAllQuery()).setRescoreQueryWeight(100.0f), 50) + .get(); + assertThat(resp.getHits().totalHits, equalTo(5L)); + assertThat(resp.getHits().getHits().length, equalTo(5)); + for (SearchHit hit : resp.getHits().getHits()) { + assertThat(hit.getScore(), equalTo(101f)); + } + } } diff --git a/core/src/test/java/org/elasticsearch/search/functionscore/RandomScoreFunctionIT.java b/core/src/test/java/org/elasticsearch/search/functionscore/RandomScoreFunctionIT.java index 31366c2534cb2..257089c90545f 100644 --- a/core/src/test/java/org/elasticsearch/search/functionscore/RandomScoreFunctionIT.java +++ b/core/src/test/java/org/elasticsearch/search/functionscore/RandomScoreFunctionIT.java @@ -107,7 +107,7 @@ public void testConsistentHitsWithSameSeed() throws Exception { for (int o = 0; o < outerIters; o++) { final int seed = randomInt(); String preference = randomRealisticUnicodeOfLengthBetween(1, 10); // at least one char!! 
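The rescorer changes in the hunks above all follow one pattern: the static `RescoreBuilder.queryRescorer(...)` factory is replaced by constructing `QueryRescorerBuilder` directly from a query. A minimal sketch of the updated style, assuming the `QueryRescorerIT` test context, its existing `test` index/`field1` field, and the static imports the diff adds (this sketch is illustrative only, not part of the patch):

    // Build a rescorer directly instead of via RescoreBuilder.queryRescorer(...)
    QueryRescorerBuilder rescorer = new QueryRescorerBuilder(matchPhraseQuery("field1", "quick brown").slop(2))
            .setQueryWeight(0.6f)            // weight of the original query score
            .setRescoreQueryWeight(2.0f)     // weight of the rescore query score
            .setScoreMode(QueryRescoreMode.Total);

    SearchResponse response = client().prepareSearch("test")
            .setQuery(matchQuery("field1", "the quick brown").operator(Operator.OR))
            .setRescorer(rescorer, 20)       // 20 = rescore window size
            .get();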
- // randomPreference should not start with '_' (reserved for known preference types (e.g. _shards, _primary) + // randomPreference should not start with '_' (reserved for known preference types (e.g. _shards) while (preference.startsWith("_")) { preference = randomRealisticUnicodeOfLengthBetween(1, 10); } diff --git a/core/src/test/java/org/elasticsearch/search/internal/ShardSearchTransportRequestTests.java b/core/src/test/java/org/elasticsearch/search/internal/ShardSearchTransportRequestTests.java index a16b91872d086..f68d3a3583503 100644 --- a/core/src/test/java/org/elasticsearch/search/internal/ShardSearchTransportRequestTests.java +++ b/core/src/test/java/org/elasticsearch/search/internal/ShardSearchTransportRequestTests.java @@ -61,8 +61,7 @@ public void testSerialization() throws Exception { try (BytesStreamOutput output = new BytesStreamOutput()) { shardSearchTransportRequest.writeTo(output); try (StreamInput in = new NamedWriteableAwareStreamInput(output.bytes().streamInput(), namedWriteableRegistry)) { - ShardSearchTransportRequest deserializedRequest = new ShardSearchTransportRequest(); - deserializedRequest.readFrom(in); + ShardSearchTransportRequest deserializedRequest = new ShardSearchTransportRequest(in); assertEquals(deserializedRequest.scroll(), shardSearchTransportRequest.scroll()); assertEquals(deserializedRequest.getAliasFilter(), shardSearchTransportRequest.getAliasFilter()); assertArrayEquals(deserializedRequest.indices(), shardSearchTransportRequest.indices()); diff --git a/core/src/test/java/org/elasticsearch/search/morelikethis/MoreLikeThisIT.java b/core/src/test/java/org/elasticsearch/search/morelikethis/MoreLikeThisIT.java index d504df60b630b..49676486588d9 100644 --- a/core/src/test/java/org/elasticsearch/search/morelikethis/MoreLikeThisIT.java +++ b/core/src/test/java/org/elasticsearch/search/morelikethis/MoreLikeThisIT.java @@ -370,7 +370,7 @@ public void testSimpleMoreLikeThisIdsMultipleTypes() throws Exception { logger.info("Creating index test"); int numOfTypes = randomIntBetween(2, 10); CreateIndexRequestBuilder createRequestBuilder = prepareCreate("test") - .setSettings("index.version.created", Version.V_5_6_0.id); + .setSettings(Settings.builder().put("index.version.created", Version.V_5_6_0.id)); for (int i = 0; i < numOfTypes; i++) { createRequestBuilder.addMapping("type" + i, jsonBuilder().startObject().startObject("type" + i).startObject("properties") .startObject("text").field("type", "text").endObject() @@ -403,7 +403,7 @@ public void testMoreLikeThisMultiValueFields() throws Exception { logger.info("Creating the index ..."); assertAcked(prepareCreate("test") .addMapping("type1", "text", "type=text,analyzer=keyword") - .setSettings(SETTING_NUMBER_OF_SHARDS, 1)); + .setSettings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 1))); ensureGreen(); logger.info("Indexing ..."); @@ -435,7 +435,7 @@ public void testMinimumShouldMatch() throws ExecutionException, InterruptedExcep logger.info("Creating the index ..."); assertAcked(prepareCreate("test") .addMapping("type1", "text", "type=text,analyzer=whitespace") - .setSettings(SETTING_NUMBER_OF_SHARDS, 1)); + .setSettings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 1))); ensureGreen(); logger.info("Indexing with each doc having one less term ..."); diff --git a/core/src/test/java/org/elasticsearch/search/nested/SimpleNestedIT.java b/core/src/test/java/org/elasticsearch/search/nested/SimpleNestedIT.java index 3e4792690ad5e..68ef78f4273a8 100644 --- 
a/core/src/test/java/org/elasticsearch/search/nested/SimpleNestedIT.java +++ b/core/src/test/java/org/elasticsearch/search/nested/SimpleNestedIT.java @@ -33,7 +33,9 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.query.QueryBuilders; +import org.elasticsearch.search.sort.NestedSortBuilder; import org.elasticsearch.search.sort.SortBuilders; import org.elasticsearch.search.sort.SortMode; import org.elasticsearch.search.sort.SortOrder; @@ -81,12 +83,10 @@ public void testSimpleNested() throws Exception { .endObject()).execute().actionGet(); waitForRelocation(ClusterHealthStatus.GREEN); - // flush, so we fetch it from the index (as see that we filter nested docs) - flush(); GetResponse getResponse = client().prepareGet("test", "type1", "1").get(); assertThat(getResponse.isExists(), equalTo(true)); assertThat(getResponse.getSourceAsBytes(), notNullValue()); - + refresh(); // check the numDocs assertDocumentCount("test", 3); @@ -124,8 +124,7 @@ public void testSimpleNested() throws Exception { .endArray() .endObject()).execute().actionGet(); waitForRelocation(ClusterHealthStatus.GREEN); - // flush, so we fetch it from the index (as see that we filter nested docs) - flush(); + refresh(); assertDocumentCount("test", 6); searchResponse = client().prepareSearch("test").setQuery(nestedQuery("nested1", @@ -149,8 +148,7 @@ public void testSimpleNested() throws Exception { DeleteResponse deleteResponse = client().prepareDelete("test", "type1", "2").execute().actionGet(); assertEquals(DocWriteResponse.Result.DELETED, deleteResponse.getResult()); - // flush, so we fetch it from the index (as see that we filter nested docs) - flush(); + refresh(); assertDocumentCount("test", 3); searchResponse = client().prepareSearch("test").setQuery(nestedQuery("nested1", termQuery("nested1.n_field1", "n_value1_1"), ScoreMode.Avg)).execute().actionGet(); @@ -177,11 +175,10 @@ public void testMultiNested() throws Exception { .endArray() .endObject()).execute().actionGet(); - // flush, so we fetch it from the index (as see that we filter nested docs) - flush(); GetResponse getResponse = client().prepareGet("test", "type1", "1").execute().actionGet(); assertThat(getResponse.isExists(), equalTo(true)); waitForRelocation(ClusterHealthStatus.GREEN); + refresh(); // check the numDocs assertDocumentCount("test", 7); @@ -498,6 +495,220 @@ public void testSimpleNestedSortingWithNestedFilterMissing() throws Exception { client().prepareClearScroll().addScrollId("_all").get(); } + public void testNestedSortWithMultiLevelFiltering() throws Exception { + assertAcked(prepareCreate("test") + .addMapping("type1", "{\n" + + " \"type1\": {\n" + + " \"properties\": {\n" + + " \"acl\": {\n" + + " \"type\": \"nested\",\n" + + " \"properties\": {\n" + + " \"access_id\": {\"type\": \"keyword\"},\n" + + " \"operation\": {\n" + + " \"type\": \"nested\",\n" + + " \"properties\": {\n" + + " \"name\": {\"type\": \"keyword\"},\n" + + " \"user\": {\n" + + " \"type\": \"nested\",\n" + + " \"properties\": {\n" + + " \"username\": {\"type\": \"keyword\"},\n" + + " \"id\": {\"type\": \"integer\"}\n" + + " }\n" + + " }\n" + + " }\n" + + " }\n" + + " }\n" + + " }\n" + + " }\n" + + " }\n" + + "}", XContentType.JSON)); + ensureGreen(); + + client().prepareIndex("test", "type1", "1").setSource("{\n" + + " \"acl\": [\n" + + " {\n" + + " \"access_id\": 1,\n" 
+ + " \"operation\": [\n" + + " {\n" + + " \"name\": \"read\",\n" + + " \"user\": [\n" + + " {\"username\": \"matt\", \"id\": 1},\n" + + " {\"username\": \"shay\", \"id\": 2},\n" + + " {\"username\": \"adrien\", \"id\": 3}\n" + + " ]\n" + + " },\n" + + " {\n" + + " \"name\": \"write\",\n" + + " \"user\": [\n" + + " {\"username\": \"shay\", \"id\": 2},\n" + + " {\"username\": \"adrien\", \"id\": 3}\n" + + " ]\n" + + " }\n" + + " ]\n" + + " },\n" + + " {\n" + + " \"access_id\": 2,\n" + + " \"operation\": [\n" + + " {\n" + + " \"name\": \"read\",\n" + + " \"user\": [\n" + + " {\"username\": \"jim\", \"id\": 4},\n" + + " {\"username\": \"shay\", \"id\": 2}\n" + + " ]\n" + + " },\n" + + " {\n" + + " \"name\": \"write\",\n" + + " \"user\": [\n" + + " {\"username\": \"shay\", \"id\": 2}\n" + + " ]\n" + + " },\n" + + " {\n" + + " \"name\": \"execute\",\n" + + " \"user\": [\n" + + " {\"username\": \"shay\", \"id\": 2}\n" + + " ]\n" + + " }\n" + + " ]\n" + + " }\n" + + " ]\n" + + "}", XContentType.JSON).execute().actionGet(); + + client().prepareIndex("test", "type1", "2").setSource("{\n" + + " \"acl\": [\n" + + " {\n" + + " \"access_id\": 1,\n" + + " \"operation\": [\n" + + " {\n" + + " \"name\": \"read\",\n" + + " \"user\": [\n" + + " {\"username\": \"matt\", \"id\": 1},\n" + + " {\"username\": \"luca\", \"id\": 5}\n" + + " ]\n" + + " },\n" + + " {\n" + + " \"name\": \"execute\",\n" + + " \"user\": [\n" + + " {\"username\": \"luca\", \"id\": 5}\n" + + " ]\n" + + " }\n" + + " ]\n" + + " },\n" + + " {\n" + + " \"access_id\": 3,\n" + + " \"operation\": [\n" + + " {\n" + + " \"name\": \"read\",\n" + + " \"user\": [\n" + + " {\"username\": \"matt\", \"id\": 1}\n" + + " ]\n" + + " },\n" + + " {\n" + + " \"name\": \"write\",\n" + + " \"user\": [\n" + + " {\"username\": \"matt\", \"id\": 1}\n" + + " ]\n" + + " },\n" + + " {\n" + + " \"name\": \"execute\",\n" + + " \"user\": [\n" + + " {\"username\": \"matt\", \"id\": 1}\n" + + " ]\n" + + " }\n" + + " ]\n" + + " }\n" + + " ]\n" + + "}", XContentType.JSON).execute().actionGet(); + refresh(); + + // access id = 1, read, max value, asc, should use matt and shay + SearchResponse searchResponse = client().prepareSearch() + .setQuery(matchAllQuery()) + .addSort( + SortBuilders.fieldSort("acl.operation.user.username") + .setNestedSort(new NestedSortBuilder("acl") + .setFilter(QueryBuilders.termQuery("acl.access_id", "1")) + .setNestedSort(new NestedSortBuilder("acl.operation") + .setFilter(QueryBuilders.termQuery("acl.operation.name", "read")) + .setNestedSort(new NestedSortBuilder("acl.operation.user")))) + .sortMode(SortMode.MAX) + .order(SortOrder.ASC) + ) + .execute().actionGet(); + + assertHitCount(searchResponse, 2); + assertThat(searchResponse.getHits().getHits().length, equalTo(2)); + assertThat(searchResponse.getHits().getHits()[0].getId(), equalTo("2")); + assertThat(searchResponse.getHits().getHits()[0].getSortValues()[0].toString(), equalTo("matt")); + assertThat(searchResponse.getHits().getHits()[1].getId(), equalTo("1")); + assertThat(searchResponse.getHits().getHits()[1].getSortValues()[0].toString(), equalTo("shay")); + + + // access id = 1, read, min value, asc, should now use adrien and luca + searchResponse = client().prepareSearch() + .setQuery(matchAllQuery()) + .addSort( + SortBuilders.fieldSort("acl.operation.user.username") + .setNestedSort(new NestedSortBuilder("acl") + .setFilter(QueryBuilders.termQuery("acl.access_id", "1")) + .setNestedSort(new NestedSortBuilder("acl.operation") + 
.setFilter(QueryBuilders.termQuery("acl.operation.name", "read")) + .setNestedSort(new NestedSortBuilder("acl.operation.user")))) + .sortMode(SortMode.MIN) + .order(SortOrder.ASC) + ) + .execute().actionGet(); + + assertHitCount(searchResponse, 2); + assertThat(searchResponse.getHits().getHits().length, equalTo(2)); + assertThat(searchResponse.getHits().getHits()[0].getId(), equalTo("1")); + assertThat(searchResponse.getHits().getHits()[0].getSortValues()[0].toString(), equalTo("adrien")); + assertThat(searchResponse.getHits().getHits()[1].getId(), equalTo("2")); + assertThat(searchResponse.getHits().getHits()[1].getSortValues()[0].toString(), equalTo("luca")); + + // execute, by matt or luca, by user id, sort missing first + searchResponse = client().prepareSearch() + .setQuery(matchAllQuery()) + .addSort( + SortBuilders.fieldSort("acl.operation.user.id") + .setNestedSort(new NestedSortBuilder("acl") + .setNestedSort(new NestedSortBuilder("acl.operation") + .setFilter(QueryBuilders.termQuery("acl.operation.name", "execute")) + .setNestedSort(new NestedSortBuilder("acl.operation.user") + .setFilter(QueryBuilders.termsQuery("acl.operation.user.username", "matt", "luca"))))) + .missing("_first") + .sortMode(SortMode.MIN) + .order(SortOrder.DESC) + ) + .execute().actionGet(); + + assertHitCount(searchResponse, 2); + assertThat(searchResponse.getHits().getHits().length, equalTo(2)); + assertThat(searchResponse.getHits().getHits()[0].getId(), equalTo("1")); // missing first + assertThat(searchResponse.getHits().getHits()[1].getId(), equalTo("2")); + assertThat(searchResponse.getHits().getHits()[1].getSortValues()[0].toString(), equalTo("1")); + + // execute, by matt or luca, by username, sort missing last (default) + searchResponse = client().prepareSearch() + .setQuery(matchAllQuery()) + .addSort( + SortBuilders.fieldSort("acl.operation.user.username") + .setNestedSort(new NestedSortBuilder("acl") + .setNestedSort(new NestedSortBuilder("acl.operation") + .setFilter(QueryBuilders.termQuery("acl.operation.name", "execute")) + .setNestedSort(new NestedSortBuilder("acl.operation.user") + .setFilter(QueryBuilders.termsQuery("acl.operation.user.username", "matt", "luca"))))) + .sortMode(SortMode.MIN) + .order(SortOrder.DESC) + ) + .execute().actionGet(); + + assertHitCount(searchResponse, 2); + assertThat(searchResponse.getHits().getHits().length, equalTo(2)); + assertThat(searchResponse.getHits().getHits()[0].getId(), equalTo("2")); + assertThat(searchResponse.getHits().getHits()[0].getSortValues()[0].toString(), equalTo("luca")); + assertThat(searchResponse.getHits().getHits()[1].getId(), equalTo("1")); // missing last + } + public void testSortNestedWithNestedFilter() throws Exception { assertAcked(prepareCreate("test") .addMapping("type1", XContentFactory.jsonBuilder() @@ -529,7 +740,7 @@ public void testSortNestedWithNestedFilter() throws Exception { ensureGreen(); // sum: 11 - client().prepareIndex("test", "type1", Integer.toString(1)).setSource(jsonBuilder() + client().prepareIndex("test", "type1", "1").setSource(jsonBuilder() .startObject() .field("grand_parent_values", 1L) .startArray("parent") @@ -568,7 +779,7 @@ public void testSortNestedWithNestedFilter() throws Exception { .endObject()).execute().actionGet(); // sum: 7 - client().prepareIndex("test", "type1", Integer.toString(2)).setSource(jsonBuilder() + client().prepareIndex("test", "type1", "2").setSource(jsonBuilder() .startObject() .field("grand_parent_values", 2L) .startArray("parent") @@ -607,7 +818,7 @@ public void 
testSortNestedWithNestedFilter() throws Exception { .endObject()).execute().actionGet(); // sum: 2 - client().prepareIndex("test", "type1", Integer.toString(3)).setSource(jsonBuilder() + client().prepareIndex("test", "type1", "3").setSource(jsonBuilder() .startObject() .field("grand_parent_values", 3L) .startArray("parent") @@ -722,25 +933,27 @@ public void testSortNestedWithNestedFilter() throws Exception { assertThat(searchResponse.getHits().getHits()[2].getId(), equalTo("3")); assertThat(searchResponse.getHits().getHits()[2].getSortValues()[0].toString(), equalTo("3")); + searchResponse = client().prepareSearch() .setQuery(matchAllQuery()) .addSort( SortBuilders.fieldSort("parent.child.child_values") - .setNestedPath("parent.child") - .setNestedFilter(QueryBuilders.termQuery("parent.filter", false)) + .setNestedSort(new NestedSortBuilder("parent") + .setFilter(QueryBuilders.termQuery("parent.filter", false)) + .setNestedSort(new NestedSortBuilder("parent.child"))) + .sortMode(SortMode.MAX) .order(SortOrder.ASC) ) .execute().actionGet(); assertHitCount(searchResponse, 3); assertThat(searchResponse.getHits().getHits().length, equalTo(3)); - // TODO: If we expose ToChildBlockJoinQuery we can filter sort values based on a higher level nested objects -// assertThat(searchResponse.getHits().getHits()[0].getId(), equalTo("3")); -// assertThat(searchResponse.getHits().getHits()[0].sortValues()[0].toString(), equalTo("-3")); -// assertThat(searchResponse.getHits().getHits()[1].getId(), equalTo("2")); -// assertThat(searchResponse.getHits().getHits()[1].sortValues()[0].toString(), equalTo("-2")); -// assertThat(searchResponse.getHits().getHits()[2].getId(), equalTo("1")); -// assertThat(searchResponse.getHits().getHits()[2].sortValues()[0].toString(), equalTo("-1")); + assertThat(searchResponse.getHits().getHits()[0].getId(), equalTo("3")); + assertThat(searchResponse.getHits().getHits()[0].getSortValues()[0].toString(), equalTo("3")); + assertThat(searchResponse.getHits().getHits()[1].getId(), equalTo("2")); + assertThat(searchResponse.getHits().getHits()[1].getSortValues()[0].toString(), equalTo("4")); + assertThat(searchResponse.getHits().getHits()[2].getId(), equalTo("1")); + assertThat(searchResponse.getHits().getHits()[2].getSortValues()[0].toString(), equalTo("6")); // Check if closest nested type is resolved searchResponse = client().prepareSearch() diff --git a/core/src/test/java/org/elasticsearch/search/preference/SearchPreferenceIT.java b/core/src/test/java/org/elasticsearch/search/preference/SearchPreferenceIT.java index 9163ee572cfc2..8cbb626b6770e 100644 --- a/core/src/test/java/org/elasticsearch/search/preference/SearchPreferenceIT.java +++ b/core/src/test/java/org/elasticsearch/search/preference/SearchPreferenceIT.java @@ -25,6 +25,7 @@ import org.elasticsearch.action.search.SearchResponse; import org.elasticsearch.client.Client; import org.elasticsearch.cluster.health.ClusterHealthStatus; +import org.elasticsearch.cluster.routing.OperationRouting; import org.elasticsearch.common.Strings; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentType; @@ -43,12 +44,18 @@ import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.hasToString; import static org.hamcrest.Matchers.not; -import static org.hamcrest.Matchers.is; import static org.hamcrest.Matchers.greaterThan; import static org.hamcrest.Matchers.greaterThanOrEqualTo; @ESIntegTestCase.ClusterScope(minNumDataNodes = 2) public class SearchPreferenceIT extends 
ESIntegTestCase { + + @Override + public Settings nodeSettings(int nodeOrdinal) { + return Settings.builder().put(super.nodeSettings(nodeOrdinal)) + .put(OperationRouting.USE_ADAPTIVE_REPLICA_SELECTION_SETTING.getKey(), false).build(); + } + // see #2896 public void testStopOneNodePreferenceWithRedState() throws InterruptedException, IOException { assertAcked(prepareCreate("test").setSettings(Settings.builder().put("index.number_of_shards", cluster().numDataNodes()+2).put("index.number_of_replicas", 0))); @@ -59,7 +66,7 @@ public void testStopOneNodePreferenceWithRedState() throws InterruptedException, refresh(); internalCluster().stopRandomDataNode(); client().admin().cluster().prepareHealth().setWaitForStatus(ClusterHealthStatus.RED).execute().actionGet(); - String[] preferences = new String[] {"_primary", "_local", "_primary_first", "_prefer_nodes:somenode", "_prefer_nodes:server2", "_prefer_nodes:somenode,server2"}; + String[] preferences = new String[]{"_local", "_prefer_nodes:somenode", "_prefer_nodes:server2", "_prefer_nodes:somenode,server2"}; for (String pref : preferences) { logger.info("--> Testing out preference={}", pref); SearchResponse searchResponse = client().prepareSearch().setSize(0).setPreference(pref).execute().actionGet(); @@ -105,54 +112,14 @@ public void testSimplePreference() throws Exception { client().prepareIndex("test", "type1").setSource("field1", "value1").execute().actionGet(); refresh(); - SearchResponse searchResponse = client().prepareSearch().setQuery(matchAllQuery()).setPreference("_local").execute().actionGet(); - assertThat(searchResponse.getHits().getTotalHits(), equalTo(1L)); - searchResponse = client().prepareSearch().setQuery(matchAllQuery()).setPreference("_local").execute().actionGet(); - assertThat(searchResponse.getHits().getTotalHits(), equalTo(1L)); - - searchResponse = client().prepareSearch().setQuery(matchAllQuery()).setPreference("_primary").execute().actionGet(); - assertThat(searchResponse.getHits().getTotalHits(), equalTo(1L)); - searchResponse = client().prepareSearch().setQuery(matchAllQuery()).setPreference("_primary").execute().actionGet(); - assertThat(searchResponse.getHits().getTotalHits(), equalTo(1L)); - - searchResponse = client().prepareSearch().setQuery(matchAllQuery()).setPreference("_replica").execute().actionGet(); - assertThat(searchResponse.getHits().getTotalHits(), equalTo(1L)); - searchResponse = client().prepareSearch().setQuery(matchAllQuery()).setPreference("_replica").execute().actionGet(); + SearchResponse searchResponse = client().prepareSearch().setQuery(matchAllQuery()).execute().actionGet(); assertThat(searchResponse.getHits().getTotalHits(), equalTo(1L)); - searchResponse = client().prepareSearch().setQuery(matchAllQuery()).setPreference("_replica_first").execute().actionGet(); - assertThat(searchResponse.getHits().getTotalHits(), equalTo(1L)); - searchResponse = client().prepareSearch().setQuery(matchAllQuery()).setPreference("_replica_first").execute().actionGet(); + searchResponse = client().prepareSearch().setQuery(matchAllQuery()).setPreference("_local").execute().actionGet(); assertThat(searchResponse.getHits().getTotalHits(), equalTo(1L)); searchResponse = client().prepareSearch().setQuery(matchAllQuery()).setPreference("1234").execute().actionGet(); assertThat(searchResponse.getHits().getTotalHits(), equalTo(1L)); - searchResponse = client().prepareSearch().setQuery(matchAllQuery()).setPreference("1234").execute().actionGet(); - assertThat(searchResponse.getHits().getTotalHits(), equalTo(1L)); - } 
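With the `_primary`/`_replica` preference families removed above, the remaining coverage boils down to the node-local and custom-string preferences (plus `_prefer_nodes:`). A minimal sketch of what the trimmed test still exercises, assuming the single-document `test` index set up earlier in `SearchPreferenceIT` (illustrative only, not part of the patch):

    // Prefer shard copies on the coordinating node
    SearchResponse local = client().prepareSearch()
            .setQuery(matchAllQuery())
            .setPreference("_local")
            .get();

    // Arbitrary string preference: repeat searches route to the same shard copies
    SearchResponse sticky = client().prepareSearch()
            .setQuery(matchAllQuery())
            .setPreference("1234")
            .get();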
- - public void testReplicaPreference() throws Exception { - client().admin().indices().prepareCreate("test").setSettings("{\"number_of_replicas\": 0}", XContentType.JSON).get(); - ensureGreen(); - - client().prepareIndex("test", "type1").setSource("field1", "value1").execute().actionGet(); - refresh(); - - try { - client().prepareSearch().setQuery(matchAllQuery()).setPreference("_replica").execute().actionGet(); - fail("should have failed because there are no replicas"); - } catch (Exception e) { - // pass - } - - SearchResponse resp = client().prepareSearch().setQuery(matchAllQuery()).setPreference("_replica_first").execute().actionGet(); - assertThat(resp.getHits().getTotalHits(), equalTo(1L)); - - client().admin().indices().prepareUpdateSettings("test").setSettings("{\"number_of_replicas\": 1}", XContentType.JSON).get(); - ensureGreen("test"); - - resp = client().prepareSearch().setQuery(matchAllQuery()).setPreference("_replica").execute().actionGet(); - assertThat(resp.getHits().getTotalHits(), equalTo(1L)); } public void testThatSpecifyingNonExistingNodesReturnsUsefulError() throws Exception { diff --git a/core/src/test/java/org/elasticsearch/search/profile/aggregation/AggregationProfilerIT.java b/core/src/test/java/org/elasticsearch/search/profile/aggregation/AggregationProfilerIT.java index 9914938854d03..e0cc63beeab4c 100644 --- a/core/src/test/java/org/elasticsearch/search/profile/aggregation/AggregationProfilerIT.java +++ b/core/src/test/java/org/elasticsearch/search/profile/aggregation/AggregationProfilerIT.java @@ -100,7 +100,7 @@ public void testSimpleProfile() { ProfileResult histoAggResult = aggProfileResultsList.get(0); assertThat(histoAggResult, notNullValue()); assertThat(histoAggResult.getQueryName(), - equalTo("org.elasticsearch.search.aggregations.bucket.histogram.HistogramAggregator")); + equalTo("HistogramAggregator")); assertThat(histoAggResult.getLuceneDescription(), equalTo("histo")); assertThat(histoAggResult.getProfiledChildren().size(), equalTo(0)); assertThat(histoAggResult.getTime(), greaterThan(0L)); @@ -137,7 +137,7 @@ public void testMultiLevelProfile() { ProfileResult histoAggResult = aggProfileResultsList.get(0); assertThat(histoAggResult, notNullValue()); assertThat(histoAggResult.getQueryName(), - equalTo("org.elasticsearch.search.aggregations.bucket.histogram.HistogramAggregator")); + equalTo("HistogramAggregator")); assertThat(histoAggResult.getLuceneDescription(), equalTo("histo")); assertThat(histoAggResult.getTime(), greaterThan(0L)); Map histoBreakdown = histoAggResult.getTimeBreakdown(); @@ -154,7 +154,7 @@ public void testMultiLevelProfile() { ProfileResult termsAggResult = histoAggResult.getProfiledChildren().get(0); assertThat(termsAggResult, notNullValue()); - assertThat(termsAggResult.getQueryName(), equalTo(GlobalOrdinalsStringTermsAggregator.class.getName())); + assertThat(termsAggResult.getQueryName(), equalTo(GlobalOrdinalsStringTermsAggregator.class.getSimpleName())); assertThat(termsAggResult.getLuceneDescription(), equalTo("terms")); assertThat(termsAggResult.getTime(), greaterThan(0L)); Map termsBreakdown = termsAggResult.getTimeBreakdown(); @@ -171,7 +171,7 @@ public void testMultiLevelProfile() { ProfileResult avgAggResult = termsAggResult.getProfiledChildren().get(0); assertThat(avgAggResult, notNullValue()); - assertThat(avgAggResult.getQueryName(), equalTo(AvgAggregator.class.getName())); + assertThat(avgAggResult.getQueryName(), equalTo(AvgAggregator.class.getSimpleName())); assertThat(avgAggResult.getLuceneDescription(), 
equalTo("avg")); assertThat(avgAggResult.getTime(), greaterThan(0L)); Map avgBreakdown = termsAggResult.getTimeBreakdown(); @@ -207,7 +207,7 @@ public void testMultiLevelProfileBreadthFirst() { ProfileResult histoAggResult = aggProfileResultsList.get(0); assertThat(histoAggResult, notNullValue()); assertThat(histoAggResult.getQueryName(), - equalTo("org.elasticsearch.search.aggregations.bucket.histogram.HistogramAggregator")); + equalTo("HistogramAggregator")); assertThat(histoAggResult.getLuceneDescription(), equalTo("histo")); assertThat(histoAggResult.getTime(), greaterThan(0L)); Map histoBreakdown = histoAggResult.getTimeBreakdown(); @@ -224,7 +224,7 @@ public void testMultiLevelProfileBreadthFirst() { ProfileResult termsAggResult = histoAggResult.getProfiledChildren().get(0); assertThat(termsAggResult, notNullValue()); - assertThat(termsAggResult.getQueryName(), equalTo(GlobalOrdinalsStringTermsAggregator.class.getName())); + assertThat(termsAggResult.getQueryName(), equalTo(GlobalOrdinalsStringTermsAggregator.class.getSimpleName())); assertThat(termsAggResult.getLuceneDescription(), equalTo("terms")); assertThat(termsAggResult.getTime(), greaterThan(0L)); Map termsBreakdown = termsAggResult.getTimeBreakdown(); @@ -241,7 +241,7 @@ public void testMultiLevelProfileBreadthFirst() { ProfileResult avgAggResult = termsAggResult.getProfiledChildren().get(0); assertThat(avgAggResult, notNullValue()); - assertThat(avgAggResult.getQueryName(), equalTo(AvgAggregator.class.getName())); + assertThat(avgAggResult.getQueryName(), equalTo(AvgAggregator.class.getSimpleName())); assertThat(avgAggResult.getLuceneDescription(), equalTo("avg")); assertThat(avgAggResult.getTime(), greaterThan(0L)); Map avgBreakdown = termsAggResult.getTimeBreakdown(); @@ -277,7 +277,7 @@ public void testDiversifiedAggProfile() { ProfileResult diversifyAggResult = aggProfileResultsList.get(0); assertThat(diversifyAggResult, notNullValue()); assertThat(diversifyAggResult.getQueryName(), - equalTo(DiversifiedOrdinalsSamplerAggregator.class.getName())); + equalTo(DiversifiedOrdinalsSamplerAggregator.class.getSimpleName())); assertThat(diversifyAggResult.getLuceneDescription(), equalTo("diversify")); assertThat(diversifyAggResult.getTime(), greaterThan(0L)); Map histoBreakdown = diversifyAggResult.getTimeBreakdown(); @@ -294,7 +294,7 @@ public void testDiversifiedAggProfile() { ProfileResult maxAggResult = diversifyAggResult.getProfiledChildren().get(0); assertThat(maxAggResult, notNullValue()); - assertThat(maxAggResult.getQueryName(), equalTo(MaxAggregator.class.getName())); + assertThat(maxAggResult.getQueryName(), equalTo(MaxAggregator.class.getSimpleName())); assertThat(maxAggResult.getLuceneDescription(), equalTo("max")); assertThat(maxAggResult.getTime(), greaterThan(0L)); Map termsBreakdown = maxAggResult.getTimeBreakdown(); @@ -338,7 +338,7 @@ public void testComplexProfile() { ProfileResult histoAggResult = aggProfileResultsList.get(0); assertThat(histoAggResult, notNullValue()); assertThat(histoAggResult.getQueryName(), - equalTo("org.elasticsearch.search.aggregations.bucket.histogram.HistogramAggregator")); + equalTo("HistogramAggregator")); assertThat(histoAggResult.getLuceneDescription(), equalTo("histo")); assertThat(histoAggResult.getTime(), greaterThan(0L)); Map histoBreakdown = histoAggResult.getTimeBreakdown(); @@ -355,7 +355,7 @@ public void testComplexProfile() { ProfileResult tagsAggResult = histoAggResult.getProfiledChildren().get(0); assertThat(tagsAggResult, notNullValue()); - 
assertThat(tagsAggResult.getQueryName(), equalTo(GlobalOrdinalsStringTermsAggregator.class.getName())); + assertThat(tagsAggResult.getQueryName(), equalTo(GlobalOrdinalsStringTermsAggregator.class.getSimpleName())); assertThat(tagsAggResult.getLuceneDescription(), equalTo("tags")); assertThat(tagsAggResult.getTime(), greaterThan(0L)); Map tagsBreakdown = tagsAggResult.getTimeBreakdown(); @@ -372,7 +372,7 @@ public void testComplexProfile() { ProfileResult avgAggResult = tagsAggResult.getProfiledChildren().get(0); assertThat(avgAggResult, notNullValue()); - assertThat(avgAggResult.getQueryName(), equalTo(AvgAggregator.class.getName())); + assertThat(avgAggResult.getQueryName(), equalTo(AvgAggregator.class.getSimpleName())); assertThat(avgAggResult.getLuceneDescription(), equalTo("avg")); assertThat(avgAggResult.getTime(), greaterThan(0L)); Map avgBreakdown = tagsAggResult.getTimeBreakdown(); @@ -389,7 +389,7 @@ public void testComplexProfile() { ProfileResult maxAggResult = tagsAggResult.getProfiledChildren().get(1); assertThat(maxAggResult, notNullValue()); - assertThat(maxAggResult.getQueryName(), equalTo(MaxAggregator.class.getName())); + assertThat(maxAggResult.getQueryName(), equalTo(MaxAggregator.class.getSimpleName())); assertThat(maxAggResult.getLuceneDescription(), equalTo("max")); assertThat(maxAggResult.getTime(), greaterThan(0L)); Map maxBreakdown = tagsAggResult.getTimeBreakdown(); @@ -406,7 +406,7 @@ public void testComplexProfile() { ProfileResult stringsAggResult = histoAggResult.getProfiledChildren().get(1); assertThat(stringsAggResult, notNullValue()); - assertThat(stringsAggResult.getQueryName(), equalTo(GlobalOrdinalsStringTermsAggregator.class.getName())); + assertThat(stringsAggResult.getQueryName(), equalTo(GlobalOrdinalsStringTermsAggregator.class.getSimpleName())); assertThat(stringsAggResult.getLuceneDescription(), equalTo("strings")); assertThat(stringsAggResult.getTime(), greaterThan(0L)); Map stringsBreakdown = stringsAggResult.getTimeBreakdown(); @@ -423,7 +423,7 @@ public void testComplexProfile() { avgAggResult = stringsAggResult.getProfiledChildren().get(0); assertThat(avgAggResult, notNullValue()); - assertThat(avgAggResult.getQueryName(), equalTo(AvgAggregator.class.getName())); + assertThat(avgAggResult.getQueryName(), equalTo(AvgAggregator.class.getSimpleName())); assertThat(avgAggResult.getLuceneDescription(), equalTo("avg")); assertThat(avgAggResult.getTime(), greaterThan(0L)); avgBreakdown = stringsAggResult.getTimeBreakdown(); @@ -440,7 +440,7 @@ public void testComplexProfile() { maxAggResult = stringsAggResult.getProfiledChildren().get(1); assertThat(maxAggResult, notNullValue()); - assertThat(maxAggResult.getQueryName(), equalTo(MaxAggregator.class.getName())); + assertThat(maxAggResult.getQueryName(), equalTo(MaxAggregator.class.getSimpleName())); assertThat(maxAggResult.getLuceneDescription(), equalTo("max")); assertThat(maxAggResult.getTime(), greaterThan(0L)); maxBreakdown = stringsAggResult.getTimeBreakdown(); @@ -457,7 +457,7 @@ public void testComplexProfile() { tagsAggResult = stringsAggResult.getProfiledChildren().get(2); assertThat(tagsAggResult, notNullValue()); - assertThat(tagsAggResult.getQueryName(), equalTo(GlobalOrdinalsStringTermsAggregator.class.getName())); + assertThat(tagsAggResult.getQueryName(), equalTo(GlobalOrdinalsStringTermsAggregator.class.getSimpleName())); assertThat(tagsAggResult.getLuceneDescription(), equalTo("tags")); assertThat(tagsAggResult.getTime(), greaterThan(0L)); tagsBreakdown = 
tagsAggResult.getTimeBreakdown(); @@ -474,7 +474,7 @@ public void testComplexProfile() { avgAggResult = tagsAggResult.getProfiledChildren().get(0); assertThat(avgAggResult, notNullValue()); - assertThat(avgAggResult.getQueryName(), equalTo(AvgAggregator.class.getName())); + assertThat(avgAggResult.getQueryName(), equalTo(AvgAggregator.class.getSimpleName())); assertThat(avgAggResult.getLuceneDescription(), equalTo("avg")); assertThat(avgAggResult.getTime(), greaterThan(0L)); avgBreakdown = tagsAggResult.getTimeBreakdown(); @@ -491,7 +491,7 @@ public void testComplexProfile() { maxAggResult = tagsAggResult.getProfiledChildren().get(1); assertThat(maxAggResult, notNullValue()); - assertThat(maxAggResult.getQueryName(), equalTo(MaxAggregator.class.getName())); + assertThat(maxAggResult.getQueryName(), equalTo(MaxAggregator.class.getSimpleName())); assertThat(maxAggResult.getLuceneDescription(), equalTo("max")); assertThat(maxAggResult.getTime(), greaterThan(0L)); maxBreakdown = tagsAggResult.getTimeBreakdown(); diff --git a/core/src/test/java/org/elasticsearch/search/profile/query/QueryProfilerIT.java b/core/src/test/java/org/elasticsearch/search/profile/query/QueryProfilerIT.java index d5198485351b1..14378fdb1c8a9 100644 --- a/core/src/test/java/org/elasticsearch/search/profile/query/QueryProfilerIT.java +++ b/core/src/test/java/org/elasticsearch/search/profile/query/QueryProfilerIT.java @@ -134,14 +134,12 @@ public void testProfileMatchesRegular() throws Exception { .setQuery(q) .setProfile(false) .addSort("_id", SortOrder.ASC) - .setPreference("_primary") .setSearchType(SearchType.QUERY_THEN_FETCH); SearchRequestBuilder profile = client().prepareSearch("test") .setQuery(q) .setProfile(true) .addSort("_id", SortOrder.ASC) - .setPreference("_primary") .setSearchType(SearchType.QUERY_THEN_FETCH); MultiSearchResponse.Item[] responses = client().prepareMultiSearch() diff --git a/core/src/test/java/org/elasticsearch/search/profile/query/QueryProfilerTests.java b/core/src/test/java/org/elasticsearch/search/profile/query/QueryProfilerTests.java index 43c6018d8f896..4582567705138 100644 --- a/core/src/test/java/org/elasticsearch/search/profile/query/QueryProfilerTests.java +++ b/core/src/test/java/org/elasticsearch/search/profile/query/QueryProfilerTests.java @@ -28,7 +28,6 @@ import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.RandomIndexWriter; import org.apache.lucene.index.Term; -import org.apache.lucene.search.DocIdSetIterator; import org.apache.lucene.search.Explanation; import org.apache.lucene.search.IndexSearcher; import org.apache.lucene.search.LeafCollector; @@ -242,7 +241,7 @@ public ScorerSupplier scorerSupplier(LeafReaderContext context) throws IOExcepti return new ScorerSupplier() { @Override - public Scorer get(boolean randomAccess) throws IOException { + public Scorer get(long loadCost) throws IOException { throw new UnsupportedOperationException(); } diff --git a/core/src/test/java/org/elasticsearch/search/query/MultiMatchQueryIT.java b/core/src/test/java/org/elasticsearch/search/query/MultiMatchQueryIT.java index ba7a13cf0102e..fd619b69c9eff 100644 --- a/core/src/test/java/org/elasticsearch/search/query/MultiMatchQueryIT.java +++ b/core/src/test/java/org/elasticsearch/search/query/MultiMatchQueryIT.java @@ -179,7 +179,7 @@ private XContentBuilder createMapping() throws IOException { } public void testDefaults() throws ExecutionException, InterruptedException { - MatchQuery.Type type = randomBoolean() ? 
MatchQueryBuilder.DEFAULT_TYPE : MatchQuery.Type.BOOLEAN; + MatchQuery.Type type = MatchQuery.Type.BOOLEAN; SearchResponse searchResponse = client().prepareSearch("test") .setQuery(randomizeType(multiMatchQuery("marvel hero captain america", "full_name", "first_name", "last_name", "category") .operator(Operator.OR))).get(); @@ -270,9 +270,7 @@ public void testSingleField() throws NoSuchFieldException, IllegalAccessExceptio .addSort("_id", SortOrder.ASC) .setQuery(multiMatchQueryBuilder).get(); MatchQueryBuilder matchQueryBuilder = QueryBuilders.matchQuery(field, builder.toString()); - if (multiMatchQueryBuilder.getType() != null) { - matchQueryBuilder.type(MatchQuery.Type.valueOf(multiMatchQueryBuilder.getType().matchQueryType().toString())); - } + SearchResponse matchResp = client().prepareSearch("test") // _id tie sort .addSort("_score", SortOrder.DESC) @@ -294,7 +292,7 @@ public void testSingleField() throws NoSuchFieldException, IllegalAccessExceptio public void testCutoffFreq() throws ExecutionException, InterruptedException { final long numDocs = client().prepareSearch("test").setSize(0) .setQuery(matchAllQuery()).get().getHits().getTotalHits(); - MatchQuery.Type type = randomBoolean() ? MatchQueryBuilder.DEFAULT_TYPE : MatchQuery.Type.BOOLEAN; + MatchQuery.Type type = MatchQuery.Type.BOOLEAN; Float cutoffFrequency = randomBoolean() ? Math.min(1, numDocs * 1.f / between(10, 20)) : 1.f / between(10, 20); SearchResponse searchResponse = client().prepareSearch("test") .setQuery(randomizeType(multiMatchQuery("marvel hero captain america", "full_name", "first_name", "last_name", "category") @@ -357,7 +355,7 @@ public void testEquivalence() { int numIters = scaledRandomIntBetween(5, 10); for (int i = 0; i < numIters; i++) { { - MatchQuery.Type type = randomBoolean() ? MatchQueryBuilder.DEFAULT_TYPE : MatchQuery.Type.BOOLEAN; + MatchQuery.Type type = MatchQuery.Type.BOOLEAN; MultiMatchQueryBuilder multiMatchQueryBuilder = randomBoolean() ? multiMatchQuery("marvel hero captain america", "full_name", "first_name", "last_name", "category") : multiMatchQuery("marvel hero captain america", "*_name", randomBoolean() ? "category" : "categ*"); SearchResponse left = client().prepareSearch("test").setSize(numDocs) @@ -377,7 +375,7 @@ public void testEquivalence() { } { - MatchQuery.Type type = randomBoolean() ? MatchQueryBuilder.DEFAULT_TYPE : MatchQuery.Type.BOOLEAN; + MatchQuery.Type type = MatchQuery.Type.BOOLEAN; String minShouldMatch = randomBoolean() ? null : "" + between(0, 1); Operator op = randomBoolean() ? Operator.AND : Operator.OR; MultiMatchQueryBuilder multiMatchQueryBuilder = randomBoolean() ? 
multiMatchQuery("captain america", "full_name", "first_name", "last_name", "category") : @@ -474,6 +472,7 @@ public void testCrossFieldMode() throws ExecutionException, InterruptedException .setQuery(randomizeType(multiMatchQuery("captain america 15", "full_name", "first_name", "last_name", "category", "skill") .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS) .analyzer("category") + .lenient(true) .operator(Operator.AND))).get(); assertHitCount(searchResponse, 1L); assertFirstHit(searchResponse, hasId("theone")); @@ -482,6 +481,7 @@ public void testCrossFieldMode() throws ExecutionException, InterruptedException .setQuery(randomizeType(multiMatchQuery("captain america 15", "full_name", "first_name", "last_name", "category", "skill", "int-field") .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS) .analyzer("category") + .lenient(true) .operator(Operator.AND))).get(); assertHitCount(searchResponse, 1L); assertFirstHit(searchResponse, hasId("theone")); @@ -490,6 +490,7 @@ public void testCrossFieldMode() throws ExecutionException, InterruptedException .setQuery(randomizeType(multiMatchQuery("captain america 15", "skill", "full_name", "first_name", "last_name", "category", "int-field") .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS) .analyzer("category") + .lenient(true) .operator(Operator.AND))).get(); assertHitCount(searchResponse, 1L); assertFirstHit(searchResponse, hasId("theone")); @@ -498,6 +499,7 @@ public void testCrossFieldMode() throws ExecutionException, InterruptedException searchResponse = client().prepareSearch("test") .setQuery(randomizeType(multiMatchQuery("captain america 15", "first_name", "last_name", "skill") .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS) + .lenient(true) .analyzer("category"))).get(); assertFirstHit(searchResponse, hasId("theone")); diff --git a/core/src/test/java/org/elasticsearch/search/query/QueryPhaseTests.java b/core/src/test/java/org/elasticsearch/search/query/QueryPhaseTests.java index feca42e5495b3..12b4b2daaee19 100644 --- a/core/src/test/java/org/elasticsearch/search/query/QueryPhaseTests.java +++ b/core/src/test/java/org/elasticsearch/search/query/QueryPhaseTests.java @@ -38,7 +38,10 @@ import org.apache.lucene.search.ConstantScoreQuery; import org.apache.lucene.search.FieldComparator; import org.apache.lucene.search.FieldDoc; +import org.apache.lucene.search.FilterCollector; +import org.apache.lucene.search.FilterLeafCollector; import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.LeafCollector; import org.apache.lucene.search.MatchAllDocsQuery; import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.Query; @@ -64,10 +67,8 @@ import static org.hamcrest.Matchers.anyOf; import static org.hamcrest.Matchers.equalTo; -import static org.hamcrest.Matchers.greaterThan; import static org.hamcrest.Matchers.greaterThanOrEqualTo; import static org.hamcrest.Matchers.instanceOf; -import static org.hamcrest.Matchers.lessThan; public class QueryPhaseTests extends IndexShardTestCase { @@ -226,7 +227,7 @@ public void testQueryCapturesThreadPoolStats() throws Exception { QueryPhase.execute(context, contextSearcher, checkCancelled -> {}, null); QuerySearchResult results = context.queryResult(); - assertThat(results.serviceTimeEWMA(), greaterThan(0L)); + assertThat(results.serviceTimeEWMA(), greaterThanOrEqualTo(0L)); assertThat(results.nodeQueueSize(), greaterThanOrEqualTo(0)); reader.close(); dir.close(); @@ -412,30 +413,19 @@ public void testIndexSortingEarlyTermination() throws Exception { context.setTask(new 
SearchTask(123L, "", "", "", null)); context.sort(new SortAndFormats(sort, new DocValueFormat[] {DocValueFormat.RAW})); - final AtomicBoolean collected = new AtomicBoolean(); final IndexReader reader = DirectoryReader.open(dir); - IndexSearcher contextSearcher = new IndexSearcher(reader) { - protected void search(List leaves, Weight weight, Collector collector) throws IOException { - collected.set(true); - super.search(leaves, weight, collector); - } - }; + IndexSearcher contextSearcher = new IndexSearcher(reader); QueryPhase.execute(context, contextSearcher, checkCancelled -> {}, sort); - assertTrue(collected.get()); - assertTrue(context.queryResult().terminatedEarly()); assertThat(context.queryResult().topDocs().totalHits, equalTo((long) numDocs)); assertThat(context.queryResult().topDocs().scoreDocs.length, equalTo(1)); assertThat(context.queryResult().topDocs().scoreDocs[0], instanceOf(FieldDoc.class)); FieldDoc fieldDoc = (FieldDoc) context.queryResult().topDocs().scoreDocs[0]; assertThat(fieldDoc.fields[0], equalTo(1)); - { - collected.set(false); context.parsedPostFilter(new ParsedQuery(new MinDocQuery(1))); QueryPhase.execute(context, contextSearcher, checkCancelled -> {}, sort); - assertTrue(collected.get()); - assertTrue(context.queryResult().terminatedEarly()); + assertNull(context.queryResult().terminatedEarly()); assertThat(context.queryResult().topDocs().totalHits, equalTo(numDocs - 1L)); assertThat(context.queryResult().topDocs().scoreDocs.length, equalTo(1)); assertThat(context.queryResult().topDocs().scoreDocs[0], instanceOf(FieldDoc.class)); @@ -444,10 +434,8 @@ protected void search(List leaves, Weight weight, Collector c final TotalHitCountCollector totalHitCountCollector = new TotalHitCountCollector(); context.queryCollectors().put(TotalHitCountCollector.class, totalHitCountCollector); - collected.set(false); QueryPhase.execute(context, contextSearcher, checkCancelled -> {}, sort); - assertTrue(collected.get()); - assertTrue(context.queryResult().terminatedEarly()); + assertNull(context.queryResult().terminatedEarly()); assertThat(context.queryResult().topDocs().totalHits, equalTo((long) numDocs)); assertThat(context.queryResult().topDocs().scoreDocs.length, equalTo(1)); assertThat(context.queryResult().topDocs().scoreDocs[0], instanceOf(FieldDoc.class)); @@ -457,27 +445,19 @@ protected void search(List leaves, Weight weight, Collector c } { - collected.set(false); + contextSearcher = getAssertingEarlyTerminationSearcher(reader, 1); context.trackTotalHits(false); QueryPhase.execute(context, contextSearcher, checkCancelled -> {}, sort); - assertTrue(collected.get()); - assertTrue(context.queryResult().terminatedEarly()); - assertThat(context.queryResult().topDocs().totalHits, lessThan((long) numDocs)); + assertNull(context.queryResult().terminatedEarly()); assertThat(context.queryResult().topDocs().scoreDocs.length, equalTo(1)); assertThat(context.queryResult().topDocs().scoreDocs[0], instanceOf(FieldDoc.class)); assertThat(fieldDoc.fields[0], anyOf(equalTo(1), equalTo(2))); - final TotalHitCountCollector totalHitCountCollector = new TotalHitCountCollector(); - context.queryCollectors().put(TotalHitCountCollector.class, totalHitCountCollector); - collected.set(false); QueryPhase.execute(context, contextSearcher, checkCancelled -> {}, sort); - assertTrue(collected.get()); - assertTrue(context.queryResult().terminatedEarly()); - assertThat(context.queryResult().topDocs().totalHits, lessThan((long) numDocs)); + assertNull(context.queryResult().terminatedEarly()); 
assertThat(context.queryResult().topDocs().scoreDocs.length, equalTo(1)); assertThat(context.queryResult().topDocs().scoreDocs[0], instanceOf(FieldDoc.class)); assertThat(fieldDoc.fields[0], anyOf(equalTo(1), equalTo(2))); - assertThat(totalHitCountCollector.getTotalHits(), equalTo(numDocs)); } reader.close(); dir.close(); @@ -498,8 +478,9 @@ public void testIndexSortScrollOptimization() throws Exception { doc.add(new NumericDocValuesField("tiebreaker", i)); w.addDocument(doc); } - // Make sure that we can early terminate queries on this index - w.forceMerge(3); + if (randomBoolean()) { + w.forceMerge(randomIntBetween(1, 10)); + } w.close(); TestSearchContext context = new TestSearchContext(null, indexShard); @@ -513,28 +494,21 @@ public void testIndexSortScrollOptimization() throws Exception { context.setSize(10); context.sort(new SortAndFormats(sort, new DocValueFormat[] {DocValueFormat.RAW, DocValueFormat.RAW})); - final AtomicBoolean collected = new AtomicBoolean(); final IndexReader reader = DirectoryReader.open(dir); - IndexSearcher contextSearcher = new IndexSearcher(reader) { - protected void search(List leaves, Weight weight, Collector collector) throws IOException { - collected.set(true); - super.search(leaves, weight, collector); - } - }; + IndexSearcher contextSearcher = new IndexSearcher(reader); QueryPhase.execute(context, contextSearcher, checkCancelled -> {}, sort); assertThat(context.queryResult().topDocs().totalHits, equalTo((long) numDocs)); - assertTrue(collected.get()); assertNull(context.queryResult().terminatedEarly()); assertThat(context.terminateAfter(), equalTo(0)); assertThat(context.queryResult().getTotalHits(), equalTo((long) numDocs)); int sizeMinus1 = context.queryResult().topDocs().scoreDocs.length - 1; FieldDoc lastDoc = (FieldDoc) context.queryResult().topDocs().scoreDocs[sizeMinus1]; + contextSearcher = getAssertingEarlyTerminationSearcher(reader, 10); QueryPhase.execute(context, contextSearcher, checkCancelled -> {}, sort); + assertNull(context.queryResult().terminatedEarly()); assertThat(context.queryResult().topDocs().totalHits, equalTo((long) numDocs)); - assertTrue(collected.get()); - assertTrue(context.queryResult().terminatedEarly()); assertThat(context.terminateAfter(), equalTo(0)); assertThat(context.queryResult().getTotalHits(), equalTo((long) numDocs)); FieldDoc firstDoc = (FieldDoc) context.queryResult().topDocs().scoreDocs[0]; @@ -551,4 +525,37 @@ protected void search(List leaves, Weight weight, Collector c reader.close(); dir.close(); } + + static IndexSearcher getAssertingEarlyTerminationSearcher(IndexReader reader, int size) { + return new IndexSearcher(reader) { + protected void search(List leaves, Weight weight, Collector collector) throws IOException { + final Collector in = new AssertingEalyTerminationFilterCollector(collector, size); + super.search(leaves, weight, in); + } + }; + } + + private static class AssertingEalyTerminationFilterCollector extends FilterCollector { + private final int size; + + AssertingEalyTerminationFilterCollector(Collector in, int size) { + super(in); + this.size = size; + } + + @Override + public LeafCollector getLeafCollector(LeafReaderContext context) throws IOException { + final LeafCollector in = super.getLeafCollector(context); + return new FilterLeafCollector(in) { + int collected; + + @Override + public void collect(int doc) throws IOException { + assert collected <= size : "should not collect more than " + size + " doc per segment, got " + collected; + ++ collected; + super.collect(doc); + } + }; + 
} + } } diff --git a/core/src/test/java/org/elasticsearch/search/query/QueryStringIT.java b/core/src/test/java/org/elasticsearch/search/query/QueryStringIT.java index 733a910527c2c..ab8bcb539d6ae 100644 --- a/core/src/test/java/org/elasticsearch/search/query/QueryStringIT.java +++ b/core/src/test/java/org/elasticsearch/search/query/QueryStringIT.java @@ -19,7 +19,6 @@ package org.elasticsearch.search.query; -import org.apache.lucene.util.LuceneTestCase; import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.action.admin.indices.create.CreateIndexRequestBuilder; import org.elasticsearch.action.index.IndexRequestBuilder; @@ -28,6 +27,7 @@ import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.query.Operator; import org.elasticsearch.index.query.QueryBuilders; import org.elasticsearch.index.query.QueryStringQueryBuilder; @@ -46,11 +46,11 @@ import java.util.List; import java.util.Set; +import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder; import static org.elasticsearch.index.query.QueryBuilders.queryStringQuery; import static org.elasticsearch.test.StreamsUtils.copyToStringFromClasspath; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount; -import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoSearchHits; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchHits; import static org.hamcrest.Matchers.containsInAnyOrder; import static org.hamcrest.Matchers.containsString; @@ -69,10 +69,6 @@ public void setup() throws Exception { ensureGreen("test"); } - private QueryStringQueryBuilder lenientQuery(String queryText) { - return queryStringQuery(queryText).lenient(true); - } - public void testBasicAllQuery() throws Exception { List reqs = new ArrayList<>(); reqs.add(client().prepareIndex("test", "doc", "1").setSource("f1", "foo bar baz")); @@ -175,8 +171,6 @@ public void testDocWithAllTypes() throws Exception { assertHits(resp.getHits(), "1"); resp = client().prepareSearch("test").setQuery(queryStringQuery("1.5")).get(); assertHits(resp.getHits(), "1"); - resp = client().prepareSearch("test").setQuery(queryStringQuery("12.23")).get(); - assertHits(resp.getHits(), "1"); resp = client().prepareSearch("test").setQuery(queryStringQuery("127.0.0.1")).get(); assertHits(resp.getHits(), "1"); // binary doesn't match @@ -208,50 +202,45 @@ public void testKeywordWithWhitespace() throws Exception { } public void testAllFields() throws Exception { - String indexBodyWithAll = copyToStringFromClasspath("/org/elasticsearch/search/query/all-query-index-with-all.json"); String indexBody = copyToStringFromClasspath("/org/elasticsearch/search/query/all-query-index.json"); - // Defaults to index.query.default_field=_all - prepareCreate("test_1").setSource(indexBodyWithAll, XContentType.JSON).get(); Settings.Builder settings = Settings.builder().put("index.query.default_field", "*"); - prepareCreate("test_2").setSource(indexBody, XContentType.JSON).setSettings(settings).get(); - ensureGreen("test_1","test_2"); + prepareCreate("test_1").setSource(indexBody, XContentType.JSON).setSettings(settings).get(); + ensureGreen("test_1"); List reqs = new ArrayList<>(); reqs.add(client().prepareIndex("test_1", "doc", 
"1").setSource("f1", "foo", "f2", "eggplant")); - reqs.add(client().prepareIndex("test_2", "doc", "1").setSource("f1", "foo", "f2", "eggplant")); indexRandom(true, false, reqs); SearchResponse resp = client().prepareSearch("test_1").setQuery( queryStringQuery("foo eggplant").defaultOperator(Operator.AND)).get(); assertHitCount(resp, 0L); - resp = client().prepareSearch("test_2").setQuery( - queryStringQuery("foo eggplant").defaultOperator(Operator.AND)).get(); - assertHitCount(resp, 0L); - resp = client().prepareSearch("test_1").setQuery( queryStringQuery("foo eggplant").defaultOperator(Operator.OR)).get(); assertHits(resp.getHits(), "1"); assertHitCount(resp, 1L); - - resp = client().prepareSearch("test_2").setQuery( - queryStringQuery("foo eggplant").defaultOperator(Operator.OR)).get(); - assertHits(resp.getHits(), "1"); - assertHitCount(resp, 1L); } - @LuceneTestCase.AwaitsFix(bugUrl="currently can't perform phrase queries on fields that don't support positions") public void testPhraseQueryOnFieldWithNoPositions() throws Exception { List reqs = new ArrayList<>(); reqs.add(client().prepareIndex("test", "doc", "1").setSource("f1", "foo bar", "f4", "eggplant parmesan")); reqs.add(client().prepareIndex("test", "doc", "2").setSource("f1", "foo bar", "f4", "chicken parmesan")); indexRandom(true, false, reqs); - SearchResponse resp = client().prepareSearch("test").setQuery(queryStringQuery("\"eggplant parmesan\"")).get(); - assertHits(resp.getHits(), "1"); - assertHitCount(resp, 1L); + SearchResponse resp = client().prepareSearch("test") + .setQuery(queryStringQuery("\"eggplant parmesan\"").lenient(true)).get(); + assertHitCount(resp, 0L); + + Exception exc = expectThrows(Exception.class, + () -> client().prepareSearch("test").setQuery( + queryStringQuery("f4:\"eggplant parmesan\"").lenient(false) + ).get() + ); + IllegalStateException ise = (IllegalStateException) ExceptionsHelper.unwrap(exc, IllegalStateException.class); + assertNotNull(ise); + assertThat(ise.getMessage(), containsString("field:[f4] was indexed without position data; cannot run PhraseQuery")); } public void testBooleanStrictQuery() throws Exception { @@ -275,10 +264,10 @@ private void setupIndexWithGraph(String index) throws Exception { Settings.builder() .put(indexSettings()) .put("index.analysis.filter.graphsyns.type", "synonym_graph") - .putArray("index.analysis.filter.graphsyns.synonyms", "wtf, what the fudge", "foo, bar baz") + .putList("index.analysis.filter.graphsyns.synonyms", "wtf, what the fudge", "foo, bar baz") .put("index.analysis.analyzer.lower_graphsyns.type", "custom") .put("index.analysis.analyzer.lower_graphsyns.tokenizer", "standard") - .putArray("index.analysis.analyzer.lower_graphsyns.filter", "lowercase", "graphsyns") + .putList("index.analysis.analyzer.lower_graphsyns.filter", "lowercase", "graphsyns") ); XContentBuilder mapping = XContentFactory.jsonBuilder().startObject().startObject(index).startObject("properties") @@ -356,6 +345,37 @@ public void testGraphQueries() throws Exception { assertSearchHits(searchResponse, "1", "2", "3"); } + public void testLimitOnExpandedFields() throws Exception { + XContentBuilder builder = jsonBuilder(); + builder.startObject(); + builder.startObject("type1"); + builder.startObject("properties"); + for (int i = 0; i < 1025; i++) { + builder.startObject("field" + i).field("type", "text").endObject(); + } + builder.endObject(); // properties + builder.endObject(); // type1 + builder.endObject(); + + assertAcked(prepareCreate("toomanyfields") + 
.setSettings(Settings.builder().put(MapperService.INDEX_MAPPING_TOTAL_FIELDS_LIMIT_SETTING.getKey(), 1200)) + .addMapping("type1", builder)); + + client().prepareIndex("toomanyfields", "type1", "1").setSource("field171", "foo bar baz").get(); + refresh(); + + Exception e = expectThrows(Exception.class, () -> { + QueryStringQueryBuilder qb = queryStringQuery("bar"); + if (randomBoolean()) { + qb.useAllFields(true); + } + logger.info("--> using {}", qb); + client().prepareSearch("toomanyfields").setQuery(qb).get(); + }); + assertThat(ExceptionsHelper.detailedMessage(e), + containsString("field expansion matches too many fields, limit: 1024, got: 1025")); + } + private void assertHits(SearchHits hits, String... ids) { assertThat(hits.getTotalHits(), equalTo((long) ids.length)); Set hitIds = new HashSet<>(); diff --git a/core/src/test/java/org/elasticsearch/search/query/SearchQueryIT.java b/core/src/test/java/org/elasticsearch/search/query/SearchQueryIT.java index 06f6e20d3836f..a94f499d0bac6 100644 --- a/core/src/test/java/org/elasticsearch/search/query/SearchQueryIT.java +++ b/core/src/test/java/org/elasticsearch/search/query/SearchQueryIT.java @@ -30,17 +30,16 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.query.BoolQueryBuilder; import org.elasticsearch.index.query.MatchQueryBuilder; import org.elasticsearch.index.query.MultiMatchQueryBuilder; import org.elasticsearch.index.query.Operator; import org.elasticsearch.index.query.QueryBuilders; -import org.elasticsearch.index.query.RangeQueryBuilder; import org.elasticsearch.index.query.TermQueryBuilder; import org.elasticsearch.index.query.WrapperQueryBuilder; import org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders; import org.elasticsearch.index.search.MatchQuery; -import org.elasticsearch.index.search.MatchQuery.Type; import org.elasticsearch.indices.TermsLookup; import org.elasticsearch.plugins.Plugin; import org.elasticsearch.rest.RestStatus; @@ -70,6 +69,8 @@ import static org.elasticsearch.index.query.QueryBuilders.fuzzyQuery; import static org.elasticsearch.index.query.QueryBuilders.idsQuery; import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery; +import static org.elasticsearch.index.query.QueryBuilders.matchPhrasePrefixQuery; +import static org.elasticsearch.index.query.QueryBuilders.matchPhraseQuery; import static org.elasticsearch.index.query.QueryBuilders.matchQuery; import static org.elasticsearch.index.query.QueryBuilders.multiMatchQuery; import static org.elasticsearch.index.query.QueryBuilders.prefixQuery; @@ -99,7 +100,6 @@ import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertThirdHit; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.hasId; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.hasScore; -import static org.hamcrest.Matchers.allOf; import static org.hamcrest.Matchers.closeTo; import static org.hamcrest.Matchers.containsString; import static org.hamcrest.Matchers.equalTo; @@ -173,12 +173,12 @@ public void testIndexOptions() throws Exception { client().prepareIndex("test", "type1", "1").setSource("field1", "quick brown fox", "field2", "quick brown fox"), client().prepareIndex("test", "type1", "2").setSource("field1", "quick lazy huge brown fox", "field2", "quick lazy huge brown fox")); - SearchResponse searchResponse = 
client().prepareSearch().setQuery(matchQuery("field2", "quick brown").type(Type.PHRASE).slop(0)).get(); + SearchResponse searchResponse = client().prepareSearch().setQuery(matchPhraseQuery("field2", "quick brown").slop(0)).get(); assertHitCount(searchResponse, 1L); - assertFailures(client().prepareSearch().setQuery(matchQuery("field1", "quick brown").type(Type.PHRASE).slop(0)), - RestStatus.INTERNAL_SERVER_ERROR, - containsString("field \"field1\" was indexed without position data; cannot run PhraseQuery")); + assertFailures(client().prepareSearch().setQuery(matchPhraseQuery("field1", "quick brown").slop(0)), + RestStatus.BAD_REQUEST, + containsString("field:[field1] was indexed without position data; cannot run PhraseQuery")); } // see #3521 @@ -264,9 +264,10 @@ public void testAllDocsQueryString() throws InterruptedException, ExecutionExcep } public void testCommonTermsQuery() throws Exception { + client().admin().indices().prepareCreate("test") .addMapping("type1", "field1", "type=text,analyzer=whitespace") - .setSettings(SETTING_NUMBER_OF_SHARDS, 1).get(); + .setSettings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 1)).get(); indexRandom(true, client().prepareIndex("test", "type1", "3").setSource("field1", "quick lazy huge brown pidgin", "field2", "the quick lazy huge brown fox jumps over the tree"), client().prepareIndex("test", "type1", "1").setSource("field1", "the quick brown fox"), client().prepareIndex("test", "type1", "2").setSource("field1", "the quick lazy huge brown fox jumps over the tree") ); @@ -349,7 +350,7 @@ public void testCommonTermsQueryStackedTokens() throws Exception { .put(indexSettings()) .put(SETTING_NUMBER_OF_SHARDS,1) .put("index.analysis.filter.syns.type","synonym") - .putArray("index.analysis.filter.syns.synonyms","quick,fast") + .putList("index.analysis.filter.syns.synonyms","quick,fast") .put("index.analysis.analyzer.syns.tokenizer","whitespace") .put("index.analysis.analyzer.syns.filter","syns") ) @@ -555,7 +556,7 @@ public void testDateRangeInQueryStringWithTimeZone_10477() { } public void testTypeFilter() throws Exception { - assertAcked(prepareCreate("test").setSettings("index.version.created", Version.V_5_6_0.id)); + assertAcked(prepareCreate("test").setSettings(Settings.builder().put("index.version.created", Version.V_5_6_0.id))); indexRandom(true, client().prepareIndex("test", "type1", "1").setSource("field1", "value1"), client().prepareIndex("test", "type2", "1").setSource("field1", "value1"), client().prepareIndex("test", "type1", "2").setSource("field1", "value1"), @@ -957,7 +958,7 @@ public void testFuzzyQueryString() { public void testQuotedQueryStringWithBoost() throws InterruptedException, ExecutionException { float boost = 10.0f; - assertAcked(prepareCreate("test").setSettings(SETTING_NUMBER_OF_SHARDS, 1)); + assertAcked(prepareCreate("test").setSettings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 1))); indexRandom(true, client().prepareIndex("test", "type1", "1").setSource("important", "phrase match", "less_important", "nothing important"), client().prepareIndex("test", "type1", "2").setSource("important", "nothing important", "less_important", "phrase match") ); @@ -1220,7 +1221,7 @@ public void testBasicQueryById() throws Exception { } public void testBasicQueryByIdMultiType() throws Exception { - assertAcked(prepareCreate("test").setSettings("index.version.created", Version.V_5_6_0.id)); + assertAcked(prepareCreate("test").setSettings(Settings.builder().put("index.version.created", Version.V_5_6_0.id))); 
client().prepareIndex("test", "type1", "1").setSource("field1", "value1").get(); client().prepareIndex("test", "type2", "2").setSource("field1", "value2").get(); @@ -1393,7 +1394,7 @@ public void testNumericRangeFilter_2826() throws Exception { public void testMustNot() throws IOException, ExecutionException, InterruptedException { assertAcked(prepareCreate("test") //issue manifested only with shards>=2 - .setSettings(SETTING_NUMBER_OF_SHARDS, between(2, DEFAULT_MAX_NUM_SHARDS))); + .setSettings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, between(2, DEFAULT_MAX_NUM_SHARDS)))); indexRandom(true, client().prepareIndex("test", "test", "1").setSource("description", "foo other anything bar"), @@ -1407,7 +1408,7 @@ public void testMustNot() throws IOException, ExecutionException, InterruptedExc searchResponse = client().prepareSearch("test").setQuery( boolQuery() - .mustNot(matchQuery("description", "anything").type(Type.BOOLEAN)) + .mustNot(matchQuery("description", "anything")) ).setSearchType(SearchType.DFS_QUERY_THEN_FETCH).get(); assertHitCount(searchResponse, 2L); } @@ -1572,9 +1573,9 @@ public void testMatchQueryWithSynonyms() throws IOException { .put("index.analysis.analyzer.index.filter", "lowercase") .put("index.analysis.analyzer.search.type", "custom") .put("index.analysis.analyzer.search.tokenizer", "standard") - .putArray("index.analysis.analyzer.search.filter", "lowercase", "synonym") + .putList("index.analysis.analyzer.search.filter", "lowercase", "synonym") .put("index.analysis.filter.synonym.type", "synonym") - .putArray("index.analysis.filter.synonym.synonyms", "fast, quick")); + .putList("index.analysis.filter.synonym.synonyms", "fast, quick")); assertAcked(builder.addMapping("test", "text", "type=text,analyzer=index,search_analyzer=search")); client().prepareIndex("test", "test", "1").setSource("text", "quick brown fox").get(); @@ -1602,9 +1603,9 @@ public void testQueryStringWithSynonyms() throws IOException { .put("index.analysis.analyzer.index.filter", "lowercase") .put("index.analysis.analyzer.search.type", "custom") .put("index.analysis.analyzer.search.tokenizer", "standard") - .putArray("index.analysis.analyzer.search.filter", "lowercase", "synonym") + .putList("index.analysis.analyzer.search.filter", "lowercase", "synonym") .put("index.analysis.filter.synonym.type", "synonym") - .putArray("index.analysis.filter.synonym.synonyms", "fast, quick")); + .putList("index.analysis.filter.synonym.synonyms", "fast, quick")); assertAcked(builder.addMapping("test", "text", "type=text,analyzer=index,search_analyzer=search")); client().prepareIndex("test", "test", "1").setSource("text", "quick brown fox").get(); @@ -1788,12 +1789,6 @@ public void testRangeQueryWithTimeZone() throws Exception { .get(); assertHitCount(searchResponse, 1L); assertThat(searchResponse.getHits().getAt(0).getId(), is("4")); - - // A Range Filter on a numeric field with a TimeZone should raise an exception - e = expectThrows(SearchPhaseExecutionException.class, () -> - client().prepareSearch("test") - .setQuery(QueryBuilders.rangeQuery("num").from("0").to("4").timeZone("-01:00")) - .get()); } public void testSearchEmptyDoc() { @@ -1808,12 +1803,13 @@ public void testSearchEmptyDoc() { public void testNGramCopyField() { CreateIndexRequestBuilder builder = prepareCreate("test").setSettings(Settings.builder() .put(indexSettings()) + .put(IndexSettings.MAX_NGRAM_DIFF_SETTING.getKey(), 9) .put("index.analysis.analyzer.my_ngram_analyzer.type", "custom") 
.put("index.analysis.analyzer.my_ngram_analyzer.tokenizer", "my_ngram_tokenizer") .put("index.analysis.tokenizer.my_ngram_tokenizer.type", "nGram") .put("index.analysis.tokenizer.my_ngram_tokenizer.min_gram", "1") .put("index.analysis.tokenizer.my_ngram_tokenizer.max_gram", "10") - .putArray("index.analysis.tokenizer.my_ngram_tokenizer.token_chars", new String[0])); + .putList("index.analysis.tokenizer.my_ngram_tokenizer.token_chars", new String[0])); assertAcked(builder.addMapping("test", "origin", "type=text,copy_to=meta", "meta", "type=text,analyzer=my_ngram_analyzer")); // we only have ngrams as the index analyzer so searches will get standard analyzer @@ -1859,13 +1855,14 @@ public void testMatchPhrasePrefixQuery() throws ExecutionException, InterruptedE client().prepareIndex("test1", "type1", "2").setSource("field", "trying out Elasticsearch")); - SearchResponse searchResponse = client().prepareSearch().setQuery(matchQuery("field", "Johnnie la").slop(between(2,5)).type(Type.PHRASE_PREFIX)).get(); + SearchResponse searchResponse = client().prepareSearch().setQuery(matchPhrasePrefixQuery("field", "Johnnie la").slop(between(2, 5))) + .get(); assertHitCount(searchResponse, 1L); assertSearchHits(searchResponse, "1"); - searchResponse = client().prepareSearch().setQuery(matchQuery("field", "trying").type(Type.PHRASE_PREFIX)).get(); + searchResponse = client().prepareSearch().setQuery(matchPhrasePrefixQuery("field", "trying")).get(); assertHitCount(searchResponse, 1L); assertSearchHits(searchResponse, "2"); - searchResponse = client().prepareSearch().setQuery(matchQuery("field", "try").type(Type.PHRASE_PREFIX)).get(); + searchResponse = client().prepareSearch().setQuery(matchPhrasePrefixQuery("field", "try")).get(); assertHitCount(searchResponse, 1L); assertSearchHits(searchResponse, "2"); } @@ -1896,19 +1893,4 @@ public void testQueryStringParserCache() throws Exception { } } - public void testRangeQueryRangeFields_24744() throws Exception { - assertAcked(prepareCreate("test") - .addMapping("type1", "int_range", "type=integer_range")); - - client().prepareIndex("test", "type1", "1") - .setSource(jsonBuilder() - .startObject() - .startObject("int_range").field("gte", 10).field("lte", 20).endObject() - .endObject()).get(); - refresh(); - - RangeQueryBuilder range = new RangeQueryBuilder("int_range").relation("intersects").from(Integer.MIN_VALUE).to(Integer.MAX_VALUE); - SearchResponse searchResponse = client().prepareSearch("test").setQuery(range).get(); - assertHitCount(searchResponse, 1); - } } diff --git a/core/src/test/java/org/elasticsearch/search/query/SimpleQueryStringIT.java b/core/src/test/java/org/elasticsearch/search/query/SimpleQueryStringIT.java index 0b8c2ecf12973..bd4bf0624feb1 100644 --- a/core/src/test/java/org/elasticsearch/search/query/SimpleQueryStringIT.java +++ b/core/src/test/java/org/elasticsearch/search/query/SimpleQueryStringIT.java @@ -24,11 +24,15 @@ import org.elasticsearch.action.admin.indices.create.CreateIndexRequestBuilder; import org.elasticsearch.action.index.IndexRequestBuilder; import org.elasticsearch.action.search.SearchResponse; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.query.BoolQueryBuilder; import org.elasticsearch.index.query.Operator; import org.elasticsearch.index.query.QueryBuilders; 
+import org.elasticsearch.index.query.SimpleQueryStringBuilder; import org.elasticsearch.index.query.SimpleQueryStringFlag; import org.elasticsearch.plugins.Plugin; import org.elasticsearch.search.SearchHit; @@ -494,8 +498,6 @@ public void testDocWithAllTypes() throws Exception { assertHits(resp.getHits(), "1"); resp = client().prepareSearch("test").setQuery(simpleQueryStringQuery("1.5")).get(); assertHits(resp.getHits(), "1"); - resp = client().prepareSearch("test").setQuery(simpleQueryStringQuery("12.23")).get(); - assertHits(resp.getHits(), "1"); resp = client().prepareSearch("test").setQuery(simpleQueryStringQuery("127.0.0.1")).get(); assertHits(resp.getHits(), "1"); // binary doesn't match @@ -528,34 +530,6 @@ public void testKeywordWithWhitespace() throws Exception { assertHitCount(resp, 2L); } - public void testExplicitAllFieldsRequested() throws Exception { - String indexBody = copyToStringFromClasspath("/org/elasticsearch/search/query/all-query-index-with-all.json"); - prepareCreate("test") - .setSource(indexBody, XContentType.JSON) - // .setSettings(Settings.builder().put("index.version.created", Version.V_5_0_0.id)).get(); - .get(); - ensureGreen("test"); - - List reqs = new ArrayList<>(); - reqs.add(client().prepareIndex("test", "doc", "1").setSource("f1", "foo", "f2", "eggplant")); - indexRandom(true, false, reqs); - - SearchResponse resp = client().prepareSearch("test").setQuery( - simpleQueryStringQuery("foo eggplant").defaultOperator(Operator.AND)).get(); - assertHitCount(resp, 0L); - - resp = client().prepareSearch("test").setQuery( - simpleQueryStringQuery("foo eggplant").defaultOperator(Operator.AND).useAllFields(true)).get(); - assertHits(resp.getHits(), "1"); - assertHitCount(resp, 1L); - - Exception e = expectThrows(Exception.class, () -> - client().prepareSearch("test").setQuery( - simpleQueryStringQuery("blah").field("f1").useAllFields(true)).get()); - assertThat(ExceptionsHelper.detailedMessage(e), - containsString("cannot use [all_fields] parameter in conjunction with [fields]")); - } - public void testAllFieldsWithSpecifiedLeniency() throws IOException { String indexBody = copyToStringFromClasspath("/org/elasticsearch/search/query/all-query-index.json"); prepareCreate("test").setSource(indexBody, XContentType.JSON).get(); @@ -568,6 +542,38 @@ public void testAllFieldsWithSpecifiedLeniency() throws IOException { containsString("NumberFormatException[For input string: \"foo123\"]")); } + + public void testLimitOnExpandedFields() throws Exception { + XContentBuilder builder = jsonBuilder(); + builder.startObject(); + builder.startObject("type1"); + builder.startObject("properties"); + for (int i = 0; i < 1025; i++) { + builder.startObject("field" + i).field("type", "text").endObject(); + } + builder.endObject(); // properties + builder.endObject(); // type1 + builder.endObject(); + + assertAcked(prepareCreate("toomanyfields") + .setSettings(Settings.builder().put(MapperService.INDEX_MAPPING_TOTAL_FIELDS_LIMIT_SETTING.getKey(), 1200)) + .addMapping("type1", builder)); + + client().prepareIndex("toomanyfields", "type1", "1").setSource("field171", "foo bar baz").get(); + refresh(); + + Exception e = expectThrows(Exception.class, () -> { + SimpleQueryStringBuilder qb = simpleQueryStringQuery("bar"); + if (randomBoolean()) { + qb.useAllFields(true); + } + logger.info("--> using {}", qb); + client().prepareSearch("toomanyfields").setQuery(qb).get(); + }); + assertThat(ExceptionsHelper.detailedMessage(e), + containsString("field expansion matches too many fields, limit: 
1024, got: 1025")); + } + private void assertHits(SearchHits hits, String... ids) { assertThat(hits.getTotalHits(), equalTo((long) ids.length)); Set hitIds = new HashSet<>(); diff --git a/core/src/test/java/org/elasticsearch/search/rescore/QueryRescoreBuilderTests.java b/core/src/test/java/org/elasticsearch/search/rescore/QueryRescorerBuilderTests.java similarity index 82% rename from core/src/test/java/org/elasticsearch/search/rescore/QueryRescoreBuilderTests.java rename to core/src/test/java/org/elasticsearch/search/rescore/QueryRescorerBuilderTests.java index e2764b0014c41..e1b104ca163cc 100644 --- a/core/src/test/java/org/elasticsearch/search/rescore/QueryRescoreBuilderTests.java +++ b/core/src/test/java/org/elasticsearch/search/rescore/QueryRescorerBuilderTests.java @@ -53,8 +53,9 @@ import static java.util.Collections.emptyList; import static org.elasticsearch.test.EqualsHashCodeTestUtils.checkEqualsAndHashCode; +import static org.hamcrest.Matchers.containsString; -public class QueryRescoreBuilderTests extends ESTestCase { +public class QueryRescorerBuilderTests extends ESTestCase { private static final int NUMBER_OF_TESTBUILDERS = 20; private static NamedWriteableRegistry namedWriteableRegistry; @@ -81,8 +82,8 @@ public static void afterClass() throws Exception { */ public void testSerialization() throws IOException { for (int runs = 0; runs < NUMBER_OF_TESTBUILDERS; runs++) { - RescoreBuilder original = randomRescoreBuilder(); - RescoreBuilder deserialized = copy(original); + RescorerBuilder original = randomRescoreBuilder(); + RescorerBuilder deserialized = copy(original); assertEquals(deserialized, original); assertEquals(deserialized.hashCode(), original.hashCode()); assertNotSame(deserialized, original); @@ -94,13 +95,13 @@ public void testSerialization() throws IOException { */ public void testEqualsAndHashcode() throws IOException { for (int runs = 0; runs < NUMBER_OF_TESTBUILDERS; runs++) { - checkEqualsAndHashCode(randomRescoreBuilder(), this::copy, QueryRescoreBuilderTests::mutate); + checkEqualsAndHashCode(randomRescoreBuilder(), this::copy, QueryRescorerBuilderTests::mutate); } } - private RescoreBuilder copy(RescoreBuilder original) throws IOException { + private RescorerBuilder copy(RescorerBuilder original) throws IOException { return copyWriteable(original, namedWriteableRegistry, - namedWriteableRegistry.getReader(RescoreBuilder.class, original.getWriteableName())); + namedWriteableRegistry.getReader(RescorerBuilder.class, original.getWriteableName())); } /** @@ -108,7 +109,7 @@ private RescoreBuilder copy(RescoreBuilder original) throws IOException { */ public void testFromXContent() throws IOException { for (int runs = 0; runs < NUMBER_OF_TESTBUILDERS; runs++) { - RescoreBuilder rescoreBuilder = randomRescoreBuilder(); + RescorerBuilder rescoreBuilder = randomRescoreBuilder(); XContentBuilder builder = XContentFactory.contentBuilder(randomFrom(XContentType.values())); if (randomBoolean()) { builder.prettyPrint(); @@ -119,7 +120,7 @@ public void testFromXContent() throws IOException { XContentParser parser = createParser(shuffled); parser.nextToken(); - RescoreBuilder secondRescoreBuilder = RescoreBuilder.parseFromXContent(parser); + RescorerBuilder secondRescoreBuilder = RescorerBuilder.parseFromXContent(parser); assertNotSame(rescoreBuilder, secondRescoreBuilder); assertEquals(rescoreBuilder, secondRescoreBuilder); assertEquals(rescoreBuilder.hashCode(), secondRescoreBuilder.hashCode()); @@ -127,7 +128,7 @@ public void testFromXContent() throws IOException { } 
/** - * test that build() outputs a {@link RescoreSearchContext} that has the same properties + * test that build() outputs a {@link RescoreContext} that has the same properties * than the test builder */ public void testBuildRescoreSearchContext() throws ElasticsearchParseException, IOException { @@ -147,10 +148,10 @@ public MappedFieldType fieldMapper(String name) { for (int runs = 0; runs < NUMBER_OF_TESTBUILDERS; runs++) { QueryRescorerBuilder rescoreBuilder = randomRescoreBuilder(); - QueryRescoreContext rescoreContext = rescoreBuilder.build(mockShardContext); - int expectedWindowSize = rescoreBuilder.windowSize() == null ? QueryRescoreContext.DEFAULT_WINDOW_SIZE : + QueryRescoreContext rescoreContext = (QueryRescoreContext) rescoreBuilder.buildContext(mockShardContext); + int expectedWindowSize = rescoreBuilder.windowSize() == null ? RescorerBuilder.DEFAULT_WINDOW_SIZE : rescoreBuilder.windowSize().intValue(); - assertEquals(expectedWindowSize, rescoreContext.window()); + assertEquals(expectedWindowSize, rescoreContext.getWindowSize()); Query expectedQuery = Rewriteable.rewrite(rescoreBuilder.getRescoreQuery(), mockShardContext).toQuery(mockShardContext); assertEquals(expectedQuery, rescoreContext.query()); assertEquals(rescoreBuilder.getQueryWeight(), rescoreContext.queryWeight(), Float.MIN_VALUE); @@ -173,22 +174,18 @@ public void testUnknownFieldsExpection() throws IOException { " \"window_size\" : 20,\n" + " \"bad_rescorer_name\" : { }\n" + "}\n"; - XContentParser parser = createParser(rescoreElement); - try { - RescoreBuilder.parseFromXContent(parser); - fail("expected a parsing exception"); - } catch (ParsingException e) { - assertEquals("rescore doesn't support rescorer with name [bad_rescorer_name]", e.getMessage()); + { + XContentParser parser = createParser(rescoreElement); + Exception e = expectThrows(ParsingException.class, () -> RescorerBuilder.parseFromXContent(parser)); + assertEquals("Unknown RescorerBuilder [bad_rescorer_name]", e.getMessage()); } rescoreElement = "{\n" + " \"bad_fieldName\" : 20\n" + "}\n"; - parser = createParser(rescoreElement); - try { - RescoreBuilder.parseFromXContent(parser); - fail("expected a parsing exception"); - } catch (ParsingException e) { + { + XContentParser parser = createParser(rescoreElement); + Exception e = expectThrows(ParsingException.class, () -> RescorerBuilder.parseFromXContent(parser)); assertEquals("rescore doesn't support [bad_fieldName]", e.getMessage()); } @@ -196,20 +193,16 @@ public void testUnknownFieldsExpection() throws IOException { " \"window_size\" : 20,\n" + " \"query\" : [ ]\n" + "}\n"; - parser = createParser(rescoreElement); - try { - RescoreBuilder.parseFromXContent(parser); - fail("expected a parsing exception"); - } catch (ParsingException e) { + { + XContentParser parser = createParser(rescoreElement); + Exception e = expectThrows(ParsingException.class, () -> RescorerBuilder.parseFromXContent(parser)); assertEquals("unexpected token [START_ARRAY] after [query]", e.getMessage()); } rescoreElement = "{ }"; - parser = createParser(rescoreElement); - try { - RescoreBuilder.parseFromXContent(parser); - fail("expected a parsing exception"); - } catch (ParsingException e) { + { + XContentParser parser = createParser(rescoreElement); + Exception e = expectThrows(ParsingException.class, () -> RescorerBuilder.parseFromXContent(parser)); assertEquals("missing rescore type", e.getMessage()); } @@ -217,11 +210,9 @@ public void testUnknownFieldsExpection() throws IOException { " \"window_size\" : 20,\n" + " 
\"query\" : { \"bad_fieldname\" : 1.0 } \n" + "}\n"; - parser = createParser(rescoreElement); - try { - RescoreBuilder.parseFromXContent(parser); - fail("expected a parsing exception"); - } catch (IllegalArgumentException e) { + { + XContentParser parser = createParser(rescoreElement); + Exception e = expectThrows(IllegalArgumentException.class, () -> RescorerBuilder.parseFromXContent(parser)); assertEquals("[query] unknown field [bad_fieldname], parser not found", e.getMessage()); } @@ -229,11 +220,9 @@ public void testUnknownFieldsExpection() throws IOException { " \"window_size\" : 20,\n" + " \"query\" : { \"rescore_query\" : { \"unknown_queryname\" : { } } } \n" + "}\n"; - parser = createParser(rescoreElement); - try { - RescoreBuilder.parseFromXContent(parser); - fail("expected a parsing exception"); - } catch (ParsingException e) { + { + XContentParser parser = createParser(rescoreElement); + Exception e = expectThrows(ParsingException.class, () -> RescorerBuilder.parseFromXContent(parser)); assertEquals("[query] failed to parse field [rescore_query]", e.getMessage()); } @@ -241,8 +230,8 @@ public void testUnknownFieldsExpection() throws IOException { " \"window_size\" : 20,\n" + " \"query\" : { \"rescore_query\" : { \"match_all\" : { } } } \n" + "}\n"; - parser = createParser(rescoreElement); - RescoreBuilder.parseFromXContent(parser); + XContentParser parser = createParser(rescoreElement); + RescorerBuilder.parseFromXContent(parser); } /** @@ -260,8 +249,8 @@ protected NamedXContentRegistry xContentRegistry() { return xContentRegistry; } - private static RescoreBuilder mutate(RescoreBuilder original) throws IOException { - RescoreBuilder mutation = ESTestCase.copyWriteable(original, namedWriteableRegistry, QueryRescorerBuilder::new); + private static RescorerBuilder mutate(RescorerBuilder original) throws IOException { + RescorerBuilder mutation = ESTestCase.copyWriteable(original, namedWriteableRegistry, QueryRescorerBuilder::new); if (randomBoolean()) { Integer windowSize = original.windowSize(); if (windowSize != null) { diff --git a/core/src/test/java/org/elasticsearch/search/scroll/SearchScrollIT.java b/core/src/test/java/org/elasticsearch/search/scroll/SearchScrollIT.java index c887b20e11f63..b030043faf746 100644 --- a/core/src/test/java/org/elasticsearch/search/scroll/SearchScrollIT.java +++ b/core/src/test/java/org/elasticsearch/search/scroll/SearchScrollIT.java @@ -19,6 +19,7 @@ package org.elasticsearch.search.scroll; +import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.action.search.ClearScrollResponse; import org.elasticsearch.action.search.SearchRequestBuilder; import org.elasticsearch.action.search.SearchResponse; @@ -39,6 +40,7 @@ import org.elasticsearch.search.sort.SortOrder; import org.elasticsearch.test.ESIntegTestCase; import org.elasticsearch.test.hamcrest.ElasticsearchAssertions; +import org.junit.After; import java.io.IOException; import java.util.Map; @@ -54,6 +56,7 @@ import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchHits; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertThrows; +import static org.hamcrest.Matchers.containsString; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.greaterThan; import static org.hamcrest.Matchers.is; @@ -63,6 +66,13 @@ * Tests for scrolling. 
*/ public class SearchScrollIT extends ESIntegTestCase { + @After + public void cleanup() throws Exception { + assertAcked(client().admin().cluster().prepareUpdateSettings() + .setPersistentSettings(Settings.builder().putNull("*")) + .setTransientSettings(Settings.builder().putNull("*"))); + } + public void testSimpleScrollQueryThenFetch() throws Exception { client().admin().indices().prepareCreate("test").setSettings(Settings.builder().put("index.number_of_shards", 3)).execute().actionGet(); client().admin().cluster().prepareHealth().setWaitForEvents(Priority.LANGUID).setWaitForGreenStatus().execute().actionGet(); @@ -518,6 +528,74 @@ public void testCloseAndReopenOrDeleteWithActiveScroll() throws IOException { } } + public void testScrollInvalidDefaultKeepAlive() throws IOException { + IllegalArgumentException exc = expectThrows(IllegalArgumentException.class, () -> + client().admin().cluster().prepareUpdateSettings() + .setPersistentSettings(Settings.builder().put("search.max_keep_alive", "1m").put("search.default_keep_alive", "2m")).get + ()); + assertThat(exc.getMessage(), containsString("was (2 minutes > 1 minute)")); + + assertAcked(client().admin().cluster().prepareUpdateSettings() + .setPersistentSettings(Settings.builder().put("search.default_keep_alive", "5m").put("search.max_keep_alive", "5m")).get()); + + assertAcked(client().admin().cluster().prepareUpdateSettings() + .setPersistentSettings(Settings.builder().put("search.default_keep_alive", "2m")).get()); + + assertAcked(client().admin().cluster().prepareUpdateSettings() + .setPersistentSettings(Settings.builder().put("search.max_keep_alive", "2m")).get()); + + + exc = expectThrows(IllegalArgumentException.class, () -> client().admin().cluster().prepareUpdateSettings() + .setPersistentSettings(Settings.builder().put("search.default_keep_alive", "3m")).get()); + assertThat(exc.getMessage(), containsString("was (3 minutes > 2 minutes)")); + + assertAcked(client().admin().cluster().prepareUpdateSettings() + .setPersistentSettings(Settings.builder().put("search.default_keep_alive", "1m")).get()); + + exc = expectThrows(IllegalArgumentException.class, () -> client().admin().cluster().prepareUpdateSettings() + .setPersistentSettings(Settings.builder().put("search.max_keep_alive", "30s")).get()); + assertThat(exc.getMessage(), containsString("was (1 minute > 30 seconds)")); + } + + public void testInvalidScrollKeepAlive() throws IOException { + createIndex("test"); + for (int i = 0; i < 2; i++) { + client().prepareIndex("test", "type1", + Integer.toString(i)).setSource(jsonBuilder().startObject().field("field", i).endObject()).execute().actionGet(); + } + refresh(); + assertAcked(client().admin().cluster().prepareUpdateSettings() + .setPersistentSettings(Settings.builder().put("search.default_keep_alive", "5m").put("search.max_keep_alive", "5m")).get()); + + Exception exc = expectThrows(Exception.class, + () -> client().prepareSearch() + .setQuery(matchAllQuery()) + .setSize(1) + .setScroll(TimeValue.timeValueHours(2)) + .execute().actionGet()); + IllegalArgumentException illegalArgumentException = + (IllegalArgumentException) ExceptionsHelper.unwrap(exc, IllegalArgumentException.class); + assertNotNull(illegalArgumentException); + assertThat(illegalArgumentException.getMessage(), containsString("Keep alive for scroll (2 hours) is too large")); + + SearchResponse searchResponse = client().prepareSearch() + .setQuery(matchAllQuery()) + .setSize(1) + .setScroll(TimeValue.timeValueMinutes(5)) + .execute().actionGet(); + 
assertNotNull(searchResponse.getScrollId()); + assertThat(searchResponse.getHits().getTotalHits(), equalTo(2L)); + assertThat(searchResponse.getHits().getHits().length, equalTo(1)); + + exc = expectThrows(Exception.class, + () -> client().prepareSearchScroll(searchResponse.getScrollId()) + .setScroll(TimeValue.timeValueHours(3)).get()); + illegalArgumentException = + (IllegalArgumentException) ExceptionsHelper.unwrap(exc, IllegalArgumentException.class); + assertNotNull(illegalArgumentException); + assertThat(illegalArgumentException.getMessage(), containsString("Keep alive for scroll (3 hours) is too large")); + } + private void assertToXContentResponse(ClearScrollResponse response, boolean succeed, int numFreed) throws IOException { XContentBuilder builder = XContentFactory.jsonBuilder(); response.toXContent(builder, ToXContent.EMPTY_PARAMS); diff --git a/core/src/test/java/org/elasticsearch/search/searchafter/SearchAfterBuilderTests.java b/core/src/test/java/org/elasticsearch/search/searchafter/SearchAfterBuilderTests.java index 2179444aad763..edcfdc2155507 100644 --- a/core/src/test/java/org/elasticsearch/search/searchafter/SearchAfterBuilderTests.java +++ b/core/src/test/java/org/elasticsearch/search/searchafter/SearchAfterBuilderTests.java @@ -19,6 +19,11 @@ package org.elasticsearch.search.searchafter; +import org.apache.lucene.document.LatLonDocValuesField; +import org.apache.lucene.search.FieldComparator; +import org.apache.lucene.search.SortField; +import org.apache.lucene.search.SortedNumericSortField; +import org.apache.lucene.search.SortedSetSortField; import org.elasticsearch.common.geo.GeoPoint; import org.elasticsearch.common.io.stream.NamedWriteableRegistry; import org.elasticsearch.common.text.Text; @@ -27,13 +32,16 @@ import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.common.xcontent.json.JsonXContent; +import org.elasticsearch.index.fielddata.IndexFieldData; +import org.elasticsearch.search.MultiValueMode; import org.elasticsearch.test.ESTestCase; -import org.hamcrest.Matchers; import java.io.IOException; import java.util.Collections; +import static org.elasticsearch.search.searchafter.SearchAfterBuilder.extractSortType; import static org.elasticsearch.test.EqualsHashCodeTestUtils.checkEqualsAndHashCode; +import static org.hamcrest.Matchers.equalTo; public class SearchAfterBuilderTests extends ESTestCase { private static final int NUMBER_OF_TESTBUILDERS = 20; @@ -182,7 +190,7 @@ public void testWithNullArray() throws Exception { builder.setSortValues(null); fail("Should fail on null array."); } catch (NullPointerException e) { - assertThat(e.getMessage(), Matchers.equalTo("Values cannot be null.")); + assertThat(e.getMessage(), equalTo("Values cannot be null.")); } } @@ -192,7 +200,7 @@ public void testWithEmptyArray() throws Exception { builder.setSortValues(new Object[0]); fail("Should fail on empty array."); } catch (IllegalArgumentException e) { - assertThat(e.getMessage(), Matchers.equalTo("Values must contains at least one value.")); + assertThat(e.getMessage(), equalTo("Values must contains at least one value.")); } } @@ -215,4 +223,29 @@ private static void randomSearchFromBuilderWithSortValueThrows(Object containing Exception e = expectThrows(IllegalArgumentException.class, () -> builder.setSortValues(values)); assertEquals(e.getMessage(), "Can't handle search_after field value of type [" + containing.getClass() + "]"); } + + public void testExtractSortType() throws Exception 
{ + SortField.Type type = extractSortType(LatLonDocValuesField.newDistanceSort("field", 0.0, 180.0)); + assertThat(type, equalTo(SortField.Type.DOUBLE)); + IndexFieldData.XFieldComparatorSource source = new IndexFieldData.XFieldComparatorSource(null, MultiValueMode.MIN, null) { + @Override + public SortField.Type reducedType() { + return SortField.Type.STRING; + } + + @Override + public FieldComparator newComparator(String fieldname, int numHits, int sortPos, boolean reversed) { + return null; + } + }; + + type = extractSortType(new SortField("field", source)); + assertThat(type, equalTo(SortField.Type.STRING)); + + type = extractSortType(new SortedNumericSortField("field", SortField.Type.DOUBLE)); + assertThat(type, equalTo(SortField.Type.DOUBLE)); + + type = extractSortType(new SortedSetSortField("field", false)); + assertThat(type, equalTo(SortField.Type.STRING)); + } } diff --git a/core/src/test/java/org/elasticsearch/search/simple/SimpleSearchIT.java b/core/src/test/java/org/elasticsearch/search/simple/SimpleSearchIT.java index c4bb4a811a51b..9eacb0e81bd29 100644 --- a/core/src/test/java/org/elasticsearch/search/simple/SimpleSearchIT.java +++ b/core/src/test/java/org/elasticsearch/search/simple/SimpleSearchIT.java @@ -79,7 +79,7 @@ public void testSearchRandomPreference() throws InterruptedException, ExecutionE int iters = scaledRandomIntBetween(10, 20); for (int i = 0; i < iters; i++) { String randomPreference = randomUnicodeOfLengthBetween(0, 4); - // randomPreference should not start with '_' (reserved for known preference types (e.g. _shards, _primary) + // randomPreference should not start with '_' (reserved for known preference types (e.g. _shards) while (randomPreference.startsWith("_")) { randomPreference = randomUnicodeOfLengthBetween(0, 4); } @@ -258,9 +258,7 @@ public void testLocaleDependentDate() throws Exception { } public void testSimpleTerminateAfterCount() throws Exception { - prepareCreate("test").setSettings( - SETTING_NUMBER_OF_SHARDS, 1, - SETTING_NUMBER_OF_REPLICAS, 0).get(); + prepareCreate("test").setSettings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 1).put(SETTING_NUMBER_OF_REPLICAS, 0)).get(); ensureGreen(); int max = randomIntBetween(3, 29); List docbuilders = new ArrayList<>(max); @@ -315,7 +313,6 @@ public void testSimpleIndexSortEarlyTerminate() throws Exception { refresh(); SearchResponse searchResponse; - boolean hasEarlyTerminated = false; for (int i = 1; i < max; i++) { searchResponse = client().prepareSearch("test") .addDocValueField("rank") @@ -323,16 +320,11 @@ public void testSimpleIndexSortEarlyTerminate() throws Exception { .addSort("rank", SortOrder.ASC) .setSize(i).execute().actionGet(); assertThat(searchResponse.getHits().getTotalHits(), equalTo(-1L)); - if (searchResponse.isTerminatedEarly() != null) { - assertTrue(searchResponse.isTerminatedEarly()); - hasEarlyTerminated = true; - } for (int j = 0; j < i; j++) { assertThat(searchResponse.getHits().getAt(j).field("rank").getValue(), equalTo((long) j)); } } - assertTrue(hasEarlyTerminated); } public void testInsaneFromAndSize() throws Exception { @@ -364,8 +356,8 @@ public void testLargeFromAndSizeSucceeds() throws Exception { } public void testTooLargeFromAndSizeOkBySetting() throws Exception { - prepareCreate("idx").setSettings(IndexSettings.MAX_RESULT_WINDOW_SETTING.getKey(), - IndexSettings.MAX_RESULT_WINDOW_SETTING.get(Settings.EMPTY) * 2).get(); + prepareCreate("idx").setSettings(Settings.builder().put(IndexSettings.MAX_RESULT_WINDOW_SETTING.getKey(), + 
IndexSettings.MAX_RESULT_WINDOW_SETTING.get(Settings.EMPTY) * 2)).get(); indexRandom(true, client().prepareIndex("idx", "type").setSource("{}", XContentType.JSON)); assertHitCount(client().prepareSearch("idx").setFrom(IndexSettings.MAX_RESULT_WINDOW_SETTING.get(Settings.EMPTY)).get(), 1); @@ -390,7 +382,7 @@ public void testTooLargeFromAndSizeOkByDynamicSetting() throws Exception { } public void testTooLargeFromAndSizeBackwardsCompatibilityRecommendation() throws Exception { - prepareCreate("idx").setSettings(IndexSettings.MAX_RESULT_WINDOW_SETTING.getKey(), Integer.MAX_VALUE).get(); + prepareCreate("idx").setSettings(Settings.builder().put(IndexSettings.MAX_RESULT_WINDOW_SETTING.getKey(), Integer.MAX_VALUE)).get(); indexRandom(true, client().prepareIndex("idx", "type").setSource("{}", XContentType.JSON)); assertHitCount(client().prepareSearch("idx").setFrom(IndexSettings.MAX_RESULT_WINDOW_SETTING.get(Settings.EMPTY) * 10).get(), 1); @@ -409,8 +401,8 @@ public void testTooLargeRescoreWindow() throws Exception { public void testTooLargeRescoreOkBySetting() throws Exception { int defaultMaxWindow = IndexSettings.MAX_RESCORE_WINDOW_SETTING.get(Settings.EMPTY); - prepareCreate("idx").setSettings(IndexSettings.MAX_RESCORE_WINDOW_SETTING.getKey(), - defaultMaxWindow * 2).get(); + prepareCreate("idx").setSettings(Settings.builder().put(IndexSettings.MAX_RESCORE_WINDOW_SETTING.getKey(), defaultMaxWindow * 2)) + .get(); indexRandom(true, client().prepareIndex("idx", "type").setSource("{}", XContentType.JSON)); assertHitCount( @@ -420,8 +412,9 @@ public void testTooLargeRescoreOkBySetting() throws Exception { public void testTooLargeRescoreOkByResultWindowSetting() throws Exception { int defaultMaxWindow = IndexSettings.MAX_RESCORE_WINDOW_SETTING.get(Settings.EMPTY); - prepareCreate("idx").setSettings(IndexSettings.MAX_RESULT_WINDOW_SETTING.getKey(), // Note that this is the RESULT window. - defaultMaxWindow * 2).get(); + prepareCreate("idx").setSettings( + Settings.builder().put(IndexSettings.MAX_RESULT_WINDOW_SETTING.getKey(), // Note that this is the RESULT window. 
+ defaultMaxWindow * 2)).get(); indexRandom(true, client().prepareIndex("idx", "type").setSource("{}", XContentType.JSON)); assertHitCount( diff --git a/core/src/test/java/org/elasticsearch/search/slice/SearchSliceIT.java b/core/src/test/java/org/elasticsearch/search/slice/SearchSliceIT.java index 2f89c5b2306b5..a5962dca5951b 100644 --- a/core/src/test/java/org/elasticsearch/search/slice/SearchSliceIT.java +++ b/core/src/test/java/org/elasticsearch/search/slice/SearchSliceIT.java @@ -23,6 +23,7 @@ import org.elasticsearch.action.search.SearchPhaseExecutionException; import org.elasticsearch.action.search.SearchRequestBuilder; import org.elasticsearch.action.search.SearchResponse; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; @@ -70,8 +71,7 @@ private int setupIndex(boolean withDocs) throws IOException, ExecutionException, .endObject().string(); int numberOfShards = randomIntBetween(1, 7); assertAcked(client().admin().indices().prepareCreate("test") - .setSettings("number_of_shards", numberOfShards, - "index.max_slices_per_scroll", 10000) + .setSettings(Settings.builder().put("number_of_shards", numberOfShards).put("index.max_slices_per_scroll", 10000)) .addMapping("type", mapping, XContentType.JSON)); ensureGreen(); diff --git a/core/src/test/java/org/elasticsearch/search/slice/SliceBuilderTests.java b/core/src/test/java/org/elasticsearch/search/slice/SliceBuilderTests.java index 7ea3dd2094b64..f6f147fc334de 100644 --- a/core/src/test/java/org/elasticsearch/search/slice/SliceBuilderTests.java +++ b/core/src/test/java/org/elasticsearch/search/slice/SliceBuilderTests.java @@ -153,6 +153,10 @@ public String typeName() { public Query termQuery(Object value, @Nullable QueryShardContext context) { return null; } + + public Query existsQuery(QueryShardContext context) { + return null; + } }; fieldType.setName(UidFieldMapper.NAME); fieldType.setHasDocValues(false); @@ -193,6 +197,10 @@ public String typeName() { public Query termQuery(Object value, @Nullable QueryShardContext context) { return null; } + + public Query existsQuery(QueryShardContext context) { + return null; + } }; fieldType.setName("field_doc_values"); fieldType.setHasDocValues(true); @@ -289,6 +297,10 @@ public String typeName() { public Query termQuery(Object value, @Nullable QueryShardContext context) { return null; } + + public Query existsQuery(QueryShardContext context) { + return null; + } }; fieldType.setName("field_without_doc_values"); when(context.fieldMapper("field_without_doc_values")).thenReturn(fieldType); diff --git a/core/src/test/java/org/elasticsearch/search/sort/AbstractSortTestCase.java b/core/src/test/java/org/elasticsearch/search/sort/AbstractSortTestCase.java index 9e8fee7c7e7a0..d05ddf4ee640e 100644 --- a/core/src/test/java/org/elasticsearch/search/sort/AbstractSortTestCase.java +++ b/core/src/test/java/org/elasticsearch/search/sort/AbstractSortTestCase.java @@ -20,7 +20,6 @@ package org.elasticsearch.search.sort; import org.apache.lucene.search.SortField; -import org.apache.lucene.util.Accountable; import org.elasticsearch.Version; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.io.stream.NamedWriteableRegistry; @@ -35,7 +34,8 @@ import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.cache.bitset.BitsetFilterCache; -import 
org.elasticsearch.index.fielddata.IndexFieldDataService; +import org.elasticsearch.index.fielddata.IndexFieldData; +import org.elasticsearch.index.fielddata.IndexFieldDataCache; import org.elasticsearch.index.mapper.ContentPath; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.mapper.Mapper.BuilderContext; @@ -47,8 +47,6 @@ import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.index.query.TermQueryBuilder; -import org.elasticsearch.index.shard.ShardId; -import org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache; import org.elasticsearch.script.MockScriptEngine; import org.elasticsearch.script.ScriptEngine; import org.elasticsearch.script.ScriptModule; @@ -59,29 +57,33 @@ import org.elasticsearch.test.IndexSettingsModule; import org.junit.AfterClass; import org.junit.BeforeClass; +import org.mockito.Mockito; import java.io.IOException; import java.util.Collections; import java.util.Map; +import java.util.function.BiFunction; import java.util.function.Function; import static java.util.Collections.emptyList; import static org.elasticsearch.test.EqualsHashCodeTestUtils.checkEqualsAndHashCode; public abstract class AbstractSortTestCase> extends ESTestCase { + private static final int NUMBER_OF_TESTBUILDERS = 20; protected static NamedWriteableRegistry namedWriteableRegistry; private static NamedXContentRegistry xContentRegistry; private static ScriptService scriptService; + protected static String MOCK_SCRIPT_NAME = "dummy"; @BeforeClass - public static void init() throws IOException { + public static void init() { Settings baseSettings = Settings.builder() .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) .build(); - Map, Object>> scripts = Collections.singletonMap("dummy", p -> null); + Map, Object>> scripts = Collections.singletonMap(MOCK_SCRIPT_NAME, p -> null); ScriptEngine engine = new MockScriptEngine(MockScriptEngine.NAME, scripts); scriptService = new ScriptService(baseSettings, Collections.singletonMap(engine.getType(), engine), ScriptModule.CORE_CONTEXTS); @@ -94,6 +96,7 @@ public static void init() throws IOException { public static void afterClass() throws Exception { namedWriteableRegistry = null; xContentRegistry = null; + scriptService = null; } /** Returns random sort that is put under test */ @@ -132,9 +135,14 @@ public void testFromXContent() throws IOException { assertNotSame(testItem, parsedItem); assertEquals(testItem, parsedItem); assertEquals(testItem.hashCode(), parsedItem.hashCode()); + assertWarnings(testItem); } } + protected void assertWarnings(T testItem) { + // assert potential warnings based on the test sort configuration. 
Do nothing by default, subtests can overwrite + } + /** * test that build() outputs a {@link SortField} that is similar to the one * we would get when parsing the xContent the sort builder is rendering out @@ -166,7 +174,7 @@ public void testSerialization() throws IOException { /** * Test equality and hashCode properties */ - public void testEqualsAndHashcode() throws IOException { + public void testEqualsAndHashcode() { for (int runs = 0; runs < NUMBER_OF_TESTBUILDERS; runs++) { checkEqualsAndHashCode(createTestItem(), this::copy, this::mutate); } @@ -176,22 +184,14 @@ protected QueryShardContext createMockShardContext() { Index index = new Index(randomAlphaOfLengthBetween(1, 10), "_na_"); IndexSettings idxSettings = IndexSettingsModule.newIndexSettings(index, Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build()); - IndicesFieldDataCache cache = new IndicesFieldDataCache(Settings.EMPTY, null); - IndexFieldDataService ifds = new IndexFieldDataService(IndexSettingsModule.newIndexSettings("test", Settings.EMPTY), - cache, null, null); - BitsetFilterCache bitsetFilterCache = new BitsetFilterCache(idxSettings, new BitsetFilterCache.Listener() { - - @Override - public void onRemoval(ShardId shardId, Accountable accountable) { - } + BitsetFilterCache bitsetFilterCache = new BitsetFilterCache(idxSettings, Mockito.mock(BitsetFilterCache.Listener.class)); + BiFunction> indexFieldDataLookup = (fieldType, fieldIndexName) -> { + IndexFieldData.Builder builder = fieldType.fielddataBuilder(fieldIndexName); + return builder.build(idxSettings, fieldType, new IndexFieldDataCache.None(), null, null); + }; + return new QueryShardContext(0, idxSettings, bitsetFilterCache, indexFieldDataLookup, null, null, scriptService, + xContentRegistry(), namedWriteableRegistry, null, null, () -> randomNonNegativeLong(), null) { - @Override - public void onCache(ShardId shardId, Accountable accountable) { - } - }); - long nowInMillis = randomNonNegativeLong(); - return new QueryShardContext(0, idxSettings, bitsetFilterCache, ifds::getForField, null, null, scriptService, - xContentRegistry(), namedWriteableRegistry, null, null, () -> nowInMillis, null) { @Override public MappedFieldType fieldMapper(String name) { return provideMappedFieldType(name); @@ -207,7 +207,7 @@ public ObjectMapper getObjectMapper(String name) { /** * Return a field type. We use {@link NumberFieldMapper.NumberFieldType} by default since it is compatible with all sort modes - * Tests that require other field type than double can override this. + * Tests that require other field types can override this. 
*/ protected MappedFieldType provideMappedFieldType(String name) { NumberFieldMapper.NumberFieldType doubleFieldType = new NumberFieldMapper.NumberFieldType(NumberFieldMapper.NumberType.DOUBLE); diff --git a/core/src/test/java/org/elasticsearch/search/sort/FieldSortBuilderTests.java b/core/src/test/java/org/elasticsearch/search/sort/FieldSortBuilderTests.java index eda187d916f6b..163b9391a1b98 100644 --- a/core/src/test/java/org/elasticsearch/search/sort/FieldSortBuilderTests.java +++ b/core/src/test/java/org/elasticsearch/search/sort/FieldSortBuilderTests.java @@ -19,17 +19,45 @@ package org.elasticsearch.search.sort; +import org.apache.lucene.index.Term; import org.apache.lucene.search.SortField; +import org.apache.lucene.search.SortedNumericSelector; +import org.apache.lucene.search.SortedNumericSortField; +import org.apache.lucene.search.SortedSetSelector; +import org.apache.lucene.search.SortedSetSortField; +import org.apache.lucene.search.TermQuery; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.json.JsonXContent; +import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource; +import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested; +import org.elasticsearch.index.mapper.KeywordFieldMapper; +import org.elasticsearch.index.mapper.MappedFieldType; +import org.elasticsearch.index.mapper.TypeFieldMapper; +import org.elasticsearch.index.query.MatchNoneQueryBuilder; +import org.elasticsearch.index.query.QueryBuilder; +import org.elasticsearch.index.query.QueryBuilders; +import org.elasticsearch.index.query.QueryRewriteContext; +import org.elasticsearch.index.query.QueryShardContext; +import org.elasticsearch.index.query.QueryShardException; +import org.elasticsearch.index.query.RangeQueryBuilder; import org.elasticsearch.search.DocValueFormat; +import org.elasticsearch.search.MultiValueMode; import java.io.IOException; +import java.util.ArrayList; import java.util.Arrays; import java.util.List; +import static org.elasticsearch.search.sort.NestedSortBuilderTests.createRandomNestedSort; +import static org.hamcrest.Matchers.instanceOf; + public class FieldSortBuilderTests extends AbstractSortTestCase { + /** + * {@link #provideMappedFieldType(String)} will return a + */ + private static String MAPPED_STRING_FIELDNAME = "_stringField"; + @Override protected FieldSortBuilder createTestItem() { return randomFieldSortBuilder(); @@ -60,45 +88,51 @@ public FieldSortBuilder randomFieldSortBuilder() { if (randomBoolean()) { builder.sortMode(randomFrom(SortMode.values())); } - - if (randomBoolean()) { - builder.setNestedFilter(randomNestedFilter()); - } - if (randomBoolean()) { - builder.setNestedPath(randomAlphaOfLengthBetween(1, 10)); + if (randomBoolean()) { + builder.setNestedSort(createRandomNestedSort(3)); + } else { + // the following are alternative ways to setNestedSort for nested sorting + if (randomBoolean()) { + builder.setNestedFilter(randomNestedFilter()); + } + if (randomBoolean()) { + builder.setNestedPath(randomAlphaOfLengthBetween(1, 10)); + } + } } - return builder; } @Override protected FieldSortBuilder mutate(FieldSortBuilder original) throws IOException { FieldSortBuilder mutated = new FieldSortBuilder(original); - int parameter = randomIntBetween(0, 5); + int parameter = randomIntBetween(0, 4); switch (parameter) { case 0: - mutated.setNestedPath(randomValueOtherThan( - original.getNestedPath(), - () -> randomAlphaOfLengthBetween(1, 10))); + if (original.getNestedPath() == null && 
original.getNestedFilter() == null) { + mutated.setNestedSort( + randomValueOtherThan(original.getNestedSort(), () -> NestedSortBuilderTests.createRandomNestedSort(3))); + } else { + if (randomBoolean()) { + mutated.setNestedPath(randomValueOtherThan(original.getNestedPath(), () -> randomAlphaOfLengthBetween(1, 10))); + } else { + mutated.setNestedFilter(randomValueOtherThan(original.getNestedFilter(), () -> randomNestedFilter())); + } + } break; case 1: - mutated.setNestedFilter(randomValueOtherThan( - original.getNestedFilter(), - () -> randomNestedFilter())); - break; - case 2: mutated.sortMode(randomValueOtherThan(original.sortMode(), () -> randomFrom(SortMode.values()))); break; - case 3: + case 2: mutated.unmappedType(randomValueOtherThan( original.unmappedType(), () -> randomAlphaOfLengthBetween(1, 10))); break; - case 4: + case 3: mutated.missing(randomValueOtherThan(original.missing(), () -> randomFrom(missingContent))); break; - case 5: + case 4: mutated.order(randomValueOtherThan(original.order(), () -> randomFrom(SortOrder.values()))); break; default: @@ -123,7 +157,150 @@ protected void sortFieldAssertions(FieldSortBuilder builder, SortField sortField assertEquals(DocValueFormat.RAW, format); } - public void testReverseOptionFails() throws IOException { + /** + * Test that missing values get transfered correctly to the SortField + */ + public void testBuildSortFieldMissingValue() throws IOException { + QueryShardContext shardContextMock = createMockShardContext(); + FieldSortBuilder fieldSortBuilder = new FieldSortBuilder("value").missing("_first"); + SortField sortField = fieldSortBuilder.build(shardContextMock).field; + SortedNumericSortField expectedSortField = new SortedNumericSortField("value", SortField.Type.DOUBLE); + expectedSortField.setMissingValue(Double.NEGATIVE_INFINITY); + assertEquals(expectedSortField, sortField); + + fieldSortBuilder = new FieldSortBuilder("value").missing("_last"); + sortField = fieldSortBuilder.build(shardContextMock).field; + expectedSortField = new SortedNumericSortField("value", SortField.Type.DOUBLE); + expectedSortField.setMissingValue(Double.POSITIVE_INFINITY); + assertEquals(expectedSortField, sortField); + + Double randomDouble = randomDouble(); + fieldSortBuilder = new FieldSortBuilder("value").missing(randomDouble); + sortField = fieldSortBuilder.build(shardContextMock).field; + expectedSortField = new SortedNumericSortField("value", SortField.Type.DOUBLE); + expectedSortField.setMissingValue(randomDouble); + assertEquals(expectedSortField, sortField); + + fieldSortBuilder = new FieldSortBuilder("value").missing(randomDouble.toString()); + sortField = fieldSortBuilder.build(shardContextMock).field; + expectedSortField = new SortedNumericSortField("value", SortField.Type.DOUBLE); + expectedSortField.setMissingValue(randomDouble); + assertEquals(expectedSortField, sortField); + } + + /** + * Test that the sort builder order gets transfered correctly to the SortField + */ + public void testBuildSortFieldOrder() throws IOException { + QueryShardContext shardContextMock = createMockShardContext(); + FieldSortBuilder fieldSortBuilder = new FieldSortBuilder("value"); + SortField sortField = fieldSortBuilder.build(shardContextMock).field; + SortedNumericSortField expectedSortField = new SortedNumericSortField("value", SortField.Type.DOUBLE, false); + expectedSortField.setMissingValue(Double.POSITIVE_INFINITY); + assertEquals(expectedSortField, sortField); + + fieldSortBuilder = new FieldSortBuilder("value").order(SortOrder.ASC); + 
sortField = fieldSortBuilder.build(shardContextMock).field; + expectedSortField = new SortedNumericSortField("value", SortField.Type.DOUBLE, false); + expectedSortField.setMissingValue(Double.POSITIVE_INFINITY); + assertEquals(expectedSortField, sortField); + + fieldSortBuilder = new FieldSortBuilder("value").order(SortOrder.DESC); + sortField = fieldSortBuilder.build(shardContextMock).field; + expectedSortField = new SortedNumericSortField("value", SortField.Type.DOUBLE, true, SortedNumericSelector.Type.MAX); + expectedSortField.setMissingValue(Double.NEGATIVE_INFINITY); + assertEquals(expectedSortField, sortField); + } + + /** + * Test that the sort builder mode gets transfered correctly to the SortField + */ + public void testMultiValueMode() throws IOException { + QueryShardContext shardContextMock = createMockShardContext(); + + FieldSortBuilder sortBuilder = new FieldSortBuilder("value").sortMode(SortMode.MIN); + SortField sortField = sortBuilder.build(shardContextMock).field; + assertThat(sortField, instanceOf(SortedNumericSortField.class)); + SortedNumericSortField numericSortField = (SortedNumericSortField) sortField; + assertEquals(SortedNumericSelector.Type.MIN, numericSortField.getSelector()); + + sortBuilder = new FieldSortBuilder("value").sortMode(SortMode.MAX); + sortField = sortBuilder.build(shardContextMock).field; + assertThat(sortField, instanceOf(SortedNumericSortField.class)); + numericSortField = (SortedNumericSortField) sortField; + assertEquals(SortedNumericSelector.Type.MAX, numericSortField.getSelector()); + + sortBuilder = new FieldSortBuilder("value").sortMode(SortMode.SUM); + sortField = sortBuilder.build(shardContextMock).field; + assertThat(sortField.getComparatorSource(), instanceOf(XFieldComparatorSource.class)); + XFieldComparatorSource comparatorSource = (XFieldComparatorSource) sortField.getComparatorSource(); + assertEquals(MultiValueMode.SUM, comparatorSource.sortMode()); + + sortBuilder = new FieldSortBuilder("value").sortMode(SortMode.AVG); + sortField = sortBuilder.build(shardContextMock).field; + assertThat(sortField.getComparatorSource(), instanceOf(XFieldComparatorSource.class)); + comparatorSource = (XFieldComparatorSource) sortField.getComparatorSource(); + assertEquals(MultiValueMode.AVG, comparatorSource.sortMode()); + + sortBuilder = new FieldSortBuilder("value").sortMode(SortMode.MEDIAN); + sortField = sortBuilder.build(shardContextMock).field; + assertThat(sortField.getComparatorSource(), instanceOf(XFieldComparatorSource.class)); + comparatorSource = (XFieldComparatorSource) sortField.getComparatorSource(); + assertEquals(MultiValueMode.MEDIAN, comparatorSource.sortMode()); + + // sort mode should also be set by build() implicitely to MIN or MAX if not set explicitely on builder + sortBuilder = new FieldSortBuilder("value"); + sortField = sortBuilder.build(shardContextMock).field; + assertThat(sortField, instanceOf(SortedNumericSortField.class)); + numericSortField = (SortedNumericSortField) sortField; + assertEquals(SortedNumericSelector.Type.MIN, numericSortField.getSelector()); + + sortBuilder = new FieldSortBuilder("value").order(SortOrder.DESC); + sortField = sortBuilder.build(shardContextMock).field; + assertThat(sortField, instanceOf(SortedNumericSortField.class)); + numericSortField = (SortedNumericSortField) sortField; + assertEquals(SortedNumericSelector.Type.MAX, numericSortField.getSelector()); + } + + /** + * Test that the sort builder nested object gets created in the SortField + */ + public void testBuildNested() throws 
IOException { + QueryShardContext shardContextMock = createMockShardContext(); + + FieldSortBuilder sortBuilder = new FieldSortBuilder("fieldName") + .setNestedSort(new NestedSortBuilder("path").setFilter(QueryBuilders.termQuery(MAPPED_STRING_FIELDNAME, "value"))); + SortField sortField = sortBuilder.build(shardContextMock).field; + assertThat(sortField.getComparatorSource(), instanceOf(XFieldComparatorSource.class)); + XFieldComparatorSource comparatorSource = (XFieldComparatorSource) sortField.getComparatorSource(); + Nested nested = comparatorSource.nested(); + assertNotNull(nested); + assertEquals(new TermQuery(new Term(MAPPED_STRING_FIELDNAME, "value")), nested.getInnerQuery()); + + sortBuilder = new FieldSortBuilder("fieldName").setNestedPath("path"); + sortField = sortBuilder.build(shardContextMock).field; + assertThat(sortField.getComparatorSource(), instanceOf(XFieldComparatorSource.class)); + comparatorSource = (XFieldComparatorSource) sortField.getComparatorSource(); + nested = comparatorSource.nested(); + assertNotNull(nested); + assertEquals(new TermQuery(new Term(TypeFieldMapper.NAME, "__path")), nested.getInnerQuery()); + + sortBuilder = new FieldSortBuilder("fieldName").setNestedPath("path") + .setNestedFilter(QueryBuilders.termQuery(MAPPED_STRING_FIELDNAME, "value")); + sortField = sortBuilder.build(shardContextMock).field; + assertThat(sortField.getComparatorSource(), instanceOf(XFieldComparatorSource.class)); + comparatorSource = (XFieldComparatorSource) sortField.getComparatorSource(); + nested = comparatorSource.nested(); + assertNotNull(nested); + assertEquals(new TermQuery(new Term(MAPPED_STRING_FIELDNAME, "value")), nested.getInnerQuery()); + + // if nested path is missing, we omit any filter and return a SortedNumericSortField + sortBuilder = new FieldSortBuilder("fieldName").setNestedFilter(QueryBuilders.termQuery(MAPPED_STRING_FIELDNAME, "value")); + sortField = sortBuilder.build(shardContextMock).field; + assertThat(sortField, instanceOf(SortedNumericSortField.class)); + } + + public void testUnknownOptionFails() throws IOException { String json = "{ \"post_date\" : {\"reverse\" : true} },\n"; XContentParser parser = createParser(JsonXContent.jsonXContent, json); @@ -132,14 +309,115 @@ public void testReverseOptionFails() throws IOException { parser.nextToken(); parser.nextToken(); - try { - FieldSortBuilder.fromXContent(parser, ""); - fail("adding reverse sorting option should fail with an exception"); - } catch (IllegalArgumentException e) { - // all good + IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> FieldSortBuilder.fromXContent(parser, "")); + assertEquals("[field_sort] unknown field [reverse], parser not found", e.getMessage()); + } + + @Override + protected MappedFieldType provideMappedFieldType(String name) { + if (name.equals(MAPPED_STRING_FIELDNAME)) { + KeywordFieldMapper.KeywordFieldType fieldType = new KeywordFieldMapper.KeywordFieldType(); + fieldType.setName(name); + fieldType.setHasDocValues(true); + return fieldType; + } else { + return super.provideMappedFieldType(name); } } + /** + * Test that MIN, MAX mode work on non-numeric fields, but other modes throw exception + */ + public void testModeNonNumericField() throws IOException { + QueryShardContext shardContextMock = createMockShardContext(); + + FieldSortBuilder sortBuilder = new FieldSortBuilder(MAPPED_STRING_FIELDNAME).sortMode(SortMode.MIN); + SortField sortField = sortBuilder.build(shardContextMock).field; + assertThat(sortField, 
instanceOf(SortedSetSortField.class)); + assertEquals(SortedSetSelector.Type.MIN, ((SortedSetSortField) sortField).getSelector()); + + sortBuilder = new FieldSortBuilder(MAPPED_STRING_FIELDNAME).sortMode(SortMode.MAX); + sortField = sortBuilder.build(shardContextMock).field; + assertThat(sortField, instanceOf(SortedSetSortField.class)); + assertEquals(SortedSetSelector.Type.MAX, ((SortedSetSortField) sortField).getSelector()); + + String expectedError = "we only support AVG, MEDIAN and SUM on number based fields"; + QueryShardException e = expectThrows(QueryShardException.class, + () -> new FieldSortBuilder(MAPPED_STRING_FIELDNAME).sortMode(SortMode.AVG).build(shardContextMock)); + assertEquals(expectedError, e.getMessage()); + + e = expectThrows(QueryShardException.class, + () -> new FieldSortBuilder(MAPPED_STRING_FIELDNAME).sortMode(SortMode.SUM).build(shardContextMock)); + assertEquals(expectedError, e.getMessage()); + + e = expectThrows(QueryShardException.class, + () -> new FieldSortBuilder(MAPPED_STRING_FIELDNAME).sortMode(SortMode.MEDIAN).build(shardContextMock)); + assertEquals(expectedError, e.getMessage()); + } + + /** + * Test we can either set nested sort via path/filter or via nested sort builder, not both + */ + public void testNestedSortBothThrows() throws IOException { + FieldSortBuilder sortBuilder = new FieldSortBuilder(MAPPED_STRING_FIELDNAME); + IllegalArgumentException iae = expectThrows(IllegalArgumentException.class, + () -> sortBuilder.setNestedPath("nestedPath").setNestedSort(new NestedSortBuilder("otherPath"))); + assertEquals("Setting both nested_path/nested_filter and nested not allowed", iae.getMessage()); + iae = expectThrows(IllegalArgumentException.class, + () -> sortBuilder.setNestedSort(new NestedSortBuilder("otherPath")).setNestedPath("nestedPath")); + assertEquals("Setting both nested_path/nested_filter and nested not allowed", iae.getMessage()); + iae = expectThrows(IllegalArgumentException.class, + () -> sortBuilder.setNestedSort(new NestedSortBuilder("otherPath")).setNestedFilter(QueryBuilders.matchAllQuery())); + assertEquals("Setting both nested_path/nested_filter and nested not allowed", iae.getMessage()); + } + + /** + * Test the nested Filter gets rewritten + */ + public void testNestedRewrites() throws IOException { + FieldSortBuilder sortBuilder = new FieldSortBuilder(MAPPED_STRING_FIELDNAME); + RangeQueryBuilder rangeQuery = new RangeQueryBuilder("fieldName") { + @Override + public QueryBuilder doRewrite(QueryRewriteContext queryShardContext) throws IOException { + return new MatchNoneQueryBuilder(); + } + }; + sortBuilder.setNestedPath("path").setNestedFilter(rangeQuery); + FieldSortBuilder rewritten = (FieldSortBuilder) sortBuilder + .rewrite(createMockShardContext()); + assertNotSame(rangeQuery, rewritten.getNestedFilter()); + } + + /** + * Test the nested sort gets rewritten + */ + public void testNestedSortRewrites() throws IOException { + FieldSortBuilder sortBuilder = new FieldSortBuilder(MAPPED_STRING_FIELDNAME); + RangeQueryBuilder rangeQuery = new RangeQueryBuilder("fieldName") { + @Override + public QueryBuilder doRewrite(QueryRewriteContext queryShardContext) throws IOException { + return new MatchNoneQueryBuilder(); + } + }; + sortBuilder.setNestedSort(new NestedSortBuilder("path").setFilter(rangeQuery)); + FieldSortBuilder rewritten = (FieldSortBuilder) sortBuilder + .rewrite(createMockShardContext()); + assertNotSame(rangeQuery, rewritten.getNestedSort().getFilter()); + } + + @Override + protected void 
assertWarnings(FieldSortBuilder testItem) { + List expectedWarnings = new ArrayList<>(); + if (testItem.getNestedFilter() != null) { + expectedWarnings.add("[nested_filter] has been deprecated in favour for the [nested] parameter"); + } + if (testItem.getNestedPath() != null) { + expectedWarnings.add("[nested_path] has been deprecated in favor of the [nested] parameter"); + } + if (expectedWarnings.isEmpty() == false) { + assertWarnings(expectedWarnings.toArray(new String[expectedWarnings.size()])); + } + } @Override protected FieldSortBuilder fromXContent(XContentParser parser, String fieldName) throws IOException { diff --git a/core/src/test/java/org/elasticsearch/search/sort/FieldSortIT.java b/core/src/test/java/org/elasticsearch/search/sort/FieldSortIT.java index 20fd7fd378640..89c1537b8f169 100644 --- a/core/src/test/java/org/elasticsearch/search/sort/FieldSortIT.java +++ b/core/src/test/java/org/elasticsearch/search/sort/FieldSortIT.java @@ -20,7 +20,6 @@ package org.elasticsearch.search.sort; import org.apache.lucene.util.BytesRef; -import org.apache.lucene.util.LuceneTestCase; import org.apache.lucene.util.TestUtil; import org.apache.lucene.util.UnicodeUtil; import org.elasticsearch.action.admin.indices.alias.Alias; @@ -29,6 +28,7 @@ import org.elasticsearch.action.search.SearchResponse; import org.elasticsearch.action.search.ShardSearchFailure; import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentType; @@ -114,7 +114,6 @@ protected Collection> nodePlugins() { return Arrays.asList(InternalSettingsPlugin.class, CustomScriptPlugin.class); } - @LuceneTestCase.AwaitsFix(bugUrl = "https://github.com/elastic/elasticsearch/issues/9421") public void testIssue8226() { int numIndices = between(5, 10); final boolean useMapping = randomBoolean(); @@ -1462,10 +1461,10 @@ public void testNestedSort() throws IOException, InterruptedException, Execution public void testSortDuelBetweenSingleShardAndMultiShardIndex() throws Exception { String sortField = "sortField"; assertAcked(prepareCreate("test1") - .setSettings(IndexMetaData.SETTING_NUMBER_OF_SHARDS, between(2, maximumNumberOfShards())) + .setSettings(Settings.builder().put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, between(2, maximumNumberOfShards()))) .addMapping("type", sortField, "type=long").get()); assertAcked(prepareCreate("test2") - .setSettings(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1) + .setSettings(Settings.builder().put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)) .addMapping("type", sortField, "type=long").get()); for (String index : new String[]{"test1", "test2"}) { diff --git a/core/src/test/java/org/elasticsearch/search/sort/GeoDistanceSortBuilderTests.java b/core/src/test/java/org/elasticsearch/search/sort/GeoDistanceSortBuilderTests.java index 007695886c351..1109bdfc1f1a1 100644 --- a/core/src/test/java/org/elasticsearch/search/sort/GeoDistanceSortBuilderTests.java +++ b/core/src/test/java/org/elasticsearch/search/sort/GeoDistanceSortBuilderTests.java @@ -21,25 +21,41 @@ import org.apache.lucene.document.LatLonDocValuesField; +import org.apache.lucene.index.Term; +import org.apache.lucene.search.MatchAllDocsQuery; import org.apache.lucene.search.SortField; +import org.apache.lucene.search.TermQuery; +import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.geo.GeoDistance; import 
org.elasticsearch.common.geo.GeoPoint; import org.elasticsearch.common.unit.DistanceUnit; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.json.JsonXContent; +import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource; +import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested; import org.elasticsearch.index.mapper.GeoPointFieldMapper; import org.elasticsearch.index.mapper.MappedFieldType; +import org.elasticsearch.index.mapper.TypeFieldMapper; import org.elasticsearch.index.query.GeoValidationMethod; import org.elasticsearch.index.query.MatchAllQueryBuilder; +import org.elasticsearch.index.query.MatchNoneQueryBuilder; +import org.elasticsearch.index.query.QueryBuilder; +import org.elasticsearch.index.query.QueryBuilders; +import org.elasticsearch.index.query.QueryRewriteContext; import org.elasticsearch.index.query.QueryShardContext; +import org.elasticsearch.index.query.RangeQueryBuilder; import org.elasticsearch.search.DocValueFormat; +import org.elasticsearch.search.MultiValueMode; import org.elasticsearch.test.geo.RandomGeoGenerator; import java.io.IOException; +import java.util.ArrayList; import java.util.Arrays; +import java.util.List; import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder; +import static org.hamcrest.Matchers.instanceOf; public class GeoDistanceSortBuilderTests extends AbstractSortTestCase { @@ -87,18 +103,24 @@ public static GeoDistanceSortBuilder randomGeoDistanceSortBuilder() { result.sortMode(randomValueOtherThan(SortMode.SUM, () -> randomFrom(SortMode.values()))); } if (randomBoolean()) { - result.setNestedFilter(new MatchAllQueryBuilder()); - } - if (randomBoolean()) { - result.setNestedPath( - randomValueOtherThan( - result.getNestedPath(), - () -> randomAlphaOfLengthBetween(1, 10))); + result.validation(randomValueOtherThan(result.validation(), () -> randomFrom(GeoValidationMethod.values()))); } if (randomBoolean()) { - result.validation(randomValueOtherThan(result.validation(), () -> randomFrom(GeoValidationMethod.values()))); + if (randomBoolean()) { + // don't fully randomize here, GeoDistanceSort is picky about the filters that are allowed + NestedSortBuilder nestedSort = new NestedSortBuilder(randomAlphaOfLengthBetween(3, 10)); + nestedSort.setFilter(new MatchAllQueryBuilder()); + result.setNestedSort(nestedSort); + } else { + // the following are alternative ways to setNestedSort for nested sorting + if (randomBoolean()) { + result.setNestedFilter(new MatchAllQueryBuilder()); + } + if (randomBoolean()) { + result.setNestedPath(randomAlphaOfLengthBetween(1, 10)); + } + } } - return result; } @@ -132,7 +154,7 @@ private static GeoDistance geoDistance(GeoDistance original) { @Override protected GeoDistanceSortBuilder mutate(GeoDistanceSortBuilder original) throws IOException { GeoDistanceSortBuilder result = new GeoDistanceSortBuilder(original); - int parameter = randomIntBetween(0, 8); + int parameter = randomIntBetween(0, 7); switch (parameter) { case 0: while (Arrays.deepEquals(original.points(), result.points())) { @@ -158,16 +180,18 @@ protected GeoDistanceSortBuilder mutate(GeoDistanceSortBuilder original) throws () -> randomFrom(SortMode.values()))); break; case 6: - result.setNestedFilter(randomValueOtherThan( - original.getNestedFilter(), - () -> randomNestedFilter())); + if (original.getNestedPath() == null && original.getNestedFilter() == null) { + result.setNestedSort( 
+ randomValueOtherThan(original.getNestedSort(), () -> NestedSortBuilderTests.createRandomNestedSort(3))); + } else { + if (randomBoolean()) { + result.setNestedPath(randomValueOtherThan(original.getNestedPath(), () -> randomAlphaOfLengthBetween(1, 10))); + } else { + result.setNestedFilter(randomValueOtherThan(original.getNestedFilter(), () -> randomNestedFilter())); + } + } break; case 7: - result.setNestedPath(randomValueOtherThan( - result.getNestedPath(), - () -> randomAlphaOfLengthBetween(1, 10))); - break; - case 8: result.validation(randomValueOtherThan(result.validation(), () -> randomFrom(GeoValidationMethod.values()))); break; } @@ -217,11 +241,13 @@ public void testGeoDistanceSortCanBeParsedFromGeoHash() throws IOException { " \"unit\" : \"m\",\n" + " \"distance_type\" : \"arc\",\n" + " \"mode\" : \"MAX\",\n" + - " \"nested_filter\" : {\n" + - " \"ids\" : {\n" + - " \"type\" : [ ],\n" + - " \"values\" : [ ],\n" + - " \"boost\" : 5.711116\n" + + " \"nested\" : {\n" + + " \"filter\" : {\n" + + " \"ids\" : {\n" + + " \"type\" : [ ],\n" + + " \"values\" : [ ],\n" + + " \"boost\" : 5.711116\n" + + " }\n" + " }\n" + " },\n" + " \"validation_method\" : \"STRICT\"\n" + @@ -353,6 +379,21 @@ private GeoDistanceSortBuilder parse(XContentBuilder sortBuilder) throws Excepti return GeoDistanceSortBuilder.fromXContent(parser, null); } + @Override + protected void assertWarnings(GeoDistanceSortBuilder testItem) { + List expectedWarnings = new ArrayList<>(); + if (testItem.getNestedFilter() != null) { + expectedWarnings.add("[nested_filter] has been deprecated in favour of the [nested] parameter"); + } + if (testItem.getNestedPath() != null) { + expectedWarnings.add("[nested_path] has been deprecated in favour of the [nested] parameter"); + } + if (expectedWarnings.isEmpty() == false) { + assertWarnings(expectedWarnings.toArray(new String[expectedWarnings.size()])); + } + } + + @Override protected GeoDistanceSortBuilder fromXContent(XContentParser parser, String fieldName) throws IOException { return GeoDistanceSortBuilder.fromXContent(parser, fieldName); @@ -393,4 +434,191 @@ public void testCommonCaseIsOptimized() throws IOException { sort = builder.build(context); assertEquals(SortField.class, sort.field.getClass()); // can't use LatLon optimized sorting with DESC sorting } + + /** + * Test that the sort builder order gets transfered correctly to the SortField + */ + public void testBuildSortFieldOrder() throws IOException { + QueryShardContext shardContextMock = createMockShardContext(); + GeoDistanceSortBuilder geoDistanceSortBuilder = new GeoDistanceSortBuilder("fieldName", 1.0, 1.0); + assertEquals(false, geoDistanceSortBuilder.build(shardContextMock).field.getReverse()); + + geoDistanceSortBuilder.order(SortOrder.ASC); + assertEquals(false, geoDistanceSortBuilder.build(shardContextMock).field.getReverse()); + + geoDistanceSortBuilder.order(SortOrder.DESC); + assertEquals(true, geoDistanceSortBuilder.build(shardContextMock).field.getReverse()); + } + + /** + * Test that the sort builder mode gets transfered correctly to the SortField + */ + public void testMultiValueMode() throws IOException { + QueryShardContext shardContextMock = createMockShardContext(); + GeoDistanceSortBuilder geoDistanceSortBuilder = new GeoDistanceSortBuilder("fieldName", 1.0, 1.0); + geoDistanceSortBuilder.sortMode(SortMode.MAX); + SortField sortField = geoDistanceSortBuilder.build(shardContextMock).field; + assertThat(sortField.getComparatorSource(), instanceOf(XFieldComparatorSource.class)); + 
XFieldComparatorSource comparatorSource = (XFieldComparatorSource) sortField.getComparatorSource(); + assertEquals(MultiValueMode.MAX, comparatorSource.sortMode()); + + // also use MultiValueMode.Max if no Mode set but order is DESC + geoDistanceSortBuilder = new GeoDistanceSortBuilder("fieldName", 1.0, 1.0); + geoDistanceSortBuilder.order(SortOrder.DESC); + sortField = geoDistanceSortBuilder.build(shardContextMock).field; + assertThat(sortField.getComparatorSource(), instanceOf(XFieldComparatorSource.class)); + comparatorSource = (XFieldComparatorSource) sortField.getComparatorSource(); + assertEquals(MultiValueMode.MAX, comparatorSource.sortMode()); + + // use MultiValueMode.Min if no Mode and order is ASC + geoDistanceSortBuilder = new GeoDistanceSortBuilder("fieldName", 1.0, 1.0); + // need to use distance unit other than Meters to not get back a LatLonPointSortField + geoDistanceSortBuilder.order(SortOrder.ASC).unit(DistanceUnit.INCH); + sortField = geoDistanceSortBuilder.build(shardContextMock).field; + assertThat(sortField.getComparatorSource(), instanceOf(XFieldComparatorSource.class)); + comparatorSource = (XFieldComparatorSource) sortField.getComparatorSource(); + assertEquals(MultiValueMode.MIN, comparatorSource.sortMode()); + + geoDistanceSortBuilder = new GeoDistanceSortBuilder("fieldName", 1.0, 1.0); + // need to use distance unit other than Meters to not get back a LatLonPointSortField + geoDistanceSortBuilder.sortMode(SortMode.MIN).unit(DistanceUnit.INCH); + sortField = geoDistanceSortBuilder.build(shardContextMock).field; + assertThat(sortField.getComparatorSource(), instanceOf(XFieldComparatorSource.class)); + comparatorSource = (XFieldComparatorSource) sortField.getComparatorSource(); + assertEquals(MultiValueMode.MIN, comparatorSource.sortMode()); + + geoDistanceSortBuilder.sortMode(SortMode.AVG); + sortField = geoDistanceSortBuilder.build(shardContextMock).field; + assertThat(sortField.getComparatorSource(), instanceOf(XFieldComparatorSource.class)); + comparatorSource = (XFieldComparatorSource) sortField.getComparatorSource(); + assertEquals(MultiValueMode.AVG, comparatorSource.sortMode()); + + geoDistanceSortBuilder.sortMode(SortMode.MEDIAN); + sortField = geoDistanceSortBuilder.build(shardContextMock).field; + assertThat(sortField.getComparatorSource(), instanceOf(XFieldComparatorSource.class)); + comparatorSource = (XFieldComparatorSource) sortField.getComparatorSource(); + assertEquals(MultiValueMode.MEDIAN, comparatorSource.sortMode()); + } + + /** + * Test that the sort builder nested object gets created in the SortField + */ + public void testBuildNested() throws IOException { + QueryShardContext shardContextMock = createMockShardContext(); + + GeoDistanceSortBuilder sortBuilder = new GeoDistanceSortBuilder("fieldName", 1.0, 1.0) + .setNestedSort(new NestedSortBuilder("path").setFilter(QueryBuilders.matchAllQuery())); + SortField sortField = sortBuilder.build(shardContextMock).field; + assertThat(sortField.getComparatorSource(), instanceOf(XFieldComparatorSource.class)); + XFieldComparatorSource comparatorSource = (XFieldComparatorSource) sortField.getComparatorSource(); + Nested nested = comparatorSource.nested(); + assertNotNull(nested); + assertEquals(new MatchAllDocsQuery(), nested.getInnerQuery()); + + sortBuilder = new GeoDistanceSortBuilder("fieldName", 1.0, 1.0).setNestedPath("path"); + sortField = sortBuilder.build(shardContextMock).field; + assertThat(sortField.getComparatorSource(), instanceOf(XFieldComparatorSource.class)); + comparatorSource = 
(XFieldComparatorSource) sortField.getComparatorSource(); + nested = comparatorSource.nested(); + assertNotNull(nested); + assertEquals(new TermQuery(new Term(TypeFieldMapper.NAME, "__path")), nested.getInnerQuery()); + + sortBuilder = new GeoDistanceSortBuilder("fieldName", 1.0, 1.0).setNestedPath("path") + .setNestedFilter(QueryBuilders.matchAllQuery()); + sortField = sortBuilder.build(shardContextMock).field; + assertThat(sortField.getComparatorSource(), instanceOf(XFieldComparatorSource.class)); + comparatorSource = (XFieldComparatorSource) sortField.getComparatorSource(); + nested = comparatorSource.nested(); + assertNotNull(nested); + assertEquals(new MatchAllDocsQuery(), nested.getInnerQuery()); + + // if nested path is missing, we omit any filter and return a regular SortField + // (LatLonSortField) + sortBuilder = new GeoDistanceSortBuilder("fieldName", 1.0, 1.0).setNestedFilter(QueryBuilders.termQuery("fieldName", "value")); + sortField = sortBuilder.build(shardContextMock).field; + assertThat(sortField, instanceOf(SortField.class)); + } + + /** + * Test that if coercion is used, a point gets normalized but the original values in the builder are unchanged + */ + public void testBuildCoerce() throws IOException { + QueryShardContext shardContextMock = createMockShardContext(); + GeoDistanceSortBuilder sortBuilder = new GeoDistanceSortBuilder("fieldName", -180.0, -360.0); + sortBuilder.validation(GeoValidationMethod.COERCE); + assertEquals(-180.0, sortBuilder.points()[0].getLat(), 0.0); + assertEquals(-360.0, sortBuilder.points()[0].getLon(), 0.0); + SortField sortField = sortBuilder.build(shardContextMock).field; + assertEquals(LatLonDocValuesField.newDistanceSort("fieldName", 0.0, 180.0), sortField); + } + + /** + * Test that if validation is strict, invalid points throw an error + */ + public void testBuildInvalidPoints() throws IOException { + QueryShardContext shardContextMock = createMockShardContext(); + { + GeoDistanceSortBuilder sortBuilder = new GeoDistanceSortBuilder("fieldName", -180.0, 0.0); + sortBuilder.validation(GeoValidationMethod.STRICT); + ElasticsearchParseException ex = expectThrows(ElasticsearchParseException.class, () -> sortBuilder.build(shardContextMock)); + assertEquals("illegal latitude value [-180.0] for [GeoDistanceSort] for field [fieldName].", ex.getMessage()); + } + { + GeoDistanceSortBuilder sortBuilder = new GeoDistanceSortBuilder("fieldName", 0.0, -360.0); + sortBuilder.validation(GeoValidationMethod.STRICT); + ElasticsearchParseException ex = expectThrows(ElasticsearchParseException.class, () -> sortBuilder.build(shardContextMock)); + assertEquals("illegal longitude value [-360.0] for [GeoDistanceSort] for field [fieldName].", ex.getMessage()); + } + } + + /** + * Test we can either set nested sort via path/filter or via nested sort builder, not both + */ + public void testNestedSortBothThrows() throws IOException { + GeoDistanceSortBuilder sortBuilder = new GeoDistanceSortBuilder("fieldName", 0.0, 0.0); + IllegalArgumentException iae = expectThrows(IllegalArgumentException.class, + () -> sortBuilder.setNestedPath("nestedPath").setNestedSort(new NestedSortBuilder("otherPath"))); + assertEquals("Setting both nested_path/nested_filter and nested not allowed", iae.getMessage()); + iae = expectThrows(IllegalArgumentException.class, + () -> sortBuilder.setNestedSort(new NestedSortBuilder("otherPath")).setNestedPath("nestedPath")); + assertEquals("Setting both nested_path/nested_filter and nested not allowed", iae.getMessage()); + iae = 
expectThrows(IllegalArgumentException.class, + () -> sortBuilder.setNestedSort(new NestedSortBuilder("otherPath")).setNestedFilter(QueryBuilders.matchAllQuery())); + assertEquals("Setting both nested_path/nested_filter and nested not allowed", iae.getMessage()); + } + + /** + * Test the nested Filter gets rewritten + */ + public void testNestedRewrites() throws IOException { + GeoDistanceSortBuilder sortBuilder = new GeoDistanceSortBuilder("fieldName", 0.0, 0.0); + RangeQueryBuilder rangeQuery = new RangeQueryBuilder("fieldName") { + @Override + public QueryBuilder doRewrite(QueryRewriteContext queryShardContext) throws IOException { + return new MatchNoneQueryBuilder(); + } + }; + sortBuilder.setNestedPath("path").setNestedFilter(rangeQuery); + GeoDistanceSortBuilder rewritten = (GeoDistanceSortBuilder) sortBuilder + .rewrite(createMockShardContext()); + assertNotSame(rangeQuery, rewritten.getNestedFilter()); + } + + /** + * Test the nested sort gets rewritten + */ + public void testNestedSortRewrites() throws IOException { + GeoDistanceSortBuilder sortBuilder = new GeoDistanceSortBuilder("fieldName", 0.0, 0.0); + RangeQueryBuilder rangeQuery = new RangeQueryBuilder("fieldName") { + @Override + public QueryBuilder doRewrite(QueryRewriteContext queryShardContext) throws IOException { + return new MatchNoneQueryBuilder(); + } + }; + sortBuilder.setNestedSort(new NestedSortBuilder("path").setFilter(rangeQuery)); + GeoDistanceSortBuilder rewritten = (GeoDistanceSortBuilder) sortBuilder + .rewrite(createMockShardContext()); + assertNotSame(rangeQuery, rewritten.getNestedSort().getFilter()); + } + } diff --git a/core/src/test/java/org/elasticsearch/search/sort/NestedSortBuilderTests.java b/core/src/test/java/org/elasticsearch/search/sort/NestedSortBuilderTests.java new file mode 100644 index 0000000000000..0908d83896f92 --- /dev/null +++ b/core/src/test/java/org/elasticsearch/search/sort/NestedSortBuilderTests.java @@ -0,0 +1,205 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.sort; + +import org.elasticsearch.common.io.stream.NamedWriteableRegistry; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.index.query.ConstantScoreQueryBuilder; +import org.elasticsearch.index.query.MatchAllQueryBuilder; +import org.elasticsearch.index.query.MatchNoneQueryBuilder; +import org.elasticsearch.index.query.QueryBuilder; +import org.elasticsearch.index.query.QueryRewriteContext; +import org.elasticsearch.search.SearchModule; +import org.elasticsearch.test.ESTestCase; +import org.elasticsearch.test.EqualsHashCodeTestUtils; +import org.junit.AfterClass; +import org.junit.BeforeClass; +import org.mockito.Mockito; + +import java.io.IOException; + +import static java.util.Collections.emptyList; + +public class NestedSortBuilderTests extends ESTestCase { + + private static final int NUMBER_OF_TESTBUILDERS = 20; + private static NamedWriteableRegistry namedWriteableRegistry; + private static NamedXContentRegistry xContentRegistry; + + @BeforeClass + public static void init() { + SearchModule searchModule = new SearchModule(Settings.EMPTY, false, emptyList()); + namedWriteableRegistry = new NamedWriteableRegistry(searchModule.getNamedWriteables()); + xContentRegistry = new NamedXContentRegistry(searchModule.getNamedXContents()); + } + + @AfterClass + public static void afterClass() throws Exception { + namedWriteableRegistry = null; + xContentRegistry = null; + } + + @Override + protected NamedXContentRegistry xContentRegistry() { + return xContentRegistry; + } + + public void testFromXContent() throws IOException { + for (int runs = 0; runs < NUMBER_OF_TESTBUILDERS; runs++) { + NestedSortBuilder testItem = createRandomNestedSort(3); + XContentBuilder builder = XContentFactory.contentBuilder(randomFrom(XContentType.values())); + testItem.toXContent(builder, ToXContent.EMPTY_PARAMS); + XContentBuilder shuffled = shuffleXContent(builder); + XContentParser parser = createParser(shuffled); + parser.nextToken(); + NestedSortBuilder parsedItem = NestedSortBuilder.fromXContent(parser); + assertNotSame(testItem, parsedItem); + assertEquals(testItem, parsedItem); + assertEquals(testItem.hashCode(), parsedItem.hashCode()); + } + } + + /** + * Create a {@link NestedSortBuilder} with random path and filter of the given depth. + */ + public static NestedSortBuilder createRandomNestedSort(int depth) { + NestedSortBuilder nestedSort = new NestedSortBuilder(randomAlphaOfLengthBetween(3, 10)); + if (randomBoolean()) { + nestedSort.setFilter(AbstractSortTestCase.randomNestedFilter()); + } + if (depth > 0) { + nestedSort.setNestedSort(createRandomNestedSort(depth - 1)); + } + return nestedSort; + } + + /** + * Test serialization of the test nested sort. 
+ */ + public void testSerialization() throws IOException { + for (int runs = 0; runs < NUMBER_OF_TESTBUILDERS; runs++) { + NestedSortBuilder testsort = createRandomNestedSort(3); + NestedSortBuilder deserializedsort = copy(testsort); + assertEquals(testsort, deserializedsort); + assertEquals(testsort.hashCode(), deserializedsort.hashCode()); + assertNotSame(testsort, deserializedsort); + } + } + + private static NestedSortBuilder copy(NestedSortBuilder nestedSort) throws IOException { + return copyWriteable(nestedSort, namedWriteableRegistry, NestedSortBuilder::new); + } + + private static NestedSortBuilder mutate(NestedSortBuilder original) throws IOException { + NestedSortBuilder mutated = original.getNestedSort(); + int parameter = randomIntBetween(0, 2); + switch (parameter) { + case 0: + mutated = new NestedSortBuilder(original.getPath()+"_suffix"); + mutated.setFilter(original.getFilter()); + mutated.setNestedSort(original.getNestedSort()); + break; + case 1: + mutated.setFilter(randomValueOtherThan(original.getFilter(), AbstractSortTestCase::randomNestedFilter)); + break; + case 2: + default: + mutated.setNestedSort(randomValueOtherThan(original.getNestedSort(), () -> NestedSortBuilderTests.createRandomNestedSort(3))); + break; + } + return mutated; + } + + /** + * Test equality and hashCode properties + */ + public void testEqualsAndHashcode() { + for (int runs = 0; runs < NUMBER_OF_TESTBUILDERS; runs++) { + EqualsHashCodeTestUtils.checkEqualsAndHashCode(createRandomNestedSort(3), NestedSortBuilderTests::copy, + NestedSortBuilderTests::mutate); + } + } + + /** + * Test that filters and inner nested sorts get rewritten + */ + public void testRewrite() throws IOException { + QueryBuilder filterThatRewrites = new MatchNoneQueryBuilder() { + @Override + protected QueryBuilder doRewrite(org.elasticsearch.index.query.QueryRewriteContext queryShardContext) throws IOException { + return new MatchAllQueryBuilder(); + }; + }; + // test that filter gets rewritten + NestedSortBuilder original = new NestedSortBuilder("path").setFilter(filterThatRewrites); + QueryRewriteContext mockRewriteContext = Mockito.mock(QueryRewriteContext.class); + NestedSortBuilder rewritten = original.rewrite(mockRewriteContext); + assertNotSame(rewritten, original); + assertNotSame(rewritten.getFilter(), original.getFilter()); + + // test that inner nested sort gets rewritten + original = new NestedSortBuilder("path"); + original.setNestedSort(new NestedSortBuilder("otherPath").setFilter(filterThatRewrites)); + rewritten = original.rewrite(mockRewriteContext); + assertNotSame(rewritten, original); + assertNotSame(rewritten.getNestedSort(), original.getNestedSort()); + + // test that both filter and inner nested sort get rewritten + original = new NestedSortBuilder("path"); + original.setFilter(filterThatRewrites); + original.setNestedSort(new NestedSortBuilder("otherPath").setFilter(filterThatRewrites)); + rewritten = original.rewrite(mockRewriteContext); + assertNotSame(rewritten, original); + assertNotSame(rewritten.getFilter(), original.getFilter()); + assertNotSame(rewritten.getNestedSort(), original.getNestedSort()); + + // test that original stays unchanged if no element rewrites + original = new NestedSortBuilder("path"); + original.setFilter(new MatchNoneQueryBuilder()); + original.setNestedSort(new NestedSortBuilder("otherPath").setFilter(new MatchNoneQueryBuilder())); + rewritten = original.rewrite(mockRewriteContext); + assertSame(rewritten, original); + assertSame(rewritten.getFilter(), 
original.getFilter()); + assertSame(rewritten.getNestedSort(), original.getNestedSort()); + + // test that rewrite works recursively + original = new NestedSortBuilder("firstLevel"); + ConstantScoreQueryBuilder constantScoreQueryBuilder = new ConstantScoreQueryBuilder(filterThatRewrites); + original.setFilter(constantScoreQueryBuilder); + NestedSortBuilder nestedSortThatRewrites = new NestedSortBuilder("thirdLevel") + .setFilter(filterThatRewrites); + original.setNestedSort(new NestedSortBuilder("secondLevel").setNestedSort(nestedSortThatRewrites)); + rewritten = original.rewrite(mockRewriteContext); + assertNotSame(rewritten, original); + assertNotSame(rewritten.getFilter(), constantScoreQueryBuilder); + assertNotSame(((ConstantScoreQueryBuilder) rewritten.getFilter()).innerQuery(), constantScoreQueryBuilder.innerQuery()); + + assertEquals("secondLevel", rewritten.getNestedSort().getPath()); + assertNotSame(rewritten.getNestedSort(), original.getNestedSort()); + assertEquals("thirdLevel", rewritten.getNestedSort().getNestedSort().getPath()); + assertNotSame(rewritten.getNestedSort().getNestedSort(), nestedSortThatRewrites); + } +} diff --git a/core/src/test/java/org/elasticsearch/search/sort/ScriptSortBuilderTests.java b/core/src/test/java/org/elasticsearch/search/sort/ScriptSortBuilderTests.java index 345991c265285..9a28740d7271f 100644 --- a/core/src/test/java/org/elasticsearch/search/sort/ScriptSortBuilderTests.java +++ b/core/src/test/java/org/elasticsearch/search/sort/ScriptSortBuilderTests.java @@ -20,12 +20,28 @@ package org.elasticsearch.search.sort; +import org.apache.lucene.index.Term; +import org.apache.lucene.search.MatchAllDocsQuery; import org.apache.lucene.search.SortField; +import org.apache.lucene.search.TermQuery; +import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.json.JsonXContent; +import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource; +import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested; +import org.elasticsearch.index.fielddata.fieldcomparator.BytesRefFieldComparatorSource; +import org.elasticsearch.index.fielddata.fieldcomparator.DoubleValuesComparatorSource; +import org.elasticsearch.index.mapper.TypeFieldMapper; +import org.elasticsearch.index.query.MatchNoneQueryBuilder; +import org.elasticsearch.index.query.QueryBuilder; +import org.elasticsearch.index.query.QueryBuilders; +import org.elasticsearch.index.query.QueryRewriteContext; +import org.elasticsearch.index.query.QueryShardContext; +import org.elasticsearch.index.query.RangeQueryBuilder; import org.elasticsearch.script.Script; import org.elasticsearch.script.ScriptType; import org.elasticsearch.search.DocValueFormat; +import org.elasticsearch.search.MultiValueMode; import org.elasticsearch.search.sort.ScriptSortBuilder.ScriptSortType; import java.io.IOException; @@ -33,6 +49,9 @@ import java.util.HashSet; import java.util.Set; +import static org.elasticsearch.search.sort.NestedSortBuilderTests.createRandomNestedSort; +import static org.hamcrest.Matchers.instanceOf; + public class ScriptSortBuilderTests extends AbstractSortTestCase { @Override @@ -42,7 +61,7 @@ protected ScriptSortBuilder createTestItem() { public static ScriptSortBuilder randomScriptSortBuilder() { ScriptSortType type = randomBoolean() ? 
ScriptSortType.NUMBER : ScriptSortType.STRING; - ScriptSortBuilder builder = new ScriptSortBuilder(mockScript("dummy"), + ScriptSortBuilder builder = new ScriptSortBuilder(mockScript(MOCK_SCRIPT_NAME), type); if (randomBoolean()) { builder.order(randomFrom(SortOrder.values())); @@ -59,10 +78,7 @@ public static ScriptSortBuilder randomScriptSortBuilder() { } } if (randomBoolean()) { - builder.setNestedFilter(randomNestedFilter()); - } - if (randomBoolean()) { - builder.setNestedPath(randomAlphaOfLengthBetween(1, 10)); + builder.setNestedSort(createRandomNestedSort(3)); } return builder; } @@ -83,12 +99,11 @@ protected ScriptSortBuilder mutate(ScriptSortBuilder original) throws IOExceptio if (original.sortMode() != null && result.type() == ScriptSortType.NUMBER) { result.sortMode(original.sortMode()); } - result.setNestedFilter(original.getNestedFilter()); - result.setNestedPath(original.getNestedPath()); + result.setNestedSort(original.getNestedSort()); return result; } result = new ScriptSortBuilder(original); - switch (randomIntBetween(0, 3)) { + switch (randomIntBetween(0, 2)) { case 0: if (original.order() == SortOrder.ASC) { result.order(SortOrder.DESC); @@ -109,12 +124,8 @@ protected ScriptSortBuilder mutate(ScriptSortBuilder original) throws IOExceptio } break; case 2: - result.setNestedFilter(randomValueOtherThan( - original.getNestedFilter(), - () -> randomNestedFilter())); - break; - case 3: - result.setNestedPath(original.getNestedPath() + "_some_suffix"); + result.setNestedSort(randomValueOtherThan(original.getNestedSort(), + () -> NestedSortBuilderTests.createRandomNestedSort(3))); break; } return result; @@ -178,8 +189,7 @@ public void testParseJson() throws IOException { assertEquals(ScriptSortType.NUMBER, builder.type()); assertEquals(SortOrder.ASC, builder.order()); assertEquals(SortMode.MAX, builder.sortMode()); - assertNull(builder.getNestedFilter()); - assertNull(builder.getNestedPath()); + assertNull(builder.getNestedSort()); } public void testParseJson_simple() throws IOException { @@ -203,8 +213,7 @@ public void testParseJson_simple() throws IOException { assertEquals(ScriptSortType.NUMBER, builder.type()); assertEquals(SortOrder.ASC, builder.order()); assertEquals(SortMode.MAX, builder.sortMode()); - assertNull(builder.getNestedFilter()); - assertNull(builder.getNestedPath()); + assertNull(builder.getNestedSort()); } public void testParseBadFieldNameExceptions() throws IOException { @@ -237,7 +246,7 @@ public void testParseUnexpectedToken() throws IOException { parser.nextToken(); parser.nextToken(); - Exception e = expectThrows(IllegalArgumentException.class, () -> ScriptSortBuilder.fromXContent(parser, null)); + Exception e = expectThrows(ParsingException.class, () -> ScriptSortBuilder.fromXContent(parser, null)); assertEquals("[_script] script doesn't support values of type: START_ARRAY", e.getMessage()); } @@ -245,12 +254,146 @@ public void testParseUnexpectedToken() throws IOException { * script sort of type {@link ScriptSortType} does not work with {@link SortMode#AVG}, {@link SortMode#MEDIAN} or {@link SortMode#SUM} */ public void testBadSortMode() throws IOException { - ScriptSortBuilder builder = new ScriptSortBuilder(mockScript("something"), ScriptSortType.STRING); + ScriptSortBuilder builder = new ScriptSortBuilder(mockScript(MOCK_SCRIPT_NAME), ScriptSortType.STRING); String sortMode = randomFrom(new String[] { "avg", "median", "sum" }); Exception e = expectThrows(IllegalArgumentException.class, () -> builder.sortMode(SortMode.fromString(sortMode))); 
assertEquals("script sort of type [string] doesn't support mode [" + sortMode + "]", e.getMessage()); } + /** + * Test that the sort builder mode gets transfered correctly to the SortField + */ + public void testMultiValueMode() throws IOException { + QueryShardContext shardContextMock = createMockShardContext(); + for (SortMode mode : SortMode.values()) { + ScriptSortBuilder sortBuilder = new ScriptSortBuilder(mockScript(MOCK_SCRIPT_NAME), ScriptSortType.NUMBER); + sortBuilder.sortMode(mode); + SortField sortField = sortBuilder.build(shardContextMock).field; + assertThat(sortField.getComparatorSource(), instanceOf(XFieldComparatorSource.class)); + XFieldComparatorSource comparatorSource = (XFieldComparatorSource) sortField.getComparatorSource(); + assertEquals(MultiValueMode.fromString(mode.toString()), comparatorSource.sortMode()); + } + + // check that without mode set, order ASC sets mode to MIN, DESC to MAX + ScriptSortBuilder sortBuilder = new ScriptSortBuilder(mockScript(MOCK_SCRIPT_NAME), ScriptSortType.NUMBER); + sortBuilder.order(SortOrder.ASC); + SortField sortField = sortBuilder.build(shardContextMock).field; + assertThat(sortField.getComparatorSource(), instanceOf(XFieldComparatorSource.class)); + XFieldComparatorSource comparatorSource = (XFieldComparatorSource) sortField.getComparatorSource(); + assertEquals(MultiValueMode.MIN, comparatorSource.sortMode()); + + sortBuilder = new ScriptSortBuilder(mockScript(MOCK_SCRIPT_NAME), ScriptSortType.NUMBER); + sortBuilder.order(SortOrder.DESC); + sortField = sortBuilder.build(shardContextMock).field; + assertThat(sortField.getComparatorSource(), instanceOf(XFieldComparatorSource.class)); + comparatorSource = (XFieldComparatorSource) sortField.getComparatorSource(); + assertEquals(MultiValueMode.MAX, comparatorSource.sortMode()); + } + + /** + * Test that the correct comparator sort is returned, based on the script type + */ + public void testBuildCorrectComparatorType() throws IOException { + ScriptSortBuilder sortBuilder = new ScriptSortBuilder(mockScript(MOCK_SCRIPT_NAME), ScriptSortType.STRING); + SortField sortField = sortBuilder.build(createMockShardContext()).field; + assertThat(sortField.getComparatorSource(), instanceOf(BytesRefFieldComparatorSource.class)); + + sortBuilder = new ScriptSortBuilder(mockScript(MOCK_SCRIPT_NAME), ScriptSortType.NUMBER); + sortField = sortBuilder.build(createMockShardContext()).field; + assertThat(sortField.getComparatorSource(), instanceOf(DoubleValuesComparatorSource.class)); + } + + /** + * Test that the sort builder nested object gets created in the SortField + */ + public void testBuildNested() throws IOException { + QueryShardContext shardContextMock = createMockShardContext(); + + ScriptSortBuilder sortBuilder = new ScriptSortBuilder(mockScript(MOCK_SCRIPT_NAME), ScriptSortType.NUMBER) + .setNestedSort(new NestedSortBuilder("path").setFilter(QueryBuilders.matchAllQuery())); + SortField sortField = sortBuilder.build(shardContextMock).field; + assertThat(sortField.getComparatorSource(), instanceOf(XFieldComparatorSource.class)); + XFieldComparatorSource comparatorSource = (XFieldComparatorSource) sortField.getComparatorSource(); + Nested nested = comparatorSource.nested(); + assertNotNull(nested); + assertEquals(new MatchAllDocsQuery(), nested.getInnerQuery()); + + sortBuilder = new ScriptSortBuilder(mockScript(MOCK_SCRIPT_NAME), ScriptSortType.NUMBER).setNestedPath("path"); + sortField = sortBuilder.build(shardContextMock).field; + assertThat(sortField.getComparatorSource(), 
instanceOf(XFieldComparatorSource.class)); + comparatorSource = (XFieldComparatorSource) sortField.getComparatorSource(); + nested = comparatorSource.nested(); + assertNotNull(nested); + assertEquals(new TermQuery(new Term(TypeFieldMapper.NAME, "__path")), nested.getInnerQuery()); + + sortBuilder = new ScriptSortBuilder(mockScript(MOCK_SCRIPT_NAME), ScriptSortType.NUMBER).setNestedPath("path") + .setNestedFilter(QueryBuilders.matchAllQuery()); + sortField = sortBuilder.build(shardContextMock).field; + assertThat(sortField.getComparatorSource(), instanceOf(XFieldComparatorSource.class)); + comparatorSource = (XFieldComparatorSource) sortField.getComparatorSource(); + nested = comparatorSource.nested(); + assertNotNull(nested); + assertEquals(new MatchAllDocsQuery(), nested.getInnerQuery()); + + // if nested path is missing, we omit nested element in the comparator + sortBuilder = new ScriptSortBuilder(mockScript(MOCK_SCRIPT_NAME), ScriptSortType.NUMBER) + .setNestedFilter(QueryBuilders.matchAllQuery()); + sortField = sortBuilder.build(shardContextMock).field; + assertThat(sortField.getComparatorSource(), instanceOf(XFieldComparatorSource.class)); + comparatorSource = (XFieldComparatorSource) sortField.getComparatorSource(); + assertNull(comparatorSource.nested()); + } + + /** + * Test we can either set nested sort via path/filter or via nested sort builder, not both + */ + public void testNestedSortBothThrows() throws IOException { + ScriptSortBuilder sortBuilder = new ScriptSortBuilder(mockScript(MOCK_SCRIPT_NAME), ScriptSortType.NUMBER); + IllegalArgumentException iae = expectThrows(IllegalArgumentException.class, + () -> sortBuilder.setNestedPath("nestedPath").setNestedSort(new NestedSortBuilder("otherPath"))); + assertEquals("Setting both nested_path/nested_filter and nested not allowed", iae.getMessage()); + iae = expectThrows(IllegalArgumentException.class, + () -> sortBuilder.setNestedSort(new NestedSortBuilder("otherPath")).setNestedPath("nestedPath")); + assertEquals("Setting both nested_path/nested_filter and nested not allowed", iae.getMessage()); + iae = expectThrows(IllegalArgumentException.class, + () -> sortBuilder.setNestedSort(new NestedSortBuilder("otherPath")).setNestedFilter(QueryBuilders.matchAllQuery())); + assertEquals("Setting both nested_path/nested_filter and nested not allowed", iae.getMessage()); + } + + /** + * Test the nested Filter gets rewritten + */ + public void testNestedRewrites() throws IOException { + ScriptSortBuilder sortBuilder = new ScriptSortBuilder(mockScript("something"), ScriptSortType.STRING); + RangeQueryBuilder rangeQuery = new RangeQueryBuilder("fieldName") { + @Override + public QueryBuilder doRewrite(QueryRewriteContext queryShardContext) throws IOException { + return new MatchNoneQueryBuilder(); + } + }; + sortBuilder.setNestedPath("path").setNestedFilter(rangeQuery); + ScriptSortBuilder rewritten = (ScriptSortBuilder) sortBuilder + .rewrite(createMockShardContext()); + assertNotSame(rangeQuery, rewritten.getNestedFilter()); + } + + /** + * Test the nested sort gets rewritten + */ + public void testNestedSortRewrites() throws IOException { + ScriptSortBuilder sortBuilder = new ScriptSortBuilder(mockScript("something"), ScriptSortType.STRING); + RangeQueryBuilder rangeQuery = new RangeQueryBuilder("fieldName") { + @Override + public QueryBuilder doRewrite(QueryRewriteContext queryShardContext) throws IOException { + return new MatchNoneQueryBuilder(); + } + }; + sortBuilder.setNestedSort(new 
NestedSortBuilder("path").setFilter(rangeQuery)); + ScriptSortBuilder rewritten = (ScriptSortBuilder) sortBuilder + .rewrite(createMockShardContext()); + assertNotSame(rangeQuery, rewritten.getNestedSort().getFilter()); + } + @Override protected ScriptSortBuilder fromXContent(XContentParser parser, String fieldName) throws IOException { return ScriptSortBuilder.fromXContent(parser, fieldName); diff --git a/core/src/test/java/org/elasticsearch/search/sort/SortBuilderTests.java b/core/src/test/java/org/elasticsearch/search/sort/SortBuilderTests.java index 252c55d84137f..06f5ccf696ce4 100644 --- a/core/src/test/java/org/elasticsearch/search/sort/SortBuilderTests.java +++ b/core/src/test/java/org/elasticsearch/search/sort/SortBuilderTests.java @@ -34,8 +34,10 @@ import java.io.IOException; import java.util.ArrayList; +import java.util.HashSet; import java.util.Iterator; import java.util.List; +import java.util.Set; import static java.util.Collections.emptyList; @@ -126,10 +128,11 @@ public void testSingleFieldSort() throws IOException { } /** - * test random syntax variations + * test parsing random syntax variations */ public void testRandomSortBuilders() throws IOException { for (int runs = 0; runs < NUMBER_OF_RUNS; runs++) { + SetexpectedWarningHeaders = new HashSet<>(); List> testBuilders = randomSortBuilderList(); XContentBuilder xContentBuilder = XContentFactory.jsonBuilder(); xContentBuilder.startObject(); @@ -139,6 +142,16 @@ public void testRandomSortBuilders() throws IOException { xContentBuilder.field("sort"); } for (SortBuilder builder : testBuilders) { + if (builder instanceof GeoDistanceSortBuilder) { + GeoDistanceSortBuilder gdsb = (GeoDistanceSortBuilder) builder; + if (gdsb.getNestedFilter() != null) { + expectedWarningHeaders.add("[nested_filter] has been deprecated in favour of the [nested] parameter"); + } + if (gdsb.getNestedPath() != null) { + expectedWarningHeaders.add("[nested_path] has been deprecated in favour of the [nested] parameter"); + } + } + if (builder instanceof ScoreSortBuilder || builder instanceof FieldSortBuilder) { switch (randomIntBetween(0, 2)) { case 0: @@ -176,6 +189,9 @@ public void testRandomSortBuilders() throws IOException { for (SortBuilder parsedBuilder : parsedSort) { assertEquals(iterator.next(), parsedBuilder); } + if (expectedWarningHeaders.size() > 0) { + assertWarnings(expectedWarningHeaders.toArray(new String[expectedWarningHeaders.size()])); + } } } diff --git a/core/src/test/java/org/elasticsearch/search/suggest/CompletionSuggestSearchIT.java b/core/src/test/java/org/elasticsearch/search/suggest/CompletionSuggestSearchIT.java index 5bd2bad31d134..01b16bb9fb698 100644 --- a/core/src/test/java/org/elasticsearch/search/suggest/CompletionSuggestSearchIT.java +++ b/core/src/test/java/org/elasticsearch/search/suggest/CompletionSuggestSearchIT.java @@ -528,9 +528,9 @@ public void testThatSynonymsWork() throws Exception { Settings.Builder settingsBuilder = Settings.builder() .put("analysis.analyzer.suggest_analyzer_synonyms.type", "custom") .put("analysis.analyzer.suggest_analyzer_synonyms.tokenizer", "standard") - .putArray("analysis.analyzer.suggest_analyzer_synonyms.filter", "standard", "lowercase", "my_synonyms") + .putList("analysis.analyzer.suggest_analyzer_synonyms.filter", "standard", "lowercase", "my_synonyms") .put("analysis.filter.my_synonyms.type", "synonym") - .putArray("analysis.filter.my_synonyms.synonyms", "foo,renamed"); + .putList("analysis.filter.my_synonyms.synonyms", "foo,renamed"); 
completionMappingBuilder.searchAnalyzer("suggest_analyzer_synonyms").indexAnalyzer("suggest_analyzer_synonyms"); createIndexAndMappingAndSettings(settingsBuilder.build(), completionMappingBuilder); @@ -806,7 +806,7 @@ public void testThatSortingOnCompletionFieldReturnsUsefulException() throws Exce public void testThatSuggestStopFilterWorks() throws Exception { Settings.Builder settingsBuilder = Settings.builder() .put("index.analysis.analyzer.stoptest.tokenizer", "standard") - .putArray("index.analysis.analyzer.stoptest.filter", "standard", "suggest_stop_filter") + .putList("index.analysis.analyzer.stoptest.filter", "standard", "suggest_stop_filter") .put("index.analysis.filter.suggest_stop_filter.type", "stop") .put("index.analysis.filter.suggest_stop_filter.remove_trailing", false); @@ -858,6 +858,38 @@ public void testThatIndexingInvalidFieldsInCompletionFieldResultsInException() t } } + public void testSkipDuplicates() throws Exception { + final CompletionMappingBuilder mapping = new CompletionMappingBuilder(); + createIndexAndMapping(mapping); + int numDocs = randomIntBetween(10, 100); + int numUnique = randomIntBetween(1, numDocs); + List<IndexRequestBuilder> indexRequestBuilders = new ArrayList<>(); + for (int i = 1; i <= numDocs; i++) { + int id = i % numUnique; + indexRequestBuilders.add(client().prepareIndex(INDEX, TYPE, "" + i) + .setSource(jsonBuilder() + .startObject() + .startObject(FIELD) + .field("input", "suggestion" + id) + .field("weight", id) + .endObject() + .endObject() + )); + } + String[] expected = new String[numUnique]; + int sugg = numUnique - 1; + for (int i = 0; i < numUnique; i++) { + expected[i] = "suggestion" + sugg--; + } + indexRandom(true, indexRequestBuilders); + CompletionSuggestionBuilder completionSuggestionBuilder = + SuggestBuilders.completionSuggestion(FIELD).prefix("sugg").skipDuplicates(true).size(numUnique); + + SearchResponse searchResponse = client().prepareSearch(INDEX) + .suggest(new SuggestBuilder().addSuggestion("suggestions", completionSuggestionBuilder)).execute().actionGet(); + assertSuggestions(searchResponse, true, "suggestions", expected); + } + public void assertSuggestions(String suggestionName, SuggestionBuilder suggestBuilder, String...
suggestions) { SearchResponse searchResponse = client().prepareSearch(INDEX).suggest(new SuggestBuilder().addSuggestion(suggestionName, suggestBuilder)).execute().actionGet(); assertSuggestions(searchResponse, suggestionName, suggestions); @@ -1108,6 +1140,28 @@ public void testIndexingUnrelatedNullValue() throws Exception { } } + public void testMultiDocSuggestions() throws Exception { + final CompletionMappingBuilder mapping = new CompletionMappingBuilder(); + createIndexAndMapping(mapping); + int numDocs = 10; + List<IndexRequestBuilder> indexRequestBuilders = new ArrayList<>(); + for (int i = 1; i <= numDocs; i++) { + indexRequestBuilders.add(client().prepareIndex(INDEX, TYPE, "" + i) + .setSource(jsonBuilder() + .startObject() + .startObject(FIELD) + .array("input", "suggestion" + i, "suggestions" + i, "suggester" + i) + .field("weight", i) + .endObject() + .endObject() + )); + } + indexRandom(true, indexRequestBuilders); + CompletionSuggestionBuilder prefix = SuggestBuilders.completionSuggestion(FIELD).prefix("sugg").shardSize(15); + assertSuggestions("foo", prefix, "suggester10", "suggester9", "suggester8", "suggester7", "suggester6"); + } + + public static boolean isReservedChar(char c) { switch (c) { case '\u001F': diff --git a/core/src/test/java/org/elasticsearch/search/suggest/ContextCompletionSuggestSearchIT.java b/core/src/test/java/org/elasticsearch/search/suggest/ContextCompletionSuggestSearchIT.java index a9741c3170008..13f7e55277cc4 100644 --- a/core/src/test/java/org/elasticsearch/search/suggest/ContextCompletionSuggestSearchIT.java +++ b/core/src/test/java/org/elasticsearch/search/suggest/ContextCompletionSuggestSearchIT.java @@ -502,12 +502,12 @@ public void testGeoBoosting() throws Exception { CompletionSuggestionBuilder prefix = SuggestBuilders.completionSuggestion(FIELD).prefix("sugg"); assertSuggestions("foo", prefix, "suggestion9", "suggestion8", "suggestion7", "suggestion6", "suggestion5"); - GeoQueryContext context1 = GeoQueryContext.builder().setGeoPoint(geoPoints[0]).setBoost(2).build(); + GeoQueryContext context1 = GeoQueryContext.builder().setGeoPoint(geoPoints[0]).setBoost(11).build(); GeoQueryContext context2 = GeoQueryContext.builder().setGeoPoint(geoPoints[1]).build(); CompletionSuggestionBuilder geoBoostingPrefix = SuggestBuilders.completionSuggestion(FIELD).prefix("sugg") .contexts(Collections.singletonMap("geo", Arrays.asList(context1, context2))); - assertSuggestions("foo", geoBoostingPrefix, "suggestion8", "suggestion6", "suggestion4", "suggestion9", "suggestion7"); + assertSuggestions("foo", geoBoostingPrefix, "suggestion8", "suggestion6", "suggestion4", "suggestion2", "suggestion0"); } public void testGeoPointContext() throws Exception { @@ -639,6 +639,50 @@ public void testGeoField() throws Exception { assertEquals("Hotel Amsterdam in Berlin", searchResponse.getSuggest().getSuggestion(suggestionName).iterator().next().getOptions().iterator().next().getText().string()); } + public void testSkipDuplicatesWithContexts() throws Exception { + LinkedHashMap<String, ContextMapping> map = new LinkedHashMap<>(); + map.put("type", ContextBuilder.category("type").field("type").build()); + map.put("cat", ContextBuilder.category("cat").field("cat").build()); + final CompletionMappingBuilder mapping = new CompletionMappingBuilder().context(map); + createIndexAndMapping(mapping); + int numDocs = randomIntBetween(10, 100); + int numUnique = randomIntBetween(1, numDocs); + List<IndexRequestBuilder> indexRequestBuilders = new ArrayList<>(); + for (int i = 0; i < numDocs; i++) { + int id = i % numUnique; + XContentBuilder source = 
jsonBuilder() + .startObject() + .startObject(FIELD) + .field("input", "suggestion" + id) + .field("weight", id) + .endObject() + .field("cat", "cat" + id % 2) + .field("type", "type" + id) + .endObject(); + indexRequestBuilders.add(client().prepareIndex(INDEX, TYPE, "" + i) + .setSource(source)); + } + String[] expected = new String[numUnique]; + for (int i = 0; i < numUnique; i++) { + expected[i] = "suggestion" + (numUnique-1-i); + } + indexRandom(true, indexRequestBuilders); + CompletionSuggestionBuilder completionSuggestionBuilder = + SuggestBuilders.completionSuggestion(FIELD).prefix("sugg").skipDuplicates(true).size(numUnique); + + assertSuggestions("suggestions", completionSuggestionBuilder, expected); + + Map<String, List<? extends ToXContent>> contextMap = new HashMap<>(); + contextMap.put("cat", Arrays.asList(CategoryQueryContext.builder().setCategory("cat0").build())); + completionSuggestionBuilder = + SuggestBuilders.completionSuggestion(FIELD).prefix("sugg").contexts(contextMap).skipDuplicates(true).size(numUnique); + + String[] expectedModulo = Arrays.stream(expected) + .filter((s) -> Integer.parseInt(s.substring("suggestion".length())) % 2 == 0) + .toArray(String[]::new); + assertSuggestions("suggestions", completionSuggestionBuilder, expectedModulo); + } + public void assertSuggestions(String suggestionName, SuggestionBuilder suggestBuilder, String... suggestions) { SearchResponse searchResponse = client().prepareSearch(INDEX).suggest( new SuggestBuilder().addSuggestion(suggestionName, suggestBuilder) diff --git a/core/src/test/java/org/elasticsearch/search/suggest/SuggestSearchIT.java b/core/src/test/java/org/elasticsearch/search/suggest/SuggestSearchIT.java index 5142c25229d58..3b1c88cfc5779 100644 --- a/core/src/test/java/org/elasticsearch/search/suggest/SuggestSearchIT.java +++ b/core/src/test/java/org/elasticsearch/search/suggest/SuggestSearchIT.java @@ -28,9 +28,9 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.index.IndexSettings; import org.elasticsearch.plugins.Plugin; import org.elasticsearch.plugins.ScriptPlugin; -import org.elasticsearch.script.ExecutableScript; import org.elasticsearch.script.ScriptContext; import org.elasticsearch.script.ScriptEngine; import org.elasticsearch.script.TemplateScript; @@ -173,7 +173,7 @@ public void testSuggestModes() throws IOException { .put(SETTING_NUMBER_OF_SHARDS, 1) .put(SETTING_NUMBER_OF_REPLICAS, 0) .put("index.analysis.analyzer.biword.tokenizer", "standard") - .putArray("index.analysis.analyzer.biword.filter", "shingler", "lowercase") + .putList("index.analysis.analyzer.biword.filter", "shingler", "lowercase") .put("index.analysis.filter.shingler.type", "shingle") .put("index.analysis.filter.shingler.min_shingle_size", 2) .put("index.analysis.filter.shingler.max_shingle_size", 3)); @@ -226,9 +226,7 @@ private DirectCandidateGeneratorBuilder candidateGenerator(String field) { // see #2729 public void testSizeOneShard() throws Exception { - prepareCreate("test").setSettings( - SETTING_NUMBER_OF_SHARDS, 1, - SETTING_NUMBER_OF_REPLICAS, 0).get(); + prepareCreate("test").setSettings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 1).put(SETTING_NUMBER_OF_REPLICAS, 0)).get(); ensureGreen(); for (int i = 0; i < 15; i++) { @@ -255,7 +253,7 @@ public void testUnmappedField() throws IOException, InterruptedException, Execut CreateIndexRequestBuilder builder = prepareCreate("test").setSettings(Settings.builder()
.put(indexSettings()) .put("index.analysis.analyzer.biword.tokenizer", "standard") - .putArray("index.analysis.analyzer.biword.filter", "shingler", "lowercase") + .putList("index.analysis.analyzer.biword.filter", "shingler", "lowercase") .put("index.analysis.filter.shingler.type", "shingle") .put("index.analysis.filter.shingler.min_shingle_size", 2) .put("index.analysis.filter.shingler.max_shingle_size", 3)); @@ -429,7 +427,7 @@ public void testStopwordsOnlyPhraseSuggest() throws IOException { assertAcked(prepareCreate("test").addMapping("typ1", "body", "type=text,analyzer=stopwd").setSettings( Settings.builder() .put("index.analysis.analyzer.stopwd.tokenizer", "whitespace") - .putArray("index.analysis.analyzer.stopwd.filter", "stop") + .putList("index.analysis.analyzer.stopwd.filter", "stop") )); ensureGreen(); index("test", "typ1", "1", "body", "this is a test"); @@ -446,9 +444,9 @@ public void testPrefixLength() throws IOException { CreateIndexRequestBuilder builder = prepareCreate("test").setSettings(Settings.builder() .put(SETTING_NUMBER_OF_SHARDS, 1) .put("index.analysis.analyzer.body.tokenizer", "standard") - .putArray("index.analysis.analyzer.body.filter", "lowercase") + .putList("index.analysis.analyzer.body.filter", "lowercase") .put("index.analysis.analyzer.bigram.tokenizer", "standard") - .putArray("index.analysis.analyzer.bigram.filter", "my_shingle", "lowercase") + .putList("index.analysis.analyzer.bigram.filter", "my_shingle", "lowercase") .put("index.analysis.filter.my_shingle.type", "shingle") .put("index.analysis.filter.my_shingle.output_unigrams", false) .put("index.analysis.filter.my_shingle.min_shingle_size", 2) @@ -484,9 +482,9 @@ public void testBasicPhraseSuggest() throws IOException, URISyntaxException { CreateIndexRequestBuilder builder = prepareCreate("test").setSettings(Settings.builder() .put(indexSettings()) .put("index.analysis.analyzer.body.tokenizer", "standard") - .putArray("index.analysis.analyzer.body.filter", "lowercase") + .putList("index.analysis.analyzer.body.filter", "lowercase") .put("index.analysis.analyzer.bigram.tokenizer", "standard") - .putArray("index.analysis.analyzer.bigram.filter", "my_shingle", "lowercase") + .putList("index.analysis.analyzer.bigram.filter", "my_shingle", "lowercase") .put("index.analysis.filter.my_shingle.type", "shingle") .put("index.analysis.filter.my_shingle.output_unigrams", false) .put("index.analysis.filter.my_shingle.min_shingle_size", 2) @@ -617,9 +615,9 @@ public void testSizeParam() throws IOException { CreateIndexRequestBuilder builder = prepareCreate("test").setSettings(Settings.builder() .put(SETTING_NUMBER_OF_SHARDS, 1) .put("index.analysis.analyzer.body.tokenizer", "standard") - .putArray("index.analysis.analyzer.body.filter", "lowercase") + .putList("index.analysis.analyzer.body.filter", "lowercase") .put("index.analysis.analyzer.bigram.tokenizer", "standard") - .putArray("index.analysis.analyzer.bigram.filter", "my_shingle", "lowercase") + .putList("index.analysis.analyzer.bigram.filter", "my_shingle", "lowercase") .put("index.analysis.filter.my_shingle.type", "shingle") .put("index.analysis.filter.my_shingle.output_unigrams", false) .put("index.analysis.filter.my_shingle.min_shingle_size", 2) @@ -686,8 +684,9 @@ public void testDifferentShardSize() throws Exception { public void testShardFailures() throws IOException, InterruptedException { CreateIndexRequestBuilder builder = prepareCreate("test").setSettings(Settings.builder() .put(indexSettings()) + 
.put(IndexSettings.MAX_SHINGLE_DIFF_SETTING.getKey(), 4) .put("index.analysis.analyzer.suggest.tokenizer", "standard") - .putArray("index.analysis.analyzer.suggest.filter", "standard", "lowercase", "shingler") + .putList("index.analysis.analyzer.suggest.filter", "standard", "lowercase", "shingler") .put("index.analysis.filter.shingler.type", "shingle") .put("index.analysis.filter.shingler.min_shingle_size", 2) .put("index.analysis.filter.shingler.max_shingle_size", 5) @@ -746,8 +745,9 @@ public void testEmptyShards() throws IOException, InterruptedException { endObject(); assertAcked(prepareCreate("test").setSettings(Settings.builder() .put(indexSettings()) + .put(IndexSettings.MAX_SHINGLE_DIFF_SETTING.getKey(), 4) .put("index.analysis.analyzer.suggest.tokenizer", "standard") - .putArray("index.analysis.analyzer.suggest.filter", "standard", "lowercase", "shingler") + .putList("index.analysis.analyzer.suggest.filter", "standard", "lowercase", "shingler") .put("index.analysis.filter.shingler.type", "shingle") .put("index.analysis.filter.shingler.min_shingle_size", 2) .put("index.analysis.filter.shingler.max_shingle_size", 5) @@ -783,7 +783,7 @@ public void testSearchForRarePhrase() throws IOException { CreateIndexRequestBuilder builder = prepareCreate("test").setSettings(Settings.builder() .put(indexSettings()) .put("index.analysis.analyzer.body.tokenizer", "standard") - .putArray("index.analysis.analyzer.body.filter", "lowercase", "my_shingle") + .putList("index.analysis.analyzer.body.filter", "lowercase", "my_shingle") .put("index.analysis.filter.my_shingle.type", "shingle") .put("index.analysis.filter.my_shingle.output_unigrams", true) .put("index.analysis.filter.my_shingle.min_shingle_size", 2) @@ -838,7 +838,7 @@ public void testSuggestWithManyCandidates() throws InterruptedException, Executi .put(indexSettings()) .put(SETTING_NUMBER_OF_SHARDS, 1) // A single shard will help to keep the tests repeatable. .put("index.analysis.analyzer.text.tokenizer", "standard") - .putArray("index.analysis.analyzer.text.filter", "lowercase", "my_shingle") + .putList("index.analysis.analyzer.text.filter", "lowercase", "my_shingle") .put("index.analysis.filter.my_shingle.type", "shingle") .put("index.analysis.filter.my_shingle.output_unigrams", true) .put("index.analysis.filter.my_shingle.min_shingle_size", 2) @@ -1028,7 +1028,7 @@ public void testPhraseSuggesterCollate() throws InterruptedException, ExecutionE .put(indexSettings()) .put(SETTING_NUMBER_OF_SHARDS, 1) // A single shard will help to keep the tests repeatable. 
.put("index.analysis.analyzer.text.tokenizer", "standard") - .putArray("index.analysis.analyzer.text.filter", "lowercase", "my_shingle") + .putList("index.analysis.analyzer.text.filter", "lowercase", "my_shingle") .put("index.analysis.filter.my_shingle.type", "shingle") .put("index.analysis.filter.my_shingle.output_unigrams", true) .put("index.analysis.filter.my_shingle.min_shingle_size", 2) diff --git a/core/src/test/java/org/elasticsearch/search/suggest/SuggestTests.java b/core/src/test/java/org/elasticsearch/search/suggest/SuggestTests.java index f1a630fab3742..d53cbfdab6e80 100644 --- a/core/src/test/java/org/elasticsearch/search/suggest/SuggestTests.java +++ b/core/src/test/java/org/elasticsearch/search/suggest/SuggestTests.java @@ -139,7 +139,7 @@ public void testToXContent() throws IOException { public void testFilter() throws Exception { List>> suggestions; - CompletionSuggestion completionSuggestion = new CompletionSuggestion(randomAlphaOfLength(10), 2); + CompletionSuggestion completionSuggestion = new CompletionSuggestion(randomAlphaOfLength(10), 2, false); PhraseSuggestion phraseSuggestion = new PhraseSuggestion(randomAlphaOfLength(10), 2); TermSuggestion termSuggestion = new TermSuggestion(randomAlphaOfLength(10), 2, SortBy.SCORE); suggestions = Arrays.asList(completionSuggestion, phraseSuggestion, termSuggestion); @@ -160,7 +160,7 @@ public void testSuggestionOrdering() throws Exception { suggestions = new ArrayList<>(); int n = randomIntBetween(2, 5); for (int i = 0; i < n; i++) { - suggestions.add(new CompletionSuggestion(randomAlphaOfLength(10), randomIntBetween(3, 5))); + suggestions.add(new CompletionSuggestion(randomAlphaOfLength(10), randomIntBetween(3, 5), false)); } Collections.shuffle(suggestions, random()); Suggest suggest = new Suggest(suggestions); diff --git a/core/src/test/java/org/elasticsearch/search/suggest/SuggestionTests.java b/core/src/test/java/org/elasticsearch/search/suggest/SuggestionTests.java index 3c56597299dd3..70c4396ce8867 100644 --- a/core/src/test/java/org/elasticsearch/search/suggest/SuggestionTests.java +++ b/core/src/test/java/org/elasticsearch/search/suggest/SuggestionTests.java @@ -79,7 +79,7 @@ public static Suggestion> createTestItem(Class suggestion = new PhraseSuggestion(name, size); entrySupplier = () -> SuggestionEntryTests.createTestItem(PhraseSuggestion.Entry.class); } else if (type == CompletionSuggestion.class) { - suggestion = new CompletionSuggestion(name, size); + suggestion = new CompletionSuggestion(name, size, randomBoolean()); entrySupplier = () -> SuggestionEntryTests.createTestItem(CompletionSuggestion.Entry.class); } else { throw new UnsupportedOperationException("type not supported [" + type + "]"); @@ -249,7 +249,7 @@ public void testToXContent() throws IOException { CompletionSuggestion.Entry.Option option = new CompletionSuggestion.Entry.Option(1, new Text("someText"), 1.3f, contexts); CompletionSuggestion.Entry entry = new CompletionSuggestion.Entry(new Text("entryText"), 42, 313); entry.addOption(option); - CompletionSuggestion suggestion = new CompletionSuggestion("suggestionName", 5); + CompletionSuggestion suggestion = new CompletionSuggestion("suggestionName", 5, randomBoolean()); suggestion.addTerm(entry); BytesReference xContent = toXContent(suggestion, XContentType.JSON, params, randomBoolean()); assertEquals( @@ -265,4 +265,4 @@ public void testToXContent() throws IOException { + "}]}", xContent.utf8ToString()); } } -} \ No newline at end of file +} diff --git 
a/core/src/test/java/org/elasticsearch/search/suggest/completion/CompletionSuggesterBuilderTests.java b/core/src/test/java/org/elasticsearch/search/suggest/completion/CompletionSuggesterBuilderTests.java index d8eb885823b8e..862916890e1bb 100644 --- a/core/src/test/java/org/elasticsearch/search/suggest/completion/CompletionSuggesterBuilderTests.java +++ b/core/src/test/java/org/elasticsearch/search/suggest/completion/CompletionSuggesterBuilderTests.java @@ -114,6 +114,7 @@ public static CompletionSuggestionBuilder randomCompletionSuggestionBuilder() { contextMap.put(geoQueryContextName, contexts); } testBuilder.contexts(contextMap); + testBuilder.skipDuplicates(randomBoolean()); return testBuilder; } @@ -128,7 +129,7 @@ protected String[] shuffleProtectedFields() { @Override protected void mutateSpecificParameters(CompletionSuggestionBuilder builder) throws IOException { - switch (randomIntBetween(0, 4)) { + switch (randomIntBetween(0, 5)) { case 0: int nCatContext = randomIntBetween(1, 5); List contexts = new ArrayList<>(nCatContext); @@ -154,6 +155,9 @@ protected void mutateSpecificParameters(CompletionSuggestionBuilder builder) thr case 4: builder.regex(randomAlphaOfLength(10), RegexOptionsTests.randomRegexOptions()); break; + case 5: + builder.skipDuplicates(!builder.skipDuplicates); + break; default: throw new IllegalStateException("should not through"); } @@ -182,5 +186,6 @@ protected void assertSuggestionContext(CompletionSuggestionBuilder builder, Sugg assertEquals(parsedContextBytes.get(contextName), queryContexts.get(contextName)); } assertEquals(builder.regexOptions, completionSuggestionCtx.getRegexOptions()); + assertEquals(builder.skipDuplicates, completionSuggestionCtx.isSkipDuplicates()); } } diff --git a/core/src/test/java/org/elasticsearch/search/suggest/completion/CompletionSuggestionTests.java b/core/src/test/java/org/elasticsearch/search/suggest/completion/CompletionSuggestionTests.java index 4b0e60a1d00db..2a5a89bdde332 100644 --- a/core/src/test/java/org/elasticsearch/search/suggest/completion/CompletionSuggestionTests.java +++ b/core/src/test/java/org/elasticsearch/search/suggest/completion/CompletionSuggestionTests.java @@ -24,6 +24,7 @@ import org.elasticsearch.test.ESTestCase; import java.util.ArrayList; +import java.util.Arrays; import java.util.Collections; import java.util.List; @@ -38,7 +39,7 @@ public void testToReduce() throws Exception { String name = randomAlphaOfLength(10); int size = randomIntBetween(3, 5); for (int i = 0; i < nShards; i++) { - CompletionSuggestion suggestion = new CompletionSuggestion(name, size); + CompletionSuggestion suggestion = new CompletionSuggestion(name, size, false); suggestion.addTerm(new CompletionSuggestion.Entry(new Text(""), 0, 0)); shardSuggestions.add(suggestion); } diff --git a/core/src/test/java/org/elasticsearch/search/suggest/phrase/DirectCandidateGeneratorTests.java b/core/src/test/java/org/elasticsearch/search/suggest/phrase/DirectCandidateGeneratorTests.java index 10022cc289a71..c92fba09d8cb4 100644 --- a/core/src/test/java/org/elasticsearch/search/suggest/phrase/DirectCandidateGeneratorTests.java +++ b/core/src/test/java/org/elasticsearch/search/suggest/phrase/DirectCandidateGeneratorTests.java @@ -146,7 +146,7 @@ public void testIllegalXContent() throws IOException { logger.info("Skipping test as it uses a custom duplicate check that is obsolete when strict duplicate checks are enabled."); } else { directGenerator = "{ \"field\" : \"f1\", \"field\" : \"f2\" }"; - assertIllegalXContent(directGenerator, 
ParsingException.class, + assertIllegalXContent(directGenerator, IllegalArgumentException.class, "[direct_generator] failed to parse field [field]"); } @@ -162,7 +162,7 @@ public void testIllegalXContent() throws IOException { // test unexpected token directGenerator = "{ \"size\" : [ \"xxl\" ] }"; - assertIllegalXContent(directGenerator, IllegalArgumentException.class, + assertIllegalXContent(directGenerator, ParsingException.class, "[direct_generator] size doesn't support values of type: START_ARRAY"); } diff --git a/core/src/test/java/org/elasticsearch/search/suggest/phrase/NoisyChannelSpellCheckerTests.java b/core/src/test/java/org/elasticsearch/search/suggest/phrase/NoisyChannelSpellCheckerTests.java index d66fd8596bb90..40b2b023334ca 100644 --- a/core/src/test/java/org/elasticsearch/search/suggest/phrase/NoisyChannelSpellCheckerTests.java +++ b/core/src/test/java/org/elasticsearch/search/suggest/phrase/NoisyChannelSpellCheckerTests.java @@ -26,6 +26,7 @@ import org.apache.lucene.analysis.miscellaneous.PerFieldAnalyzerWrapper; import org.apache.lucene.analysis.reverse.ReverseStringFilter; import org.apache.lucene.analysis.shingle.ShingleFilter; +import org.apache.lucene.analysis.standard.StandardAnalyzer; import org.apache.lucene.analysis.standard.StandardTokenizer; import org.apache.lucene.analysis.synonym.SolrSynonymParser; import org.apache.lucene.analysis.synonym.SynonymFilter; @@ -38,16 +39,14 @@ import org.apache.lucene.index.MultiFields; import org.apache.lucene.search.spell.DirectSpellChecker; import org.apache.lucene.search.spell.SuggestMode; +import org.apache.lucene.store.Directory; import org.apache.lucene.store.RAMDirectory; import org.apache.lucene.util.BytesRef; import org.elasticsearch.search.suggest.phrase.NoisyChannelSpellChecker.Result; import org.elasticsearch.test.ESTestCase; -import java.io.BufferedReader; import java.io.IOException; -import java.io.InputStreamReader; import java.io.StringReader; -import java.nio.charset.StandardCharsets; import java.util.HashMap; import java.util.Map; @@ -439,4 +438,29 @@ protected TokenStreamComponents createComponents(String fieldName) { assertThat(corrections[0].join(new BytesRef(" ")).utf8ToString(), equalTo("xorr the god jewel")); assertThat(corrections[1].join(new BytesRef(" ")).utf8ToString(), equalTo("xor the god jewel")); } + + public void testFewDocsEgdeCase() throws Exception { + try (Directory dir = newDirectory()) { + try (IndexWriter iw = new IndexWriter(dir, newIndexWriterConfig())) { + Document document = new Document(); + document.add(new TextField("field", "value", Field.Store.NO)); + iw.addDocument(document); + iw.commit(); + document = new Document(); + document.add(new TextField("other_field", "value", Field.Store.NO)); + iw.addDocument(document); + } + + try (DirectoryReader ir = DirectoryReader.open(dir)) { + WordScorer wordScorer = new StupidBackoffScorer(ir, MultiFields.getTerms(ir, "field"), "field", 0.95d, new BytesRef(" "), 0.4f); + NoisyChannelSpellChecker suggester = new NoisyChannelSpellChecker(); + DirectSpellChecker spellchecker = new DirectSpellChecker(); + DirectCandidateGenerator generator = new DirectCandidateGenerator(spellchecker, "field", SuggestMode.SUGGEST_MORE_POPULAR, ir, 0.95, 5); + Result result = suggester.getCorrections(new StandardAnalyzer(), new BytesRef("valeu"), generator, 1, 1, ir, "field", wordScorer, 1, 2); + assertThat(result.corrections.length, equalTo(1)); + assertThat(result.corrections[0].join(space).utf8ToString(), equalTo("value")); + } + } + } + } diff --git 
a/core/src/test/java/org/elasticsearch/snapshots/DedicatedClusterSnapshotRestoreIT.java b/core/src/test/java/org/elasticsearch/snapshots/DedicatedClusterSnapshotRestoreIT.java index 1529b050baa05..5341b268544e7 100644 --- a/core/src/test/java/org/elasticsearch/snapshots/DedicatedClusterSnapshotRestoreIT.java +++ b/core/src/test/java/org/elasticsearch/snapshots/DedicatedClusterSnapshotRestoreIT.java @@ -678,7 +678,7 @@ public void testRegistrationFailure() { } public void testThatSensitiveRepositorySettingsAreNotExposed() throws Exception { - Settings nodeSettings = Settings.builder().put().build(); + Settings nodeSettings = Settings.EMPTY; logger.info("--> start two nodes"); internalCluster().startNodes(2, nodeSettings); // Register mock repositories @@ -950,7 +950,7 @@ public void testRestoreShrinkIndex() throws Exception { logger.info("--> shrink the index"); assertAcked(client.admin().indices().prepareUpdateSettings(sourceIdx) .setSettings(Settings.builder().put("index.blocks.write", true)).get()); - assertAcked(client.admin().indices().prepareShrinkIndex(sourceIdx, shrunkIdx).get()); + assertAcked(client.admin().indices().prepareResizeIndex(sourceIdx, shrunkIdx).get()); logger.info("--> snapshot the shrunk index"); CreateSnapshotResponse createResponse = client.admin().cluster() diff --git a/core/src/test/java/org/elasticsearch/snapshots/RepositoriesIT.java b/core/src/test/java/org/elasticsearch/snapshots/RepositoriesIT.java index 0f6dcec818fc9..d9d06c26b7dcf 100644 --- a/core/src/test/java/org/elasticsearch/snapshots/RepositoriesIT.java +++ b/core/src/test/java/org/elasticsearch/snapshots/RepositoriesIT.java @@ -207,39 +207,4 @@ public void testRepositoryVerification() throws Exception { assertThat(ex.getMessage(), containsString("is not shared")); } } - - public void testRepositoryVerificationTimeout() throws Exception { - Client client = client(); - - Settings settings = Settings.builder() - .put("location", randomRepoPath()) - .put("random_control_io_exception_rate", 1.0).build(); - logger.info("--> creating repository that cannot write any files - should fail"); - assertThrows(client.admin().cluster().preparePutRepository("test-repo-1") - .setType("mock").setSettings(settings), - RepositoryVerificationException.class); - - logger.info("--> creating repository that cannot write any files, but suppress verification - should be acked"); - assertAcked(client.admin().cluster().preparePutRepository("test-repo-1") - .setType("mock").setSettings(settings).setVerify(false)); - - logger.info("--> verifying repository"); - assertThrows(client.admin().cluster().prepareVerifyRepository("test-repo-1"), RepositoryVerificationException.class); - - Path location = randomRepoPath(); - - logger.info("--> creating repository"); - try { - client.admin().cluster().preparePutRepository("test-repo-1") - .setType("mock") - .setSettings(Settings.builder() - .put("location", location) - .put("localize_location", true) - ).get(); - fail("RepositoryVerificationException wasn't generated"); - } catch (RepositoryVerificationException ex) { - assertThat(ex.getMessage(), containsString("is not shared")); - } - } - } diff --git a/core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java b/core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java index 601ca1b8210d3..6a6825a8adaac 100644 --- a/core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java +++ b/core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java @@ 
-36,6 +36,7 @@ import org.elasticsearch.action.admin.cluster.storedscripts.GetStoredScriptResponse; import org.elasticsearch.action.admin.indices.flush.FlushResponse; import org.elasticsearch.action.admin.indices.settings.get.GetSettingsResponse; +import org.elasticsearch.action.admin.indices.stats.ShardStats; import org.elasticsearch.action.admin.indices.template.get.GetIndexTemplatesResponse; import org.elasticsearch.action.index.IndexRequestBuilder; import org.elasticsearch.action.ingest.DeletePipelineRequest; @@ -66,6 +67,7 @@ import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.IndexService; +import org.elasticsearch.index.engine.Engine; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.indices.IndicesService; import org.elasticsearch.indices.InvalidIndexNameException; @@ -102,6 +104,7 @@ import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS; import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder; import static org.elasticsearch.index.IndexSettings.INDEX_REFRESH_INTERVAL_SETTING; +import static org.elasticsearch.index.query.QueryBuilders.boolQuery; import static org.elasticsearch.index.query.QueryBuilders.matchQuery; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAliasesExist; @@ -119,6 +122,7 @@ import static org.hamcrest.Matchers.greaterThan; import static org.hamcrest.Matchers.is; import static org.hamcrest.Matchers.lessThan; +import static org.hamcrest.Matchers.not; import static org.hamcrest.Matchers.notNullValue; import static org.hamcrest.Matchers.nullValue; import static org.hamcrest.Matchers.startsWith; @@ -132,15 +136,29 @@ protected Collection> nodePlugins() { MockRepository.Plugin.class); } + private Settings randomRepoSettings() { + Settings.Builder repoSettings = Settings.builder(); + repoSettings.put("location", randomRepoPath()); + if (randomBoolean()) { + repoSettings.put("compress", randomBoolean()); + } + if (randomBoolean()) { + repoSettings.put("chunk_size", randomIntBetween(100, 1000), ByteSizeUnit.BYTES); + } else { + if (randomBoolean()) { + repoSettings.put("chunk_size", randomIntBetween(100, 1000), ByteSizeUnit.BYTES); + } else { + repoSettings.put("chunk_size", (String) null); + } + } + return repoSettings.build(); + } + public void testBasicWorkFlow() throws Exception { Client client = client(); logger.info("--> creating repository"); - assertAcked(client.admin().cluster().preparePutRepository("test-repo") - .setType("fs").setSettings(Settings.builder() - .put("location", randomRepoPath()) - .put("compress", randomBoolean()) - .put("chunk_size", randomIntBetween(100, 1000), ByteSizeUnit.BYTES))); + assertAcked(client.admin().cluster().preparePutRepository("test-repo").setType("fs").setSettings(randomRepoSettings())); createIndex("test-idx-1", "test-idx-2", "test-idx-3"); ensureGreen(); @@ -170,8 +188,23 @@ public void testBasicWorkFlow() throws Exception { flushResponseFuture = client.admin().indices().prepareFlush(indices).execute(); } } + + final String[] indicesToSnapshot = {"test-idx-*", "-test-idx-3"}; + + logger.info("--> capturing history UUIDs"); + final Map historyUUIDs = new HashMap<>(); + for (ShardStats shardStats: client().admin().indices().prepareStats(indicesToSnapshot).clear().get().getShards()) { + String historyUUID = 
shardStats.getCommitStats().getUserData().get(Engine.HISTORY_UUID_KEY); + ShardId shardId = shardStats.getShardRouting().shardId(); + if (historyUUIDs.containsKey(shardId)) { + assertThat(shardStats.getShardRouting() + " has a different history uuid", historyUUID, equalTo(historyUUIDs.get(shardId))); + } else { + historyUUIDs.put(shardId, historyUUID); + } + } + logger.info("--> snapshot"); - CreateSnapshotResponse createSnapshotResponse = client.admin().cluster().prepareCreateSnapshot("test-repo", "test-snap").setWaitForCompletion(true).setIndices("test-idx-*", "-test-idx-3").get(); + CreateSnapshotResponse createSnapshotResponse = client.admin().cluster().prepareCreateSnapshot("test-repo", "test-snap").setWaitForCompletion(true).setIndices(indicesToSnapshot).get(); assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0)); assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponse.getSnapshotInfo().totalShards())); @@ -211,6 +244,13 @@ public void testBasicWorkFlow() throws Exception { assertHitCount(client.prepareSearch("test-idx-3").setSize(0).get(), 50L); } + for (ShardStats shardStats: client().admin().indices().prepareStats(indicesToSnapshot).clear().get().getShards()) { + String historyUUID = shardStats.getCommitStats().getUserData().get(Engine.HISTORY_UUID_KEY); + ShardId shardId = shardStats.getShardRouting().shardId(); + assertThat(shardStats.getShardRouting() + " doesn't have a history uuid", historyUUID, notNullValue()); + assertThat(shardStats.getShardRouting() + " doesn't have a new history", historyUUID, not(equalTo(historyUUIDs.get(shardId)))); + } + // Test restore after index deletion logger.info("--> delete indices"); cluster().wipeIndices("test-idx-1", "test-idx-2"); @@ -226,6 +266,13 @@ public void testBasicWorkFlow() throws Exception { assertThat(clusterState.getMetaData().hasIndex("test-idx-1"), equalTo(true)); assertThat(clusterState.getMetaData().hasIndex("test-idx-2"), equalTo(false)); + for (ShardStats shardStats: client().admin().indices().prepareStats(indicesToSnapshot).clear().get().getShards()) { + String historyUUID = shardStats.getCommitStats().getUserData().get(Engine.HISTORY_UUID_KEY); + ShardId shardId = shardStats.getShardRouting().shardId(); + assertThat(shardStats.getShardRouting() + " doesn't have a history uuid", historyUUID, notNullValue()); + assertThat(shardStats.getShardRouting() + " doesn't have a new history", historyUUID, not(equalTo(historyUUIDs.get(shardId)))); + } + if (flushResponseFuture != null) { // Finish flush flushResponseFuture.actionGet(); @@ -276,11 +323,7 @@ public void testFreshIndexUUID() { Client client = client(); logger.info("--> creating repository"); - assertAcked(client.admin().cluster().preparePutRepository("test-repo") - .setType("fs").setSettings(Settings.builder() - .put("location", randomRepoPath()) - .put("compress", randomBoolean()) - .put("chunk_size", randomIntBetween(100, 1000), ByteSizeUnit.BYTES))); + assertAcked(client.admin().cluster().preparePutRepository("test-repo").setType("fs").setSettings(randomRepoSettings())); createIndex("test"); String originalIndexUUID = client().admin().indices().prepareGetSettings("test").get().getSetting("test", IndexMetaData.SETTING_INDEX_UUID); @@ -324,11 +367,7 @@ public void testRestoreWithDifferentMappingsAndSettings() throws Exception { Client client = client(); logger.info("--> creating repository"); - assertAcked(client.admin().cluster().preparePutRepository("test-repo") - 
.setType("fs").setSettings(Settings.builder() - .put("location", randomRepoPath()) - .put("compress", randomBoolean()) - .put("chunk_size", randomIntBetween(100, 1000), ByteSizeUnit.BYTES))); + assertAcked(client.admin().cluster().preparePutRepository("test-repo").setType("fs").setSettings(randomRepoSettings())); logger.info("--> create index with foo type"); assertAcked(prepareCreate("test-idx", 2, Settings.builder() @@ -932,7 +971,7 @@ public void testUnallocatedShards() throws Exception { .put("location", randomRepoPath()))); logger.info("--> creating index that cannot be allocated"); - prepareCreate("test-idx", 2, Settings.builder().put(IndexMetaData.INDEX_ROUTING_INCLUDE_GROUP_SETTING.getKey() + ".tag", "nowhere").put("index.number_of_shards", 3)).setWaitForActiveShards(ActiveShardCount.NONE).get(); + prepareCreate("test-idx", 2, Settings.builder().put(IndexMetaData.INDEX_ROUTING_INCLUDE_GROUP_SETTING.getKey() + "tag", "nowhere").put("index.number_of_shards", 3)).setWaitForActiveShards(ActiveShardCount.NONE).get(); logger.info("--> snapshot"); CreateSnapshotResponse createSnapshotResponse = client.admin().cluster().prepareCreateSnapshot("test-repo", "test-snap").setWaitForCompletion(true).setIndices("test-idx").get(); @@ -1795,7 +1834,7 @@ public void testChangeSettingsOnRestore() throws Exception { .put(INDEX_REFRESH_INTERVAL_SETTING.getKey(), "10s") .put("index.analysis.analyzer.my_analyzer.type", "custom") .put("index.analysis.analyzer.my_analyzer.tokenizer", "standard") - .putArray("index.analysis.analyzer.my_analyzer.filter", "lowercase", "my_synonym") + .putList("index.analysis.analyzer.my_analyzer.filter", "lowercase", "my_synonym") .put("index.analysis.filter.my_synonym.type", "synonym") .put("index.analysis.filter.my_synonym.synonyms", "foo => bar"); @@ -1917,7 +1956,7 @@ public void testRecreateBlocksOnRestore() throws Exception { initialSettingsBuilder.put(blockSetting, true); } Settings initialSettings = initialSettingsBuilder.build(); - logger.info("--> using initial block settings {}", initialSettings.getAsMap()); + logger.info("--> using initial block settings {}", initialSettings); if (!initialSettings.isEmpty()) { logger.info("--> apply initial blocks to index"); @@ -1946,7 +1985,7 @@ public void testRecreateBlocksOnRestore() throws Exception { changedSettingsBuilder.put(blockSetting, randomBoolean()); } Settings changedSettings = changedSettingsBuilder.build(); - logger.info("--> applying changed block settings {}", changedSettings.getAsMap()); + logger.info("--> applying changed block settings {}", changedSettings); RestoreSnapshotResponse restoreSnapshotResponse = client.admin().cluster() .prepareRestoreSnapshot("test-repo", "test-snap") @@ -1960,7 +1999,7 @@ public void testRecreateBlocksOnRestore() throws Exception { .put(initialSettings) .put(changedSettings) .build(); - logger.info("--> merged block settings {}", mergedSettings.getAsMap()); + logger.info("--> merged block settings {}", mergedSettings); logger.info("--> checking consistency between settings and blocks"); assertThat(mergedSettings.getAsBoolean(IndexMetaData.SETTING_BLOCKS_METADATA, false), diff --git a/core/src/test/java/org/elasticsearch/test/search/aggregations/bucket/SharedSignificantTermsTestMethods.java b/core/src/test/java/org/elasticsearch/test/search/aggregations/bucket/SharedSignificantTermsTestMethods.java index 6dd4fa384e99b..e5081481859ab 100644 --- a/core/src/test/java/org/elasticsearch/test/search/aggregations/bucket/SharedSignificantTermsTestMethods.java +++ 
b/core/src/test/java/org/elasticsearch/test/search/aggregations/bucket/SharedSignificantTermsTestMethods.java @@ -50,7 +50,7 @@ public class SharedSignificantTermsTestMethods { public static void aggregateAndCheckFromSeveralShards(ESIntegTestCase testCase) throws ExecutionException, InterruptedException { String type = ESTestCase.randomBoolean() ? "text" : "keyword"; - String settings = "{\"index.number_of_shards\": 5, \"index.number_of_replicas\": 0}"; + String settings = "{\"index.number_of_shards\": 7, \"index.number_of_replicas\": 0}"; index01Docs(type, settings, testCase); testCase.ensureGreen(); testCase.logClusterState(); diff --git a/core/src/test/java/org/elasticsearch/threadpool/ScheduleWithFixedDelayTests.java b/core/src/test/java/org/elasticsearch/threadpool/ScheduleWithFixedDelayTests.java index dd1f4991f9570..da0125d6f65d9 100644 --- a/core/src/test/java/org/elasticsearch/threadpool/ScheduleWithFixedDelayTests.java +++ b/core/src/test/java/org/elasticsearch/threadpool/ScheduleWithFixedDelayTests.java @@ -26,9 +26,9 @@ import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException; import org.elasticsearch.node.Node; import org.elasticsearch.test.ESTestCase; -import org.elasticsearch.threadpool.ThreadPool.Cancellable; +import org.elasticsearch.threadpool.Scheduler.Cancellable; import org.elasticsearch.threadpool.ThreadPool.Names; -import org.elasticsearch.threadpool.ThreadPool.ReschedulingRunnable; +import org.elasticsearch.threadpool.Scheduler.ReschedulingRunnable; import org.junit.After; import org.junit.Before; @@ -80,7 +80,8 @@ public void testDoesNotRescheduleUntilExecutionFinished() throws Exception { Thread.currentThread().interrupt(); } }; - ReschedulingRunnable reschedulingRunnable = new ReschedulingRunnable(runnable, delay, Names.GENERIC, threadPool); + ReschedulingRunnable reschedulingRunnable = new ReschedulingRunnable(runnable, delay, Names.GENERIC, threadPool, + (e) -> {}, (e) -> {}); // this call was made during construction of the runnable verify(threadPool, times(1)).schedule(delay, Names.GENERIC, reschedulingRunnable); @@ -260,7 +261,8 @@ public ScheduledFuture schedule(TimeValue delay, String executor, Runnable co } }; Runnable runnable = () -> {}; - ReschedulingRunnable reschedulingRunnable = new ReschedulingRunnable(runnable, delay, Names.GENERIC, threadPool); + ReschedulingRunnable reschedulingRunnable = new ReschedulingRunnable(runnable, delay, Names.GENERIC, + threadPool, (e) -> {}, (e) -> {}); assertTrue(reschedulingRunnable.isCancelled()); } diff --git a/core/src/test/java/org/elasticsearch/transport/RemoteClusterConnectionTests.java b/core/src/test/java/org/elasticsearch/transport/RemoteClusterConnectionTests.java index d70032ca065f7..856385531d7ec 100644 --- a/core/src/test/java/org/elasticsearch/transport/RemoteClusterConnectionTests.java +++ b/core/src/test/java/org/elasticsearch/transport/RemoteClusterConnectionTests.java @@ -110,12 +110,12 @@ public static MockTransportService startTransport( ClusterName clusterName = ClusterName.CLUSTER_NAME_SETTING.get(s); MockTransportService newService = MockTransportService.createNewService(s, version, threadPool, null); try { - newService.registerRequestHandler(ClusterSearchShardsAction.NAME, ClusterSearchShardsRequest::new, ThreadPool.Names.SAME, + newService.registerRequestHandler(ClusterSearchShardsAction.NAME,ThreadPool.Names.SAME, ClusterSearchShardsRequest::new, (request, channel) -> { channel.sendResponse(new ClusterSearchShardsResponse(new ClusterSearchShardsGroup[0], 
knownNodes.toArray(new DiscoveryNode[0]), Collections.emptyMap())); }); - newService.registerRequestHandler(ClusterStateAction.NAME, ClusterStateRequest::new, ThreadPool.Names.SAME, + newService.registerRequestHandler(ClusterStateAction.NAME, ThreadPool.Names.SAME, ClusterStateRequest::new, (request, channel) -> { DiscoveryNodes.Builder builder = DiscoveryNodes.builder(); for (DiscoveryNode node : knownNodes) { diff --git a/core/src/test/java/org/elasticsearch/transport/RemoteClusterServiceTests.java b/core/src/test/java/org/elasticsearch/transport/RemoteClusterServiceTests.java index aa4c7415a4c45..8e0c039176207 100644 --- a/core/src/test/java/org/elasticsearch/transport/RemoteClusterServiceTests.java +++ b/core/src/test/java/org/elasticsearch/transport/RemoteClusterServiceTests.java @@ -125,8 +125,8 @@ public void testGroupClusterIndices() throws IOException { transportService.start(); transportService.acceptIncomingRequests(); Settings.Builder builder = Settings.builder(); - builder.putArray("search.remote.cluster_1.seeds", seedNode.getAddress().toString()); - builder.putArray("search.remote.cluster_2.seeds", otherSeedNode.getAddress().toString()); + builder.putList("search.remote.cluster_1.seeds", seedNode.getAddress().toString()); + builder.putList("search.remote.cluster_2.seeds", otherSeedNode.getAddress().toString()); try (RemoteClusterService service = new RemoteClusterService(builder.build(), transportService)) { assertFalse(service.isCrossClusterSearchEnabled()); service.initializeRemoteClusters(); @@ -171,8 +171,8 @@ public void testIncrementallyAddClusters() throws IOException { transportService.start(); transportService.acceptIncomingRequests(); Settings.Builder builder = Settings.builder(); - builder.putArray("search.remote.cluster_1.seeds", seedNode.getAddress().toString()); - builder.putArray("search.remote.cluster_2.seeds", otherSeedNode.getAddress().toString()); + builder.putList("search.remote.cluster_1.seeds", seedNode.getAddress().toString()); + builder.putList("search.remote.cluster_2.seeds", otherSeedNode.getAddress().toString()); try (RemoteClusterService service = new RemoteClusterService(Settings.EMPTY, transportService)) { assertFalse(service.isCrossClusterSearchEnabled()); service.initializeRemoteClusters(); @@ -225,9 +225,9 @@ public void testRemoteNodeAttribute() throws IOException, InterruptedException { transportService.start(); transportService.acceptIncomingRequests(); final Settings.Builder builder = Settings.builder(); - builder.putArray( + builder.putList( "search.remote.cluster_1.seeds", c1N1Node.getAddress().toString()); - builder.putArray( + builder.putList( "search.remote.cluster_2.seeds", c2N1Node.getAddress().toString()); try (RemoteClusterService service = new RemoteClusterService(settings, transportService)) { @@ -302,9 +302,9 @@ public void testCollectNodes() throws InterruptedException, IOException { transportService.start(); transportService.acceptIncomingRequests(); final Settings.Builder builder = Settings.builder(); - builder.putArray( + builder.putList( "search.remote.cluster_1.seeds", c1N1Node.getAddress().toString()); - builder.putArray( + builder.putList( "search.remote.cluster_2.seeds", c2N1Node.getAddress().toString()); try (RemoteClusterService service = new RemoteClusterService(settings, transportService)) { diff --git a/core/src/test/java/org/elasticsearch/transport/TCPTransportTests.java b/core/src/test/java/org/elasticsearch/transport/TcpTransportTests.java similarity index 97% rename from 
core/src/test/java/org/elasticsearch/transport/TCPTransportTests.java rename to core/src/test/java/org/elasticsearch/transport/TcpTransportTests.java index c386b2865afa3..54efd231182b6 100644 --- a/core/src/test/java/org/elasticsearch/transport/TCPTransportTests.java +++ b/core/src/test/java/org/elasticsearch/transport/TcpTransportTests.java @@ -45,8 +45,8 @@ import static org.hamcrest.Matchers.equalTo; -/** Unit tests for TCPTransport */ -public class TCPTransportTests extends ESTestCase { +/** Unit tests for {@link TcpTransport} */ +public class TcpTransportTests extends ESTestCase { /** Test ipv4 host with a default port works */ public void testParseV4DefaultPort() throws Exception { @@ -175,7 +175,7 @@ public void testCompressRequest() throws IOException { final boolean compressed = randomBoolean(); final AtomicBoolean called = new AtomicBoolean(false); Req request = new Req(randomRealisticUnicodeOfLengthBetween(10, 100)); - ThreadPool threadPool = new TestThreadPool(TCPTransportTests.class.getName()); + ThreadPool threadPool = new TestThreadPool(TcpTransportTests.class.getName()); AtomicReference exceptionReference = new AtomicReference<>(); try { TcpTransport transport = new TcpTransport("test", Settings.builder().put("transport.tcp.compress", compressed).build(), @@ -191,7 +191,7 @@ protected Object bind(String name, InetSocketAddress address) throws IOException } @Override - protected void closeChannels(List channel, boolean blocking) throws IOException { + protected void closeChannels(List channel, boolean blocking, boolean doNotLinger) throws IOException { } @@ -224,8 +224,8 @@ protected void sendMessage(Object o, BytesReference reference, ActionListener li } @Override - protected NodeChannels connectToChannels(DiscoveryNode node, ConnectionProfile profile, - Consumer onChannelClose) throws IOException { + protected NodeChannels connectToChannels( + DiscoveryNode node, ConnectionProfile profile, Consumer onChannelClose) throws IOException { return new NodeChannels(node, new Object[profile.getNumConnections()], profile); } diff --git a/core/src/test/java/org/elasticsearch/transport/TransportActionProxyTests.java b/core/src/test/java/org/elasticsearch/transport/TransportActionProxyTests.java index e73ad8e439cb8..64f4182550935 100644 --- a/core/src/test/java/org/elasticsearch/transport/TransportActionProxyTests.java +++ b/core/src/test/java/org/elasticsearch/transport/TransportActionProxyTests.java @@ -267,7 +267,7 @@ public void testIsProxyAction() { } public void testIsProxyRequest() { - assertTrue(TransportActionProxy.isProxyRequest(new TransportActionProxy.ProxyRequest<>(() -> null))); + assertTrue(TransportActionProxy.isProxyRequest(new TransportActionProxy.ProxyRequest<>((in) -> null))); assertFalse(TransportActionProxy.isProxyRequest(TransportRequest.Empty.INSTANCE)); } } diff --git a/core/src/test/java/org/elasticsearch/update/UpdateIT.java b/core/src/test/java/org/elasticsearch/update/UpdateIT.java index dc46ef12e3669..0f7e242a4cb80 100644 --- a/core/src/test/java/org/elasticsearch/update/UpdateIT.java +++ b/core/src/test/java/org/elasticsearch/update/UpdateIT.java @@ -462,7 +462,7 @@ public void testUpdateRequestWithScriptAndShouldUpsertDoc() throws Exception { public void testContextVariables() throws Exception { assertAcked(prepareCreate("test") - .setSettings("index.version.created", Version.V_5_6_0.id) + .setSettings(Settings.builder().put("index.version.created", Version.V_5_6_0.id)) .addAlias(new Alias("alias")) .addMapping("type1", XContentFactory.jsonBuilder() 
.startObject() diff --git a/core/src/test/java/org/elasticsearch/validate/SimpleValidateQueryIT.java b/core/src/test/java/org/elasticsearch/validate/SimpleValidateQueryIT.java index 0dc9f24ff8978..a87f428fec51e 100644 --- a/core/src/test/java/org/elasticsearch/validate/SimpleValidateQueryIT.java +++ b/core/src/test/java/org/elasticsearch/validate/SimpleValidateQueryIT.java @@ -180,9 +180,9 @@ public void testExplainMatchPhrasePrefix() { assertAcked(prepareCreate("test").setSettings( Settings.builder().put(indexSettings()) .put("index.analysis.filter.syns.type", "synonym") - .putArray("index.analysis.filter.syns.synonyms", "one,two") + .putList("index.analysis.filter.syns.synonyms", "one,two") .put("index.analysis.analyzer.syns.tokenizer", "standard") - .putArray("index.analysis.analyzer.syns.filter", "syns") + .putList("index.analysis.analyzer.syns.filter", "syns") ).addMapping("test", "field","type=text,analyzer=syns")); ensureGreen(); @@ -215,7 +215,7 @@ public void testExplainMatchPhrasePrefix() { public void testExplainWithRewriteValidateQuery() throws Exception { client().admin().indices().prepareCreate("test") .addMapping("type1", "field", "type=text,analyzer=whitespace") - .setSettings(SETTING_NUMBER_OF_SHARDS, 1).get(); + .setSettings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 1)).get(); client().prepareIndex("test", "type1", "1").setSource("field", "quick lazy huge brown pidgin").get(); client().prepareIndex("test", "type1", "2").setSource("field", "the quick brown fox").get(); client().prepareIndex("test", "type1", "3").setSource("field", "the quick lazy huge brown fox jumps over the tree").get(); @@ -258,7 +258,7 @@ public void testExplainWithRewriteValidateQuery() throws Exception { public void testExplainWithRewriteValidateQueryAllShards() throws Exception { client().admin().indices().prepareCreate("test") .addMapping("type1", "field", "type=text,analyzer=whitespace") - .setSettings(SETTING_NUMBER_OF_SHARDS, 2).get(); + .setSettings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 2)).get(); // We are relying on specific routing behaviors for the result to be right, so // we cannot randomize the number of shards or change ids here. 
client().prepareIndex("test", "type1", "1") diff --git a/core/src/test/java/org/elasticsearch/versioning/SimpleVersioningIT.java b/core/src/test/java/org/elasticsearch/versioning/SimpleVersioningIT.java index 23b849b0742d3..caf4f725fa453 100644 --- a/core/src/test/java/org/elasticsearch/versioning/SimpleVersioningIT.java +++ b/core/src/test/java/org/elasticsearch/versioning/SimpleVersioningIT.java @@ -43,6 +43,7 @@ import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked; +import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertThrows; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.lessThanOrEqualTo; @@ -268,12 +269,10 @@ public void testSimpleVersioningWithFlush() throws Exception { assertThat(indexResponse.getVersion(), equalTo(1L)); client().admin().indices().prepareFlush().execute().actionGet(); - indexResponse = client().prepareIndex("test", "type", "1").setSource("field1", "value1_2").setVersion(1).execute().actionGet(); assertThat(indexResponse.getVersion(), equalTo(2L)); client().admin().indices().prepareFlush().execute().actionGet(); - assertThrows(client().prepareIndex("test", "type", "1").setSource("field1", "value1_1").setVersion(1).execute(), VersionConflictEngineException.class); @@ -286,13 +285,16 @@ public void testSimpleVersioningWithFlush() throws Exception { assertThrows(client().prepareDelete("test", "type", "1").setVersion(1).execute(), VersionConflictEngineException.class); assertThrows(client().prepareDelete("test", "type", "1").setVersion(1).execute(), VersionConflictEngineException.class); - client().admin().indices().prepareRefresh().execute().actionGet(); for (int i = 0; i < 10; i++) { assertThat(client().prepareGet("test", "type", "1").execute().actionGet().getVersion(), equalTo(2L)); } + client().admin().indices().prepareRefresh().execute().actionGet(); + for (int i = 0; i < 10; i++) { - SearchResponse searchResponse = client().prepareSearch().setQuery(matchAllQuery()).setVersion(true).execute().actionGet(); + SearchResponse searchResponse = client().prepareSearch().setQuery(matchAllQuery()).setVersion(true). 
+ execute().actionGet(); + assertHitCount(searchResponse, 1); assertThat(searchResponse.getHits().getAt(0).getVersion(), equalTo(2L)); } } diff --git a/core/src/test/resources/org/elasticsearch/index/analysis/shingle_analysis2.json b/core/src/test/resources/org/elasticsearch/index/analysis/shingle_analysis2.json new file mode 100644 index 0000000000000..a81ea538f19fe --- /dev/null +++ b/core/src/test/resources/org/elasticsearch/index/analysis/shingle_analysis2.json @@ -0,0 +1,15 @@ +{ + "index":{ + "analysis":{ + "filter":{ + "shingle_filler":{ + "type":"shingle", + "max_shingle_size" : 10, + "min_shingle_size" : 2, + "output_unigrams" : false, + "filler_token" : "FILLER" + } + } + } + } +} \ No newline at end of file diff --git a/core/src/test/resources/org/elasticsearch/index/mapper/multifield/test-multi-fields.json b/core/src/test/resources/org/elasticsearch/index/mapper/multifield/test-multi-fields.json index 7d4f819a2508f..b7317aba3c148 100644 --- a/core/src/test/resources/org/elasticsearch/index/mapper/multifield/test-multi-fields.json +++ b/core/src/test/resources/org/elasticsearch/index/mapper/multifield/test-multi-fields.json @@ -18,12 +18,6 @@ "type": "text", "store": true, "eager_global_ordinals": true - }, - "test2": { - "type": "token_count", - "index": true, - "store": true, - "analyzer": "simple" } } }, diff --git a/core/src/test/resources/org/elasticsearch/search/query/all-example-document.json b/core/src/test/resources/org/elasticsearch/search/query/all-example-document.json index 9e4d04930a71a..abc22939b6422 100644 --- a/core/src/test/resources/org/elasticsearch/search/query/all-example-document.json +++ b/core/src/test/resources/org/elasticsearch/search/query/all-example-document.json @@ -21,7 +21,6 @@ "f_long": "42", "f_float": "1.7", "f_hfloat": "1.5", - "f_sfloat": "12.23", "f_ip": "127.0.0.1", "f_binary": "VGhpcyBpcyBzb21lIGJpbmFyeSBkYXRhCg==", "f_suggest": { diff --git a/core/src/test/resources/org/elasticsearch/search/query/all-query-index-with-all.json b/core/src/test/resources/org/elasticsearch/search/query/all-query-index-with-all.json deleted file mode 100644 index d9cbb485d1318..0000000000000 --- a/core/src/test/resources/org/elasticsearch/search/query/all-query-index-with-all.json +++ /dev/null @@ -1,23 +0,0 @@ -{ - "settings": { - "index": { - "number_of_shards": 1, - "number_of_replicas": 0, - "version": { - "created": "5000099" - }, - "query.default_field": "f1" - } - }, - "mappings": { - "doc": { - "_all": { - "enabled": true - }, - "properties": { - "f1": {"type": "text"}, - "f2": {"type": "text"} - } - } - } -} diff --git a/core/src/test/resources/org/elasticsearch/search/query/all-query-index.json b/core/src/test/resources/org/elasticsearch/search/query/all-query-index.json index 89c412171254a..3b068132d5142 100644 --- a/core/src/test/resources/org/elasticsearch/search/query/all-query-index.json +++ b/core/src/test/resources/org/elasticsearch/search/query/all-query-index.json @@ -18,8 +18,7 @@ "f_multi": { "type": "text", "fields": { - "raw": {"type": "keyword"}, - "f_token_count": {"type": "token_count", "analyzer": "standard"} + "raw": {"type": "keyword"} } }, "f_object": { @@ -49,7 +48,6 @@ "f_long": {"type": "long"}, "f_float": {"type": "float"}, "f_hfloat": {"type": "half_float"}, - "f_sfloat": {"type": "scaled_float", "scaling_factor": 100}, "f_ip": {"type": "ip"}, "f_binary": {"type": "binary"}, "f_suggest": {"type": "completion"}, diff --git a/distribution/build.gradle b/distribution/build.gradle index 3522becc53ec2..5c809de568d82 100644 --- 
a/distribution/build.gradle +++ b/distribution/build.gradle @@ -537,6 +537,12 @@ Map expansionsForDistribution(distributionType) { 'heap.min': defaultHeapSize, 'heap.max': defaultHeapSize, + 'heap.dump.path': [ + 'deb': "-XX:HeapDumpPath=/var/lib/elasticsearch", + 'rpm': "-XX:HeapDumpPath=/var/lib/elasticsearch", + 'def': "#-XX:HeapDumpPath=/heap/dump/path" + ], + 'stopping.timeout': [ 'rpm': 86400, ], diff --git a/distribution/bwc/build.gradle b/distribution/bwc/build.gradle index f51577d668aaf..f4cc2628df9e0 100644 --- a/distribution/bwc/build.gradle +++ b/distribution/bwc/build.gradle @@ -56,69 +56,89 @@ if (enabled) { def (String major, String minor, String bugfix) = bwcVersion.split('\\.') def (String currentMajor, String currentMinor, String currentBugfix) = version.split('\\.') String bwcBranch - if (project.name == 'bwc-stable-snapshot') { + if (project.name == 'bwc-stable-snapshot' && major != currentMajor) { bwcBranch = "${major}.x" } else { bwcBranch = "${major}.${minor}" } File checkoutDir = file("${buildDir}/bwc/checkout-${bwcBranch}") + final String remote = System.getProperty("tests.bwc.remote", "elastic") + task createClone(type: LoggedExec) { onlyIf { checkoutDir.exists() == false } commandLine = ['git', 'clone', rootDir, checkoutDir] } - // we use regular Exec here to ensure we always get output, regardless of logging level - task findUpstream(type: Exec) { + task findRemote(type: LoggedExec) { dependsOn createClone workingDir = checkoutDir commandLine = ['git', 'remote', '-v'] - ignoreExitValue = true ByteArrayOutputStream output = new ByteArrayOutputStream() standardOutput = output doLast { - if (execResult.exitValue != 0) { - output.toString('UTF-8').eachLine { line -> logger.error(line) } - execResult.assertNormalExitValue() - } - project.ext.upstreamExists = false + project.ext.remoteExists = false output.toString('UTF-8').eachLine { - if (it.contains("upstream")) { - project.ext.upstreamExists = true + if (it.contains("${remote}\thttps://github.com/${remote}/elasticsearch.git")) { + project.ext.remoteExists = true } } } } - task addUpstream(type: LoggedExec) { - dependsOn findUpstream - onlyIf { project.ext.upstreamExists == false } + task addRemote(type: LoggedExec) { + dependsOn findRemote + onlyIf { project.ext.remoteExists == false } workingDir = checkoutDir - commandLine = ['git', 'remote', 'add', 'upstream', 'https://github.com/elastic/elasticsearch.git'] + commandLine = ['git', 'remote', 'add', "${remote}", "https://github.com/${remote}/elasticsearch.git"] } task fetchLatest(type: LoggedExec) { onlyIf { project.gradle.startParameter.isOffline() == false } - dependsOn addUpstream + dependsOn addRemote workingDir = checkoutDir commandLine = ['git', 'fetch', '--all'] } - // this is an Exec task so that the SHA that is checked out is logged - task checkoutBwcBranch(type: Exec) { - def String refspec = System.getProperty("tests.bwc.refspec", "upstream/${bwcBranch}") + String buildMetadataKey = "bwc_refspec_${project.path.substring(1)}" + task checkoutBwcBranch(type: LoggedExec) { + String refspec = System.getProperty("tests.bwc.refspec", buildMetadata.get(buildMetadataKey, "${remote}/${bwcBranch}")) dependsOn fetchLatest workingDir = checkoutDir commandLine = ['git', 'checkout', refspec] + doFirst { + println "Checking out elasticsearch ${refspec} for branch ${bwcBranch}" + } + } + + File buildMetadataFile = project.file("build/${project.name}/build_metadata") + task writeBuildMetadata(type: LoggedExec) { + dependsOn checkoutBwcBranch + workingDir = 
checkoutDir + commandLine = ['git', 'rev-parse', 'HEAD'] + ignoreExitValue = true + ByteArrayOutputStream output = new ByteArrayOutputStream() + standardOutput = output + doLast { + if (execResult.exitValue != 0) { + output.toString('UTF-8').eachLine { line -> logger.error(line) } + execResult.assertNormalExitValue() + } + project.mkdir(buildMetadataFile.parent) + String commit = output.toString('UTF-8') + buildMetadataFile.setText("${buildMetadataKey}=${commit}", 'UTF-8') + println "Checked out elasticsearch commit ${commit}" + } } File bwcDeb = file("${checkoutDir}/distribution/deb/build/distributions/elasticsearch-${bwcVersion}.deb") File bwcRpm = file("${checkoutDir}/distribution/rpm/build/distributions/elasticsearch-${bwcVersion}.rpm") File bwcZip = file("${checkoutDir}/distribution/zip/build/distributions/elasticsearch-${bwcVersion}.zip") task buildBwcVersion(type: GradleBuild) { - dependsOn checkoutBwcBranch + dependsOn checkoutBwcBranch, writeBuildMetadata dir = checkoutDir tasks = [':distribution:deb:assemble', ':distribution:rpm:assemble', ':distribution:zip:assemble'] + startParameter.systemPropertiesArgs = ['build.snapshot': 'true'] doLast { List missing = [bwcDeb, bwcRpm, bwcZip].grep { file -> false == file.exists() } @@ -129,7 +149,6 @@ if (enabled) { } } - artifacts { 'default' file: bwcDeb, name: 'elasticsearch', type: 'deb', builtBy: buildBwcVersion 'default' file: bwcRpm, name: 'elasticsearch', type: 'rpm', builtBy: buildBwcVersion diff --git a/distribution/integ-test-zip/src/test/java/org/elasticsearch/test/rest/CreatedLocationHeaderIT.java b/distribution/integ-test-zip/src/test/java/org/elasticsearch/test/rest/CreatedLocationHeaderIT.java index c124a0695c95f..c61b736bf6db1 100644 --- a/distribution/integ-test-zip/src/test/java/org/elasticsearch/test/rest/CreatedLocationHeaderIT.java +++ b/distribution/integ-test-zip/src/test/java/org/elasticsearch/test/rest/CreatedLocationHeaderIT.java @@ -19,8 +19,8 @@ package org.elasticsearch.test.rest; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.entity.StringEntity; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; import org.elasticsearch.client.Response; import java.io.IOException; diff --git a/distribution/integ-test-zip/src/test/java/org/elasticsearch/test/rest/NodeRestUsageIT.java b/distribution/integ-test-zip/src/test/java/org/elasticsearch/test/rest/NodeRestUsageIT.java index 851d1ad8e51ec..b94aa71b04029 100644 --- a/distribution/integ-test-zip/src/test/java/org/elasticsearch/test/rest/NodeRestUsageIT.java +++ b/distribution/integ-test-zip/src/test/java/org/elasticsearch/test/rest/NodeRestUsageIT.java @@ -19,8 +19,8 @@ package org.elasticsearch.test.rest; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.entity.StringEntity; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; import org.elasticsearch.client.Response; import org.elasticsearch.client.ResponseException; diff --git a/distribution/integ-test-zip/src/test/java/org/elasticsearch/test/rest/WaitForRefreshAndCloseTests.java b/distribution/integ-test-zip/src/test/java/org/elasticsearch/test/rest/WaitForRefreshAndCloseTests.java index 9bcee067cf177..0b1ad2a6dd9ab 100644 --- a/distribution/integ-test-zip/src/test/java/org/elasticsearch/test/rest/WaitForRefreshAndCloseTests.java +++ b/distribution/integ-test-zip/src/test/java/org/elasticsearch/test/rest/WaitForRefreshAndCloseTests.java @@ -19,10 
+19,10 @@ package org.elasticsearch.test.rest; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.entity.StringEntity; -import org.elasticsearch.client.http.util.EntityUtils; +import org.apache.http.HttpEntity; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; +import org.apache.http.util.EntityUtils; import org.elasticsearch.action.ActionFuture; import org.elasticsearch.action.support.PlainActionFuture; import org.elasticsearch.client.Response; diff --git a/distribution/src/main/packaging/scripts/postinst b/distribution/src/main/packaging/scripts/postinst index f2bf4c083cecf..a7ffda92d1e9d 100644 --- a/distribution/src/main/packaging/scripts/postinst +++ b/distribution/src/main/packaging/scripts/postinst @@ -100,6 +100,7 @@ fi chown -R elasticsearch:elasticsearch /var/lib/elasticsearch chown -R elasticsearch:elasticsearch /var/log/elasticsearch chown -R root:elasticsearch /etc/elasticsearch +chmod g+s /etc/elasticsearch chmod 0750 /etc/elasticsearch if [ -f /etc/default/elasticsearch ]; then diff --git a/distribution/src/main/resources/bin/elasticsearch-env b/distribution/src/main/resources/bin/elasticsearch-env index 83737ae12534b..7e74195e60784 100644 --- a/distribution/src/main/resources/bin/elasticsearch-env +++ b/distribution/src/main/resources/bin/elasticsearch-env @@ -65,6 +65,8 @@ fi # check the Java version "$JAVA" -cp "$ES_CLASSPATH" org.elasticsearch.tools.JavaVersionChecker +export HOSTNAME=$HOSTNAME + ${source.path.env} if [ -z "$ES_PATH_CONF" ]; then diff --git a/distribution/src/main/resources/bin/elasticsearch-env.bat b/distribution/src/main/resources/bin/elasticsearch-env.bat index 7f3c9f99f0375..00cc7fa09804e 100644 --- a/distribution/src/main/resources/bin/elasticsearch-env.bat +++ b/distribution/src/main/resources/bin/elasticsearch-env.bat @@ -29,21 +29,23 @@ if not exist %JAVA% ( ) rem do not let JAVA_TOOL_OPTIONS slip in (as the JVM does by default) -if not "%JAVA_TOOL_OPTIONS%" == "" ( - echo "warning: ignoring JAVA_TOOL_OPTIONS=$JAVA_TOOL_OPTIONS" +if defined JAVA_TOOL_OPTIONS ( + echo warning: ignoring JAVA_TOOL_OPTIONS=%JAVA_TOOL_OPTIONS% set JAVA_TOOL_OPTIONS= ) rem JAVA_OPTS is not a built-in JVM mechanism but some people think it is so we rem warn them that we are not observing the value of %JAVA_OPTS% -if not "%JAVA_OPTS%" == "" ( - echo|set /p="warning: ignoring JAVA_OPTS=%JAVA_OPTS%; " +if defined JAVA_OPTS ( + (echo|set /p=warning: ignoring JAVA_OPTS=%JAVA_OPTS%; ) echo pass JVM parameters via ES_JAVA_OPTS ) rem check the Java version %JAVA% -cp "%ES_CLASSPATH%" "org.elasticsearch.tools.JavaVersionChecker" || exit /b 1 -if "%ES_PATH_CONF%" == "" ( +set HOSTNAME=%COMPUTERNAME% + +if not defined ES_PATH_CONF ( set ES_PATH_CONF=!ES_HOME!\config ) diff --git a/distribution/src/main/resources/bin/elasticsearch-keystore.bat b/distribution/src/main/resources/bin/elasticsearch-keystore.bat index f12ffd60092d4..7e131a80a1b6c 100644 --- a/distribution/src/main/resources/bin/elasticsearch-keystore.bat +++ b/distribution/src/main/resources/bin/elasticsearch-keystore.bat @@ -1,6 +1,7 @@ @echo off setlocal enabledelayedexpansion +setlocal enableextensions call "%~dp0elasticsearch-env.bat" || exit /b 1 @@ -13,3 +14,4 @@ call "%~dp0elasticsearch-env.bat" || exit /b 1 %* endlocal +endlocal diff --git a/distribution/src/main/resources/bin/elasticsearch-plugin.bat b/distribution/src/main/resources/bin/elasticsearch-plugin.bat index 
95de9c1c41787..1d059aaaceee9 100644 --- a/distribution/src/main/resources/bin/elasticsearch-plugin.bat +++ b/distribution/src/main/resources/bin/elasticsearch-plugin.bat @@ -1,6 +1,7 @@ @echo off setlocal enabledelayedexpansion +setlocal enableextensions call "%~dp0elasticsearch-env.bat" || exit /b 1 @@ -13,3 +14,4 @@ call "%~dp0elasticsearch-env.bat" || exit /b 1 %* endlocal +endlocal diff --git a/distribution/src/main/resources/bin/elasticsearch-service.bat b/distribution/src/main/resources/bin/elasticsearch-service.bat index b1270ec029833..dfb854a47087e 100644 --- a/distribution/src/main/resources/bin/elasticsearch-service.bat +++ b/distribution/src/main/resources/bin/elasticsearch-service.bat @@ -1,6 +1,7 @@ @echo off setlocal enabledelayedexpansion +setlocal enableextensions call "%~dp0elasticsearch-env.bat" || exit /b 1 @@ -162,15 +163,15 @@ for %%a in ("%ES_JAVA_OPTS:;=","%") do ( @endlocal & set JVM_MS=%JVM_MS% & set JVM_MX=%JVM_MX% & set JVM_SS=%JVM_SS% if "%JVM_MS%" == "" ( - echo minimum heap size not set; configure using -Xms via %ES_JVM_OPTIONS% or ES_JAVA_OPTS + echo minimum heap size not set; configure using -Xms via "%ES_JVM_OPTIONS%" or ES_JAVA_OPTS goto:eof ) if "%JVM_MX%" == "" ( - echo maximum heap size not set; configure using -Xmx via %ES_JVM_OPTIONS% or ES_JAVA_OPTS + echo maximum heap size not set; configure using -Xmx via "%ES_JVM_OPTIONS%" or ES_JAVA_OPTS goto:eof ) if "%JVM_SS%" == "" ( - echo thread stack size not set; configure using -Xss via %ES_JVM_OPTIONS% or ES_JAVA_OPTS + echo thread stack size not set; configure using -Xss via "%ES_JVM_OPTIONS%" or ES_JAVA_OPTS goto:eof ) @@ -266,3 +267,4 @@ set "%~2=%conv%" goto:eof endlocal +endlocal diff --git a/distribution/src/main/resources/bin/elasticsearch-translog.bat b/distribution/src/main/resources/bin/elasticsearch-translog.bat index 9b375f283c29b..4f15e9b379250 100644 --- a/distribution/src/main/resources/bin/elasticsearch-translog.bat +++ b/distribution/src/main/resources/bin/elasticsearch-translog.bat @@ -1,6 +1,7 @@ @echo off setlocal enabledelayedexpansion +setlocal enableextensions call "%~dp0elasticsearch-env.bat" || exit /b 1 @@ -13,3 +14,4 @@ call "%~dp0elasticsearch-env.bat" || exit /b 1 %* endlocal +endlocal diff --git a/distribution/src/main/resources/bin/elasticsearch.bat b/distribution/src/main/resources/bin/elasticsearch.bat index 057e686cdad15..51f2047679007 100644 --- a/distribution/src/main/resources/bin/elasticsearch.bat +++ b/distribution/src/main/resources/bin/elasticsearch.bat @@ -1,6 +1,7 @@ @echo off setlocal enabledelayedexpansion +setlocal enableextensions SET params='%*' @@ -51,3 +52,4 @@ for /F "usebackq delims=" %%a in (`findstr /b \- "%ES_JVM_OPTIONS%"`) do set JVM %JAVA% %ES_JAVA_OPTS% -Delasticsearch -Des.path.home="%ES_HOME%" -Des.path.conf="%ES_PATH_CONF%" -cp "%ES_CLASSPATH%" "org.elasticsearch.bootstrap.Elasticsearch" !newparams! endlocal +endlocal diff --git a/distribution/src/main/resources/config/elasticsearch.yml b/distribution/src/main/resources/config/elasticsearch.yml index 8d4527e39bd55..445c6f5c07fce 100644 --- a/distribution/src/main/resources/config/elasticsearch.yml +++ b/distribution/src/main/resources/config/elasticsearch.yml @@ -69,7 +69,7 @@ ${path.logs} # # Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1): # -#discovery.zen.minimum_master_nodes: 3 +#discovery.zen.minimum_master_nodes: # # For more information, consult the zen discovery module documentation. 
# diff --git a/distribution/src/main/resources/config/jvm.options b/distribution/src/main/resources/config/jvm.options index e6f3c5e29f4bd..52bd4244bb853 100644 --- a/distribution/src/main/resources/config/jvm.options +++ b/distribution/src/main/resources/config/jvm.options @@ -63,9 +63,6 @@ # exceptions because stack traces are important for debugging -XX:-OmitStackTraceInFastThrow -# use old-style file permissions on JDK9 --Djdk.io.permissionsUseCanonicalPath=true - # flags to configure Netty -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true @@ -83,7 +80,7 @@ # specify an alternative path for heap dumps # ensure the directory exists and has sufficient space -#-XX:HeapDumpPath=${heap.dump.path} +${heap.dump.path} ## GC logging diff --git a/distribution/tools/plugin-cli/src/main/java/org/elasticsearch/plugins/InstallPluginCommand.java b/distribution/tools/plugin-cli/src/main/java/org/elasticsearch/plugins/InstallPluginCommand.java index 04aa2660e32d3..7029ba048d031 100644 --- a/distribution/tools/plugin-cli/src/main/java/org/elasticsearch/plugins/InstallPluginCommand.java +++ b/distribution/tools/plugin-cli/src/main/java/org/elasticsearch/plugins/InstallPluginCommand.java @@ -34,6 +34,7 @@ import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.hash.MessageDigests; import org.elasticsearch.common.io.FileSystemUtils; +import org.elasticsearch.common.settings.KeyStoreWrapper; import org.elasticsearch.env.Environment; import java.io.BufferedReader; @@ -42,6 +43,7 @@ import java.io.InputStreamReader; import java.io.OutputStream; import java.net.HttpURLConnection; +import java.net.URI; import java.net.URL; import java.net.URLConnection; import java.net.URLDecoder; @@ -57,6 +59,7 @@ import java.nio.file.attribute.PosixFileAttributes; import java.nio.file.attribute.PosixFilePermission; import java.nio.file.attribute.PosixFilePermissions; +import java.security.MessageDigest; import java.util.ArrayList; import java.util.Arrays; import java.util.Collections; @@ -217,7 +220,7 @@ private Path download(Terminal terminal, String pluginId, Path tmpDir) throws Ex if (OFFICIAL_PLUGINS.contains(pluginId)) { final String url = getElasticUrl(terminal, getStagingHash(), Version.CURRENT, pluginId, Platforms.PLATFORM_NAME); terminal.println("-> Downloading " + pluginId + " from elastic"); - return downloadZipAndChecksum(terminal, url, tmpDir); + return downloadZipAndChecksum(terminal, url, tmpDir, false); } // now try as maven coordinates, a valid URL would only have a colon and slash @@ -225,7 +228,7 @@ private Path download(Terminal terminal, String pluginId, Path tmpDir) throws Ex if (coordinates.length == 3 && pluginId.contains("/") == false) { String mavenUrl = getMavenUrl(terminal, coordinates, Platforms.PLATFORM_NAME); terminal.println("-> Downloading " + pluginId + " from maven central"); - return downloadZipAndChecksum(terminal, mavenUrl, tmpDir); + return downloadZipAndChecksum(terminal, mavenUrl, tmpDir, true); } // fall back to plain old URL @@ -311,8 +314,9 @@ private List checkMisspelledPlugin(String pluginId) { } /** Downloads a zip from the url, into a temp file under the given temp dir. 
*/ + // pkg private for tests @SuppressForbidden(reason = "We use getInputStream to download plugins") - private Path downloadZip(Terminal terminal, String urlString, Path tmpDir) throws IOException { + Path downloadZip(Terminal terminal, String urlString, Path tmpDir) throws IOException { terminal.println(VERBOSE, "Retrieving zip from " + urlString); URL url = new URL(urlString); Path zip = Files.createTempFile(tmpDir, null, ".zip"); @@ -360,32 +364,90 @@ public void onProgress(int percent) { } } - /** Downloads a zip from the url, as well as a SHA1 checksum, and checks the checksum. */ + /** Downloads a zip from the url, as well as a SHA512 (or SHA1) checksum, and checks the checksum. */ // pkg private for tests @SuppressForbidden(reason = "We use openStream to download plugins") - Path downloadZipAndChecksum(Terminal terminal, String urlString, Path tmpDir) throws Exception { + private Path downloadZipAndChecksum(Terminal terminal, String urlString, Path tmpDir, boolean allowSha1) throws Exception { Path zip = downloadZip(terminal, urlString, tmpDir); pathsToDeleteOnShutdown.add(zip); - URL checksumUrl = new URL(urlString + ".sha1"); + String checksumUrlString = urlString + ".sha512"; + URL checksumUrl = openUrl(checksumUrlString); + String digestAlgo = "SHA-512"; + if (checksumUrl == null && allowSha1) { + // fallback to sha1, until 7.0, but with warning + terminal.println("Warning: sha512 not found, falling back to sha1. This behavior is deprecated and will be removed in a " + + "future release. Please update the plugin to use a sha512 checksum."); + checksumUrlString = urlString + ".sha1"; + checksumUrl = openUrl(checksumUrlString); + digestAlgo = "SHA-1"; + } + if (checksumUrl == null) { + throw new UserException(ExitCodes.IO_ERROR, "Plugin checksum missing: " + checksumUrlString); + } final String expectedChecksum; try (InputStream in = checksumUrl.openStream()) { - BufferedReader checksumReader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8)); - expectedChecksum = checksumReader.readLine(); - if (checksumReader.readLine() != null) { - throw new UserException(ExitCodes.IO_ERROR, "Invalid checksum file at " + checksumUrl); + /* + * The supported format of the SHA-1 files is a single-line file containing the SHA-1. The supported format of the SHA-512 files + * is a single-line file containing the SHA-512 and the filename, separated by two spaces. For SHA-1, we verify that the hash + * matches, and that the file contains a single line. For SHA-512, we verify that the hash and the filename match, and that the + * file contains a single line. 
+ */ + if (digestAlgo.equals("SHA-1")) { + final BufferedReader checksumReader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8)); + expectedChecksum = checksumReader.readLine(); + if (checksumReader.readLine() != null) { + throw new UserException(ExitCodes.IO_ERROR, "Invalid checksum file at " + checksumUrl); + } + } else { + final BufferedReader checksumReader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8)); + final String checksumLine = checksumReader.readLine(); + final String[] fields = checksumLine.split(" {2}"); + if (fields.length != 2) { + throw new UserException(ExitCodes.IO_ERROR, "Invalid checksum file at " + checksumUrl); + } + expectedChecksum = fields[0]; + final String[] segments = URI.create(urlString).getPath().split("/"); + final String expectedFile = segments[segments.length - 1]; + if (fields[1].equals(expectedFile) == false) { + final String message = String.format( + Locale.ROOT, + "checksum file at [%s] is not for this plugin, expected [%s] but was [%s]", + checksumUrl, + expectedFile, + fields[1]); + throw new UserException(ExitCodes.IO_ERROR, message); + } + if (checksumReader.readLine() != null) { + throw new UserException(ExitCodes.IO_ERROR, "Invalid checksum file at " + checksumUrl); + } + } } byte[] zipbytes = Files.readAllBytes(zip); - String gotChecksum = MessageDigests.toHexString(MessageDigests.sha1().digest(zipbytes)); + String gotChecksum = MessageDigests.toHexString(MessageDigest.getInstance(digestAlgo).digest(zipbytes)); if (expectedChecksum.equals(gotChecksum) == false) { throw new UserException(ExitCodes.IO_ERROR, - "SHA1 mismatch, expected " + expectedChecksum + " but got " + gotChecksum); + digestAlgo + " mismatch, expected " + expectedChecksum + " but got " + gotChecksum); } return zip; } + /** + * Creates a URL and opens a connection. + * + * If the URL returns a 404, {@code null} is returned, otherwise the open URL object is returned.
+ */ + // pkg private for tests + URL openUrl(String urlString) throws Exception { + URL checksumUrl = new URL(urlString); + HttpURLConnection connection = (HttpURLConnection)checksumUrl.openConnection(); + if (connection.getResponseCode() == 404) { + return null; + } + return checksumUrl; + } + private Path unzip(Path zip, Path pluginsDir) throws IOException, UserException { // unzip plugin to a staging temp dir @@ -572,6 +634,15 @@ public FileVisitResult postVisitDirectory(final Path dir, final IOException exc) } }); + if (info.requiresKeystore()) { + KeyStoreWrapper keystore = KeyStoreWrapper.load(env.configFile()); + if (keystore == null) { + terminal.println("Elasticsearch keystore is required by plugin [" + info.getName() + "], creating..."); + keystore = KeyStoreWrapper.create(new char[0]); + keystore.save(env.configFile()); + } + } + terminal.println("-> Installed " + info.getName()); } catch (Exception installProblem) { diff --git a/distribution/tools/plugin-cli/src/test/java/org/elasticsearch/plugins/InstallPluginCommandTests.java b/distribution/tools/plugin-cli/src/test/java/org/elasticsearch/plugins/InstallPluginCommandTests.java index 29426537e8b2c..8e37b10efc83f 100644 --- a/distribution/tools/plugin-cli/src/test/java/org/elasticsearch/plugins/InstallPluginCommandTests.java +++ b/distribution/tools/plugin-cli/src/test/java/org/elasticsearch/plugins/InstallPluginCommandTests.java @@ -24,16 +24,20 @@ import com.google.common.jimfs.Jimfs; import org.apache.lucene.util.LuceneTestCase; import org.elasticsearch.Version; +import org.elasticsearch.cli.ExitCodes; import org.elasticsearch.cli.MockTerminal; import org.elasticsearch.cli.Terminal; import org.elasticsearch.cli.UserException; import org.elasticsearch.common.SuppressForbidden; import org.elasticsearch.common.collect.Tuple; +import org.elasticsearch.common.hash.MessageDigests; import org.elasticsearch.common.io.FileSystemUtils; import org.elasticsearch.common.io.PathUtils; import org.elasticsearch.common.io.PathUtilsForTesting; +import org.elasticsearch.common.settings.KeyStoreWrapper; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; +import org.elasticsearch.env.TestEnvironment; import org.elasticsearch.test.ESTestCase; import org.elasticsearch.test.PosixPermissionsResetter; import org.junit.After; @@ -44,6 +48,7 @@ import java.io.InputStream; import java.io.StringReader; import java.net.MalformedURLException; +import java.net.URI; import java.net.URL; import java.nio.charset.StandardCharsets; import java.nio.file.DirectoryStream; @@ -61,16 +66,20 @@ import java.nio.file.attribute.PosixFileAttributes; import java.nio.file.attribute.PosixFilePermission; import java.nio.file.attribute.UserPrincipal; +import java.security.MessageDigest; import java.util.ArrayList; +import java.util.Arrays; import java.util.HashSet; import java.util.List; import java.util.Locale; import java.util.Set; import java.util.function.Function; import java.util.stream.Collectors; +import java.util.stream.Stream; import java.util.zip.ZipEntry; import java.util.zip.ZipOutputStream; +import static org.elasticsearch.test.hamcrest.RegexMatcher.matches; import static org.hamcrest.CoreMatchers.equalTo; import static org.hamcrest.Matchers.containsInAnyOrder; import static org.hamcrest.Matchers.containsString; @@ -168,7 +177,7 @@ static Tuple createEnv(FileSystem fs, Function Settings settings = Settings.builder() .put("path.home", home) .build(); - return Tuple.tuple(home, new Environment(settings)); + return 
Tuple.tuple(home, TestEnvironment.newEnvironment(settings)); } static Path createPluginDir(Function temp) throws IOException { @@ -201,18 +210,20 @@ public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IO } /** creates a plugin .zip and returns the url for testing */ - static String createPluginUrl(String name, Path structure) throws IOException { - return createPlugin(name, structure, false).toUri().toURL().toString(); + static String createPluginUrl(String name, Path structure, String... additionalProps) throws IOException { + return createPlugin(name, structure, false, additionalProps).toUri().toURL().toString(); } - static Path createPlugin(String name, Path structure, boolean createSecurityPolicyFile) throws IOException { - PluginTestUtil.writeProperties(structure, + static Path createPlugin(String name, Path structure, boolean createSecurityPolicyFile, String... additionalProps) throws IOException { + String[] properties = Stream.concat(Stream.of( "description", "fake desc", "name", name, "version", "1.0", "elasticsearch.version", Version.CURRENT.toString(), "java.version", System.getProperty("java.specification.version"), - "classname", "FakePlugin"); + "classname", "FakePlugin" + ), Arrays.stream(additionalProps)).toArray(String[]::new); + PluginTestUtil.writeProperties(structure, properties); if (createSecurityPolicyFile) { String securityPolicyContent = "grant {\n permission java.lang.RuntimePermission \"setFactory\";\n};\n"; Files.write(structure.resolve("plugin-security.policy"), securityPolicyContent.getBytes(StandardCharsets.UTF_8)); @@ -226,7 +237,7 @@ MockTerminal installPlugin(String pluginUrl, Path home) throws Exception { } MockTerminal installPlugin(String pluginUrl, Path home, InstallPluginCommand command) throws Exception { - Environment env = new Environment(Settings.builder().put("path.home", home).build()); + Environment env = TestEnvironment.newEnvironment(Settings.builder().put("path.home", home).build()); MockTerminal terminal = new MockTerminal(); command.execute(terminal, pluginUrl, true, env); return terminal; @@ -745,19 +756,33 @@ private void installPlugin(MockTerminal terminal, boolean isBatch) throws Except skipJarHellCommand.execute(terminal, pluginZip, isBatch, env.v2()); } - public void assertInstallPluginFromUrl(String pluginId, String name, String url, String stagingHash) throws Exception { + public MockTerminal assertInstallPluginFromUrl(String pluginId, String name, String url, String stagingHash, + String shaExtension, Function shaCalculator) throws Exception { Tuple env = createEnv(fs, temp); Path pluginDir = createPluginDir(temp); Path pluginZip = createPlugin(name, pluginDir, false); InstallPluginCommand command = new InstallPluginCommand() { @Override - Path downloadZipAndChecksum(Terminal terminal, String urlString, Path tmpDir) throws Exception { + Path downloadZip(Terminal terminal, String urlString, Path tmpDir) throws IOException { assertEquals(url, urlString); Path downloadedPath = tmpDir.resolve("downloaded.zip"); Files.copy(pluginZip, downloadedPath); return downloadedPath; } @Override + URL openUrl(String urlString) throws Exception { + String expectedUrl = url + shaExtension; + if (expectedUrl.equals(urlString)) { + // calc sha and return file URL to it + Path shaFile = temp.apply("shas").resolve("downloaded.zip" + shaExtension); + byte[] zipbytes = Files.readAllBytes(pluginZip); + String checksum = shaCalculator.apply(zipbytes); + Files.write(shaFile, checksum.getBytes(StandardCharsets.UTF_8)); + return
shaFile.toUri().toURL(); + } + return null; + } + @Override boolean urlExists(Terminal terminal, String urlString) throws IOException { return urlString.equals(url); } @@ -770,8 +795,14 @@ void jarHellCheck(Path candidate, Path pluginsDir) throws Exception { // no jarhell check } }; - installPlugin(pluginId, env.v1(), command); + MockTerminal terminal = installPlugin(pluginId, env.v1(), command); assertPlugin(name, pluginDir, env.v2()); + return terminal; + } + + public void assertInstallPluginFromUrl(String pluginId, String name, String url, String stagingHash) throws Exception { + MessageDigest digest = MessageDigest.getInstance("SHA-512"); + assertInstallPluginFromUrl(pluginId, name, url, stagingHash, ".sha512", checksumAndFilename(digest, url)); } public void testOfficalPlugin() throws Exception { @@ -807,5 +838,130 @@ public void testMavenPlatformPlugin() throws Exception { assertInstallPluginFromUrl("mygroup:myplugin:1.0.0", "myplugin", url, null); } - // TODO: test checksum (need maven/official below) + public void testMavenSha1Backcompat() throws Exception { + String url = "https://repo1.maven.org/maven2/mygroup/myplugin/1.0.0/myplugin-1.0.0.zip"; + MessageDigest digest = MessageDigest.getInstance("SHA-1"); + MockTerminal terminal = assertInstallPluginFromUrl("mygroup:myplugin:1.0.0", "myplugin", url, null, ".sha1", checksum(digest)); + assertTrue(terminal.getOutput(), terminal.getOutput().contains("sha512 not found, falling back to sha1")); + } + + public void testOfficialShaMissing() throws Exception { + String url = "https://artifacts.elastic.co/downloads/elasticsearch-plugins/analysis-icu/analysis-icu-" + Version.CURRENT + ".zip"; + MessageDigest digest = MessageDigest.getInstance("SHA-1"); + UserException e = expectThrows(UserException.class, () -> + assertInstallPluginFromUrl("analysis-icu", "analysis-icu", url, null, ".sha1", checksum(digest))); + assertEquals(ExitCodes.IO_ERROR, e.exitCode); + assertEquals("Plugin checksum missing: " + url + ".sha512", e.getMessage()); + } + + public void testMavenShaMissing() throws Exception { + String url = "https://repo1.maven.org/maven2/mygroup/myplugin/1.0.0/myplugin-1.0.0.zip"; + UserException e = expectThrows(UserException.class, () -> + assertInstallPluginFromUrl("mygroup:myplugin:1.0.0", "myplugin", url, null, ".dne", bytes -> null)); + assertEquals(ExitCodes.IO_ERROR, e.exitCode); + assertEquals("Plugin checksum missing: " + url + ".sha1", e.getMessage()); + } + + public void testInvalidShaFileMissingFilename() throws Exception { + String url = "https://artifacts.elastic.co/downloads/elasticsearch-plugins/analysis-icu/analysis-icu-" + Version.CURRENT + ".zip"; + MessageDigest digest = MessageDigest.getInstance("SHA-512"); + UserException e = expectThrows(UserException.class, () -> + assertInstallPluginFromUrl("analysis-icu", "analysis-icu", url, null, ".sha512", checksum(digest))); + assertEquals(ExitCodes.IO_ERROR, e.exitCode); + assertTrue(e.getMessage(), e.getMessage().startsWith("Invalid checksum file")); + } + + public void testInvalidShaFileMismatchFilename() throws Exception { + String url = "https://artifacts.elastic.co/downloads/elasticsearch-plugins/analysis-icu/analysis-icu-" + Version.CURRENT + ".zip"; + MessageDigest digest = MessageDigest.getInstance("SHA-512"); + UserException e = expectThrows(UserException.class, () -> + assertInstallPluginFromUrl( + "analysis-icu", + "analysis-icu", + url, + null, + ".sha512", + checksumAndString(digest, " repository-s3-" + Version.CURRENT + ".zip"))); + 
assertEquals(ExitCodes.IO_ERROR, e.exitCode); + assertThat(e, hasToString(matches("checksum file at \\[.*\\] is not for this plugin"))); + } + + public void testInvalidShaFileContainingExtraLine() throws Exception { + String url = "https://artifacts.elastic.co/downloads/elasticsearch-plugins/analysis-icu/analysis-icu-" + Version.CURRENT + ".zip"; + MessageDigest digest = MessageDigest.getInstance("SHA-512"); + UserException e = expectThrows(UserException.class, () -> + assertInstallPluginFromUrl( + "analysis-icu", + "analysis-icu", + url, + null, + ".sha512", + checksumAndString(digest, " analysis-icu-" + Version.CURRENT + ".zip\nfoobar"))); + assertEquals(ExitCodes.IO_ERROR, e.exitCode); + assertTrue(e.getMessage(), e.getMessage().startsWith("Invalid checksum file")); + } + + public void testSha512Mismatch() throws Exception { + String url = "https://artifacts.elastic.co/downloads/elasticsearch-plugins/analysis-icu/analysis-icu-" + Version.CURRENT + ".zip"; + UserException e = expectThrows(UserException.class, () -> + assertInstallPluginFromUrl( + "analysis-icu", + "analysis-icu", + url, + null, + ".sha512", + bytes -> "foobar analysis-icu-" + Version.CURRENT + ".zip")); + assertEquals(ExitCodes.IO_ERROR, e.exitCode); + assertTrue(e.getMessage(), e.getMessage().contains("SHA-512 mismatch, expected foobar")); + } + + public void testSha1Mismatch() throws Exception { + String url = "https://repo1.maven.org/maven2/mygroup/myplugin/1.0.0/myplugin-1.0.0.zip"; + UserException e = expectThrows(UserException.class, () -> + assertInstallPluginFromUrl("mygroup:myplugin:1.0.0", "myplugin", url, null, ".sha1", bytes -> "foobar")); + assertEquals(ExitCodes.IO_ERROR, e.exitCode); + assertTrue(e.getMessage(), e.getMessage().contains("SHA-1 mismatch, expected foobar")); + } + + public void testKeystoreNotRequired() throws Exception { + Tuple env = createEnv(fs, temp); + Path pluginDir = createPluginDir(temp); + String pluginZip = createPluginUrl("fake", pluginDir, "requires.keystore", "false"); + installPlugin(pluginZip, env.v1()); + assertFalse(Files.exists(KeyStoreWrapper.keystorePath(env.v2().configFile()))); + } + + public void testKeystoreRequiredAlreadyExists() throws Exception { + Tuple env = createEnv(fs, temp); + KeyStoreWrapper keystore = KeyStoreWrapper.create(new char[0]); + keystore.save(env.v2().configFile()); + byte[] expectedBytes = Files.readAllBytes(KeyStoreWrapper.keystorePath(env.v2().configFile())); + Path pluginDir = createPluginDir(temp); + String pluginZip = createPluginUrl("fake", pluginDir, "requires.keystore", "true"); + installPlugin(pluginZip, env.v1()); + byte[] gotBytes = Files.readAllBytes(KeyStoreWrapper.keystorePath(env.v2().configFile())); + assertArrayEquals("Keystore was modified", expectedBytes, gotBytes); + } + + public void testKeystoreRequiredCreated() throws Exception { + Tuple env = createEnv(fs, temp); + Path pluginDir = createPluginDir(temp); + String pluginZip = createPluginUrl("fake", pluginDir, "requires.keystore", "true"); + MockTerminal terminal = installPlugin(pluginZip, env.v1()); + assertTrue(Files.exists(KeyStoreWrapper.keystorePath(env.v2().configFile()))); + } + + private Function checksum(final MessageDigest digest) { + return checksumAndString(digest, ""); + } + + private Function checksumAndFilename(final MessageDigest digest, final String url) throws MalformedURLException { + final String[] segments = URI.create(url).getPath().split("/"); + return checksumAndString(digest, " " + segments[segments.length - 1]); + } + + private Function 
checksumAndString(final MessageDigest digest, final String s) { + return bytes -> MessageDigests.toHexString(digest.digest(bytes)) + s; + } + } diff --git a/distribution/tools/plugin-cli/src/test/java/org/elasticsearch/plugins/ListPluginsCommandTests.java b/distribution/tools/plugin-cli/src/test/java/org/elasticsearch/plugins/ListPluginsCommandTests.java index 6f536acbf74c6..9a1f61c0d889c 100644 --- a/distribution/tools/plugin-cli/src/test/java/org/elasticsearch/plugins/ListPluginsCommandTests.java +++ b/distribution/tools/plugin-cli/src/test/java/org/elasticsearch/plugins/ListPluginsCommandTests.java @@ -37,6 +37,7 @@ import org.elasticsearch.cli.UserException; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; +import org.elasticsearch.env.TestEnvironment; import org.elasticsearch.test.ESTestCase; import org.junit.Before; @@ -54,7 +55,7 @@ public void setUp() throws Exception { Settings settings = Settings.builder() .put("path.home", home) .build(); - env = new Environment(settings); + env = TestEnvironment.newEnvironment(settings); } static MockTerminal listPlugins(Path home) throws Exception { @@ -69,7 +70,9 @@ static MockTerminal listPlugins(Path home, String[] args) throws Exception { int status = new ListPluginsCommand() { @Override protected Environment createEnv(Terminal terminal, Map settings) throws UserException { - final Settings realSettings = Settings.builder().put("path.home", home).put(settings).build(); + Settings.Builder builder = Settings.builder().put("path.home", home); + settings.forEach((k,v) -> builder.put(k, v)); + final Settings realSettings = builder.build(); return new Environment(realSettings, home.resolve("config")); } @@ -91,7 +94,7 @@ private static void buildFakePlugin( final String description, final String name, final String classname) throws IOException { - buildFakePlugin(env, description, name, classname, false); + buildFakePlugin(env, description, name, classname, false, false); } private static void buildFakePlugin( @@ -99,7 +102,8 @@ private static void buildFakePlugin( final String description, final String name, final String classname, - final boolean hasNativeController) throws IOException { + final boolean hasNativeController, + final boolean requiresKeystore) throws IOException { PluginTestUtil.writeProperties( env.pluginsFile().resolve(name), "description", description, @@ -108,7 +112,8 @@ private static void buildFakePlugin( "elasticsearch.version", Version.CURRENT.toString(), "java.version", System.getProperty("java.specification.version"), "classname", classname, - "has.native.controller", Boolean.toString(hasNativeController)); + "has.native.controller", Boolean.toString(hasNativeController), + "requires.keystore", Boolean.toString(requiresKeystore)); } public void testPluginsDirMissing() throws Exception { @@ -148,25 +153,45 @@ public void testPluginWithVerbose() throws Exception { "Description: fake desc", "Version: 1.0", "Native Controller: false", + "Requires Keystore: false", " * Classname: org.fake"), terminal.getOutput()); } public void testPluginWithNativeController() throws Exception { - buildFakePlugin(env, "fake desc 1", "fake_plugin1", "org.fake", true); + buildFakePlugin(env, "fake desc 1", "fake_plugin1", "org.fake", true, false); String[] params = { "-v" }; MockTerminal terminal = listPlugins(home, params); assertEquals( - buildMultiline( - "Plugins directory: " + env.pluginsFile(), - "fake_plugin1", - "- Plugin information:", - "Name: fake_plugin1", - "Description: fake desc 1", - 
"Version: 1.0", - "Native Controller: true", - " * Classname: org.fake"), - terminal.getOutput()); + buildMultiline( + "Plugins directory: " + env.pluginsFile(), + "fake_plugin1", + "- Plugin information:", + "Name: fake_plugin1", + "Description: fake desc 1", + "Version: 1.0", + "Native Controller: true", + "Requires Keystore: false", + " * Classname: org.fake"), + terminal.getOutput()); + } + + public void testPluginWithRequiresKeystore() throws Exception { + buildFakePlugin(env, "fake desc 1", "fake_plugin1", "org.fake", false, true); + String[] params = { "-v" }; + MockTerminal terminal = listPlugins(home, params); + assertEquals( + buildMultiline( + "Plugins directory: " + env.pluginsFile(), + "fake_plugin1", + "- Plugin information:", + "Name: fake_plugin1", + "Description: fake desc 1", + "Version: 1.0", + "Native Controller: false", + "Requires Keystore: true", + " * Classname: org.fake"), + terminal.getOutput()); } public void testPluginWithVerboseMultiplePlugins() throws Exception { @@ -183,6 +208,7 @@ public void testPluginWithVerboseMultiplePlugins() throws Exception { "Description: fake desc 1", "Version: 1.0", "Native Controller: false", + "Requires Keystore: false", " * Classname: org.fake", "fake_plugin2", "- Plugin information:", @@ -190,6 +216,7 @@ public void testPluginWithVerboseMultiplePlugins() throws Exception { "Description: fake desc 2", "Version: 1.0", "Native Controller: false", + "Requires Keystore: false", " * Classname: org.fake2"), terminal.getOutput()); } diff --git a/distribution/tools/plugin-cli/src/test/java/org/elasticsearch/plugins/RemovePluginCommandTests.java b/distribution/tools/plugin-cli/src/test/java/org/elasticsearch/plugins/RemovePluginCommandTests.java index 3a78da6b28404..6c462d39e5775 100644 --- a/distribution/tools/plugin-cli/src/test/java/org/elasticsearch/plugins/RemovePluginCommandTests.java +++ b/distribution/tools/plugin-cli/src/test/java/org/elasticsearch/plugins/RemovePluginCommandTests.java @@ -26,6 +26,7 @@ import org.elasticsearch.cli.UserException; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; +import org.elasticsearch.env.TestEnvironment; import org.elasticsearch.test.ESTestCase; import org.junit.Before; @@ -73,11 +74,11 @@ public void setUp() throws Exception { Settings settings = Settings.builder() .put("path.home", home) .build(); - env = new Environment(settings); + env = TestEnvironment.newEnvironment(settings); } static MockTerminal removePlugin(String name, Path home, boolean purge) throws Exception { - Environment env = new Environment(Settings.builder().put("path.home", home).build()); + Environment env = TestEnvironment.newEnvironment(Settings.builder().put("path.home", home).build()); MockTerminal terminal = new MockTerminal(); new MockRemovePluginCommand(env).execute(terminal, env, name, purge); return terminal; diff --git a/docs/Versions.asciidoc b/docs/Versions.asciidoc index fba6f0caa77a8..65c9e55a365c1 100644 --- a/docs/Versions.asciidoc +++ b/docs/Versions.asciidoc @@ -1,7 +1,7 @@ :version: 7.0.0-alpha1 :major-version: 7.x -:lucene_version: 7.0.0-SNAPSHOT -:lucene_version_path: 7_0_0 +:lucene_version: 7.1.0 +:lucene_version_path: 7_1_0 :branch: master :jdk: 1.8.0_131 @@ -14,7 +14,8 @@ release-state can be: released | prerelease | unreleased :issue: https://github.com/elastic/elasticsearch/issues/ :pull: https://github.com/elastic/elasticsearch/pull/ -:docker-image: docker.elastic.co/elasticsearch/elasticsearch:{version} +:docker-repo: 
docker.elastic.co/elasticsearch/elasticsearch +:docker-image: {docker-repo}:{version} :plugin_url: https://artifacts.elastic.co/downloads/elasticsearch-plugins /////// diff --git a/docs/build.gradle b/docs/build.gradle index 4b963248c571f..4cb82b97152f8 100644 --- a/docs/build.gradle +++ b/docs/build.gradle @@ -19,39 +19,7 @@ apply plugin: 'elasticsearch.docs-test' -/* List of files that have snippets that probably should be converted to - * `// CONSOLE` and `// TESTRESPONSE` but have yet to be converted. Try and - * only remove entries from this list. When it is empty we'll remove it - * entirely and have a party! There will be cake and everything.... */ -buildRestTests.expectedUnconvertedCandidates = [ - 'reference/aggregations/bucket/nested-aggregation.asciidoc', - 'reference/aggregations/bucket/range-aggregation.asciidoc', - 'reference/aggregations/bucket/reverse-nested-aggregation.asciidoc', - 'reference/aggregations/bucket/significantterms-aggregation.asciidoc', - 'reference/aggregations/bucket/terms-aggregation.asciidoc', - 'reference/aggregations/matrix/stats-aggregation.asciidoc', - 'reference/aggregations/metrics/tophits-aggregation.asciidoc', - 'reference/cluster/allocation-explain.asciidoc', - 'reference/cluster/nodes-info.asciidoc', - 'reference/cluster/pending.asciidoc', - 'reference/cluster/state.asciidoc', - 'reference/cluster/stats.asciidoc', - 'reference/cluster/tasks.asciidoc', - 'reference/docs/delete-by-query.asciidoc', - 'reference/docs/reindex.asciidoc', - 'reference/docs/update-by-query.asciidoc', - 'reference/index-modules/similarity.asciidoc', - 'reference/index-modules/store.asciidoc', - 'reference/index-modules/translog.asciidoc', - 'reference/indices/recovery.asciidoc', - 'reference/indices/segments.asciidoc', - 'reference/indices/shard-stores.asciidoc', - 'reference/migration/migrate_6_0/scripting.asciidoc', - 'reference/search/profile.asciidoc', -] - integTestCluster { - setting 'script.max_compilations_per_minute', '1000' /* Enable regexes in painless so our tests don't complain about example * snippets that use them. */ setting 'script.painless.regex.enabled', 'true' @@ -542,4 +510,4 @@ for (int i = 0; i < 5; i++) { buildRestTests.setups['iprange'] += """ {"index":{}} {"ip": "12.0.0.$i"}""" -} \ No newline at end of file +} diff --git a/docs/java-api/docs/index_.asciidoc b/docs/java-api/docs/index_.asciidoc index 0b91622dd8174..b32955d9d4f50 100644 --- a/docs/java-api/docs/index_.asciidoc +++ b/docs/java-api/docs/index_.asciidoc @@ -140,7 +140,7 @@ String json = "{" + "}"; IndexResponse response = client.prepareIndex("twitter", "tweet") - .setSource(json) +       .setSource(json, XContentType.JSON) .get(); -------------------------------------------------- diff --git a/docs/java-api/index.asciidoc b/docs/java-api/index.asciidoc index bdf85f515e827..10430adc9a6f3 100644 --- a/docs/java-api/index.asciidoc +++ b/docs/java-api/index.asciidoc @@ -83,7 +83,7 @@ You need to also include Log4j 2 dependencies: org.apache.logging.log4j log4j-core - 2.8.2 + 2.9.1 -------------------------------------------------- @@ -111,7 +111,7 @@ If you want to use another logger than Log4j 2, you can use http://www.slf4j.org org.apache.logging.log4j log4j-to-slf4j - 2.8.2 + 2.9.1 org.slf4j @@ -188,51 +188,6 @@ it to the `transformers`: -------------------------------------------------- - -== Deploying in JBoss EAP6 module - -Elasticsearch and Lucene classes need to be in the same JBoss module. 
- -You should define a `module.xml` file like this: - -[source,xml] --------------------------------------------------- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - --------------------------------------------------- - - include::client.asciidoc[] include::docs.asciidoc[] diff --git a/docs/java-rest/high-level/apis/bulk.asciidoc b/docs/java-rest/high-level/apis/bulk.asciidoc index 9bbc0b3106267..3102e96519e58 100644 --- a/docs/java-rest/high-level/apis/bulk.asciidoc +++ b/docs/java-rest/high-level/apis/bulk.asciidoc @@ -125,27 +125,24 @@ The `BulkProcessor` simplifies the usage of the Bulk API by providing a utility class that allows index/update/delete operations to be transparently executed as they are added to the processor. -In order to execute the requests, the `BulkProcessor` requires 3 components: +In order to execute the requests, the `BulkProcessor` requires the following +components: `RestHighLevelClient`:: This client is used to execute the `BulkRequest` and to retrieve the `BulkResponse` `BulkProcessor.Listener`:: This listener is called before and after every `BulkRequest` execution or when a `BulkRequest` failed -`ThreadPool`:: The `BulkRequest` executions are done using threads from this -pool, allowing the `BulkProcessor` to work in a non-blocking manner and to -accept new index/update/delete requests while bulk requests are executing. -Then the `BulkProcessor.Builder` class can be used to build a new `BulkProcessor`: +Then the `BulkProcessor.builder` method can be used to build a new `BulkProcessor`: ["source","java",subs="attributes,callouts,macros"] -------------------------------------------------- include-tagged::{doc-tests}/CRUDDocumentationIT.java[bulk-processor-init] -------------------------------------------------- -<1> Create the `ThreadPool` using the given `Settings` -<2> Create the `BulkProcessor.Listener` -<3> This method is called before each execution of a `BulkRequest` -<4> This method is called after each execution of a `BulkRequest` -<5> This method is called when a `BulkRequest` failed -<6> Create the `BulkProcessor` by calling the `build()` method from +<1> Create the `BulkProcessor.Listener` +<2> This method is called before each execution of a `BulkRequest` +<3> This method is called after each execution of a `BulkRequest` +<4> This method is called when a `BulkRequest` failed +<5> Create the `BulkProcessor` by calling the `build()` method from the `BulkProcessor.Builder`. The `RestHighLevelClient.bulkAsync()` method will be used to execute the `BulkRequest` under the hood. @@ -190,7 +187,7 @@ to know if the `BulkResponse` contains errors the failure Once all requests have been added to the `BulkProcessor`, its instance needs to -be closed closed using one of the two available closing methods. +be closed using one of the two available closing methods. The `awaitClose()` method can be used to wait until all requests have been processed or the specified waiting time elapses: @@ -209,3 +206,4 @@ include-tagged::{doc-tests}/CRUDDocumentationIT.java[bulk-processor-close] Both methods flush the requests added to the processor before closing the processor and also forbid any new request to be added to it. 
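The `bulk-processor-init` and `bulk-processor-close` snippets above are pulled in from `CRUDDocumentationIT.java`. As a rough, self-contained sketch of the same flow (the index name, document source, and 30 second close timeout are illustrative assumptions, not part of the referenced snippets), the processor is built from the high-level client's `bulkAsync` method and a listener, fed requests, and finally closed:

[source,java]
--------------------------------------------------
import org.apache.http.HttpHost;
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.xcontent.XContentType;

import java.util.concurrent.TimeUnit;

public class BulkProcessorSketch {
    public static void main(String[] args) throws Exception {
        RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 9200, "http"))); // assumed host/port

        BulkProcessor.Listener listener = new BulkProcessor.Listener() {
            @Override
            public void beforeBulk(long executionId, BulkRequest request) {
                // called before each BulkRequest execution
            }

            @Override
            public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
                // called after each BulkRequest execution
            }

            @Override
            public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
                // called when a BulkRequest failed
            }
        };

        // bulkAsync() is used under the hood to execute each BulkRequest
        BulkProcessor bulkProcessor = BulkProcessor.builder(client::bulkAsync, listener).build();

        // requests are buffered and flushed transparently by the processor
        bulkProcessor.add(new IndexRequest("posts", "doc", "1") // index/type/id are assumptions
                .source("{\"user\":\"kimchy\"}", XContentType.JSON));

        // flush outstanding requests and wait up to 30 seconds before closing
        bulkProcessor.awaitClose(30L, TimeUnit.SECONDS);
        client.close();
    }
}
--------------------------------------------------

Using `awaitClose()` rather than `close()` gives in-flight bulk requests a bounded amount of time to complete before the processor shuts down.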
+ diff --git a/docs/java-rest/high-level/apis/deleteindex.asciidoc b/docs/java-rest/high-level/apis/deleteindex.asciidoc new file mode 100644 index 0000000000000..3c0627de49a92 --- /dev/null +++ b/docs/java-rest/high-level/apis/deleteindex.asciidoc @@ -0,0 +1,76 @@ +[[java-rest-high-delete-index]] +=== Delete Index API + +[[java-rest-high-delete-index-request]] +==== Delete Index Request + +A `DeleteIndexRequest` requires an `index` argument: + +["source","java",subs="attributes,callouts,macros"] +-------------------------------------------------- +include-tagged::{doc-tests}/IndicesClientDocumentationIT.java[delete-index-request] +-------------------------------------------------- +<1> Index + +==== Optional arguments +The following arguments can optionally be provided: + +["source","java",subs="attributes,callouts,macros"] +-------------------------------------------------- +include-tagged::{doc-tests}/IndicesClientDocumentationIT.java[delete-index-request-timeout] +-------------------------------------------------- +<1> Timeout to wait for the all the nodes to acknowledge the index deletion as a `TimeValue` +<2> Timeout to wait for the all the nodes to acknowledge the index deletion as a `String` + +["source","java",subs="attributes,callouts,macros"] +-------------------------------------------------- +include-tagged::{doc-tests}/IndicesClientDocumentationIT.java[delete-index-request-masterTimeout] +-------------------------------------------------- +<1> Timeout to connect to the master node as a `TimeValue` +<2> Timeout to connect to the master node as a `String` + +["source","java",subs="attributes,callouts,macros"] +-------------------------------------------------- +include-tagged::{doc-tests}/IndicesClientDocumentationIT.java[delete-index-request-indicesOptions] +-------------------------------------------------- +<1> Setting `IndicesOptions` controls how unavailable indices are resolved and +how wildcard expressions are expanded + +[[java-rest-high-delete-index-sync]] +==== Synchronous Execution + +["source","java",subs="attributes,callouts,macros"] +-------------------------------------------------- +include-tagged::{doc-tests}/IndicesClientDocumentationIT.java[delete-index-execute] +-------------------------------------------------- + +[[java-rest-high-delete-index-async]] +==== Asynchronous Execution + +["source","java",subs="attributes,callouts,macros"] +-------------------------------------------------- +include-tagged::{doc-tests}/IndicesClientDocumentationIT.java[delete-index-execute-async] +-------------------------------------------------- +<1> Called when the execution is successfully completed. The response is +provided as an argument +<2> Called in case of failure. 
The raised exception is provided as an argument + +[[java-rest-high-delete-index-response]] +==== Delete Index Response + +The returned `DeleteIndexResponse` allows to retrieve information about the executed + operation as follows: + +["source","java",subs="attributes,callouts,macros"] +-------------------------------------------------- +include-tagged::{doc-tests}/IndicesClientDocumentationIT.java[delete-index-response] +-------------------------------------------------- +<1> Indicates whether all of the nodes have acknowledged the request or not + +If the index was not found, an `ElasticsearchException` will be thrown: + +["source","java",subs="attributes,callouts,macros"] +-------------------------------------------------- +include-tagged::{doc-tests}/IndicesClientDocumentationIT.java[delete-index-notfound] +-------------------------------------------------- +<1> Do something if the index to be deleted was not found diff --git a/docs/java-rest/high-level/apis/index.asciidoc b/docs/java-rest/high-level/apis/index.asciidoc index 1f5301fdbd4a6..993951b5ae7f5 100644 --- a/docs/java-rest/high-level/apis/index.asciidoc +++ b/docs/java-rest/high-level/apis/index.asciidoc @@ -1,3 +1,4 @@ +include::deleteindex.asciidoc[] include::_index.asciidoc[] include::get.asciidoc[] include::delete.asciidoc[] @@ -6,5 +7,3 @@ include::bulk.asciidoc[] include::search.asciidoc[] include::scroll.asciidoc[] include::main.asciidoc[] -include::queries.asciidoc[] -include::aggs.asciidoc[] diff --git a/docs/java-rest/high-level/apis/search.asciidoc b/docs/java-rest/high-level/apis/search.asciidoc index 0c0bee985cadc..de336fd60d8e4 100644 --- a/docs/java-rest/high-level/apis/search.asciidoc +++ b/docs/java-rest/high-level/apis/search.asciidoc @@ -82,6 +82,7 @@ After this, the `SearchSourceBuilder` only needs to be added to the include-tagged::{doc-tests}/SearchDocumentationIT.java[search-source-setter] -------------------------------------------------- +[[java-rest-high-document-search-request-building-queries]] ===== Building queries Search queries are created using `QueryBuilder` objects. A `QueryBuilder` exists @@ -125,7 +126,7 @@ to the `SearchSourceBuilder` as follows: include-tagged::{doc-tests}/SearchDocumentationIT.java[search-query-setter] -------------------------------------------------- -The <> page gives a list of all available search queries with +The <> page gives a list of all available search queries with their corresponding `QueryBuilder` objects and `QueryBuilders` helper methods. @@ -178,6 +179,7 @@ setters with a similar name (e.g. `#preTags(String ...)`). Highlighted text fragments can <> from the `SearchResponse`. +[[java-rest-high-document-search-request-building-aggs]] ===== Requesting Aggregations Aggregations can be added to the search by first creating the appropriate @@ -190,7 +192,7 @@ sub-aggregation on the average age of employees in the company: include-tagged::{doc-tests}/SearchDocumentationIT.java[search-request-aggregations] -------------------------------------------------- -The <> page gives a list of all available aggregations with +The <> page gives a list of all available aggregations with their corresponding `AggregationBuilder` objects and `AggregationBuilders` helper methods. We will later see how to <> in the `SearchResponse`. 
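As a rough sketch of how the query and aggregation builders referenced above fit together (the index, field names, and aggregation names below are assumptions made only for illustration), a `SearchSourceBuilder` can carry both before being attached to the `SearchRequest`:

[source,java]
--------------------------------------------------
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

public class SearchBuildersSketch {
    public static SearchRequest buildRequest() {
        SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
        // query created through the QueryBuilders helper methods
        sourceBuilder.query(QueryBuilders.matchQuery("user", "kimchy"));
        // terms aggregation with an avg sub-aggregation, as in the employees example
        sourceBuilder.aggregation(
                AggregationBuilders.terms("by_company").field("company.keyword")
                        .subAggregation(AggregationBuilders.avg("average_age").field("age")));
        // the SearchSourceBuilder is then added to the SearchRequest
        return new SearchRequest("posts").source(sourceBuilder); // index name is an assumption
    }
}
--------------------------------------------------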
diff --git a/docs/java-rest/high-level/apis/aggs.asciidoc b/docs/java-rest/high-level/builders/aggs.asciidoc similarity index 99% rename from docs/java-rest/high-level/apis/aggs.asciidoc rename to docs/java-rest/high-level/builders/aggs.asciidoc index 101251bee19aa..3b15243f5b2c6 100644 --- a/docs/java-rest/high-level/apis/aggs.asciidoc +++ b/docs/java-rest/high-level/builders/aggs.asciidoc @@ -1,4 +1,4 @@ -[[java-rest-high-aggregations]] +[[java-rest-high-aggregation-builders]] === Building Aggregations This page lists all the available aggregations with their corresponding `AggregationBuilder` class name and helper method name in the diff --git a/docs/java-rest/high-level/apis/queries.asciidoc b/docs/java-rest/high-level/builders/queries.asciidoc similarity index 99% rename from docs/java-rest/high-level/apis/queries.asciidoc rename to docs/java-rest/high-level/builders/queries.asciidoc index b46823fcf7503..88204baa8745d 100644 --- a/docs/java-rest/high-level/apis/queries.asciidoc +++ b/docs/java-rest/high-level/builders/queries.asciidoc @@ -1,5 +1,5 @@ -[[java-rest-high-search-queries]] -=== Building Search Queries +[[java-rest-high-query-builders]] +=== Building Queries This page lists all the available search queries with their corresponding `QueryBuilder` class name and helper method name in the `QueryBuilders` utility class. diff --git a/docs/java-rest/high-level/usage.asciidoc b/docs/java-rest/high-level/getting-started.asciidoc similarity index 58% rename from docs/java-rest/high-level/usage.asciidoc rename to docs/java-rest/high-level/getting-started.asciidoc index a51f0483c1e22..86fe473fb29e0 100644 --- a/docs/java-rest/high-level/usage.asciidoc +++ b/docs/java-rest/high-level/getting-started.asciidoc @@ -1,4 +1,4 @@ -[[java-rest-high-usage]] +[[java-rest-high-getting-started]] == Getting started This section describes how to get started with the high-level REST client from @@ -9,22 +9,34 @@ getting the artifact to using it in an application. The Java High Level REST Client requires Java 1.8 and depends on the Elasticsearch core project. The client version is the same as the Elasticsearch version that the client was developed for. It accepts the same request arguments as the `TransportClient` -and returns the same response objects. - -The High Level Client is backwards compatible but can only communicate with Elasticsearch -version 5.5 and onwards. The High Level Client is forward compatible as well, meaning that -it supports communicating with a later version of Elasticsearch than the one it was developed -for. It is recommended to upgrade the High Level Client when upgrading the Elasticsearch -cluster to a new major version, as REST API breaking changes may cause unexpected results, -and newly added APIs will only be supported by the newer version of the client. The client -should be updated last, once all of the nodes in the cluster have been upgraded. +and returns the same response objects. See the <> +if you need to migrate an application from `TransportClient` to the new REST client. + +The High Level Client is guaranteed to be able to communicate with any Elasticsearch +node running on the same major version and greater or equal minor version. It +doesn't need to be in the same minor version as the Elasticsearch nodes it +communicates with, as it is forward compatible meaning that it supports +communicating with later versions of Elasticsearch than the one it was developed for. 
+ +The 6.0 client is able to communicate with any 6.x Elasticsearch node, while the 6.1 +client is for sure able to communicate with 6.1, 6.2 and any later 6.x version, but +there may be incompatibility issues when communicating with a previous Elasticsearch +node version, for instance between 6.1 and 6.0, in case the 6.1 client supports new +request body fields for some APIs that are not known by the 6.0 node(s). + +It is recommended to upgrade the High Level Client when upgrading the Elasticsearch +cluster to a new major version, as REST API breaking changes may cause unexpected +results depending on the node that is hit by the request, and newly added APIs will +only be supported by the newer version of the client. The client should always be +updated last, once all of the nodes in the cluster have been upgraded to the new +major version. [[java-rest-high-javadoc]] === Javadoc The javadoc for the REST high level client can be found at {rest-high-level-client-javadoc}/index.html. -[[java-rest-high-usage-maven]] +[[java-rest-high-getting-started-maven]] === Maven Repository The high-level Java REST client is hosted on @@ -34,7 +46,7 @@ Central]. The minimum Java version required is `1.8`. The High Level REST Client is subject to the same release cycle as Elasticsearch. Replace the version with the desired client version. -[[java-rest-high-usage-maven-maven]] +[[java-rest-high-getting-started-maven-maven]] ==== Maven configuration Here is how you can configure the dependency using maven as a dependency manager. @@ -49,7 +61,7 @@ Add the following to your `pom.xml` file: -------------------------------------------------- -[[java-rest-high-usage-maven-gradle]] +[[java-rest-high-getting-started-maven-gradle]] ==== Gradle configuration Here is how you can configure the dependency using gradle as a dependency manager. @@ -62,7 +74,7 @@ dependencies { } -------------------------------------------------- -[[java-rest-high-usage-maven-lucene]] +[[java-rest-high-getting-started-maven-lucene]] ==== Lucene Snapshot repository The very first releases of any major version (like a beta), might have been built on top of a Lucene Snapshot version. @@ -93,7 +105,7 @@ maven { } -------------------------------------------------- -[[java-rest-high-usage-dependencies]] +[[java-rest-high-getting-started-dependencies]] === Dependencies The High Level Java REST Client depends on the following artifacts and their @@ -103,18 +115,29 @@ transitive dependencies: - org.elasticsearch:elasticsearch -[[java-rest-high-usage-initialization]] +[[java-rest-high-getting-started-initialization]] === Initialization -A `RestHighLevelClient` instance needs a <> +A `RestHighLevelClient` instance needs a <> to be built as follows: -[source,java] +["source","java",subs="attributes,callouts,macros"] +-------------------------------------------------- +include-tagged::{doc-tests}/MainDocumentationIT.java[rest-high-level-client-init] +-------------------------------------------------- + +The high-level client will internally create the low-level client used to +perform requests based on the provided builder, and manage its lifecycle. + +The high-level client instance needs to be closed when no longer needed so that +all the resources used by it get properly released, as well as the underlying +http client instance and its threads. This can be done through the `close` +method, which will close the internal `RestClient` instance. 
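The `rest-high-level-client-init` and `rest-high-level-client-close` snippets referenced in this section come from `MainDocumentationIT.java`. A minimal sketch of the same lifecycle, assuming a local cluster reachable on ports 9200 and 9201, looks roughly like this:

[source,java]
--------------------------------------------------
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

public class HighLevelClientLifecycleSketch {
    public static void main(String[] args) throws Exception {
        // the builder is handed to the high-level client, which creates and
        // manages the internal low-level RestClient itself
        RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(
                        new HttpHost("localhost", 9200, "http"),
                        new HttpHost("localhost", 9201, "http"))); // assumed hosts/ports
        try {
            // ... execute requests through the client ...
        } finally {
            // closing the high-level client also releases the internal RestClient
            // and its threads
            client.close();
        }
    }
}
--------------------------------------------------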
+ +["source","java",subs="attributes,callouts,macros"] -------------------------------------------------- -RestHighLevelClient client = - new RestHighLevelClient(lowLevelRestClient); <1> +include-tagged::{doc-tests}/MainDocumentationIT.java[rest-high-level-client-close] -------------------------------------------------- -<1> We pass the <> instance In the rest of this documentation about the Java High Level Client, the `RestHighLevelClient` instance will be referenced as `client`. diff --git a/docs/java-rest/high-level/index.asciidoc b/docs/java-rest/high-level/index.asciidoc index ec7a1bdaf7d9a..bc4c2dd89bb08 100644 --- a/docs/java-rest/high-level/index.asciidoc +++ b/docs/java-rest/high-level/index.asciidoc @@ -24,14 +24,10 @@ the same response objects. :doc-tests: {docdir}/../../client/rest-high-level/src/test/java/org/elasticsearch/client/documentation -include::usage.asciidoc[] - -include::apis.asciidoc[] - -include::apis/index.asciidoc[] - +include::getting-started.asciidoc[] +include::supported-apis.asciidoc[] +include::java-builders.asciidoc[] include::migration.asciidoc[] - include::../license.asciidoc[] :doc-tests!: diff --git a/docs/java-rest/high-level/java-builders.asciidoc b/docs/java-rest/high-level/java-builders.asciidoc new file mode 100644 index 0000000000000..be5c19293cd30 --- /dev/null +++ b/docs/java-rest/high-level/java-builders.asciidoc @@ -0,0 +1,32 @@ +[[java-rest-high-java-builders]] +== Using Java Builders + +The Java High Level REST Client depends on the Elasticsearch core project which provides +different types of Java `Builders` objects, including: + +Query Builders:: + +The query builders are used to create the query to execute within a search request. There +is a query builder for every type of query supported by the Query DSL. Each query builder +implements the `QueryBuilder` interface and allows to set the specific options for a given +type of query. Once created, the `QueryBuilder` object can be set as the query parameter of +`SearchSourceBuilder`. The <> +page shows an example of how to build a full search request using `SearchSourceBuilder` and +`QueryBuilder` objects. The <> page +gives a list of all available search queries with their corresponding `QueryBuilder` objects +and `QueryBuilders` helper methods. + +Aggregation Builders:: + +Similarly to query builders, the aggregation builders are used to create the aggregations to +compute during a search request execution. There is an aggregation builder for every type of +aggregation (or pipeline aggregation) supported by Elasticsearch. All builders extend the +`AggregationBuilder` class (or `PipelineAggregationBuilder`class). Once created, `AggregationBuilder` +objects can be set as the aggregation parameter of `SearchSourceBuilder`. There is a example +of how `AggregationBuilder` objects are used with `SearchSourceBuilder` objects to define the aggregations +to compute with a search query in <> page. +The <> page gives a list of all available +aggregations with their corresponding `AggregationBuilder` objects and `AggregationBuilders` helper methods. + +include::builders/queries.asciidoc[] +include::builders/aggs.asciidoc[] diff --git a/docs/java-rest/high-level/migration.asciidoc b/docs/java-rest/high-level/migration.asciidoc index 20b6472a32e49..7ce0b00bdb07b 100644 --- a/docs/java-rest/high-level/migration.asciidoc +++ b/docs/java-rest/high-level/migration.asciidoc @@ -40,9 +40,9 @@ Java application that uses the `TransportClient` depends on the `org.elasticsearch.client:transport` artifact. 
This dependency must be replaced by a new dependency on the high-level client. -The <> page shows +The <> page shows typical configurations for Maven and Gradle and presents the - <> brought by the + <> brought by the high-level client. === Changing the client's initialization code @@ -54,52 +54,44 @@ Settings settings = Settings.builder() .put("cluster.name", "prod").build(); TransportClient transportClient = new PreBuiltTransportClient(settings) - .addTransportAddress(new TransportAddress(InetAddress.getByName("host"), 9300)); + .addTransportAddress(new TransportAddress(InetAddress.getByName("localhost"), 9300)) + .addTransportAddress(new TransportAddress(InetAddress.getByName("localhost"), 9301)); -------------------------------------------------- -The initialization of a `RestHighLevelClient` is different. It first requires the initialization -of a <>: +The initialization of a `RestHighLevelClient` is different. It requires to provide +a <> as a constructor +argument: -[source,java] +["source","java",subs="attributes,callouts,macros"] -------------------------------------------------- -RestClient lowLevelRestClient = RestClient.builder( - new HttpHost("host", 9200, "http")).build(); +include-tagged::{doc-tests}/MainDocumentationIT.java[rest-high-level-client-init] -------------------------------------------------- NOTE: The `RestClient` uses Elasticsearch's HTTP service which is bounded by default on `9200`. This port is different from the port used to connect to Elasticsearch with a `TransportClient`. -Which is then passed to the constructor of the `RestHighLevelClient`: - -[source,java] --------------------------------------------------- -RestHighLevelClient client = - new RestHighLevelClient(lowLevelRestClient); --------------------------------------------------- - -Both `RestClient` and `RestHighLevelClient` are thread safe. They are - typically instantiated by the application at startup time or when the - first request is executed. +The `RestHighLevelClient` is thread-safe. It is typically instantiated by the +application at startup time or when the first request is executed. -Once the `RestHighLevelClient` is initialized, it can then be used to -execute any of the <>. +Once the `RestHighLevelClient` is initialized, it can be used to execute any +of the <>. -As with the `TransportClient`, the `RestClient` must be closed when it +As with the `TransportClient`, the `RestHighLevelClient` must be closed when it is not needed anymore or when the application is stopped. -So the code that closes the `TransportClient`: +The code that closes the `TransportClient`: [source,java] -------------------------------------------------- transportClient.close(); -------------------------------------------------- -Must be replaced with: +must be replaced with: -[source,java] +["source","java",subs="attributes,callouts,macros"] -------------------------------------------------- -lowLevelRestClient.close(); +include-tagged::{doc-tests}/MainDocumentationIT.java[rest-high-level-client-close] -------------------------------------------------- === Changing the application's code @@ -309,7 +301,8 @@ include-tagged::{doc-tests}/MigrationDocumentationIT.java[migration-create-inded set its content type (here, JSON) <6> Execute the request using the low-level client. The execution is synchronous and blocks on the `performRequest()` method until the remote cluster returns -a response. +a response. 
The low-level client can be retrieved from an existing `RestHighLevelClient` +instance through the `getLowLevelClient` getter method. <7> Handle the situation where the index has not been created @@ -339,7 +332,7 @@ With the low-level client, the code can be changed to: include-tagged::{doc-tests}/MigrationDocumentationIT.java[migration-cluster-health] -------------------------------------------------- <1> Call the cluster's health REST endpoint using the default paramaters -and gets back a `Response` object +and gets back a `Response` object. <2> Retrieve an `InputStream` object in order to read the response's content <3> Parse the response's content using Elasticsearch's helper class `XContentHelper`. This helper requires the content type of the response to be passed as an argument and returns diff --git a/docs/java-rest/high-level/apis.asciidoc b/docs/java-rest/high-level/supported-apis.asciidoc similarity index 72% rename from docs/java-rest/high-level/apis.asciidoc rename to docs/java-rest/high-level/supported-apis.asciidoc index b294974bacd75..9e902e1715766 100644 --- a/docs/java-rest/high-level/apis.asciidoc +++ b/docs/java-rest/high-level/supported-apis.asciidoc @@ -3,19 +3,24 @@ The Java High Level REST Client supports the following APIs: -.Single document APIs +Indices APIs:: +* <> + +Single document APIs:: * <> * <> * <> * <> -.Multi-document APIs +Multi document APIs:: * <> -.Search APIs +Search APIs:: * <> * <> * <> -.Miscellaneous APIs +Miscellaneous APIs:: * <> + +include::apis/index.asciidoc[] \ No newline at end of file diff --git a/docs/java-rest/low-level/configuration.asciidoc b/docs/java-rest/low-level/configuration.asciidoc index a9aeb62485481..54f7cd2817354 100644 --- a/docs/java-rest/low-level/configuration.asciidoc +++ b/docs/java-rest/low-level/configuration.asciidoc @@ -12,8 +12,8 @@ additional configuration for the low-level Java REST Client. Configuring requests timeouts can be done by providing an instance of `RequestConfigCallback` while building the `RestClient` through its builder. -The interface has one method that receives an instance of `org.elasticsearch.client.http.client.config.RequestConfig.Builder` -(see the https://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/client/config/RequestConfig.Builder.html[Apache documentation]) +The interface has one method that receives an instance of +https://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/client/config/RequestConfig.Builder.html[`org.apache.http.client.config.RequestConfig.Builder`] as an argument and has the same return type. The request config builder can be modified and then returned. In the following example we increase the connect timeout (defaults to 1 second) and the socket timeout (defaults to 30 @@ -42,8 +42,8 @@ include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-config-thre Configuring basic authentication can be done by providing an `HttpClientConfigCallback` while building the `RestClient` through its builder. 
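Both the request timeouts and the basic authentication configuration follow the same callback pattern. As a combined sketch (the host, timeout values, and credentials below are assumptions for illustration), a `RestClient` might be built with both callbacks set:

[source,java]
--------------------------------------------------
import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.CredentialsProvider;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.elasticsearch.client.RestClient;

public class RestClientCallbacksSketch {
    public static RestClient build() {
        CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
        credentialsProvider.setCredentials(AuthScope.ANY,
                new UsernamePasswordCredentials("user", "password")); // assumed credentials

        return RestClient.builder(new HttpHost("localhost", 9200, "http")) // assumed host
                // RequestConfigCallback: raise the connect and socket timeouts
                .setRequestConfigCallback(requestConfigBuilder -> requestConfigBuilder
                        .setConnectTimeout(5000)
                        .setSocketTimeout(60000))
                // HttpClientConfigCallback: enable basic authentication
                .setHttpClientConfigCallback(httpClientBuilder -> httpClientBuilder
                        .setDefaultCredentialsProvider(credentialsProvider))
                .build();
    }
}
--------------------------------------------------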
-The interface has one method that receives an instance of `org.elasticsearch.client.http.impl.nio.client.HttpAsyncClientBuilder` -(see the https://hc.apache.org/httpcomponents-asyncclient-dev/httpasyncclient/apidocs/org/apache/http/impl/nio/client/HttpAsyncClientBuilder.html[Apache documentation]) +The interface has one method that receives an instance of +https://hc.apache.org/httpcomponents-asyncclient-dev/httpasyncclient/apidocs/org/apache/http/impl/nio/client/HttpAsyncClientBuilder.html[`org.apache.http.impl.nio.client.HttpAsyncClientBuilder`] as an argument and has the same return type. The http client builder can be modified and then returned. In the following example we set a default credentials provider that requires basic authentication. @@ -67,8 +67,8 @@ include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-config-disa === Encrypted communication Encrypted communication can also be configured through the -`HttpClientConfigCallback`. The `org.elasticsearch.client.http.impl.nio.client.HttpAsyncClientBuilder` -(see the https://hc.apache.org/httpcomponents-asyncclient-dev/httpasyncclient/apidocs/org/apache/http/impl/nio/client/HttpAsyncClientBuilder.html[Apache documentation]) +`HttpClientConfigCallback`. The +https://hc.apache.org/httpcomponents-asyncclient-dev/httpasyncclient/apidocs/org/apache/http/impl/nio/client/HttpAsyncClientBuilder.html[`org.apache.http.impl.nio.client.HttpAsyncClientBuilder`] received as an argument exposes multiple methods to configure encrypted communication: `setSSLContext`, `setSSLSessionStrategy` and `setConnectionManager`, in order of precedence from the least important. diff --git a/docs/java-rest/low-level/index.asciidoc b/docs/java-rest/low-level/index.asciidoc index 5c17d6e5e04ba..92230a657a91d 100644 --- a/docs/java-rest/low-level/index.asciidoc +++ b/docs/java-rest/low-level/index.asciidoc @@ -6,6 +6,8 @@ The low-level client's features include: +* minimal dependencies + * load balancing across all available nodes * failover in case of node failures and upon specific response codes @@ -20,8 +22,6 @@ The low-level client's features include: * optional automatic <> -* packaged as a single JAR file that shades all dependencies - -- :doc-tests: {docdir}/../../client/rest/src/test/java/org/elasticsearch/client/documentation diff --git a/docs/java-rest/low-level/usage.asciidoc b/docs/java-rest/low-level/usage.asciidoc index c9664613cca58..d39fab38dda09 100644 --- a/docs/java-rest/low-level/usage.asciidoc +++ b/docs/java-rest/low-level/usage.asciidoc @@ -53,7 +53,10 @@ dependencies { [[java-rest-low-usage-dependencies]] === Dependencies -The low-level Java REST client uses several https://www.apache.org/[Apache] libraries: +The low-level Java REST client internally uses the +http://hc.apache.org/httpcomponents-asyncclient-dev/[Apache Http Async Client] + to send http requests. It depends on the following artifacts, namely the async + http client and its own transitive dependencies: - org.apache.httpcomponents:httpasyncclient - org.apache.httpcomponents:httpcore-nio @@ -62,13 +65,82 @@ The low-level Java REST client uses several https://www.apache.org/[Apache] libr - commons-codec:commons-codec - commons-logging:commons-logging +[[java-rest-low-usage-shading]] +=== Shading -One of the most important is the http://hc.apache.org/httpcomponents-asyncclient-dev/[Apache Http Async Client] - which is used to send http requests. 
In order to avoid version conflicts, these dependencies are shaded and - packaged within the client in a single JAR file (sometimes called "uber jar" or "fat jar"). Shading a dependency - consists of taking its content (resources files and Java class files), rename its packages (all package names - that start with `org.apache` are renamed to `org.elasticsearch.client`) before putting them in the same JAR file -as the low-level Java REST client. +In order to avoid version conflicts, the dependencies can be shaded and packaged +within the client in a single JAR file (sometimes called an "uber JAR" or "fat +JAR"). Shading a dependency consists of taking its content (resources files and +Java class files) and renaming some of its packages before putting them in the +same JAR file as the low-level Java REST client. Shading a JAR can be +accomplished by 3rd-party plugins for Gradle and Maven. + +Be advised that shading a JAR also has implications. Shading the Commons Logging +layer, for instance, means that 3rd-party logging backends need to be shaded as +well. + +[[java-rest-low-usage-shading-maven]] +==== Maven configuration + +Here is a configuration using the Maven +https://maven.apache.org/plugins/maven-shade-plugin/index.html[Shade] +plugin. Add the following to your `pom.xml` file: + +["source","xml",subs="attributes"] +-------------------------------------------------- + + + + org.apache.maven.plugins + maven-shade-plugin + 3.1.0 + + + package + shade + + + + org.apache.http + hidden.org.apache.http + + + org.apache.logging + hidden.org.apache.logging + + + org.apache.commons.codec + hidden.org.apache.commons.codec + + + org.apache.commons.logging + hidden.org.apache.commons.logging + + + + + + + + +-------------------------------------------------- + +[[java-rest-low-usage-shading-gradle]] +==== Gradle configuration + +Here is a configuration using the Gradle +https://github.com/johnrengelman/shadow[ShadowJar] plugin. Add the following to +your `build.gradle` file: + +["source","groovy",subs="attributes"] +-------------------------------------------------- +shadowJar { + relocate 'org.apache.http', 'hidden.org.apache.http' + relocate 'org.apache.logging', 'hidden.org.apache.logging' + relocate 'org.apache.commons.codec', 'hidden.org.apache.commons.codec' + relocate 'org.apache.commons.logging', 'hidden.org.apache.commons.logging' +} +-------------------------------------------------- [[java-rest-low-usage-initialization]] === Initialization @@ -126,16 +198,18 @@ need to be taken. Used internally when sniffing on failure is enabled. include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-init-request-config-callback] -------------------------------------------------- <1> Set a callback that allows to modify the default request configuration -(e.g. request timeouts, authentication, or anything that the `org.elasticsearch.client.http.client.config.RequestConfig.Builder` -allows to set). For more information, see the https://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/client/config/RequestConfig.Builder.html[Apache documentation] +(e.g. 
request timeouts, authentication, or anything that the +https://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/client/config/RequestConfig.Builder.html[`org.apache.http.client.config.RequestConfig.Builder`] + allows to set) ["source","java",subs="attributes,callouts,macros"] -------------------------------------------------- include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-init-client-config-callback] -------------------------------------------------- <1> Set a callback that allows to modify the http client configuration -(e.g. encrypted communication over ssl, or anything that the `org.elasticsearch.client.http.impl.nio.client.HttpAsyncClientBuilder` - allows to set). For more information, see the http://hc.apache.org/httpcomponents-asyncclient-dev/httpasyncclient/apidocs/org/apache/http/impl/nio/client/HttpAsyncClientBuilder.html[Apache documentation] +(e.g. encrypted communication over ssl, or anything that the +http://hc.apache.org/httpcomponents-asyncclient-dev/httpasyncclient/apidocs/org/apache/http/impl/nio/client/HttpAsyncClientBuilder.html[`org.apache.http.impl.nio.client.HttpAsyncClientBuilder`] + allows to set) [[java-rest-low-usage-requests]] @@ -169,7 +243,7 @@ parameter include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-verb-endpoint-params-body] -------------------------------------------------- <1> Send a request by providing the verb, the endpoint, optional querystring -parameters and the request body enclosed in an `org.elasticsearch.client.http.HttpEntity` +parameters and the request body enclosed in an `org.apache.http.HttpEntity` object IMPORTANT: The `ContentType` specified for the `HttpEntity` is important @@ -182,7 +256,7 @@ include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-response-co -------------------------------------------------- <1> Send a request by providing the verb, the endpoint, optional querystring parameters, optional request body and the optional factory that is used to -create a `org.elasticsearch.client.http.nio.protocol.HttpAsyncResponseConsumer` (see the http://hc.apache.org/httpcomponents-core-ga/httpcore-nio/apidocs/org/apache/http/nio/protocol/HttpAsyncResponseConsumer.html[Apache documentation]) +create an http://hc.apache.org/httpcomponents-core-ga/httpcore-nio/apidocs/org/apache/http/nio/protocol/HttpAsyncResponseConsumer.html[`org.apache.http.nio.protocol.HttpAsyncResponseConsumer`] callback instance per request attempt. Controls how the response body gets streamed from a non-blocking HTTP connection on the client side. 
When not provided, the default implementation is used which buffers the whole response @@ -212,7 +286,7 @@ include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-verb-endpoi -------------------------------------------------- <1> Send an async request by providing the verb, the endpoint, optional querystring parameters, the request body enclosed in an -`org.elasticsearch.client.http.HttpEntity` object and the response listener to be +`org.apache.http.HttpEntity` object and the response listener to be notified once the request is completed ["source","java",subs="attributes,callouts,macros"] @@ -221,7 +295,7 @@ include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-response-co -------------------------------------------------- <1> Send an async request by providing the verb, the endpoint, optional querystring parameters, optional request body and the optional factory that is -used to create a `org.elasticsearch.client.http.nio.protocol.HttpAsyncResponseConsumer` (see the http://hc.apache.org/httpcomponents-core-ga/httpcore-nio/apidocs/org/apache/http/nio/protocol/HttpAsyncResponseConsumer.html[Apache documentation]) +used to create an http://hc.apache.org/httpcomponents-core-ga/httpcore-nio/apidocs/org/apache/http/nio/protocol/HttpAsyncResponseConsumer.html[`org.apache.http.nio.protocol.HttpAsyncResponseConsumer`] callback instance per request attempt. Controls how the response body gets streamed from a non-blocking HTTP connection on the client side. When not provided, the default implementation is used which buffers the whole response @@ -265,8 +339,8 @@ include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-response2] <2> The host that returned the response <3> The response status line, from which you can for instance retrieve the status code <4> The response headers, which can also be retrieved by name though `getHeader(String)` -<5> The response body enclosed in a `org.elasticsearch.client.http.HttpEntity` object -(see the https://hc.apache.org/httpcomponents-core-ga/httpcore/apidocs/org/apache/http/HttpEntity.html[Apache documentation] +<5> The response body enclosed in an https://hc.apache.org/httpcomponents-core-ga/httpcore/apidocs/org/apache/http/HttpEntity.html[`org.apache.http.HttpEntity`] + object When performing a request, an exception is thrown (or received as an argument in `ResponseListener#onFailure(Exception)` in the following scenarios: @@ -293,12 +367,14 @@ Note that the low-level client doesn't expose any helper for json marshalling and un-marshalling. Users are free to use the library that they prefer for that purpose. -The low-level Java Rest Client ships with different `org.elasticsearch.client.http.HttpEntity` +The underlying Apache Async Http Client ships with different +https://hc.apache.org/httpcomponents-core-ga/httpcore/apidocs/org/apache/http/HttpEntity.html[`org.apache.http.HttpEntity`] implementations that allow to provide the request body in different formats (stream, byte array, string etc.). As for reading the response body, the `HttpEntity#getContent` method comes handy which returns an `InputStream` reading from the previously buffered response body. 
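As a rough sketch of the request and response body handling described here (the endpoint and document below are assumptions for illustration), a request body can be supplied as an `HttpEntity` and the buffered response body read back, for instance with Apache's `EntityUtils`:

[source,java]
--------------------------------------------------
import org.apache.http.HttpEntity;
import org.apache.http.entity.ContentType;
import org.apache.http.nio.entity.NStringEntity;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

import java.util.Collections;

public class ResponseBodySketch {
    public static String indexAndRead(RestClient restClient) throws Exception {
        // request body provided as one of the HttpEntity implementations (a String entity here)
        HttpEntity body = new NStringEntity("{\"user\":\"kimchy\"}", ContentType.APPLICATION_JSON);
        Response response = restClient.performRequest(
                "PUT", "/posts/doc/1", Collections.emptyMap(), body); // index/type/id are assumptions
        // read the buffered response body; EntityUtils is one convenient way to do it
        return EntityUtils.toString(response.getEntity());
    }
}
--------------------------------------------------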
As an alternative, it is -possible to provide a custom org.elasticsearch.client.http.nio.protocol.HttpAsyncResponseConsumer` +possible to provide a custom +http://hc.apache.org/httpcomponents-core-ga/httpcore-nio/apidocs/org/apache/http/nio/protocol/HttpAsyncResponseConsumer.html[`org.apache.http.nio.protocol.HttpAsyncResponseConsumer`] that controls how bytes are read and buffered. [[java-rest-low-usage-logging]] diff --git a/docs/painless/painless-api-reference/AbstractMap.SimpleEntry.asciidoc b/docs/painless/painless-api-reference/AbstractMap.SimpleEntry.asciidoc index 327939e009d5e..bdd784070f4fb 100644 --- a/docs/painless/painless-api-reference/AbstractMap.SimpleEntry.asciidoc +++ b/docs/painless/painless-api-reference/AbstractMap.SimpleEntry.asciidoc @@ -4,6 +4,6 @@ Rebuild by running `gradle generatePainlessApi`. //// [[painless-api-reference-AbstractMap-SimpleEntry]]++AbstractMap.SimpleEntry++:: -* ++[[painless-api-reference-AbstractMap-SimpleEntry-AbstractMap.SimpleEntry-1]]link:{java8-javadoc}/java/util/AbstractMap$SimpleEntry.html#AbstractMap.SimpleEntry%2Djava.util.Map$Entry%2D[AbstractMap.SimpleEntry](<>)++ (link:{java9-javadoc}/java/util/AbstractMap$SimpleEntry.html#AbstractMap.SimpleEntry%2Djava.util.Map$Entry%2D[java 9]) -* ++[[painless-api-reference-AbstractMap-SimpleEntry-AbstractMap.SimpleEntry-2]]link:{java8-javadoc}/java/util/AbstractMap$SimpleEntry.html#AbstractMap.SimpleEntry%2Djava.lang.Object%2Djava.lang.Object%2D[AbstractMap.SimpleEntry](def, def)++ (link:{java9-javadoc}/java/util/AbstractMap$SimpleEntry.html#AbstractMap.SimpleEntry%2Djava.lang.Object%2Djava.lang.Object%2D[java 9]) +* ++[[painless-api-reference-AbstractMap-SimpleEntry-AbstractMap.SimpleEntry-1]]link:{java8-javadoc}/java/util/AbstractMap.SimpleEntry.html#AbstractMap.SimpleEntry%2Djava.util.Map$Entry%2D[AbstractMap.SimpleEntry](<>)++ (link:{java9-javadoc}/java/util/AbstractMap.SimpleEntry.html#AbstractMap.SimpleEntry%2Djava.util.Map$Entry%2D[java 9]) +* ++[[painless-api-reference-AbstractMap-SimpleEntry-AbstractMap.SimpleEntry-2]]link:{java8-javadoc}/java/util/AbstractMap.SimpleEntry.html#AbstractMap.SimpleEntry%2Djava.lang.Object%2Djava.lang.Object%2D[AbstractMap.SimpleEntry](def, def)++ (link:{java9-javadoc}/java/util/AbstractMap.SimpleEntry.html#AbstractMap.SimpleEntry%2Djava.lang.Object%2Djava.lang.Object%2D[java 9]) * Inherits methods from ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/AbstractMap.SimpleImmutableEntry.asciidoc b/docs/painless/painless-api-reference/AbstractMap.SimpleImmutableEntry.asciidoc index 94a4b0c307aa2..5b5605a42ae81 100644 --- a/docs/painless/painless-api-reference/AbstractMap.SimpleImmutableEntry.asciidoc +++ b/docs/painless/painless-api-reference/AbstractMap.SimpleImmutableEntry.asciidoc @@ -4,6 +4,6 @@ Rebuild by running `gradle generatePainlessApi`. 
//// [[painless-api-reference-AbstractMap-SimpleImmutableEntry]]++AbstractMap.SimpleImmutableEntry++:: -* ++[[painless-api-reference-AbstractMap-SimpleImmutableEntry-AbstractMap.SimpleImmutableEntry-1]]link:{java8-javadoc}/java/util/AbstractMap$SimpleImmutableEntry.html#AbstractMap.SimpleImmutableEntry%2Djava.util.Map$Entry%2D[AbstractMap.SimpleImmutableEntry](<>)++ (link:{java9-javadoc}/java/util/AbstractMap$SimpleImmutableEntry.html#AbstractMap.SimpleImmutableEntry%2Djava.util.Map$Entry%2D[java 9]) -* ++[[painless-api-reference-AbstractMap-SimpleImmutableEntry-AbstractMap.SimpleImmutableEntry-2]]link:{java8-javadoc}/java/util/AbstractMap$SimpleImmutableEntry.html#AbstractMap.SimpleImmutableEntry%2Djava.lang.Object%2Djava.lang.Object%2D[AbstractMap.SimpleImmutableEntry](def, def)++ (link:{java9-javadoc}/java/util/AbstractMap$SimpleImmutableEntry.html#AbstractMap.SimpleImmutableEntry%2Djava.lang.Object%2Djava.lang.Object%2D[java 9]) +* ++[[painless-api-reference-AbstractMap-SimpleImmutableEntry-AbstractMap.SimpleImmutableEntry-1]]link:{java8-javadoc}/java/util/AbstractMap.SimpleImmutableEntry.html#AbstractMap.SimpleImmutableEntry%2Djava.util.Map$Entry%2D[AbstractMap.SimpleImmutableEntry](<>)++ (link:{java9-javadoc}/java/util/AbstractMap.SimpleImmutableEntry.html#AbstractMap.SimpleImmutableEntry%2Djava.util.Map$Entry%2D[java 9]) +* ++[[painless-api-reference-AbstractMap-SimpleImmutableEntry-AbstractMap.SimpleImmutableEntry-2]]link:{java8-javadoc}/java/util/AbstractMap.SimpleImmutableEntry.html#AbstractMap.SimpleImmutableEntry%2Djava.lang.Object%2Djava.lang.Object%2D[AbstractMap.SimpleImmutableEntry](def, def)++ (link:{java9-javadoc}/java/util/AbstractMap.SimpleImmutableEntry.html#AbstractMap.SimpleImmutableEntry%2Djava.lang.Object%2Djava.lang.Object%2D[java 9]) * Inherits methods from ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/AttributedCharacterIterator.Attribute.asciidoc b/docs/painless/painless-api-reference/AttributedCharacterIterator.Attribute.asciidoc index 56ccfbe6aabc1..dbcdd9446f3f9 100644 --- a/docs/painless/painless-api-reference/AttributedCharacterIterator.Attribute.asciidoc +++ b/docs/painless/painless-api-reference/AttributedCharacterIterator.Attribute.asciidoc @@ -4,7 +4,7 @@ Rebuild by running `gradle generatePainlessApi`. 
//// [[painless-api-reference-AttributedCharacterIterator-Attribute]]++AttributedCharacterIterator.Attribute++:: -** [[painless-api-reference-AttributedCharacterIterator-Attribute-INPUT_METHOD_SEGMENT]]static <> link:{java8-javadoc}/java/text/AttributedCharacterIterator$Attribute.html#INPUT_METHOD_SEGMENT[INPUT_METHOD_SEGMENT] (link:{java9-javadoc}/java/text/AttributedCharacterIterator$Attribute.html#INPUT_METHOD_SEGMENT[java 9]) -** [[painless-api-reference-AttributedCharacterIterator-Attribute-LANGUAGE]]static <> link:{java8-javadoc}/java/text/AttributedCharacterIterator$Attribute.html#LANGUAGE[LANGUAGE] (link:{java9-javadoc}/java/text/AttributedCharacterIterator$Attribute.html#LANGUAGE[java 9]) -** [[painless-api-reference-AttributedCharacterIterator-Attribute-READING]]static <> link:{java8-javadoc}/java/text/AttributedCharacterIterator$Attribute.html#READING[READING] (link:{java9-javadoc}/java/text/AttributedCharacterIterator$Attribute.html#READING[java 9]) +** [[painless-api-reference-AttributedCharacterIterator-Attribute-INPUT_METHOD_SEGMENT]]static <> link:{java8-javadoc}/java/text/AttributedCharacterIterator.Attribute.html#INPUT_METHOD_SEGMENT[INPUT_METHOD_SEGMENT] (link:{java9-javadoc}/java/text/AttributedCharacterIterator.Attribute.html#INPUT_METHOD_SEGMENT[java 9]) +** [[painless-api-reference-AttributedCharacterIterator-Attribute-LANGUAGE]]static <> link:{java8-javadoc}/java/text/AttributedCharacterIterator.Attribute.html#LANGUAGE[LANGUAGE] (link:{java9-javadoc}/java/text/AttributedCharacterIterator.Attribute.html#LANGUAGE[java 9]) +** [[painless-api-reference-AttributedCharacterIterator-Attribute-READING]]static <> link:{java8-javadoc}/java/text/AttributedCharacterIterator.Attribute.html#READING[READING] (link:{java9-javadoc}/java/text/AttributedCharacterIterator.Attribute.html#READING[java 9]) * Inherits methods from ++<>++ diff --git a/docs/painless/painless-api-reference/Base64.Decoder.asciidoc b/docs/painless/painless-api-reference/Base64.Decoder.asciidoc index 3b3acd16e61ef..3324f1a230308 100644 --- a/docs/painless/painless-api-reference/Base64.Decoder.asciidoc +++ b/docs/painless/painless-api-reference/Base64.Decoder.asciidoc @@ -4,6 +4,6 @@ Rebuild by running `gradle generatePainlessApi`. 
//// [[painless-api-reference-Base64-Decoder]]++Base64.Decoder++:: -* ++[[painless-api-reference-Base64-Decoder-decode-1]]byte[] link:{java8-javadoc}/java/util/Base64$Decoder.html#decode%2Djava.lang.String%2D[decode](<>)++ (link:{java9-javadoc}/java/util/Base64$Decoder.html#decode%2Djava.lang.String%2D[java 9]) -* ++[[painless-api-reference-Base64-Decoder-decode-2]]int link:{java8-javadoc}/java/util/Base64$Decoder.html#decode%2Dbyte:A%2Dbyte:A%2D[decode](byte[], byte[])++ (link:{java9-javadoc}/java/util/Base64$Decoder.html#decode%2Dbyte:A%2Dbyte:A%2D[java 9]) +* ++[[painless-api-reference-Base64-Decoder-decode-1]]byte[] link:{java8-javadoc}/java/util/Base64.Decoder.html#decode%2Djava.lang.String%2D[decode](<>)++ (link:{java9-javadoc}/java/util/Base64.Decoder.html#decode%2Djava.lang.String%2D[java 9]) +* ++[[painless-api-reference-Base64-Decoder-decode-2]]int link:{java8-javadoc}/java/util/Base64.Decoder.html#decode%2Dbyte:A%2Dbyte:A%2D[decode](byte[], byte[])++ (link:{java9-javadoc}/java/util/Base64.Decoder.html#decode%2Dbyte:A%2Dbyte:A%2D[java 9]) * Inherits methods from ++<>++ diff --git a/docs/painless/painless-api-reference/Base64.Encoder.asciidoc b/docs/painless/painless-api-reference/Base64.Encoder.asciidoc index bc632fc79950e..6cdcd0536e5be 100644 --- a/docs/painless/painless-api-reference/Base64.Encoder.asciidoc +++ b/docs/painless/painless-api-reference/Base64.Encoder.asciidoc @@ -4,7 +4,7 @@ Rebuild by running `gradle generatePainlessApi`. //// [[painless-api-reference-Base64-Encoder]]++Base64.Encoder++:: -* ++[[painless-api-reference-Base64-Encoder-encode-2]]int link:{java8-javadoc}/java/util/Base64$Encoder.html#encode%2Dbyte:A%2Dbyte:A%2D[encode](byte[], byte[])++ (link:{java9-javadoc}/java/util/Base64$Encoder.html#encode%2Dbyte:A%2Dbyte:A%2D[java 9]) -* ++[[painless-api-reference-Base64-Encoder-encodeToString-1]]<> link:{java8-javadoc}/java/util/Base64$Encoder.html#encodeToString%2Dbyte:A%2D[encodeToString](byte[])++ (link:{java9-javadoc}/java/util/Base64$Encoder.html#encodeToString%2Dbyte:A%2D[java 9]) -* ++[[painless-api-reference-Base64-Encoder-withoutPadding-0]]<> link:{java8-javadoc}/java/util/Base64$Encoder.html#withoutPadding%2D%2D[withoutPadding]()++ (link:{java9-javadoc}/java/util/Base64$Encoder.html#withoutPadding%2D%2D[java 9]) +* ++[[painless-api-reference-Base64-Encoder-encode-2]]int link:{java8-javadoc}/java/util/Base64.Encoder.html#encode%2Dbyte:A%2Dbyte:A%2D[encode](byte[], byte[])++ (link:{java9-javadoc}/java/util/Base64.Encoder.html#encode%2Dbyte:A%2Dbyte:A%2D[java 9]) +* ++[[painless-api-reference-Base64-Encoder-encodeToString-1]]<> link:{java8-javadoc}/java/util/Base64.Encoder.html#encodeToString%2Dbyte:A%2D[encodeToString](byte[])++ (link:{java9-javadoc}/java/util/Base64.Encoder.html#encodeToString%2Dbyte:A%2D[java 9]) +* ++[[painless-api-reference-Base64-Encoder-withoutPadding-0]]<> link:{java8-javadoc}/java/util/Base64.Encoder.html#withoutPadding%2D%2D[withoutPadding]()++ (link:{java9-javadoc}/java/util/Base64.Encoder.html#withoutPadding%2D%2D[java 9]) * Inherits methods from ++<>++ diff --git a/docs/painless/painless-api-reference/Calendar.Builder.asciidoc b/docs/painless/painless-api-reference/Calendar.Builder.asciidoc index 2fd4cafa03637..2ab66c5ecac9e 100644 --- a/docs/painless/painless-api-reference/Calendar.Builder.asciidoc +++ b/docs/painless/painless-api-reference/Calendar.Builder.asciidoc @@ -4,18 +4,18 @@ Rebuild by running `gradle generatePainlessApi`. 
//// [[painless-api-reference-Calendar-Builder]]++Calendar.Builder++:: -* ++[[painless-api-reference-Calendar-Builder-Calendar.Builder-0]]link:{java8-javadoc}/java/util/Calendar$Builder.html#Calendar.Builder%2D%2D[Calendar.Builder]()++ (link:{java9-javadoc}/java/util/Calendar$Builder.html#Calendar.Builder%2D%2D[java 9]) -* ++[[painless-api-reference-Calendar-Builder-build-0]]<> link:{java8-javadoc}/java/util/Calendar$Builder.html#build%2D%2D[build]()++ (link:{java9-javadoc}/java/util/Calendar$Builder.html#build%2D%2D[java 9]) -* ++[[painless-api-reference-Calendar-Builder-set-2]]<> link:{java8-javadoc}/java/util/Calendar$Builder.html#set%2Dint%2Dint%2D[set](int, int)++ (link:{java9-javadoc}/java/util/Calendar$Builder.html#set%2Dint%2Dint%2D[java 9]) -* ++[[painless-api-reference-Calendar-Builder-setCalendarType-1]]<> link:{java8-javadoc}/java/util/Calendar$Builder.html#setCalendarType%2Djava.lang.String%2D[setCalendarType](<>)++ (link:{java9-javadoc}/java/util/Calendar$Builder.html#setCalendarType%2Djava.lang.String%2D[java 9]) -* ++[[painless-api-reference-Calendar-Builder-setDate-3]]<> link:{java8-javadoc}/java/util/Calendar$Builder.html#setDate%2Dint%2Dint%2Dint%2D[setDate](int, int, int)++ (link:{java9-javadoc}/java/util/Calendar$Builder.html#setDate%2Dint%2Dint%2Dint%2D[java 9]) -* ++[[painless-api-reference-Calendar-Builder-setFields-1]]<> link:{java8-javadoc}/java/util/Calendar$Builder.html#setFields%2Dint:A%2D[setFields](int[])++ (link:{java9-javadoc}/java/util/Calendar$Builder.html#setFields%2Dint:A%2D[java 9]) -* ++[[painless-api-reference-Calendar-Builder-setInstant-1]]<> link:{java8-javadoc}/java/util/Calendar$Builder.html#setInstant%2Dlong%2D[setInstant](long)++ (link:{java9-javadoc}/java/util/Calendar$Builder.html#setInstant%2Dlong%2D[java 9]) -* ++[[painless-api-reference-Calendar-Builder-setLenient-1]]<> link:{java8-javadoc}/java/util/Calendar$Builder.html#setLenient%2Dboolean%2D[setLenient](boolean)++ (link:{java9-javadoc}/java/util/Calendar$Builder.html#setLenient%2Dboolean%2D[java 9]) -* ++[[painless-api-reference-Calendar-Builder-setLocale-1]]<> link:{java8-javadoc}/java/util/Calendar$Builder.html#setLocale%2Djava.util.Locale%2D[setLocale](<>)++ (link:{java9-javadoc}/java/util/Calendar$Builder.html#setLocale%2Djava.util.Locale%2D[java 9]) -* ++[[painless-api-reference-Calendar-Builder-setTimeOfDay-3]]<> link:{java8-javadoc}/java/util/Calendar$Builder.html#setTimeOfDay%2Dint%2Dint%2Dint%2D[setTimeOfDay](int, int, int)++ (link:{java9-javadoc}/java/util/Calendar$Builder.html#setTimeOfDay%2Dint%2Dint%2Dint%2D[java 9]) -* ++[[painless-api-reference-Calendar-Builder-setTimeOfDay-4]]<> link:{java8-javadoc}/java/util/Calendar$Builder.html#setTimeOfDay%2Dint%2Dint%2Dint%2Dint%2D[setTimeOfDay](int, int, int, int)++ (link:{java9-javadoc}/java/util/Calendar$Builder.html#setTimeOfDay%2Dint%2Dint%2Dint%2Dint%2D[java 9]) -* ++[[painless-api-reference-Calendar-Builder-setTimeZone-1]]<> link:{java8-javadoc}/java/util/Calendar$Builder.html#setTimeZone%2Djava.util.TimeZone%2D[setTimeZone](<>)++ (link:{java9-javadoc}/java/util/Calendar$Builder.html#setTimeZone%2Djava.util.TimeZone%2D[java 9]) -* ++[[painless-api-reference-Calendar-Builder-setWeekDate-3]]<> link:{java8-javadoc}/java/util/Calendar$Builder.html#setWeekDate%2Dint%2Dint%2Dint%2D[setWeekDate](int, int, int)++ (link:{java9-javadoc}/java/util/Calendar$Builder.html#setWeekDate%2Dint%2Dint%2Dint%2D[java 9]) -* ++[[painless-api-reference-Calendar-Builder-setWeekDefinition-2]]<> 
link:{java8-javadoc}/java/util/Calendar$Builder.html#setWeekDefinition%2Dint%2Dint%2D[setWeekDefinition](int, int)++ (link:{java9-javadoc}/java/util/Calendar$Builder.html#setWeekDefinition%2Dint%2Dint%2D[java 9]) +* ++[[painless-api-reference-Calendar-Builder-Calendar.Builder-0]]link:{java8-javadoc}/java/util/Calendar.Builder.html#Calendar.Builder%2D%2D[Calendar.Builder]()++ (link:{java9-javadoc}/java/util/Calendar.Builder.html#Calendar.Builder%2D%2D[java 9]) +* ++[[painless-api-reference-Calendar-Builder-build-0]]<> link:{java8-javadoc}/java/util/Calendar.Builder.html#build%2D%2D[build]()++ (link:{java9-javadoc}/java/util/Calendar.Builder.html#build%2D%2D[java 9]) +* ++[[painless-api-reference-Calendar-Builder-set-2]]<> link:{java8-javadoc}/java/util/Calendar.Builder.html#set%2Dint%2Dint%2D[set](int, int)++ (link:{java9-javadoc}/java/util/Calendar.Builder.html#set%2Dint%2Dint%2D[java 9]) +* ++[[painless-api-reference-Calendar-Builder-setCalendarType-1]]<> link:{java8-javadoc}/java/util/Calendar.Builder.html#setCalendarType%2Djava.lang.String%2D[setCalendarType](<>)++ (link:{java9-javadoc}/java/util/Calendar.Builder.html#setCalendarType%2Djava.lang.String%2D[java 9]) +* ++[[painless-api-reference-Calendar-Builder-setDate-3]]<> link:{java8-javadoc}/java/util/Calendar.Builder.html#setDate%2Dint%2Dint%2Dint%2D[setDate](int, int, int)++ (link:{java9-javadoc}/java/util/Calendar.Builder.html#setDate%2Dint%2Dint%2Dint%2D[java 9]) +* ++[[painless-api-reference-Calendar-Builder-setFields-1]]<> link:{java8-javadoc}/java/util/Calendar.Builder.html#setFields%2Dint:A%2D[setFields](int[])++ (link:{java9-javadoc}/java/util/Calendar.Builder.html#setFields%2Dint:A%2D[java 9]) +* ++[[painless-api-reference-Calendar-Builder-setInstant-1]]<> link:{java8-javadoc}/java/util/Calendar.Builder.html#setInstant%2Dlong%2D[setInstant](long)++ (link:{java9-javadoc}/java/util/Calendar.Builder.html#setInstant%2Dlong%2D[java 9]) +* ++[[painless-api-reference-Calendar-Builder-setLenient-1]]<> link:{java8-javadoc}/java/util/Calendar.Builder.html#setLenient%2Dboolean%2D[setLenient](boolean)++ (link:{java9-javadoc}/java/util/Calendar.Builder.html#setLenient%2Dboolean%2D[java 9]) +* ++[[painless-api-reference-Calendar-Builder-setLocale-1]]<> link:{java8-javadoc}/java/util/Calendar.Builder.html#setLocale%2Djava.util.Locale%2D[setLocale](<>)++ (link:{java9-javadoc}/java/util/Calendar.Builder.html#setLocale%2Djava.util.Locale%2D[java 9]) +* ++[[painless-api-reference-Calendar-Builder-setTimeOfDay-3]]<> link:{java8-javadoc}/java/util/Calendar.Builder.html#setTimeOfDay%2Dint%2Dint%2Dint%2D[setTimeOfDay](int, int, int)++ (link:{java9-javadoc}/java/util/Calendar.Builder.html#setTimeOfDay%2Dint%2Dint%2Dint%2D[java 9]) +* ++[[painless-api-reference-Calendar-Builder-setTimeOfDay-4]]<> link:{java8-javadoc}/java/util/Calendar.Builder.html#setTimeOfDay%2Dint%2Dint%2Dint%2Dint%2D[setTimeOfDay](int, int, int, int)++ (link:{java9-javadoc}/java/util/Calendar.Builder.html#setTimeOfDay%2Dint%2Dint%2Dint%2Dint%2D[java 9]) +* ++[[painless-api-reference-Calendar-Builder-setTimeZone-1]]<> link:{java8-javadoc}/java/util/Calendar.Builder.html#setTimeZone%2Djava.util.TimeZone%2D[setTimeZone](<>)++ (link:{java9-javadoc}/java/util/Calendar.Builder.html#setTimeZone%2Djava.util.TimeZone%2D[java 9]) +* ++[[painless-api-reference-Calendar-Builder-setWeekDate-3]]<> link:{java8-javadoc}/java/util/Calendar.Builder.html#setWeekDate%2Dint%2Dint%2Dint%2D[setWeekDate](int, int, int)++ 
(link:{java9-javadoc}/java/util/Calendar.Builder.html#setWeekDate%2Dint%2Dint%2Dint%2D[java 9]) +* ++[[painless-api-reference-Calendar-Builder-setWeekDefinition-2]]<> link:{java8-javadoc}/java/util/Calendar.Builder.html#setWeekDefinition%2Dint%2Dint%2D[setWeekDefinition](int, int)++ (link:{java9-javadoc}/java/util/Calendar.Builder.html#setWeekDefinition%2Dint%2Dint%2D[java 9]) * Inherits methods from ++<>++ diff --git a/docs/painless/painless-api-reference/Character.UnicodeBlock.asciidoc b/docs/painless/painless-api-reference/Character.UnicodeBlock.asciidoc index b06cc70397a23..6fdf81adfc922 100644 --- a/docs/painless/painless-api-reference/Character.UnicodeBlock.asciidoc +++ b/docs/painless/painless-api-reference/Character.UnicodeBlock.asciidoc @@ -4,226 +4,226 @@ Rebuild by running `gradle generatePainlessApi`. //// [[painless-api-reference-Character-UnicodeBlock]]++Character.UnicodeBlock++:: -** [[painless-api-reference-Character-UnicodeBlock-AEGEAN_NUMBERS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#AEGEAN_NUMBERS[AEGEAN_NUMBERS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#AEGEAN_NUMBERS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-ALCHEMICAL_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#ALCHEMICAL_SYMBOLS[ALCHEMICAL_SYMBOLS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#ALCHEMICAL_SYMBOLS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-ALPHABETIC_PRESENTATION_FORMS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#ALPHABETIC_PRESENTATION_FORMS[ALPHABETIC_PRESENTATION_FORMS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#ALPHABETIC_PRESENTATION_FORMS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-ANCIENT_GREEK_MUSICAL_NOTATION]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#ANCIENT_GREEK_MUSICAL_NOTATION[ANCIENT_GREEK_MUSICAL_NOTATION] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#ANCIENT_GREEK_MUSICAL_NOTATION[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-ANCIENT_GREEK_NUMBERS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#ANCIENT_GREEK_NUMBERS[ANCIENT_GREEK_NUMBERS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#ANCIENT_GREEK_NUMBERS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-ANCIENT_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#ANCIENT_SYMBOLS[ANCIENT_SYMBOLS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#ANCIENT_SYMBOLS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-ARABIC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#ARABIC[ARABIC] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#ARABIC[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-ARABIC_EXTENDED_A]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#ARABIC_EXTENDED_A[ARABIC_EXTENDED_A] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#ARABIC_EXTENDED_A[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-ARABIC_MATHEMATICAL_ALPHABETIC_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#ARABIC_MATHEMATICAL_ALPHABETIC_SYMBOLS[ARABIC_MATHEMATICAL_ALPHABETIC_SYMBOLS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#ARABIC_MATHEMATICAL_ALPHABETIC_SYMBOLS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-ARABIC_PRESENTATION_FORMS_A]]static <> 
link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#ARABIC_PRESENTATION_FORMS_A[ARABIC_PRESENTATION_FORMS_A] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#ARABIC_PRESENTATION_FORMS_A[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-ARABIC_PRESENTATION_FORMS_B]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#ARABIC_PRESENTATION_FORMS_B[ARABIC_PRESENTATION_FORMS_B] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#ARABIC_PRESENTATION_FORMS_B[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-ARABIC_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#ARABIC_SUPPLEMENT[ARABIC_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#ARABIC_SUPPLEMENT[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-ARMENIAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#ARMENIAN[ARMENIAN] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#ARMENIAN[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-ARROWS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#ARROWS[ARROWS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#ARROWS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-AVESTAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#AVESTAN[AVESTAN] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#AVESTAN[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-BALINESE]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#BALINESE[BALINESE] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#BALINESE[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-BAMUM]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#BAMUM[BAMUM] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#BAMUM[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-BAMUM_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#BAMUM_SUPPLEMENT[BAMUM_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#BAMUM_SUPPLEMENT[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-BASIC_LATIN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#BASIC_LATIN[BASIC_LATIN] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#BASIC_LATIN[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-BATAK]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#BATAK[BATAK] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#BATAK[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-BENGALI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#BENGALI[BENGALI] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#BENGALI[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-BLOCK_ELEMENTS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#BLOCK_ELEMENTS[BLOCK_ELEMENTS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#BLOCK_ELEMENTS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-BOPOMOFO]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#BOPOMOFO[BOPOMOFO] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#BOPOMOFO[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-BOPOMOFO_EXTENDED]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#BOPOMOFO_EXTENDED[BOPOMOFO_EXTENDED] 
(link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#BOPOMOFO_EXTENDED[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-BOX_DRAWING]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#BOX_DRAWING[BOX_DRAWING] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#BOX_DRAWING[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-BRAHMI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#BRAHMI[BRAHMI] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#BRAHMI[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-BRAILLE_PATTERNS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#BRAILLE_PATTERNS[BRAILLE_PATTERNS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#BRAILLE_PATTERNS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-BUGINESE]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#BUGINESE[BUGINESE] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#BUGINESE[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-BUHID]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#BUHID[BUHID] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#BUHID[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-BYZANTINE_MUSICAL_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#BYZANTINE_MUSICAL_SYMBOLS[BYZANTINE_MUSICAL_SYMBOLS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#BYZANTINE_MUSICAL_SYMBOLS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-CARIAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#CARIAN[CARIAN] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#CARIAN[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-CHAKMA]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#CHAKMA[CHAKMA] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#CHAKMA[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-CHAM]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#CHAM[CHAM] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#CHAM[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-CHEROKEE]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#CHEROKEE[CHEROKEE] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#CHEROKEE[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-CJK_COMPATIBILITY]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#CJK_COMPATIBILITY[CJK_COMPATIBILITY] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#CJK_COMPATIBILITY[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-CJK_COMPATIBILITY_FORMS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#CJK_COMPATIBILITY_FORMS[CJK_COMPATIBILITY_FORMS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#CJK_COMPATIBILITY_FORMS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-CJK_COMPATIBILITY_IDEOGRAPHS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#CJK_COMPATIBILITY_IDEOGRAPHS[CJK_COMPATIBILITY_IDEOGRAPHS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#CJK_COMPATIBILITY_IDEOGRAPHS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-CJK_COMPATIBILITY_IDEOGRAPHS_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#CJK_COMPATIBILITY_IDEOGRAPHS_SUPPLEMENT[CJK_COMPATIBILITY_IDEOGRAPHS_SUPPLEMENT] 
(link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#CJK_COMPATIBILITY_IDEOGRAPHS_SUPPLEMENT[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-CJK_RADICALS_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#CJK_RADICALS_SUPPLEMENT[CJK_RADICALS_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#CJK_RADICALS_SUPPLEMENT[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-CJK_STROKES]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#CJK_STROKES[CJK_STROKES] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#CJK_STROKES[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-CJK_SYMBOLS_AND_PUNCTUATION]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#CJK_SYMBOLS_AND_PUNCTUATION[CJK_SYMBOLS_AND_PUNCTUATION] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#CJK_SYMBOLS_AND_PUNCTUATION[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-CJK_UNIFIED_IDEOGRAPHS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#CJK_UNIFIED_IDEOGRAPHS[CJK_UNIFIED_IDEOGRAPHS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#CJK_UNIFIED_IDEOGRAPHS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-CJK_UNIFIED_IDEOGRAPHS_EXTENSION_A]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#CJK_UNIFIED_IDEOGRAPHS_EXTENSION_A[CJK_UNIFIED_IDEOGRAPHS_EXTENSION_A] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#CJK_UNIFIED_IDEOGRAPHS_EXTENSION_A[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-CJK_UNIFIED_IDEOGRAPHS_EXTENSION_B]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#CJK_UNIFIED_IDEOGRAPHS_EXTENSION_B[CJK_UNIFIED_IDEOGRAPHS_EXTENSION_B] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#CJK_UNIFIED_IDEOGRAPHS_EXTENSION_B[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-CJK_UNIFIED_IDEOGRAPHS_EXTENSION_C]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#CJK_UNIFIED_IDEOGRAPHS_EXTENSION_C[CJK_UNIFIED_IDEOGRAPHS_EXTENSION_C] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#CJK_UNIFIED_IDEOGRAPHS_EXTENSION_C[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-CJK_UNIFIED_IDEOGRAPHS_EXTENSION_D]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#CJK_UNIFIED_IDEOGRAPHS_EXTENSION_D[CJK_UNIFIED_IDEOGRAPHS_EXTENSION_D] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#CJK_UNIFIED_IDEOGRAPHS_EXTENSION_D[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-COMBINING_DIACRITICAL_MARKS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#COMBINING_DIACRITICAL_MARKS[COMBINING_DIACRITICAL_MARKS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#COMBINING_DIACRITICAL_MARKS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-COMBINING_DIACRITICAL_MARKS_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#COMBINING_DIACRITICAL_MARKS_SUPPLEMENT[COMBINING_DIACRITICAL_MARKS_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#COMBINING_DIACRITICAL_MARKS_SUPPLEMENT[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-COMBINING_HALF_MARKS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#COMBINING_HALF_MARKS[COMBINING_HALF_MARKS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#COMBINING_HALF_MARKS[java 9]) -** 
[[painless-api-reference-Character-UnicodeBlock-COMBINING_MARKS_FOR_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#COMBINING_MARKS_FOR_SYMBOLS[COMBINING_MARKS_FOR_SYMBOLS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#COMBINING_MARKS_FOR_SYMBOLS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-COMMON_INDIC_NUMBER_FORMS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#COMMON_INDIC_NUMBER_FORMS[COMMON_INDIC_NUMBER_FORMS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#COMMON_INDIC_NUMBER_FORMS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-CONTROL_PICTURES]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#CONTROL_PICTURES[CONTROL_PICTURES] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#CONTROL_PICTURES[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-COPTIC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#COPTIC[COPTIC] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#COPTIC[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-COUNTING_ROD_NUMERALS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#COUNTING_ROD_NUMERALS[COUNTING_ROD_NUMERALS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#COUNTING_ROD_NUMERALS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-CUNEIFORM]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#CUNEIFORM[CUNEIFORM] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#CUNEIFORM[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-CUNEIFORM_NUMBERS_AND_PUNCTUATION]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#CUNEIFORM_NUMBERS_AND_PUNCTUATION[CUNEIFORM_NUMBERS_AND_PUNCTUATION] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#CUNEIFORM_NUMBERS_AND_PUNCTUATION[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-CURRENCY_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#CURRENCY_SYMBOLS[CURRENCY_SYMBOLS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#CURRENCY_SYMBOLS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-CYPRIOT_SYLLABARY]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#CYPRIOT_SYLLABARY[CYPRIOT_SYLLABARY] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#CYPRIOT_SYLLABARY[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-CYRILLIC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#CYRILLIC[CYRILLIC] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#CYRILLIC[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-CYRILLIC_EXTENDED_A]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#CYRILLIC_EXTENDED_A[CYRILLIC_EXTENDED_A] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#CYRILLIC_EXTENDED_A[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-CYRILLIC_EXTENDED_B]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#CYRILLIC_EXTENDED_B[CYRILLIC_EXTENDED_B] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#CYRILLIC_EXTENDED_B[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-CYRILLIC_SUPPLEMENTARY]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#CYRILLIC_SUPPLEMENTARY[CYRILLIC_SUPPLEMENTARY] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#CYRILLIC_SUPPLEMENTARY[java 9]) -** 
[[painless-api-reference-Character-UnicodeBlock-DESERET]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#DESERET[DESERET] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#DESERET[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-DEVANAGARI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#DEVANAGARI[DEVANAGARI] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#DEVANAGARI[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-DEVANAGARI_EXTENDED]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#DEVANAGARI_EXTENDED[DEVANAGARI_EXTENDED] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#DEVANAGARI_EXTENDED[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-DINGBATS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#DINGBATS[DINGBATS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#DINGBATS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-DOMINO_TILES]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#DOMINO_TILES[DOMINO_TILES] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#DOMINO_TILES[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-EGYPTIAN_HIEROGLYPHS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#EGYPTIAN_HIEROGLYPHS[EGYPTIAN_HIEROGLYPHS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#EGYPTIAN_HIEROGLYPHS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-EMOTICONS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#EMOTICONS[EMOTICONS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#EMOTICONS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-ENCLOSED_ALPHANUMERICS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#ENCLOSED_ALPHANUMERICS[ENCLOSED_ALPHANUMERICS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#ENCLOSED_ALPHANUMERICS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-ENCLOSED_ALPHANUMERIC_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#ENCLOSED_ALPHANUMERIC_SUPPLEMENT[ENCLOSED_ALPHANUMERIC_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#ENCLOSED_ALPHANUMERIC_SUPPLEMENT[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-ENCLOSED_CJK_LETTERS_AND_MONTHS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#ENCLOSED_CJK_LETTERS_AND_MONTHS[ENCLOSED_CJK_LETTERS_AND_MONTHS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#ENCLOSED_CJK_LETTERS_AND_MONTHS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-ENCLOSED_IDEOGRAPHIC_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#ENCLOSED_IDEOGRAPHIC_SUPPLEMENT[ENCLOSED_IDEOGRAPHIC_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#ENCLOSED_IDEOGRAPHIC_SUPPLEMENT[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-ETHIOPIC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#ETHIOPIC[ETHIOPIC] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#ETHIOPIC[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-ETHIOPIC_EXTENDED]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#ETHIOPIC_EXTENDED[ETHIOPIC_EXTENDED] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#ETHIOPIC_EXTENDED[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-ETHIOPIC_EXTENDED_A]]static <> 
link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#ETHIOPIC_EXTENDED_A[ETHIOPIC_EXTENDED_A] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#ETHIOPIC_EXTENDED_A[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-ETHIOPIC_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#ETHIOPIC_SUPPLEMENT[ETHIOPIC_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#ETHIOPIC_SUPPLEMENT[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-GENERAL_PUNCTUATION]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#GENERAL_PUNCTUATION[GENERAL_PUNCTUATION] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#GENERAL_PUNCTUATION[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-GEOMETRIC_SHAPES]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#GEOMETRIC_SHAPES[GEOMETRIC_SHAPES] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#GEOMETRIC_SHAPES[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-GEORGIAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#GEORGIAN[GEORGIAN] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#GEORGIAN[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-GEORGIAN_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#GEORGIAN_SUPPLEMENT[GEORGIAN_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#GEORGIAN_SUPPLEMENT[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-GLAGOLITIC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#GLAGOLITIC[GLAGOLITIC] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#GLAGOLITIC[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-GOTHIC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#GOTHIC[GOTHIC] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#GOTHIC[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-GREEK]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#GREEK[GREEK] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#GREEK[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-GREEK_EXTENDED]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#GREEK_EXTENDED[GREEK_EXTENDED] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#GREEK_EXTENDED[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-GUJARATI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#GUJARATI[GUJARATI] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#GUJARATI[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-GURMUKHI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#GURMUKHI[GURMUKHI] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#GURMUKHI[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-HALFWIDTH_AND_FULLWIDTH_FORMS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#HALFWIDTH_AND_FULLWIDTH_FORMS[HALFWIDTH_AND_FULLWIDTH_FORMS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#HALFWIDTH_AND_FULLWIDTH_FORMS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-HANGUL_COMPATIBILITY_JAMO]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#HANGUL_COMPATIBILITY_JAMO[HANGUL_COMPATIBILITY_JAMO] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#HANGUL_COMPATIBILITY_JAMO[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-HANGUL_JAMO]]static 
<> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#HANGUL_JAMO[HANGUL_JAMO] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#HANGUL_JAMO[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-HANGUL_JAMO_EXTENDED_A]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#HANGUL_JAMO_EXTENDED_A[HANGUL_JAMO_EXTENDED_A] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#HANGUL_JAMO_EXTENDED_A[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-HANGUL_JAMO_EXTENDED_B]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#HANGUL_JAMO_EXTENDED_B[HANGUL_JAMO_EXTENDED_B] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#HANGUL_JAMO_EXTENDED_B[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-HANGUL_SYLLABLES]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#HANGUL_SYLLABLES[HANGUL_SYLLABLES] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#HANGUL_SYLLABLES[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-HANUNOO]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#HANUNOO[HANUNOO] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#HANUNOO[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-HEBREW]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#HEBREW[HEBREW] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#HEBREW[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-HIGH_PRIVATE_USE_SURROGATES]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#HIGH_PRIVATE_USE_SURROGATES[HIGH_PRIVATE_USE_SURROGATES] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#HIGH_PRIVATE_USE_SURROGATES[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-HIGH_SURROGATES]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#HIGH_SURROGATES[HIGH_SURROGATES] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#HIGH_SURROGATES[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-HIRAGANA]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#HIRAGANA[HIRAGANA] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#HIRAGANA[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-IDEOGRAPHIC_DESCRIPTION_CHARACTERS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#IDEOGRAPHIC_DESCRIPTION_CHARACTERS[IDEOGRAPHIC_DESCRIPTION_CHARACTERS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#IDEOGRAPHIC_DESCRIPTION_CHARACTERS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-IMPERIAL_ARAMAIC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#IMPERIAL_ARAMAIC[IMPERIAL_ARAMAIC] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#IMPERIAL_ARAMAIC[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-INSCRIPTIONAL_PAHLAVI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#INSCRIPTIONAL_PAHLAVI[INSCRIPTIONAL_PAHLAVI] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#INSCRIPTIONAL_PAHLAVI[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-INSCRIPTIONAL_PARTHIAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#INSCRIPTIONAL_PARTHIAN[INSCRIPTIONAL_PARTHIAN] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#INSCRIPTIONAL_PARTHIAN[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-IPA_EXTENSIONS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#IPA_EXTENSIONS[IPA_EXTENSIONS] 
(link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#IPA_EXTENSIONS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-JAVANESE]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#JAVANESE[JAVANESE] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#JAVANESE[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-KAITHI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#KAITHI[KAITHI] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#KAITHI[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-KANA_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#KANA_SUPPLEMENT[KANA_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#KANA_SUPPLEMENT[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-KANBUN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#KANBUN[KANBUN] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#KANBUN[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-KANGXI_RADICALS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#KANGXI_RADICALS[KANGXI_RADICALS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#KANGXI_RADICALS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-KANNADA]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#KANNADA[KANNADA] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#KANNADA[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-KATAKANA]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#KATAKANA[KATAKANA] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#KATAKANA[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-KATAKANA_PHONETIC_EXTENSIONS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#KATAKANA_PHONETIC_EXTENSIONS[KATAKANA_PHONETIC_EXTENSIONS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#KATAKANA_PHONETIC_EXTENSIONS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-KAYAH_LI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#KAYAH_LI[KAYAH_LI] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#KAYAH_LI[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-KHAROSHTHI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#KHAROSHTHI[KHAROSHTHI] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#KHAROSHTHI[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-KHMER]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#KHMER[KHMER] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#KHMER[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-KHMER_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#KHMER_SYMBOLS[KHMER_SYMBOLS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#KHMER_SYMBOLS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-LAO]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#LAO[LAO] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#LAO[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-LATIN_1_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#LATIN_1_SUPPLEMENT[LATIN_1_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#LATIN_1_SUPPLEMENT[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-LATIN_EXTENDED_A]]static <> 
link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#LATIN_EXTENDED_A[LATIN_EXTENDED_A] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#LATIN_EXTENDED_A[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-LATIN_EXTENDED_ADDITIONAL]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#LATIN_EXTENDED_ADDITIONAL[LATIN_EXTENDED_ADDITIONAL] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#LATIN_EXTENDED_ADDITIONAL[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-LATIN_EXTENDED_B]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#LATIN_EXTENDED_B[LATIN_EXTENDED_B] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#LATIN_EXTENDED_B[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-LATIN_EXTENDED_C]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#LATIN_EXTENDED_C[LATIN_EXTENDED_C] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#LATIN_EXTENDED_C[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-LATIN_EXTENDED_D]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#LATIN_EXTENDED_D[LATIN_EXTENDED_D] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#LATIN_EXTENDED_D[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-LEPCHA]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#LEPCHA[LEPCHA] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#LEPCHA[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-LETTERLIKE_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#LETTERLIKE_SYMBOLS[LETTERLIKE_SYMBOLS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#LETTERLIKE_SYMBOLS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-LIMBU]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#LIMBU[LIMBU] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#LIMBU[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-LINEAR_B_IDEOGRAMS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#LINEAR_B_IDEOGRAMS[LINEAR_B_IDEOGRAMS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#LINEAR_B_IDEOGRAMS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-LINEAR_B_SYLLABARY]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#LINEAR_B_SYLLABARY[LINEAR_B_SYLLABARY] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#LINEAR_B_SYLLABARY[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-LISU]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#LISU[LISU] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#LISU[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-LOW_SURROGATES]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#LOW_SURROGATES[LOW_SURROGATES] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#LOW_SURROGATES[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-LYCIAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#LYCIAN[LYCIAN] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#LYCIAN[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-LYDIAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#LYDIAN[LYDIAN] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#LYDIAN[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-MAHJONG_TILES]]static <> 
link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#MAHJONG_TILES[MAHJONG_TILES] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#MAHJONG_TILES[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-MALAYALAM]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#MALAYALAM[MALAYALAM] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#MALAYALAM[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-MANDAIC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#MANDAIC[MANDAIC] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#MANDAIC[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-MATHEMATICAL_ALPHANUMERIC_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#MATHEMATICAL_ALPHANUMERIC_SYMBOLS[MATHEMATICAL_ALPHANUMERIC_SYMBOLS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#MATHEMATICAL_ALPHANUMERIC_SYMBOLS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-MATHEMATICAL_OPERATORS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#MATHEMATICAL_OPERATORS[MATHEMATICAL_OPERATORS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#MATHEMATICAL_OPERATORS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-MEETEI_MAYEK]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#MEETEI_MAYEK[MEETEI_MAYEK] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#MEETEI_MAYEK[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-MEETEI_MAYEK_EXTENSIONS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#MEETEI_MAYEK_EXTENSIONS[MEETEI_MAYEK_EXTENSIONS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#MEETEI_MAYEK_EXTENSIONS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-MEROITIC_CURSIVE]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#MEROITIC_CURSIVE[MEROITIC_CURSIVE] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#MEROITIC_CURSIVE[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-MEROITIC_HIEROGLYPHS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#MEROITIC_HIEROGLYPHS[MEROITIC_HIEROGLYPHS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#MEROITIC_HIEROGLYPHS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-MIAO]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#MIAO[MIAO] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#MIAO[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-MISCELLANEOUS_MATHEMATICAL_SYMBOLS_A]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#MISCELLANEOUS_MATHEMATICAL_SYMBOLS_A[MISCELLANEOUS_MATHEMATICAL_SYMBOLS_A] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#MISCELLANEOUS_MATHEMATICAL_SYMBOLS_A[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-MISCELLANEOUS_MATHEMATICAL_SYMBOLS_B]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#MISCELLANEOUS_MATHEMATICAL_SYMBOLS_B[MISCELLANEOUS_MATHEMATICAL_SYMBOLS_B] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#MISCELLANEOUS_MATHEMATICAL_SYMBOLS_B[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-MISCELLANEOUS_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#MISCELLANEOUS_SYMBOLS[MISCELLANEOUS_SYMBOLS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#MISCELLANEOUS_SYMBOLS[java 9]) -** 
[[painless-api-reference-Character-UnicodeBlock-MISCELLANEOUS_SYMBOLS_AND_ARROWS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#MISCELLANEOUS_SYMBOLS_AND_ARROWS[MISCELLANEOUS_SYMBOLS_AND_ARROWS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#MISCELLANEOUS_SYMBOLS_AND_ARROWS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-MISCELLANEOUS_SYMBOLS_AND_PICTOGRAPHS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#MISCELLANEOUS_SYMBOLS_AND_PICTOGRAPHS[MISCELLANEOUS_SYMBOLS_AND_PICTOGRAPHS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#MISCELLANEOUS_SYMBOLS_AND_PICTOGRAPHS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-MISCELLANEOUS_TECHNICAL]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#MISCELLANEOUS_TECHNICAL[MISCELLANEOUS_TECHNICAL] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#MISCELLANEOUS_TECHNICAL[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-MODIFIER_TONE_LETTERS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#MODIFIER_TONE_LETTERS[MODIFIER_TONE_LETTERS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#MODIFIER_TONE_LETTERS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-MONGOLIAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#MONGOLIAN[MONGOLIAN] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#MONGOLIAN[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-MUSICAL_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#MUSICAL_SYMBOLS[MUSICAL_SYMBOLS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#MUSICAL_SYMBOLS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-MYANMAR]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#MYANMAR[MYANMAR] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#MYANMAR[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-MYANMAR_EXTENDED_A]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#MYANMAR_EXTENDED_A[MYANMAR_EXTENDED_A] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#MYANMAR_EXTENDED_A[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-NEW_TAI_LUE]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#NEW_TAI_LUE[NEW_TAI_LUE] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#NEW_TAI_LUE[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-NKO]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#NKO[NKO] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#NKO[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-NUMBER_FORMS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#NUMBER_FORMS[NUMBER_FORMS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#NUMBER_FORMS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-OGHAM]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#OGHAM[OGHAM] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#OGHAM[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-OLD_ITALIC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#OLD_ITALIC[OLD_ITALIC] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#OLD_ITALIC[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-OLD_PERSIAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#OLD_PERSIAN[OLD_PERSIAN] 
(link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#OLD_PERSIAN[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-OLD_SOUTH_ARABIAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#OLD_SOUTH_ARABIAN[OLD_SOUTH_ARABIAN] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#OLD_SOUTH_ARABIAN[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-OLD_TURKIC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#OLD_TURKIC[OLD_TURKIC] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#OLD_TURKIC[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-OL_CHIKI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#OL_CHIKI[OL_CHIKI] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#OL_CHIKI[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-OPTICAL_CHARACTER_RECOGNITION]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#OPTICAL_CHARACTER_RECOGNITION[OPTICAL_CHARACTER_RECOGNITION] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#OPTICAL_CHARACTER_RECOGNITION[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-ORIYA]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#ORIYA[ORIYA] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#ORIYA[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-OSMANYA]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#OSMANYA[OSMANYA] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#OSMANYA[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-PHAGS_PA]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#PHAGS_PA[PHAGS_PA] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#PHAGS_PA[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-PHAISTOS_DISC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#PHAISTOS_DISC[PHAISTOS_DISC] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#PHAISTOS_DISC[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-PHOENICIAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#PHOENICIAN[PHOENICIAN] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#PHOENICIAN[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-PHONETIC_EXTENSIONS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#PHONETIC_EXTENSIONS[PHONETIC_EXTENSIONS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#PHONETIC_EXTENSIONS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-PHONETIC_EXTENSIONS_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#PHONETIC_EXTENSIONS_SUPPLEMENT[PHONETIC_EXTENSIONS_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#PHONETIC_EXTENSIONS_SUPPLEMENT[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-PLAYING_CARDS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#PLAYING_CARDS[PLAYING_CARDS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#PLAYING_CARDS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-PRIVATE_USE_AREA]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#PRIVATE_USE_AREA[PRIVATE_USE_AREA] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#PRIVATE_USE_AREA[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-REJANG]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#REJANG[REJANG] 
(link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#REJANG[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-RUMI_NUMERAL_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#RUMI_NUMERAL_SYMBOLS[RUMI_NUMERAL_SYMBOLS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#RUMI_NUMERAL_SYMBOLS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-RUNIC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#RUNIC[RUNIC] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#RUNIC[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-SAMARITAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#SAMARITAN[SAMARITAN] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#SAMARITAN[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-SAURASHTRA]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#SAURASHTRA[SAURASHTRA] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#SAURASHTRA[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-SHARADA]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#SHARADA[SHARADA] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#SHARADA[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-SHAVIAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#SHAVIAN[SHAVIAN] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#SHAVIAN[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-SINHALA]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#SINHALA[SINHALA] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#SINHALA[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-SMALL_FORM_VARIANTS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#SMALL_FORM_VARIANTS[SMALL_FORM_VARIANTS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#SMALL_FORM_VARIANTS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-SORA_SOMPENG]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#SORA_SOMPENG[SORA_SOMPENG] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#SORA_SOMPENG[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-SPACING_MODIFIER_LETTERS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#SPACING_MODIFIER_LETTERS[SPACING_MODIFIER_LETTERS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#SPACING_MODIFIER_LETTERS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-SPECIALS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#SPECIALS[SPECIALS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#SPECIALS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-SUNDANESE]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#SUNDANESE[SUNDANESE] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#SUNDANESE[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-SUNDANESE_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#SUNDANESE_SUPPLEMENT[SUNDANESE_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#SUNDANESE_SUPPLEMENT[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-SUPERSCRIPTS_AND_SUBSCRIPTS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#SUPERSCRIPTS_AND_SUBSCRIPTS[SUPERSCRIPTS_AND_SUBSCRIPTS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#SUPERSCRIPTS_AND_SUBSCRIPTS[java 
9]) -** [[painless-api-reference-Character-UnicodeBlock-SUPPLEMENTAL_ARROWS_A]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#SUPPLEMENTAL_ARROWS_A[SUPPLEMENTAL_ARROWS_A] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#SUPPLEMENTAL_ARROWS_A[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-SUPPLEMENTAL_ARROWS_B]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#SUPPLEMENTAL_ARROWS_B[SUPPLEMENTAL_ARROWS_B] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#SUPPLEMENTAL_ARROWS_B[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-SUPPLEMENTAL_MATHEMATICAL_OPERATORS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#SUPPLEMENTAL_MATHEMATICAL_OPERATORS[SUPPLEMENTAL_MATHEMATICAL_OPERATORS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#SUPPLEMENTAL_MATHEMATICAL_OPERATORS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-SUPPLEMENTAL_PUNCTUATION]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#SUPPLEMENTAL_PUNCTUATION[SUPPLEMENTAL_PUNCTUATION] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#SUPPLEMENTAL_PUNCTUATION[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-SUPPLEMENTARY_PRIVATE_USE_AREA_A]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#SUPPLEMENTARY_PRIVATE_USE_AREA_A[SUPPLEMENTARY_PRIVATE_USE_AREA_A] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#SUPPLEMENTARY_PRIVATE_USE_AREA_A[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-SUPPLEMENTARY_PRIVATE_USE_AREA_B]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#SUPPLEMENTARY_PRIVATE_USE_AREA_B[SUPPLEMENTARY_PRIVATE_USE_AREA_B] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#SUPPLEMENTARY_PRIVATE_USE_AREA_B[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-SYLOTI_NAGRI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#SYLOTI_NAGRI[SYLOTI_NAGRI] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#SYLOTI_NAGRI[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-SYRIAC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#SYRIAC[SYRIAC] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#SYRIAC[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-TAGALOG]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#TAGALOG[TAGALOG] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#TAGALOG[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-TAGBANWA]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#TAGBANWA[TAGBANWA] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#TAGBANWA[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-TAGS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#TAGS[TAGS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#TAGS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-TAI_LE]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#TAI_LE[TAI_LE] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#TAI_LE[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-TAI_THAM]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#TAI_THAM[TAI_THAM] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#TAI_THAM[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-TAI_VIET]]static <> 
link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#TAI_VIET[TAI_VIET] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#TAI_VIET[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-TAI_XUAN_JING_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#TAI_XUAN_JING_SYMBOLS[TAI_XUAN_JING_SYMBOLS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#TAI_XUAN_JING_SYMBOLS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-TAKRI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#TAKRI[TAKRI] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#TAKRI[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-TAMIL]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#TAMIL[TAMIL] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#TAMIL[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-TELUGU]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#TELUGU[TELUGU] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#TELUGU[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-THAANA]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#THAANA[THAANA] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#THAANA[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-THAI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#THAI[THAI] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#THAI[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-TIBETAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#TIBETAN[TIBETAN] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#TIBETAN[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-TIFINAGH]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#TIFINAGH[TIFINAGH] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#TIFINAGH[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-TRANSPORT_AND_MAP_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#TRANSPORT_AND_MAP_SYMBOLS[TRANSPORT_AND_MAP_SYMBOLS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#TRANSPORT_AND_MAP_SYMBOLS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-UGARITIC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#UGARITIC[UGARITIC] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#UGARITIC[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-UNIFIED_CANADIAN_ABORIGINAL_SYLLABICS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#UNIFIED_CANADIAN_ABORIGINAL_SYLLABICS[UNIFIED_CANADIAN_ABORIGINAL_SYLLABICS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#UNIFIED_CANADIAN_ABORIGINAL_SYLLABICS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-UNIFIED_CANADIAN_ABORIGINAL_SYLLABICS_EXTENDED]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#UNIFIED_CANADIAN_ABORIGINAL_SYLLABICS_EXTENDED[UNIFIED_CANADIAN_ABORIGINAL_SYLLABICS_EXTENDED] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#UNIFIED_CANADIAN_ABORIGINAL_SYLLABICS_EXTENDED[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-VAI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#VAI[VAI] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#VAI[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-VARIATION_SELECTORS]]static <> 
link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#VARIATION_SELECTORS[VARIATION_SELECTORS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#VARIATION_SELECTORS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-VARIATION_SELECTORS_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#VARIATION_SELECTORS_SUPPLEMENT[VARIATION_SELECTORS_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#VARIATION_SELECTORS_SUPPLEMENT[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-VEDIC_EXTENSIONS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#VEDIC_EXTENSIONS[VEDIC_EXTENSIONS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#VEDIC_EXTENSIONS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-VERTICAL_FORMS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#VERTICAL_FORMS[VERTICAL_FORMS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#VERTICAL_FORMS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-YIJING_HEXAGRAM_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#YIJING_HEXAGRAM_SYMBOLS[YIJING_HEXAGRAM_SYMBOLS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#YIJING_HEXAGRAM_SYMBOLS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-YI_RADICALS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#YI_RADICALS[YI_RADICALS] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#YI_RADICALS[java 9]) -** [[painless-api-reference-Character-UnicodeBlock-YI_SYLLABLES]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#YI_SYLLABLES[YI_SYLLABLES] (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#YI_SYLLABLES[java 9]) -* ++[[painless-api-reference-Character-UnicodeBlock-forName-1]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#forName%2Djava.lang.String%2D[forName](<>)++ (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#forName%2Djava.lang.String%2D[java 9]) -* ++[[painless-api-reference-Character-UnicodeBlock-of-1]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeBlock.html#of%2Dint%2D[of](int)++ (link:{java9-javadoc}/java/lang/Character$UnicodeBlock.html#of%2Dint%2D[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-AEGEAN_NUMBERS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#AEGEAN_NUMBERS[AEGEAN_NUMBERS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#AEGEAN_NUMBERS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-ALCHEMICAL_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#ALCHEMICAL_SYMBOLS[ALCHEMICAL_SYMBOLS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#ALCHEMICAL_SYMBOLS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-ALPHABETIC_PRESENTATION_FORMS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#ALPHABETIC_PRESENTATION_FORMS[ALPHABETIC_PRESENTATION_FORMS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#ALPHABETIC_PRESENTATION_FORMS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-ANCIENT_GREEK_MUSICAL_NOTATION]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#ANCIENT_GREEK_MUSICAL_NOTATION[ANCIENT_GREEK_MUSICAL_NOTATION] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#ANCIENT_GREEK_MUSICAL_NOTATION[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-ANCIENT_GREEK_NUMBERS]]static <> 
link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#ANCIENT_GREEK_NUMBERS[ANCIENT_GREEK_NUMBERS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#ANCIENT_GREEK_NUMBERS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-ANCIENT_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#ANCIENT_SYMBOLS[ANCIENT_SYMBOLS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#ANCIENT_SYMBOLS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-ARABIC]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#ARABIC[ARABIC] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#ARABIC[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-ARABIC_EXTENDED_A]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#ARABIC_EXTENDED_A[ARABIC_EXTENDED_A] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#ARABIC_EXTENDED_A[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-ARABIC_MATHEMATICAL_ALPHABETIC_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#ARABIC_MATHEMATICAL_ALPHABETIC_SYMBOLS[ARABIC_MATHEMATICAL_ALPHABETIC_SYMBOLS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#ARABIC_MATHEMATICAL_ALPHABETIC_SYMBOLS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-ARABIC_PRESENTATION_FORMS_A]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#ARABIC_PRESENTATION_FORMS_A[ARABIC_PRESENTATION_FORMS_A] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#ARABIC_PRESENTATION_FORMS_A[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-ARABIC_PRESENTATION_FORMS_B]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#ARABIC_PRESENTATION_FORMS_B[ARABIC_PRESENTATION_FORMS_B] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#ARABIC_PRESENTATION_FORMS_B[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-ARABIC_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#ARABIC_SUPPLEMENT[ARABIC_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#ARABIC_SUPPLEMENT[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-ARMENIAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#ARMENIAN[ARMENIAN] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#ARMENIAN[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-ARROWS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#ARROWS[ARROWS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#ARROWS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-AVESTAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#AVESTAN[AVESTAN] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#AVESTAN[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-BALINESE]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#BALINESE[BALINESE] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#BALINESE[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-BAMUM]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#BAMUM[BAMUM] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#BAMUM[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-BAMUM_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#BAMUM_SUPPLEMENT[BAMUM_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#BAMUM_SUPPLEMENT[java 9]) +** 
[[painless-api-reference-Character-UnicodeBlock-BASIC_LATIN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#BASIC_LATIN[BASIC_LATIN] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#BASIC_LATIN[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-BATAK]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#BATAK[BATAK] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#BATAK[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-BENGALI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#BENGALI[BENGALI] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#BENGALI[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-BLOCK_ELEMENTS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#BLOCK_ELEMENTS[BLOCK_ELEMENTS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#BLOCK_ELEMENTS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-BOPOMOFO]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#BOPOMOFO[BOPOMOFO] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#BOPOMOFO[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-BOPOMOFO_EXTENDED]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#BOPOMOFO_EXTENDED[BOPOMOFO_EXTENDED] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#BOPOMOFO_EXTENDED[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-BOX_DRAWING]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#BOX_DRAWING[BOX_DRAWING] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#BOX_DRAWING[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-BRAHMI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#BRAHMI[BRAHMI] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#BRAHMI[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-BRAILLE_PATTERNS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#BRAILLE_PATTERNS[BRAILLE_PATTERNS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#BRAILLE_PATTERNS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-BUGINESE]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#BUGINESE[BUGINESE] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#BUGINESE[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-BUHID]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#BUHID[BUHID] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#BUHID[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-BYZANTINE_MUSICAL_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#BYZANTINE_MUSICAL_SYMBOLS[BYZANTINE_MUSICAL_SYMBOLS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#BYZANTINE_MUSICAL_SYMBOLS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-CARIAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#CARIAN[CARIAN] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#CARIAN[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-CHAKMA]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#CHAKMA[CHAKMA] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#CHAKMA[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-CHAM]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#CHAM[CHAM] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#CHAM[java 9]) +** 
[[painless-api-reference-Character-UnicodeBlock-CHEROKEE]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#CHEROKEE[CHEROKEE] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#CHEROKEE[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-CJK_COMPATIBILITY]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#CJK_COMPATIBILITY[CJK_COMPATIBILITY] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#CJK_COMPATIBILITY[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-CJK_COMPATIBILITY_FORMS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#CJK_COMPATIBILITY_FORMS[CJK_COMPATIBILITY_FORMS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#CJK_COMPATIBILITY_FORMS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-CJK_COMPATIBILITY_IDEOGRAPHS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#CJK_COMPATIBILITY_IDEOGRAPHS[CJK_COMPATIBILITY_IDEOGRAPHS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#CJK_COMPATIBILITY_IDEOGRAPHS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-CJK_COMPATIBILITY_IDEOGRAPHS_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#CJK_COMPATIBILITY_IDEOGRAPHS_SUPPLEMENT[CJK_COMPATIBILITY_IDEOGRAPHS_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#CJK_COMPATIBILITY_IDEOGRAPHS_SUPPLEMENT[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-CJK_RADICALS_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#CJK_RADICALS_SUPPLEMENT[CJK_RADICALS_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#CJK_RADICALS_SUPPLEMENT[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-CJK_STROKES]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#CJK_STROKES[CJK_STROKES] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#CJK_STROKES[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-CJK_SYMBOLS_AND_PUNCTUATION]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#CJK_SYMBOLS_AND_PUNCTUATION[CJK_SYMBOLS_AND_PUNCTUATION] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#CJK_SYMBOLS_AND_PUNCTUATION[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-CJK_UNIFIED_IDEOGRAPHS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#CJK_UNIFIED_IDEOGRAPHS[CJK_UNIFIED_IDEOGRAPHS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#CJK_UNIFIED_IDEOGRAPHS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-CJK_UNIFIED_IDEOGRAPHS_EXTENSION_A]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#CJK_UNIFIED_IDEOGRAPHS_EXTENSION_A[CJK_UNIFIED_IDEOGRAPHS_EXTENSION_A] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#CJK_UNIFIED_IDEOGRAPHS_EXTENSION_A[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-CJK_UNIFIED_IDEOGRAPHS_EXTENSION_B]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#CJK_UNIFIED_IDEOGRAPHS_EXTENSION_B[CJK_UNIFIED_IDEOGRAPHS_EXTENSION_B] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#CJK_UNIFIED_IDEOGRAPHS_EXTENSION_B[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-CJK_UNIFIED_IDEOGRAPHS_EXTENSION_C]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#CJK_UNIFIED_IDEOGRAPHS_EXTENSION_C[CJK_UNIFIED_IDEOGRAPHS_EXTENSION_C] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#CJK_UNIFIED_IDEOGRAPHS_EXTENSION_C[java 9]) +** 
[[painless-api-reference-Character-UnicodeBlock-CJK_UNIFIED_IDEOGRAPHS_EXTENSION_D]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#CJK_UNIFIED_IDEOGRAPHS_EXTENSION_D[CJK_UNIFIED_IDEOGRAPHS_EXTENSION_D] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#CJK_UNIFIED_IDEOGRAPHS_EXTENSION_D[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-COMBINING_DIACRITICAL_MARKS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#COMBINING_DIACRITICAL_MARKS[COMBINING_DIACRITICAL_MARKS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#COMBINING_DIACRITICAL_MARKS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-COMBINING_DIACRITICAL_MARKS_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#COMBINING_DIACRITICAL_MARKS_SUPPLEMENT[COMBINING_DIACRITICAL_MARKS_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#COMBINING_DIACRITICAL_MARKS_SUPPLEMENT[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-COMBINING_HALF_MARKS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#COMBINING_HALF_MARKS[COMBINING_HALF_MARKS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#COMBINING_HALF_MARKS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-COMBINING_MARKS_FOR_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#COMBINING_MARKS_FOR_SYMBOLS[COMBINING_MARKS_FOR_SYMBOLS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#COMBINING_MARKS_FOR_SYMBOLS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-COMMON_INDIC_NUMBER_FORMS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#COMMON_INDIC_NUMBER_FORMS[COMMON_INDIC_NUMBER_FORMS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#COMMON_INDIC_NUMBER_FORMS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-CONTROL_PICTURES]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#CONTROL_PICTURES[CONTROL_PICTURES] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#CONTROL_PICTURES[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-COPTIC]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#COPTIC[COPTIC] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#COPTIC[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-COUNTING_ROD_NUMERALS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#COUNTING_ROD_NUMERALS[COUNTING_ROD_NUMERALS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#COUNTING_ROD_NUMERALS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-CUNEIFORM]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#CUNEIFORM[CUNEIFORM] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#CUNEIFORM[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-CUNEIFORM_NUMBERS_AND_PUNCTUATION]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#CUNEIFORM_NUMBERS_AND_PUNCTUATION[CUNEIFORM_NUMBERS_AND_PUNCTUATION] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#CUNEIFORM_NUMBERS_AND_PUNCTUATION[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-CURRENCY_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#CURRENCY_SYMBOLS[CURRENCY_SYMBOLS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#CURRENCY_SYMBOLS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-CYPRIOT_SYLLABARY]]static <> 
link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#CYPRIOT_SYLLABARY[CYPRIOT_SYLLABARY] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#CYPRIOT_SYLLABARY[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-CYRILLIC]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#CYRILLIC[CYRILLIC] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#CYRILLIC[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-CYRILLIC_EXTENDED_A]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#CYRILLIC_EXTENDED_A[CYRILLIC_EXTENDED_A] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#CYRILLIC_EXTENDED_A[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-CYRILLIC_EXTENDED_B]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#CYRILLIC_EXTENDED_B[CYRILLIC_EXTENDED_B] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#CYRILLIC_EXTENDED_B[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-CYRILLIC_SUPPLEMENTARY]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#CYRILLIC_SUPPLEMENTARY[CYRILLIC_SUPPLEMENTARY] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#CYRILLIC_SUPPLEMENTARY[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-DESERET]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#DESERET[DESERET] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#DESERET[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-DEVANAGARI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#DEVANAGARI[DEVANAGARI] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#DEVANAGARI[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-DEVANAGARI_EXTENDED]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#DEVANAGARI_EXTENDED[DEVANAGARI_EXTENDED] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#DEVANAGARI_EXTENDED[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-DINGBATS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#DINGBATS[DINGBATS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#DINGBATS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-DOMINO_TILES]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#DOMINO_TILES[DOMINO_TILES] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#DOMINO_TILES[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-EGYPTIAN_HIEROGLYPHS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#EGYPTIAN_HIEROGLYPHS[EGYPTIAN_HIEROGLYPHS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#EGYPTIAN_HIEROGLYPHS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-EMOTICONS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#EMOTICONS[EMOTICONS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#EMOTICONS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-ENCLOSED_ALPHANUMERICS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#ENCLOSED_ALPHANUMERICS[ENCLOSED_ALPHANUMERICS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#ENCLOSED_ALPHANUMERICS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-ENCLOSED_ALPHANUMERIC_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#ENCLOSED_ALPHANUMERIC_SUPPLEMENT[ENCLOSED_ALPHANUMERIC_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#ENCLOSED_ALPHANUMERIC_SUPPLEMENT[java 
9]) +** [[painless-api-reference-Character-UnicodeBlock-ENCLOSED_CJK_LETTERS_AND_MONTHS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#ENCLOSED_CJK_LETTERS_AND_MONTHS[ENCLOSED_CJK_LETTERS_AND_MONTHS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#ENCLOSED_CJK_LETTERS_AND_MONTHS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-ENCLOSED_IDEOGRAPHIC_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#ENCLOSED_IDEOGRAPHIC_SUPPLEMENT[ENCLOSED_IDEOGRAPHIC_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#ENCLOSED_IDEOGRAPHIC_SUPPLEMENT[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-ETHIOPIC]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#ETHIOPIC[ETHIOPIC] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#ETHIOPIC[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-ETHIOPIC_EXTENDED]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#ETHIOPIC_EXTENDED[ETHIOPIC_EXTENDED] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#ETHIOPIC_EXTENDED[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-ETHIOPIC_EXTENDED_A]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#ETHIOPIC_EXTENDED_A[ETHIOPIC_EXTENDED_A] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#ETHIOPIC_EXTENDED_A[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-ETHIOPIC_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#ETHIOPIC_SUPPLEMENT[ETHIOPIC_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#ETHIOPIC_SUPPLEMENT[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-GENERAL_PUNCTUATION]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#GENERAL_PUNCTUATION[GENERAL_PUNCTUATION] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#GENERAL_PUNCTUATION[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-GEOMETRIC_SHAPES]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#GEOMETRIC_SHAPES[GEOMETRIC_SHAPES] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#GEOMETRIC_SHAPES[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-GEORGIAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#GEORGIAN[GEORGIAN] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#GEORGIAN[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-GEORGIAN_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#GEORGIAN_SUPPLEMENT[GEORGIAN_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#GEORGIAN_SUPPLEMENT[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-GLAGOLITIC]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#GLAGOLITIC[GLAGOLITIC] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#GLAGOLITIC[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-GOTHIC]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#GOTHIC[GOTHIC] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#GOTHIC[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-GREEK]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#GREEK[GREEK] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#GREEK[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-GREEK_EXTENDED]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#GREEK_EXTENDED[GREEK_EXTENDED] 
(link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#GREEK_EXTENDED[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-GUJARATI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#GUJARATI[GUJARATI] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#GUJARATI[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-GURMUKHI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#GURMUKHI[GURMUKHI] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#GURMUKHI[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-HALFWIDTH_AND_FULLWIDTH_FORMS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#HALFWIDTH_AND_FULLWIDTH_FORMS[HALFWIDTH_AND_FULLWIDTH_FORMS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#HALFWIDTH_AND_FULLWIDTH_FORMS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-HANGUL_COMPATIBILITY_JAMO]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#HANGUL_COMPATIBILITY_JAMO[HANGUL_COMPATIBILITY_JAMO] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#HANGUL_COMPATIBILITY_JAMO[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-HANGUL_JAMO]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#HANGUL_JAMO[HANGUL_JAMO] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#HANGUL_JAMO[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-HANGUL_JAMO_EXTENDED_A]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#HANGUL_JAMO_EXTENDED_A[HANGUL_JAMO_EXTENDED_A] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#HANGUL_JAMO_EXTENDED_A[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-HANGUL_JAMO_EXTENDED_B]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#HANGUL_JAMO_EXTENDED_B[HANGUL_JAMO_EXTENDED_B] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#HANGUL_JAMO_EXTENDED_B[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-HANGUL_SYLLABLES]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#HANGUL_SYLLABLES[HANGUL_SYLLABLES] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#HANGUL_SYLLABLES[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-HANUNOO]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#HANUNOO[HANUNOO] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#HANUNOO[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-HEBREW]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#HEBREW[HEBREW] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#HEBREW[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-HIGH_PRIVATE_USE_SURROGATES]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#HIGH_PRIVATE_USE_SURROGATES[HIGH_PRIVATE_USE_SURROGATES] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#HIGH_PRIVATE_USE_SURROGATES[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-HIGH_SURROGATES]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#HIGH_SURROGATES[HIGH_SURROGATES] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#HIGH_SURROGATES[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-HIRAGANA]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#HIRAGANA[HIRAGANA] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#HIRAGANA[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-IDEOGRAPHIC_DESCRIPTION_CHARACTERS]]static <> 
link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#IDEOGRAPHIC_DESCRIPTION_CHARACTERS[IDEOGRAPHIC_DESCRIPTION_CHARACTERS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#IDEOGRAPHIC_DESCRIPTION_CHARACTERS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-IMPERIAL_ARAMAIC]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#IMPERIAL_ARAMAIC[IMPERIAL_ARAMAIC] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#IMPERIAL_ARAMAIC[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-INSCRIPTIONAL_PAHLAVI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#INSCRIPTIONAL_PAHLAVI[INSCRIPTIONAL_PAHLAVI] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#INSCRIPTIONAL_PAHLAVI[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-INSCRIPTIONAL_PARTHIAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#INSCRIPTIONAL_PARTHIAN[INSCRIPTIONAL_PARTHIAN] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#INSCRIPTIONAL_PARTHIAN[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-IPA_EXTENSIONS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#IPA_EXTENSIONS[IPA_EXTENSIONS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#IPA_EXTENSIONS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-JAVANESE]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#JAVANESE[JAVANESE] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#JAVANESE[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-KAITHI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#KAITHI[KAITHI] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#KAITHI[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-KANA_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#KANA_SUPPLEMENT[KANA_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#KANA_SUPPLEMENT[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-KANBUN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#KANBUN[KANBUN] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#KANBUN[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-KANGXI_RADICALS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#KANGXI_RADICALS[KANGXI_RADICALS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#KANGXI_RADICALS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-KANNADA]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#KANNADA[KANNADA] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#KANNADA[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-KATAKANA]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#KATAKANA[KATAKANA] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#KATAKANA[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-KATAKANA_PHONETIC_EXTENSIONS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#KATAKANA_PHONETIC_EXTENSIONS[KATAKANA_PHONETIC_EXTENSIONS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#KATAKANA_PHONETIC_EXTENSIONS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-KAYAH_LI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#KAYAH_LI[KAYAH_LI] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#KAYAH_LI[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-KHAROSHTHI]]static <> 
link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#KHAROSHTHI[KHAROSHTHI] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#KHAROSHTHI[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-KHMER]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#KHMER[KHMER] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#KHMER[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-KHMER_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#KHMER_SYMBOLS[KHMER_SYMBOLS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#KHMER_SYMBOLS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-LAO]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#LAO[LAO] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#LAO[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-LATIN_1_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#LATIN_1_SUPPLEMENT[LATIN_1_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#LATIN_1_SUPPLEMENT[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-LATIN_EXTENDED_A]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#LATIN_EXTENDED_A[LATIN_EXTENDED_A] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#LATIN_EXTENDED_A[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-LATIN_EXTENDED_ADDITIONAL]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#LATIN_EXTENDED_ADDITIONAL[LATIN_EXTENDED_ADDITIONAL] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#LATIN_EXTENDED_ADDITIONAL[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-LATIN_EXTENDED_B]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#LATIN_EXTENDED_B[LATIN_EXTENDED_B] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#LATIN_EXTENDED_B[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-LATIN_EXTENDED_C]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#LATIN_EXTENDED_C[LATIN_EXTENDED_C] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#LATIN_EXTENDED_C[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-LATIN_EXTENDED_D]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#LATIN_EXTENDED_D[LATIN_EXTENDED_D] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#LATIN_EXTENDED_D[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-LEPCHA]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#LEPCHA[LEPCHA] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#LEPCHA[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-LETTERLIKE_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#LETTERLIKE_SYMBOLS[LETTERLIKE_SYMBOLS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#LETTERLIKE_SYMBOLS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-LIMBU]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#LIMBU[LIMBU] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#LIMBU[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-LINEAR_B_IDEOGRAMS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#LINEAR_B_IDEOGRAMS[LINEAR_B_IDEOGRAMS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#LINEAR_B_IDEOGRAMS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-LINEAR_B_SYLLABARY]]static <> 
link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#LINEAR_B_SYLLABARY[LINEAR_B_SYLLABARY] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#LINEAR_B_SYLLABARY[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-LISU]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#LISU[LISU] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#LISU[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-LOW_SURROGATES]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#LOW_SURROGATES[LOW_SURROGATES] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#LOW_SURROGATES[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-LYCIAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#LYCIAN[LYCIAN] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#LYCIAN[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-LYDIAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#LYDIAN[LYDIAN] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#LYDIAN[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-MAHJONG_TILES]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#MAHJONG_TILES[MAHJONG_TILES] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#MAHJONG_TILES[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-MALAYALAM]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#MALAYALAM[MALAYALAM] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#MALAYALAM[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-MANDAIC]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#MANDAIC[MANDAIC] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#MANDAIC[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-MATHEMATICAL_ALPHANUMERIC_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#MATHEMATICAL_ALPHANUMERIC_SYMBOLS[MATHEMATICAL_ALPHANUMERIC_SYMBOLS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#MATHEMATICAL_ALPHANUMERIC_SYMBOLS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-MATHEMATICAL_OPERATORS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#MATHEMATICAL_OPERATORS[MATHEMATICAL_OPERATORS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#MATHEMATICAL_OPERATORS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-MEETEI_MAYEK]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#MEETEI_MAYEK[MEETEI_MAYEK] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#MEETEI_MAYEK[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-MEETEI_MAYEK_EXTENSIONS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#MEETEI_MAYEK_EXTENSIONS[MEETEI_MAYEK_EXTENSIONS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#MEETEI_MAYEK_EXTENSIONS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-MEROITIC_CURSIVE]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#MEROITIC_CURSIVE[MEROITIC_CURSIVE] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#MEROITIC_CURSIVE[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-MEROITIC_HIEROGLYPHS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#MEROITIC_HIEROGLYPHS[MEROITIC_HIEROGLYPHS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#MEROITIC_HIEROGLYPHS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-MIAO]]static <> 
link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#MIAO[MIAO] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#MIAO[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-MISCELLANEOUS_MATHEMATICAL_SYMBOLS_A]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#MISCELLANEOUS_MATHEMATICAL_SYMBOLS_A[MISCELLANEOUS_MATHEMATICAL_SYMBOLS_A] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#MISCELLANEOUS_MATHEMATICAL_SYMBOLS_A[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-MISCELLANEOUS_MATHEMATICAL_SYMBOLS_B]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#MISCELLANEOUS_MATHEMATICAL_SYMBOLS_B[MISCELLANEOUS_MATHEMATICAL_SYMBOLS_B] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#MISCELLANEOUS_MATHEMATICAL_SYMBOLS_B[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-MISCELLANEOUS_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#MISCELLANEOUS_SYMBOLS[MISCELLANEOUS_SYMBOLS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#MISCELLANEOUS_SYMBOLS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-MISCELLANEOUS_SYMBOLS_AND_ARROWS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#MISCELLANEOUS_SYMBOLS_AND_ARROWS[MISCELLANEOUS_SYMBOLS_AND_ARROWS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#MISCELLANEOUS_SYMBOLS_AND_ARROWS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-MISCELLANEOUS_SYMBOLS_AND_PICTOGRAPHS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#MISCELLANEOUS_SYMBOLS_AND_PICTOGRAPHS[MISCELLANEOUS_SYMBOLS_AND_PICTOGRAPHS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#MISCELLANEOUS_SYMBOLS_AND_PICTOGRAPHS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-MISCELLANEOUS_TECHNICAL]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#MISCELLANEOUS_TECHNICAL[MISCELLANEOUS_TECHNICAL] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#MISCELLANEOUS_TECHNICAL[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-MODIFIER_TONE_LETTERS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#MODIFIER_TONE_LETTERS[MODIFIER_TONE_LETTERS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#MODIFIER_TONE_LETTERS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-MONGOLIAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#MONGOLIAN[MONGOLIAN] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#MONGOLIAN[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-MUSICAL_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#MUSICAL_SYMBOLS[MUSICAL_SYMBOLS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#MUSICAL_SYMBOLS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-MYANMAR]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#MYANMAR[MYANMAR] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#MYANMAR[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-MYANMAR_EXTENDED_A]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#MYANMAR_EXTENDED_A[MYANMAR_EXTENDED_A] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#MYANMAR_EXTENDED_A[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-NEW_TAI_LUE]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#NEW_TAI_LUE[NEW_TAI_LUE] 
(link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#NEW_TAI_LUE[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-NKO]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#NKO[NKO] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#NKO[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-NUMBER_FORMS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#NUMBER_FORMS[NUMBER_FORMS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#NUMBER_FORMS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-OGHAM]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#OGHAM[OGHAM] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#OGHAM[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-OLD_ITALIC]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#OLD_ITALIC[OLD_ITALIC] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#OLD_ITALIC[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-OLD_PERSIAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#OLD_PERSIAN[OLD_PERSIAN] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#OLD_PERSIAN[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-OLD_SOUTH_ARABIAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#OLD_SOUTH_ARABIAN[OLD_SOUTH_ARABIAN] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#OLD_SOUTH_ARABIAN[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-OLD_TURKIC]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#OLD_TURKIC[OLD_TURKIC] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#OLD_TURKIC[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-OL_CHIKI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#OL_CHIKI[OL_CHIKI] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#OL_CHIKI[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-OPTICAL_CHARACTER_RECOGNITION]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#OPTICAL_CHARACTER_RECOGNITION[OPTICAL_CHARACTER_RECOGNITION] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#OPTICAL_CHARACTER_RECOGNITION[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-ORIYA]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#ORIYA[ORIYA] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#ORIYA[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-OSMANYA]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#OSMANYA[OSMANYA] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#OSMANYA[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-PHAGS_PA]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#PHAGS_PA[PHAGS_PA] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#PHAGS_PA[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-PHAISTOS_DISC]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#PHAISTOS_DISC[PHAISTOS_DISC] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#PHAISTOS_DISC[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-PHOENICIAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#PHOENICIAN[PHOENICIAN] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#PHOENICIAN[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-PHONETIC_EXTENSIONS]]static <> 
link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#PHONETIC_EXTENSIONS[PHONETIC_EXTENSIONS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#PHONETIC_EXTENSIONS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-PHONETIC_EXTENSIONS_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#PHONETIC_EXTENSIONS_SUPPLEMENT[PHONETIC_EXTENSIONS_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#PHONETIC_EXTENSIONS_SUPPLEMENT[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-PLAYING_CARDS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#PLAYING_CARDS[PLAYING_CARDS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#PLAYING_CARDS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-PRIVATE_USE_AREA]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#PRIVATE_USE_AREA[PRIVATE_USE_AREA] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#PRIVATE_USE_AREA[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-REJANG]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#REJANG[REJANG] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#REJANG[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-RUMI_NUMERAL_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#RUMI_NUMERAL_SYMBOLS[RUMI_NUMERAL_SYMBOLS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#RUMI_NUMERAL_SYMBOLS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-RUNIC]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#RUNIC[RUNIC] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#RUNIC[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-SAMARITAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#SAMARITAN[SAMARITAN] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#SAMARITAN[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-SAURASHTRA]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#SAURASHTRA[SAURASHTRA] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#SAURASHTRA[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-SHARADA]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#SHARADA[SHARADA] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#SHARADA[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-SHAVIAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#SHAVIAN[SHAVIAN] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#SHAVIAN[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-SINHALA]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#SINHALA[SINHALA] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#SINHALA[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-SMALL_FORM_VARIANTS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#SMALL_FORM_VARIANTS[SMALL_FORM_VARIANTS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#SMALL_FORM_VARIANTS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-SORA_SOMPENG]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#SORA_SOMPENG[SORA_SOMPENG] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#SORA_SOMPENG[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-SPACING_MODIFIER_LETTERS]]static <> 
link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#SPACING_MODIFIER_LETTERS[SPACING_MODIFIER_LETTERS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#SPACING_MODIFIER_LETTERS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-SPECIALS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#SPECIALS[SPECIALS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#SPECIALS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-SUNDANESE]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#SUNDANESE[SUNDANESE] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#SUNDANESE[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-SUNDANESE_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#SUNDANESE_SUPPLEMENT[SUNDANESE_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#SUNDANESE_SUPPLEMENT[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-SUPERSCRIPTS_AND_SUBSCRIPTS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#SUPERSCRIPTS_AND_SUBSCRIPTS[SUPERSCRIPTS_AND_SUBSCRIPTS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#SUPERSCRIPTS_AND_SUBSCRIPTS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-SUPPLEMENTAL_ARROWS_A]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#SUPPLEMENTAL_ARROWS_A[SUPPLEMENTAL_ARROWS_A] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#SUPPLEMENTAL_ARROWS_A[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-SUPPLEMENTAL_ARROWS_B]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#SUPPLEMENTAL_ARROWS_B[SUPPLEMENTAL_ARROWS_B] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#SUPPLEMENTAL_ARROWS_B[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-SUPPLEMENTAL_MATHEMATICAL_OPERATORS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#SUPPLEMENTAL_MATHEMATICAL_OPERATORS[SUPPLEMENTAL_MATHEMATICAL_OPERATORS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#SUPPLEMENTAL_MATHEMATICAL_OPERATORS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-SUPPLEMENTAL_PUNCTUATION]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#SUPPLEMENTAL_PUNCTUATION[SUPPLEMENTAL_PUNCTUATION] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#SUPPLEMENTAL_PUNCTUATION[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-SUPPLEMENTARY_PRIVATE_USE_AREA_A]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#SUPPLEMENTARY_PRIVATE_USE_AREA_A[SUPPLEMENTARY_PRIVATE_USE_AREA_A] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#SUPPLEMENTARY_PRIVATE_USE_AREA_A[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-SUPPLEMENTARY_PRIVATE_USE_AREA_B]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#SUPPLEMENTARY_PRIVATE_USE_AREA_B[SUPPLEMENTARY_PRIVATE_USE_AREA_B] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#SUPPLEMENTARY_PRIVATE_USE_AREA_B[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-SYLOTI_NAGRI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#SYLOTI_NAGRI[SYLOTI_NAGRI] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#SYLOTI_NAGRI[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-SYRIAC]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#SYRIAC[SYRIAC] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#SYRIAC[java 9]) +** 
[[painless-api-reference-Character-UnicodeBlock-TAGALOG]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#TAGALOG[TAGALOG] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#TAGALOG[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-TAGBANWA]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#TAGBANWA[TAGBANWA] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#TAGBANWA[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-TAGS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#TAGS[TAGS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#TAGS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-TAI_LE]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#TAI_LE[TAI_LE] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#TAI_LE[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-TAI_THAM]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#TAI_THAM[TAI_THAM] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#TAI_THAM[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-TAI_VIET]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#TAI_VIET[TAI_VIET] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#TAI_VIET[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-TAI_XUAN_JING_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#TAI_XUAN_JING_SYMBOLS[TAI_XUAN_JING_SYMBOLS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#TAI_XUAN_JING_SYMBOLS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-TAKRI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#TAKRI[TAKRI] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#TAKRI[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-TAMIL]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#TAMIL[TAMIL] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#TAMIL[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-TELUGU]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#TELUGU[TELUGU] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#TELUGU[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-THAANA]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#THAANA[THAANA] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#THAANA[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-THAI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#THAI[THAI] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#THAI[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-TIBETAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#TIBETAN[TIBETAN] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#TIBETAN[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-TIFINAGH]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#TIFINAGH[TIFINAGH] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#TIFINAGH[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-TRANSPORT_AND_MAP_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#TRANSPORT_AND_MAP_SYMBOLS[TRANSPORT_AND_MAP_SYMBOLS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#TRANSPORT_AND_MAP_SYMBOLS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-UGARITIC]]static <> 
link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#UGARITIC[UGARITIC] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#UGARITIC[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-UNIFIED_CANADIAN_ABORIGINAL_SYLLABICS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#UNIFIED_CANADIAN_ABORIGINAL_SYLLABICS[UNIFIED_CANADIAN_ABORIGINAL_SYLLABICS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#UNIFIED_CANADIAN_ABORIGINAL_SYLLABICS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-UNIFIED_CANADIAN_ABORIGINAL_SYLLABICS_EXTENDED]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#UNIFIED_CANADIAN_ABORIGINAL_SYLLABICS_EXTENDED[UNIFIED_CANADIAN_ABORIGINAL_SYLLABICS_EXTENDED] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#UNIFIED_CANADIAN_ABORIGINAL_SYLLABICS_EXTENDED[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-VAI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#VAI[VAI] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#VAI[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-VARIATION_SELECTORS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#VARIATION_SELECTORS[VARIATION_SELECTORS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#VARIATION_SELECTORS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-VARIATION_SELECTORS_SUPPLEMENT]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#VARIATION_SELECTORS_SUPPLEMENT[VARIATION_SELECTORS_SUPPLEMENT] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#VARIATION_SELECTORS_SUPPLEMENT[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-VEDIC_EXTENSIONS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#VEDIC_EXTENSIONS[VEDIC_EXTENSIONS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#VEDIC_EXTENSIONS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-VERTICAL_FORMS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#VERTICAL_FORMS[VERTICAL_FORMS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#VERTICAL_FORMS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-YIJING_HEXAGRAM_SYMBOLS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#YIJING_HEXAGRAM_SYMBOLS[YIJING_HEXAGRAM_SYMBOLS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#YIJING_HEXAGRAM_SYMBOLS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-YI_RADICALS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#YI_RADICALS[YI_RADICALS] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#YI_RADICALS[java 9]) +** [[painless-api-reference-Character-UnicodeBlock-YI_SYLLABLES]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#YI_SYLLABLES[YI_SYLLABLES] (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#YI_SYLLABLES[java 9]) +* ++[[painless-api-reference-Character-UnicodeBlock-forName-1]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#forName%2Djava.lang.String%2D[forName](<>)++ (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#forName%2Djava.lang.String%2D[java 9]) +* ++[[painless-api-reference-Character-UnicodeBlock-of-1]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeBlock.html#of%2Dint%2D[of](int)++ (link:{java9-javadoc}/java/lang/Character.UnicodeBlock.html#of%2Dint%2D[java 9]) * Inherits methods from ++<>++ diff --git 
a/docs/painless/painless-api-reference/Character.UnicodeScript.asciidoc b/docs/painless/painless-api-reference/Character.UnicodeScript.asciidoc index cc189b1ee1e2f..0e9ff73e1e53d 100644 --- a/docs/painless/painless-api-reference/Character.UnicodeScript.asciidoc +++ b/docs/painless/painless-api-reference/Character.UnicodeScript.asciidoc @@ -4,111 +4,111 @@ Rebuild by running `gradle generatePainlessApi`. //// [[painless-api-reference-Character-UnicodeScript]]++Character.UnicodeScript++:: -** [[painless-api-reference-Character-UnicodeScript-ARABIC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#ARABIC[ARABIC] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#ARABIC[java 9]) -** [[painless-api-reference-Character-UnicodeScript-ARMENIAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#ARMENIAN[ARMENIAN] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#ARMENIAN[java 9]) -** [[painless-api-reference-Character-UnicodeScript-AVESTAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#AVESTAN[AVESTAN] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#AVESTAN[java 9]) -** [[painless-api-reference-Character-UnicodeScript-BALINESE]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#BALINESE[BALINESE] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#BALINESE[java 9]) -** [[painless-api-reference-Character-UnicodeScript-BAMUM]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#BAMUM[BAMUM] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#BAMUM[java 9]) -** [[painless-api-reference-Character-UnicodeScript-BATAK]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#BATAK[BATAK] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#BATAK[java 9]) -** [[painless-api-reference-Character-UnicodeScript-BENGALI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#BENGALI[BENGALI] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#BENGALI[java 9]) -** [[painless-api-reference-Character-UnicodeScript-BOPOMOFO]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#BOPOMOFO[BOPOMOFO] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#BOPOMOFO[java 9]) -** [[painless-api-reference-Character-UnicodeScript-BRAHMI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#BRAHMI[BRAHMI] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#BRAHMI[java 9]) -** [[painless-api-reference-Character-UnicodeScript-BRAILLE]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#BRAILLE[BRAILLE] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#BRAILLE[java 9]) -** [[painless-api-reference-Character-UnicodeScript-BUGINESE]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#BUGINESE[BUGINESE] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#BUGINESE[java 9]) -** [[painless-api-reference-Character-UnicodeScript-BUHID]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#BUHID[BUHID] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#BUHID[java 9]) -** [[painless-api-reference-Character-UnicodeScript-CANADIAN_ABORIGINAL]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#CANADIAN_ABORIGINAL[CANADIAN_ABORIGINAL] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#CANADIAN_ABORIGINAL[java 9]) -** [[painless-api-reference-Character-UnicodeScript-CARIAN]]static <> 
link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#CARIAN[CARIAN] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#CARIAN[java 9]) -** [[painless-api-reference-Character-UnicodeScript-CHAKMA]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#CHAKMA[CHAKMA] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#CHAKMA[java 9]) -** [[painless-api-reference-Character-UnicodeScript-CHAM]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#CHAM[CHAM] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#CHAM[java 9]) -** [[painless-api-reference-Character-UnicodeScript-CHEROKEE]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#CHEROKEE[CHEROKEE] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#CHEROKEE[java 9]) -** [[painless-api-reference-Character-UnicodeScript-COMMON]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#COMMON[COMMON] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#COMMON[java 9]) -** [[painless-api-reference-Character-UnicodeScript-COPTIC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#COPTIC[COPTIC] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#COPTIC[java 9]) -** [[painless-api-reference-Character-UnicodeScript-CUNEIFORM]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#CUNEIFORM[CUNEIFORM] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#CUNEIFORM[java 9]) -** [[painless-api-reference-Character-UnicodeScript-CYPRIOT]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#CYPRIOT[CYPRIOT] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#CYPRIOT[java 9]) -** [[painless-api-reference-Character-UnicodeScript-CYRILLIC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#CYRILLIC[CYRILLIC] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#CYRILLIC[java 9]) -** [[painless-api-reference-Character-UnicodeScript-DESERET]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#DESERET[DESERET] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#DESERET[java 9]) -** [[painless-api-reference-Character-UnicodeScript-DEVANAGARI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#DEVANAGARI[DEVANAGARI] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#DEVANAGARI[java 9]) -** [[painless-api-reference-Character-UnicodeScript-EGYPTIAN_HIEROGLYPHS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#EGYPTIAN_HIEROGLYPHS[EGYPTIAN_HIEROGLYPHS] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#EGYPTIAN_HIEROGLYPHS[java 9]) -** [[painless-api-reference-Character-UnicodeScript-ETHIOPIC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#ETHIOPIC[ETHIOPIC] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#ETHIOPIC[java 9]) -** [[painless-api-reference-Character-UnicodeScript-GEORGIAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#GEORGIAN[GEORGIAN] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#GEORGIAN[java 9]) -** [[painless-api-reference-Character-UnicodeScript-GLAGOLITIC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#GLAGOLITIC[GLAGOLITIC] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#GLAGOLITIC[java 9]) -** [[painless-api-reference-Character-UnicodeScript-GOTHIC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#GOTHIC[GOTHIC] 
(link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#GOTHIC[java 9]) -** [[painless-api-reference-Character-UnicodeScript-GREEK]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#GREEK[GREEK] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#GREEK[java 9]) -** [[painless-api-reference-Character-UnicodeScript-GUJARATI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#GUJARATI[GUJARATI] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#GUJARATI[java 9]) -** [[painless-api-reference-Character-UnicodeScript-GURMUKHI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#GURMUKHI[GURMUKHI] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#GURMUKHI[java 9]) -** [[painless-api-reference-Character-UnicodeScript-HAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#HAN[HAN] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#HAN[java 9]) -** [[painless-api-reference-Character-UnicodeScript-HANGUL]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#HANGUL[HANGUL] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#HANGUL[java 9]) -** [[painless-api-reference-Character-UnicodeScript-HANUNOO]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#HANUNOO[HANUNOO] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#HANUNOO[java 9]) -** [[painless-api-reference-Character-UnicodeScript-HEBREW]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#HEBREW[HEBREW] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#HEBREW[java 9]) -** [[painless-api-reference-Character-UnicodeScript-HIRAGANA]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#HIRAGANA[HIRAGANA] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#HIRAGANA[java 9]) -** [[painless-api-reference-Character-UnicodeScript-IMPERIAL_ARAMAIC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#IMPERIAL_ARAMAIC[IMPERIAL_ARAMAIC] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#IMPERIAL_ARAMAIC[java 9]) -** [[painless-api-reference-Character-UnicodeScript-INHERITED]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#INHERITED[INHERITED] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#INHERITED[java 9]) -** [[painless-api-reference-Character-UnicodeScript-INSCRIPTIONAL_PAHLAVI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#INSCRIPTIONAL_PAHLAVI[INSCRIPTIONAL_PAHLAVI] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#INSCRIPTIONAL_PAHLAVI[java 9]) -** [[painless-api-reference-Character-UnicodeScript-INSCRIPTIONAL_PARTHIAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#INSCRIPTIONAL_PARTHIAN[INSCRIPTIONAL_PARTHIAN] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#INSCRIPTIONAL_PARTHIAN[java 9]) -** [[painless-api-reference-Character-UnicodeScript-JAVANESE]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#JAVANESE[JAVANESE] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#JAVANESE[java 9]) -** [[painless-api-reference-Character-UnicodeScript-KAITHI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#KAITHI[KAITHI] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#KAITHI[java 9]) -** [[painless-api-reference-Character-UnicodeScript-KANNADA]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#KANNADA[KANNADA] 
(link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#KANNADA[java 9]) -** [[painless-api-reference-Character-UnicodeScript-KATAKANA]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#KATAKANA[KATAKANA] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#KATAKANA[java 9]) -** [[painless-api-reference-Character-UnicodeScript-KAYAH_LI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#KAYAH_LI[KAYAH_LI] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#KAYAH_LI[java 9]) -** [[painless-api-reference-Character-UnicodeScript-KHAROSHTHI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#KHAROSHTHI[KHAROSHTHI] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#KHAROSHTHI[java 9]) -** [[painless-api-reference-Character-UnicodeScript-KHMER]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#KHMER[KHMER] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#KHMER[java 9]) -** [[painless-api-reference-Character-UnicodeScript-LAO]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#LAO[LAO] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#LAO[java 9]) -** [[painless-api-reference-Character-UnicodeScript-LATIN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#LATIN[LATIN] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#LATIN[java 9]) -** [[painless-api-reference-Character-UnicodeScript-LEPCHA]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#LEPCHA[LEPCHA] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#LEPCHA[java 9]) -** [[painless-api-reference-Character-UnicodeScript-LIMBU]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#LIMBU[LIMBU] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#LIMBU[java 9]) -** [[painless-api-reference-Character-UnicodeScript-LINEAR_B]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#LINEAR_B[LINEAR_B] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#LINEAR_B[java 9]) -** [[painless-api-reference-Character-UnicodeScript-LISU]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#LISU[LISU] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#LISU[java 9]) -** [[painless-api-reference-Character-UnicodeScript-LYCIAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#LYCIAN[LYCIAN] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#LYCIAN[java 9]) -** [[painless-api-reference-Character-UnicodeScript-LYDIAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#LYDIAN[LYDIAN] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#LYDIAN[java 9]) -** [[painless-api-reference-Character-UnicodeScript-MALAYALAM]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#MALAYALAM[MALAYALAM] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#MALAYALAM[java 9]) -** [[painless-api-reference-Character-UnicodeScript-MANDAIC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#MANDAIC[MANDAIC] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#MANDAIC[java 9]) -** [[painless-api-reference-Character-UnicodeScript-MEETEI_MAYEK]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#MEETEI_MAYEK[MEETEI_MAYEK] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#MEETEI_MAYEK[java 9]) -** [[painless-api-reference-Character-UnicodeScript-MEROITIC_CURSIVE]]static <> 
link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#MEROITIC_CURSIVE[MEROITIC_CURSIVE] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#MEROITIC_CURSIVE[java 9]) -** [[painless-api-reference-Character-UnicodeScript-MEROITIC_HIEROGLYPHS]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#MEROITIC_HIEROGLYPHS[MEROITIC_HIEROGLYPHS] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#MEROITIC_HIEROGLYPHS[java 9]) -** [[painless-api-reference-Character-UnicodeScript-MIAO]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#MIAO[MIAO] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#MIAO[java 9]) -** [[painless-api-reference-Character-UnicodeScript-MONGOLIAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#MONGOLIAN[MONGOLIAN] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#MONGOLIAN[java 9]) -** [[painless-api-reference-Character-UnicodeScript-MYANMAR]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#MYANMAR[MYANMAR] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#MYANMAR[java 9]) -** [[painless-api-reference-Character-UnicodeScript-NEW_TAI_LUE]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#NEW_TAI_LUE[NEW_TAI_LUE] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#NEW_TAI_LUE[java 9]) -** [[painless-api-reference-Character-UnicodeScript-NKO]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#NKO[NKO] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#NKO[java 9]) -** [[painless-api-reference-Character-UnicodeScript-OGHAM]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#OGHAM[OGHAM] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#OGHAM[java 9]) -** [[painless-api-reference-Character-UnicodeScript-OLD_ITALIC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#OLD_ITALIC[OLD_ITALIC] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#OLD_ITALIC[java 9]) -** [[painless-api-reference-Character-UnicodeScript-OLD_PERSIAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#OLD_PERSIAN[OLD_PERSIAN] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#OLD_PERSIAN[java 9]) -** [[painless-api-reference-Character-UnicodeScript-OLD_SOUTH_ARABIAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#OLD_SOUTH_ARABIAN[OLD_SOUTH_ARABIAN] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#OLD_SOUTH_ARABIAN[java 9]) -** [[painless-api-reference-Character-UnicodeScript-OLD_TURKIC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#OLD_TURKIC[OLD_TURKIC] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#OLD_TURKIC[java 9]) -** [[painless-api-reference-Character-UnicodeScript-OL_CHIKI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#OL_CHIKI[OL_CHIKI] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#OL_CHIKI[java 9]) -** [[painless-api-reference-Character-UnicodeScript-ORIYA]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#ORIYA[ORIYA] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#ORIYA[java 9]) -** [[painless-api-reference-Character-UnicodeScript-OSMANYA]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#OSMANYA[OSMANYA] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#OSMANYA[java 9]) -** [[painless-api-reference-Character-UnicodeScript-PHAGS_PA]]static <> 
link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#PHAGS_PA[PHAGS_PA] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#PHAGS_PA[java 9]) -** [[painless-api-reference-Character-UnicodeScript-PHOENICIAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#PHOENICIAN[PHOENICIAN] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#PHOENICIAN[java 9]) -** [[painless-api-reference-Character-UnicodeScript-REJANG]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#REJANG[REJANG] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#REJANG[java 9]) -** [[painless-api-reference-Character-UnicodeScript-RUNIC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#RUNIC[RUNIC] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#RUNIC[java 9]) -** [[painless-api-reference-Character-UnicodeScript-SAMARITAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#SAMARITAN[SAMARITAN] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#SAMARITAN[java 9]) -** [[painless-api-reference-Character-UnicodeScript-SAURASHTRA]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#SAURASHTRA[SAURASHTRA] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#SAURASHTRA[java 9]) -** [[painless-api-reference-Character-UnicodeScript-SHARADA]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#SHARADA[SHARADA] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#SHARADA[java 9]) -** [[painless-api-reference-Character-UnicodeScript-SHAVIAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#SHAVIAN[SHAVIAN] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#SHAVIAN[java 9]) -** [[painless-api-reference-Character-UnicodeScript-SINHALA]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#SINHALA[SINHALA] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#SINHALA[java 9]) -** [[painless-api-reference-Character-UnicodeScript-SORA_SOMPENG]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#SORA_SOMPENG[SORA_SOMPENG] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#SORA_SOMPENG[java 9]) -** [[painless-api-reference-Character-UnicodeScript-SUNDANESE]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#SUNDANESE[SUNDANESE] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#SUNDANESE[java 9]) -** [[painless-api-reference-Character-UnicodeScript-SYLOTI_NAGRI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#SYLOTI_NAGRI[SYLOTI_NAGRI] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#SYLOTI_NAGRI[java 9]) -** [[painless-api-reference-Character-UnicodeScript-SYRIAC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#SYRIAC[SYRIAC] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#SYRIAC[java 9]) -** [[painless-api-reference-Character-UnicodeScript-TAGALOG]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#TAGALOG[TAGALOG] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#TAGALOG[java 9]) -** [[painless-api-reference-Character-UnicodeScript-TAGBANWA]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#TAGBANWA[TAGBANWA] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#TAGBANWA[java 9]) -** [[painless-api-reference-Character-UnicodeScript-TAI_LE]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#TAI_LE[TAI_LE] 
(link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#TAI_LE[java 9]) -** [[painless-api-reference-Character-UnicodeScript-TAI_THAM]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#TAI_THAM[TAI_THAM] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#TAI_THAM[java 9]) -** [[painless-api-reference-Character-UnicodeScript-TAI_VIET]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#TAI_VIET[TAI_VIET] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#TAI_VIET[java 9]) -** [[painless-api-reference-Character-UnicodeScript-TAKRI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#TAKRI[TAKRI] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#TAKRI[java 9]) -** [[painless-api-reference-Character-UnicodeScript-TAMIL]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#TAMIL[TAMIL] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#TAMIL[java 9]) -** [[painless-api-reference-Character-UnicodeScript-TELUGU]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#TELUGU[TELUGU] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#TELUGU[java 9]) -** [[painless-api-reference-Character-UnicodeScript-THAANA]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#THAANA[THAANA] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#THAANA[java 9]) -** [[painless-api-reference-Character-UnicodeScript-THAI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#THAI[THAI] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#THAI[java 9]) -** [[painless-api-reference-Character-UnicodeScript-TIBETAN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#TIBETAN[TIBETAN] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#TIBETAN[java 9]) -** [[painless-api-reference-Character-UnicodeScript-TIFINAGH]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#TIFINAGH[TIFINAGH] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#TIFINAGH[java 9]) -** [[painless-api-reference-Character-UnicodeScript-UGARITIC]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#UGARITIC[UGARITIC] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#UGARITIC[java 9]) -** [[painless-api-reference-Character-UnicodeScript-UNKNOWN]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#UNKNOWN[UNKNOWN] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#UNKNOWN[java 9]) -** [[painless-api-reference-Character-UnicodeScript-VAI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#VAI[VAI] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#VAI[java 9]) -** [[painless-api-reference-Character-UnicodeScript-YI]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#YI[YI] (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#YI[java 9]) -* ++[[painless-api-reference-Character-UnicodeScript-forName-1]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#forName%2Djava.lang.String%2D[forName](<>)++ (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#forName%2Djava.lang.String%2D[java 9]) -* ++[[painless-api-reference-Character-UnicodeScript-of-1]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#of%2Dint%2D[of](int)++ (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#of%2Dint%2D[java 9]) -* 
++[[painless-api-reference-Character-UnicodeScript-valueOf-1]]static <> link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#valueOf%2Djava.lang.String%2D[valueOf](<>)++ (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#valueOf%2Djava.lang.String%2D[java 9]) -* ++[[painless-api-reference-Character-UnicodeScript-values-0]]static <>[] link:{java8-javadoc}/java/lang/Character$UnicodeScript.html#values%2D%2D[values]()++ (link:{java9-javadoc}/java/lang/Character$UnicodeScript.html#values%2D%2D[java 9]) +** [[painless-api-reference-Character-UnicodeScript-ARABIC]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#ARABIC[ARABIC] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#ARABIC[java 9]) +** [[painless-api-reference-Character-UnicodeScript-ARMENIAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#ARMENIAN[ARMENIAN] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#ARMENIAN[java 9]) +** [[painless-api-reference-Character-UnicodeScript-AVESTAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#AVESTAN[AVESTAN] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#AVESTAN[java 9]) +** [[painless-api-reference-Character-UnicodeScript-BALINESE]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#BALINESE[BALINESE] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#BALINESE[java 9]) +** [[painless-api-reference-Character-UnicodeScript-BAMUM]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#BAMUM[BAMUM] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#BAMUM[java 9]) +** [[painless-api-reference-Character-UnicodeScript-BATAK]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#BATAK[BATAK] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#BATAK[java 9]) +** [[painless-api-reference-Character-UnicodeScript-BENGALI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#BENGALI[BENGALI] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#BENGALI[java 9]) +** [[painless-api-reference-Character-UnicodeScript-BOPOMOFO]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#BOPOMOFO[BOPOMOFO] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#BOPOMOFO[java 9]) +** [[painless-api-reference-Character-UnicodeScript-BRAHMI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#BRAHMI[BRAHMI] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#BRAHMI[java 9]) +** [[painless-api-reference-Character-UnicodeScript-BRAILLE]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#BRAILLE[BRAILLE] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#BRAILLE[java 9]) +** [[painless-api-reference-Character-UnicodeScript-BUGINESE]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#BUGINESE[BUGINESE] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#BUGINESE[java 9]) +** [[painless-api-reference-Character-UnicodeScript-BUHID]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#BUHID[BUHID] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#BUHID[java 9]) +** [[painless-api-reference-Character-UnicodeScript-CANADIAN_ABORIGINAL]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#CANADIAN_ABORIGINAL[CANADIAN_ABORIGINAL] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#CANADIAN_ABORIGINAL[java 9]) +** 
[[painless-api-reference-Character-UnicodeScript-CARIAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#CARIAN[CARIAN] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#CARIAN[java 9]) +** [[painless-api-reference-Character-UnicodeScript-CHAKMA]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#CHAKMA[CHAKMA] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#CHAKMA[java 9]) +** [[painless-api-reference-Character-UnicodeScript-CHAM]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#CHAM[CHAM] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#CHAM[java 9]) +** [[painless-api-reference-Character-UnicodeScript-CHEROKEE]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#CHEROKEE[CHEROKEE] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#CHEROKEE[java 9]) +** [[painless-api-reference-Character-UnicodeScript-COMMON]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#COMMON[COMMON] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#COMMON[java 9]) +** [[painless-api-reference-Character-UnicodeScript-COPTIC]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#COPTIC[COPTIC] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#COPTIC[java 9]) +** [[painless-api-reference-Character-UnicodeScript-CUNEIFORM]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#CUNEIFORM[CUNEIFORM] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#CUNEIFORM[java 9]) +** [[painless-api-reference-Character-UnicodeScript-CYPRIOT]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#CYPRIOT[CYPRIOT] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#CYPRIOT[java 9]) +** [[painless-api-reference-Character-UnicodeScript-CYRILLIC]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#CYRILLIC[CYRILLIC] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#CYRILLIC[java 9]) +** [[painless-api-reference-Character-UnicodeScript-DESERET]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#DESERET[DESERET] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#DESERET[java 9]) +** [[painless-api-reference-Character-UnicodeScript-DEVANAGARI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#DEVANAGARI[DEVANAGARI] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#DEVANAGARI[java 9]) +** [[painless-api-reference-Character-UnicodeScript-EGYPTIAN_HIEROGLYPHS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#EGYPTIAN_HIEROGLYPHS[EGYPTIAN_HIEROGLYPHS] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#EGYPTIAN_HIEROGLYPHS[java 9]) +** [[painless-api-reference-Character-UnicodeScript-ETHIOPIC]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#ETHIOPIC[ETHIOPIC] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#ETHIOPIC[java 9]) +** [[painless-api-reference-Character-UnicodeScript-GEORGIAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#GEORGIAN[GEORGIAN] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#GEORGIAN[java 9]) +** [[painless-api-reference-Character-UnicodeScript-GLAGOLITIC]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#GLAGOLITIC[GLAGOLITIC] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#GLAGOLITIC[java 9]) +** [[painless-api-reference-Character-UnicodeScript-GOTHIC]]static <> 
link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#GOTHIC[GOTHIC] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#GOTHIC[java 9]) +** [[painless-api-reference-Character-UnicodeScript-GREEK]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#GREEK[GREEK] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#GREEK[java 9]) +** [[painless-api-reference-Character-UnicodeScript-GUJARATI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#GUJARATI[GUJARATI] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#GUJARATI[java 9]) +** [[painless-api-reference-Character-UnicodeScript-GURMUKHI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#GURMUKHI[GURMUKHI] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#GURMUKHI[java 9]) +** [[painless-api-reference-Character-UnicodeScript-HAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#HAN[HAN] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#HAN[java 9]) +** [[painless-api-reference-Character-UnicodeScript-HANGUL]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#HANGUL[HANGUL] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#HANGUL[java 9]) +** [[painless-api-reference-Character-UnicodeScript-HANUNOO]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#HANUNOO[HANUNOO] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#HANUNOO[java 9]) +** [[painless-api-reference-Character-UnicodeScript-HEBREW]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#HEBREW[HEBREW] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#HEBREW[java 9]) +** [[painless-api-reference-Character-UnicodeScript-HIRAGANA]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#HIRAGANA[HIRAGANA] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#HIRAGANA[java 9]) +** [[painless-api-reference-Character-UnicodeScript-IMPERIAL_ARAMAIC]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#IMPERIAL_ARAMAIC[IMPERIAL_ARAMAIC] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#IMPERIAL_ARAMAIC[java 9]) +** [[painless-api-reference-Character-UnicodeScript-INHERITED]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#INHERITED[INHERITED] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#INHERITED[java 9]) +** [[painless-api-reference-Character-UnicodeScript-INSCRIPTIONAL_PAHLAVI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#INSCRIPTIONAL_PAHLAVI[INSCRIPTIONAL_PAHLAVI] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#INSCRIPTIONAL_PAHLAVI[java 9]) +** [[painless-api-reference-Character-UnicodeScript-INSCRIPTIONAL_PARTHIAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#INSCRIPTIONAL_PARTHIAN[INSCRIPTIONAL_PARTHIAN] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#INSCRIPTIONAL_PARTHIAN[java 9]) +** [[painless-api-reference-Character-UnicodeScript-JAVANESE]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#JAVANESE[JAVANESE] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#JAVANESE[java 9]) +** [[painless-api-reference-Character-UnicodeScript-KAITHI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#KAITHI[KAITHI] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#KAITHI[java 9]) +** [[painless-api-reference-Character-UnicodeScript-KANNADA]]static <> 
link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#KANNADA[KANNADA] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#KANNADA[java 9]) +** [[painless-api-reference-Character-UnicodeScript-KATAKANA]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#KATAKANA[KATAKANA] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#KATAKANA[java 9]) +** [[painless-api-reference-Character-UnicodeScript-KAYAH_LI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#KAYAH_LI[KAYAH_LI] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#KAYAH_LI[java 9]) +** [[painless-api-reference-Character-UnicodeScript-KHAROSHTHI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#KHAROSHTHI[KHAROSHTHI] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#KHAROSHTHI[java 9]) +** [[painless-api-reference-Character-UnicodeScript-KHMER]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#KHMER[KHMER] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#KHMER[java 9]) +** [[painless-api-reference-Character-UnicodeScript-LAO]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#LAO[LAO] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#LAO[java 9]) +** [[painless-api-reference-Character-UnicodeScript-LATIN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#LATIN[LATIN] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#LATIN[java 9]) +** [[painless-api-reference-Character-UnicodeScript-LEPCHA]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#LEPCHA[LEPCHA] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#LEPCHA[java 9]) +** [[painless-api-reference-Character-UnicodeScript-LIMBU]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#LIMBU[LIMBU] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#LIMBU[java 9]) +** [[painless-api-reference-Character-UnicodeScript-LINEAR_B]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#LINEAR_B[LINEAR_B] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#LINEAR_B[java 9]) +** [[painless-api-reference-Character-UnicodeScript-LISU]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#LISU[LISU] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#LISU[java 9]) +** [[painless-api-reference-Character-UnicodeScript-LYCIAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#LYCIAN[LYCIAN] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#LYCIAN[java 9]) +** [[painless-api-reference-Character-UnicodeScript-LYDIAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#LYDIAN[LYDIAN] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#LYDIAN[java 9]) +** [[painless-api-reference-Character-UnicodeScript-MALAYALAM]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#MALAYALAM[MALAYALAM] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#MALAYALAM[java 9]) +** [[painless-api-reference-Character-UnicodeScript-MANDAIC]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#MANDAIC[MANDAIC] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#MANDAIC[java 9]) +** [[painless-api-reference-Character-UnicodeScript-MEETEI_MAYEK]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#MEETEI_MAYEK[MEETEI_MAYEK] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#MEETEI_MAYEK[java 9]) +** 
[[painless-api-reference-Character-UnicodeScript-MEROITIC_CURSIVE]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#MEROITIC_CURSIVE[MEROITIC_CURSIVE] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#MEROITIC_CURSIVE[java 9]) +** [[painless-api-reference-Character-UnicodeScript-MEROITIC_HIEROGLYPHS]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#MEROITIC_HIEROGLYPHS[MEROITIC_HIEROGLYPHS] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#MEROITIC_HIEROGLYPHS[java 9]) +** [[painless-api-reference-Character-UnicodeScript-MIAO]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#MIAO[MIAO] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#MIAO[java 9]) +** [[painless-api-reference-Character-UnicodeScript-MONGOLIAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#MONGOLIAN[MONGOLIAN] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#MONGOLIAN[java 9]) +** [[painless-api-reference-Character-UnicodeScript-MYANMAR]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#MYANMAR[MYANMAR] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#MYANMAR[java 9]) +** [[painless-api-reference-Character-UnicodeScript-NEW_TAI_LUE]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#NEW_TAI_LUE[NEW_TAI_LUE] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#NEW_TAI_LUE[java 9]) +** [[painless-api-reference-Character-UnicodeScript-NKO]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#NKO[NKO] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#NKO[java 9]) +** [[painless-api-reference-Character-UnicodeScript-OGHAM]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#OGHAM[OGHAM] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#OGHAM[java 9]) +** [[painless-api-reference-Character-UnicodeScript-OLD_ITALIC]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#OLD_ITALIC[OLD_ITALIC] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#OLD_ITALIC[java 9]) +** [[painless-api-reference-Character-UnicodeScript-OLD_PERSIAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#OLD_PERSIAN[OLD_PERSIAN] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#OLD_PERSIAN[java 9]) +** [[painless-api-reference-Character-UnicodeScript-OLD_SOUTH_ARABIAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#OLD_SOUTH_ARABIAN[OLD_SOUTH_ARABIAN] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#OLD_SOUTH_ARABIAN[java 9]) +** [[painless-api-reference-Character-UnicodeScript-OLD_TURKIC]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#OLD_TURKIC[OLD_TURKIC] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#OLD_TURKIC[java 9]) +** [[painless-api-reference-Character-UnicodeScript-OL_CHIKI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#OL_CHIKI[OL_CHIKI] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#OL_CHIKI[java 9]) +** [[painless-api-reference-Character-UnicodeScript-ORIYA]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#ORIYA[ORIYA] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#ORIYA[java 9]) +** [[painless-api-reference-Character-UnicodeScript-OSMANYA]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#OSMANYA[OSMANYA] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#OSMANYA[java 9]) 
+** [[painless-api-reference-Character-UnicodeScript-PHAGS_PA]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#PHAGS_PA[PHAGS_PA] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#PHAGS_PA[java 9]) +** [[painless-api-reference-Character-UnicodeScript-PHOENICIAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#PHOENICIAN[PHOENICIAN] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#PHOENICIAN[java 9]) +** [[painless-api-reference-Character-UnicodeScript-REJANG]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#REJANG[REJANG] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#REJANG[java 9]) +** [[painless-api-reference-Character-UnicodeScript-RUNIC]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#RUNIC[RUNIC] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#RUNIC[java 9]) +** [[painless-api-reference-Character-UnicodeScript-SAMARITAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#SAMARITAN[SAMARITAN] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#SAMARITAN[java 9]) +** [[painless-api-reference-Character-UnicodeScript-SAURASHTRA]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#SAURASHTRA[SAURASHTRA] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#SAURASHTRA[java 9]) +** [[painless-api-reference-Character-UnicodeScript-SHARADA]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#SHARADA[SHARADA] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#SHARADA[java 9]) +** [[painless-api-reference-Character-UnicodeScript-SHAVIAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#SHAVIAN[SHAVIAN] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#SHAVIAN[java 9]) +** [[painless-api-reference-Character-UnicodeScript-SINHALA]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#SINHALA[SINHALA] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#SINHALA[java 9]) +** [[painless-api-reference-Character-UnicodeScript-SORA_SOMPENG]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#SORA_SOMPENG[SORA_SOMPENG] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#SORA_SOMPENG[java 9]) +** [[painless-api-reference-Character-UnicodeScript-SUNDANESE]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#SUNDANESE[SUNDANESE] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#SUNDANESE[java 9]) +** [[painless-api-reference-Character-UnicodeScript-SYLOTI_NAGRI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#SYLOTI_NAGRI[SYLOTI_NAGRI] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#SYLOTI_NAGRI[java 9]) +** [[painless-api-reference-Character-UnicodeScript-SYRIAC]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#SYRIAC[SYRIAC] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#SYRIAC[java 9]) +** [[painless-api-reference-Character-UnicodeScript-TAGALOG]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#TAGALOG[TAGALOG] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#TAGALOG[java 9]) +** [[painless-api-reference-Character-UnicodeScript-TAGBANWA]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#TAGBANWA[TAGBANWA] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#TAGBANWA[java 9]) +** [[painless-api-reference-Character-UnicodeScript-TAI_LE]]static <> 
link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#TAI_LE[TAI_LE] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#TAI_LE[java 9]) +** [[painless-api-reference-Character-UnicodeScript-TAI_THAM]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#TAI_THAM[TAI_THAM] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#TAI_THAM[java 9]) +** [[painless-api-reference-Character-UnicodeScript-TAI_VIET]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#TAI_VIET[TAI_VIET] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#TAI_VIET[java 9]) +** [[painless-api-reference-Character-UnicodeScript-TAKRI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#TAKRI[TAKRI] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#TAKRI[java 9]) +** [[painless-api-reference-Character-UnicodeScript-TAMIL]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#TAMIL[TAMIL] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#TAMIL[java 9]) +** [[painless-api-reference-Character-UnicodeScript-TELUGU]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#TELUGU[TELUGU] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#TELUGU[java 9]) +** [[painless-api-reference-Character-UnicodeScript-THAANA]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#THAANA[THAANA] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#THAANA[java 9]) +** [[painless-api-reference-Character-UnicodeScript-THAI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#THAI[THAI] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#THAI[java 9]) +** [[painless-api-reference-Character-UnicodeScript-TIBETAN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#TIBETAN[TIBETAN] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#TIBETAN[java 9]) +** [[painless-api-reference-Character-UnicodeScript-TIFINAGH]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#TIFINAGH[TIFINAGH] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#TIFINAGH[java 9]) +** [[painless-api-reference-Character-UnicodeScript-UGARITIC]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#UGARITIC[UGARITIC] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#UGARITIC[java 9]) +** [[painless-api-reference-Character-UnicodeScript-UNKNOWN]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#UNKNOWN[UNKNOWN] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#UNKNOWN[java 9]) +** [[painless-api-reference-Character-UnicodeScript-VAI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#VAI[VAI] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#VAI[java 9]) +** [[painless-api-reference-Character-UnicodeScript-YI]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#YI[YI] (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#YI[java 9]) +* ++[[painless-api-reference-Character-UnicodeScript-forName-1]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#forName%2Djava.lang.String%2D[forName](<>)++ (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#forName%2Djava.lang.String%2D[java 9]) +* ++[[painless-api-reference-Character-UnicodeScript-of-1]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#of%2Dint%2D[of](int)++ 
(link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#of%2Dint%2D[java 9]) +* ++[[painless-api-reference-Character-UnicodeScript-valueOf-1]]static <> link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#valueOf%2Djava.lang.String%2D[valueOf](<>)++ (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#valueOf%2Djava.lang.String%2D[java 9]) +* ++[[painless-api-reference-Character-UnicodeScript-values-0]]static <>[] link:{java8-javadoc}/java/lang/Character.UnicodeScript.html#values%2D%2D[values]()++ (link:{java9-javadoc}/java/lang/Character.UnicodeScript.html#values%2D%2D[java 9]) * Inherits methods from ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/Collector.Characteristics.asciidoc b/docs/painless/painless-api-reference/Collector.Characteristics.asciidoc index 17c6bec94751c..a0c95223d57d0 100644 --- a/docs/painless/painless-api-reference/Collector.Characteristics.asciidoc +++ b/docs/painless/painless-api-reference/Collector.Characteristics.asciidoc @@ -4,9 +4,9 @@ Rebuild by running `gradle generatePainlessApi`. //// [[painless-api-reference-Collector-Characteristics]]++Collector.Characteristics++:: -** [[painless-api-reference-Collector-Characteristics-CONCURRENT]]static <> link:{java8-javadoc}/java/util/stream/Collector$Characteristics.html#CONCURRENT[CONCURRENT] (link:{java9-javadoc}/java/util/stream/Collector$Characteristics.html#CONCURRENT[java 9]) -** [[painless-api-reference-Collector-Characteristics-IDENTITY_FINISH]]static <> link:{java8-javadoc}/java/util/stream/Collector$Characteristics.html#IDENTITY_FINISH[IDENTITY_FINISH] (link:{java9-javadoc}/java/util/stream/Collector$Characteristics.html#IDENTITY_FINISH[java 9]) -** [[painless-api-reference-Collector-Characteristics-UNORDERED]]static <> link:{java8-javadoc}/java/util/stream/Collector$Characteristics.html#UNORDERED[UNORDERED] (link:{java9-javadoc}/java/util/stream/Collector$Characteristics.html#UNORDERED[java 9]) -* ++[[painless-api-reference-Collector-Characteristics-valueOf-1]]static <> link:{java8-javadoc}/java/util/stream/Collector$Characteristics.html#valueOf%2Djava.lang.String%2D[valueOf](<>)++ (link:{java9-javadoc}/java/util/stream/Collector$Characteristics.html#valueOf%2Djava.lang.String%2D[java 9]) -* ++[[painless-api-reference-Collector-Characteristics-values-0]]static <>[] link:{java8-javadoc}/java/util/stream/Collector$Characteristics.html#values%2D%2D[values]()++ (link:{java9-javadoc}/java/util/stream/Collector$Characteristics.html#values%2D%2D[java 9]) +** [[painless-api-reference-Collector-Characteristics-CONCURRENT]]static <> link:{java8-javadoc}/java/util/stream/Collector.Characteristics.html#CONCURRENT[CONCURRENT] (link:{java9-javadoc}/java/util/stream/Collector.Characteristics.html#CONCURRENT[java 9]) +** [[painless-api-reference-Collector-Characteristics-IDENTITY_FINISH]]static <> link:{java8-javadoc}/java/util/stream/Collector.Characteristics.html#IDENTITY_FINISH[IDENTITY_FINISH] (link:{java9-javadoc}/java/util/stream/Collector.Characteristics.html#IDENTITY_FINISH[java 9]) +** [[painless-api-reference-Collector-Characteristics-UNORDERED]]static <> link:{java8-javadoc}/java/util/stream/Collector.Characteristics.html#UNORDERED[UNORDERED] (link:{java9-javadoc}/java/util/stream/Collector.Characteristics.html#UNORDERED[java 9]) +* ++[[painless-api-reference-Collector-Characteristics-valueOf-1]]static <> link:{java8-javadoc}/java/util/stream/Collector.Characteristics.html#valueOf%2Djava.lang.String%2D[valueOf](<>)++ 
(link:{java9-javadoc}/java/util/stream/Collector.Characteristics.html#valueOf%2Djava.lang.String%2D[java 9]) +* ++[[painless-api-reference-Collector-Characteristics-values-0]]static <>[] link:{java8-javadoc}/java/util/stream/Collector.Characteristics.html#values%2D%2D[values]()++ (link:{java9-javadoc}/java/util/stream/Collector.Characteristics.html#values%2D%2D[java 9]) * Inherits methods from ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/DateFormat.Field.asciidoc b/docs/painless/painless-api-reference/DateFormat.Field.asciidoc index b9ad2cda2926c..b949cfe830cdc 100644 --- a/docs/painless/painless-api-reference/DateFormat.Field.asciidoc +++ b/docs/painless/painless-api-reference/DateFormat.Field.asciidoc @@ -4,24 +4,24 @@ Rebuild by running `gradle generatePainlessApi`. //// [[painless-api-reference-DateFormat-Field]]++DateFormat.Field++:: -** [[painless-api-reference-DateFormat-Field-AM_PM]]static <> link:{java8-javadoc}/java/text/DateFormat$Field.html#AM_PM[AM_PM] (link:{java9-javadoc}/java/text/DateFormat$Field.html#AM_PM[java 9]) -** [[painless-api-reference-DateFormat-Field-DAY_OF_MONTH]]static <> link:{java8-javadoc}/java/text/DateFormat$Field.html#DAY_OF_MONTH[DAY_OF_MONTH] (link:{java9-javadoc}/java/text/DateFormat$Field.html#DAY_OF_MONTH[java 9]) -** [[painless-api-reference-DateFormat-Field-DAY_OF_WEEK]]static <> link:{java8-javadoc}/java/text/DateFormat$Field.html#DAY_OF_WEEK[DAY_OF_WEEK] (link:{java9-javadoc}/java/text/DateFormat$Field.html#DAY_OF_WEEK[java 9]) -** [[painless-api-reference-DateFormat-Field-DAY_OF_WEEK_IN_MONTH]]static <> link:{java8-javadoc}/java/text/DateFormat$Field.html#DAY_OF_WEEK_IN_MONTH[DAY_OF_WEEK_IN_MONTH] (link:{java9-javadoc}/java/text/DateFormat$Field.html#DAY_OF_WEEK_IN_MONTH[java 9]) -** [[painless-api-reference-DateFormat-Field-DAY_OF_YEAR]]static <> link:{java8-javadoc}/java/text/DateFormat$Field.html#DAY_OF_YEAR[DAY_OF_YEAR] (link:{java9-javadoc}/java/text/DateFormat$Field.html#DAY_OF_YEAR[java 9]) -** [[painless-api-reference-DateFormat-Field-ERA]]static <> link:{java8-javadoc}/java/text/DateFormat$Field.html#ERA[ERA] (link:{java9-javadoc}/java/text/DateFormat$Field.html#ERA[java 9]) -** [[painless-api-reference-DateFormat-Field-HOUR0]]static <> link:{java8-javadoc}/java/text/DateFormat$Field.html#HOUR0[HOUR0] (link:{java9-javadoc}/java/text/DateFormat$Field.html#HOUR0[java 9]) -** [[painless-api-reference-DateFormat-Field-HOUR1]]static <> link:{java8-javadoc}/java/text/DateFormat$Field.html#HOUR1[HOUR1] (link:{java9-javadoc}/java/text/DateFormat$Field.html#HOUR1[java 9]) -** [[painless-api-reference-DateFormat-Field-HOUR_OF_DAY0]]static <> link:{java8-javadoc}/java/text/DateFormat$Field.html#HOUR_OF_DAY0[HOUR_OF_DAY0] (link:{java9-javadoc}/java/text/DateFormat$Field.html#HOUR_OF_DAY0[java 9]) -** [[painless-api-reference-DateFormat-Field-HOUR_OF_DAY1]]static <> link:{java8-javadoc}/java/text/DateFormat$Field.html#HOUR_OF_DAY1[HOUR_OF_DAY1] (link:{java9-javadoc}/java/text/DateFormat$Field.html#HOUR_OF_DAY1[java 9]) -** [[painless-api-reference-DateFormat-Field-MILLISECOND]]static <> link:{java8-javadoc}/java/text/DateFormat$Field.html#MILLISECOND[MILLISECOND] (link:{java9-javadoc}/java/text/DateFormat$Field.html#MILLISECOND[java 9]) -** [[painless-api-reference-DateFormat-Field-MINUTE]]static <> link:{java8-javadoc}/java/text/DateFormat$Field.html#MINUTE[MINUTE] (link:{java9-javadoc}/java/text/DateFormat$Field.html#MINUTE[java 9]) -** [[painless-api-reference-DateFormat-Field-MONTH]]static <> 
link:{java8-javadoc}/java/text/DateFormat$Field.html#MONTH[MONTH] (link:{java9-javadoc}/java/text/DateFormat$Field.html#MONTH[java 9]) -** [[painless-api-reference-DateFormat-Field-SECOND]]static <> link:{java8-javadoc}/java/text/DateFormat$Field.html#SECOND[SECOND] (link:{java9-javadoc}/java/text/DateFormat$Field.html#SECOND[java 9]) -** [[painless-api-reference-DateFormat-Field-TIME_ZONE]]static <> link:{java8-javadoc}/java/text/DateFormat$Field.html#TIME_ZONE[TIME_ZONE] (link:{java9-javadoc}/java/text/DateFormat$Field.html#TIME_ZONE[java 9]) -** [[painless-api-reference-DateFormat-Field-WEEK_OF_MONTH]]static <> link:{java8-javadoc}/java/text/DateFormat$Field.html#WEEK_OF_MONTH[WEEK_OF_MONTH] (link:{java9-javadoc}/java/text/DateFormat$Field.html#WEEK_OF_MONTH[java 9]) -** [[painless-api-reference-DateFormat-Field-WEEK_OF_YEAR]]static <> link:{java8-javadoc}/java/text/DateFormat$Field.html#WEEK_OF_YEAR[WEEK_OF_YEAR] (link:{java9-javadoc}/java/text/DateFormat$Field.html#WEEK_OF_YEAR[java 9]) -** [[painless-api-reference-DateFormat-Field-YEAR]]static <> link:{java8-javadoc}/java/text/DateFormat$Field.html#YEAR[YEAR] (link:{java9-javadoc}/java/text/DateFormat$Field.html#YEAR[java 9]) -* ++[[painless-api-reference-DateFormat-Field-ofCalendarField-1]]static <> link:{java8-javadoc}/java/text/DateFormat$Field.html#ofCalendarField%2Dint%2D[ofCalendarField](int)++ (link:{java9-javadoc}/java/text/DateFormat$Field.html#ofCalendarField%2Dint%2D[java 9]) -* ++[[painless-api-reference-DateFormat-Field-getCalendarField-0]]int link:{java8-javadoc}/java/text/DateFormat$Field.html#getCalendarField%2D%2D[getCalendarField]()++ (link:{java9-javadoc}/java/text/DateFormat$Field.html#getCalendarField%2D%2D[java 9]) +** [[painless-api-reference-DateFormat-Field-AM_PM]]static <> link:{java8-javadoc}/java/text/DateFormat.Field.html#AM_PM[AM_PM] (link:{java9-javadoc}/java/text/DateFormat.Field.html#AM_PM[java 9]) +** [[painless-api-reference-DateFormat-Field-DAY_OF_MONTH]]static <> link:{java8-javadoc}/java/text/DateFormat.Field.html#DAY_OF_MONTH[DAY_OF_MONTH] (link:{java9-javadoc}/java/text/DateFormat.Field.html#DAY_OF_MONTH[java 9]) +** [[painless-api-reference-DateFormat-Field-DAY_OF_WEEK]]static <> link:{java8-javadoc}/java/text/DateFormat.Field.html#DAY_OF_WEEK[DAY_OF_WEEK] (link:{java9-javadoc}/java/text/DateFormat.Field.html#DAY_OF_WEEK[java 9]) +** [[painless-api-reference-DateFormat-Field-DAY_OF_WEEK_IN_MONTH]]static <> link:{java8-javadoc}/java/text/DateFormat.Field.html#DAY_OF_WEEK_IN_MONTH[DAY_OF_WEEK_IN_MONTH] (link:{java9-javadoc}/java/text/DateFormat.Field.html#DAY_OF_WEEK_IN_MONTH[java 9]) +** [[painless-api-reference-DateFormat-Field-DAY_OF_YEAR]]static <> link:{java8-javadoc}/java/text/DateFormat.Field.html#DAY_OF_YEAR[DAY_OF_YEAR] (link:{java9-javadoc}/java/text/DateFormat.Field.html#DAY_OF_YEAR[java 9]) +** [[painless-api-reference-DateFormat-Field-ERA]]static <> link:{java8-javadoc}/java/text/DateFormat.Field.html#ERA[ERA] (link:{java9-javadoc}/java/text/DateFormat.Field.html#ERA[java 9]) +** [[painless-api-reference-DateFormat-Field-HOUR0]]static <> link:{java8-javadoc}/java/text/DateFormat.Field.html#HOUR0[HOUR0] (link:{java9-javadoc}/java/text/DateFormat.Field.html#HOUR0[java 9]) +** [[painless-api-reference-DateFormat-Field-HOUR1]]static <> link:{java8-javadoc}/java/text/DateFormat.Field.html#HOUR1[HOUR1] (link:{java9-javadoc}/java/text/DateFormat.Field.html#HOUR1[java 9]) +** [[painless-api-reference-DateFormat-Field-HOUR_OF_DAY0]]static <> 
link:{java8-javadoc}/java/text/DateFormat.Field.html#HOUR_OF_DAY0[HOUR_OF_DAY0] (link:{java9-javadoc}/java/text/DateFormat.Field.html#HOUR_OF_DAY0[java 9]) +** [[painless-api-reference-DateFormat-Field-HOUR_OF_DAY1]]static <> link:{java8-javadoc}/java/text/DateFormat.Field.html#HOUR_OF_DAY1[HOUR_OF_DAY1] (link:{java9-javadoc}/java/text/DateFormat.Field.html#HOUR_OF_DAY1[java 9]) +** [[painless-api-reference-DateFormat-Field-MILLISECOND]]static <> link:{java8-javadoc}/java/text/DateFormat.Field.html#MILLISECOND[MILLISECOND] (link:{java9-javadoc}/java/text/DateFormat.Field.html#MILLISECOND[java 9]) +** [[painless-api-reference-DateFormat-Field-MINUTE]]static <> link:{java8-javadoc}/java/text/DateFormat.Field.html#MINUTE[MINUTE] (link:{java9-javadoc}/java/text/DateFormat.Field.html#MINUTE[java 9]) +** [[painless-api-reference-DateFormat-Field-MONTH]]static <> link:{java8-javadoc}/java/text/DateFormat.Field.html#MONTH[MONTH] (link:{java9-javadoc}/java/text/DateFormat.Field.html#MONTH[java 9]) +** [[painless-api-reference-DateFormat-Field-SECOND]]static <> link:{java8-javadoc}/java/text/DateFormat.Field.html#SECOND[SECOND] (link:{java9-javadoc}/java/text/DateFormat.Field.html#SECOND[java 9]) +** [[painless-api-reference-DateFormat-Field-TIME_ZONE]]static <> link:{java8-javadoc}/java/text/DateFormat.Field.html#TIME_ZONE[TIME_ZONE] (link:{java9-javadoc}/java/text/DateFormat.Field.html#TIME_ZONE[java 9]) +** [[painless-api-reference-DateFormat-Field-WEEK_OF_MONTH]]static <> link:{java8-javadoc}/java/text/DateFormat.Field.html#WEEK_OF_MONTH[WEEK_OF_MONTH] (link:{java9-javadoc}/java/text/DateFormat.Field.html#WEEK_OF_MONTH[java 9]) +** [[painless-api-reference-DateFormat-Field-WEEK_OF_YEAR]]static <> link:{java8-javadoc}/java/text/DateFormat.Field.html#WEEK_OF_YEAR[WEEK_OF_YEAR] (link:{java9-javadoc}/java/text/DateFormat.Field.html#WEEK_OF_YEAR[java 9]) +** [[painless-api-reference-DateFormat-Field-YEAR]]static <> link:{java8-javadoc}/java/text/DateFormat.Field.html#YEAR[YEAR] (link:{java9-javadoc}/java/text/DateFormat.Field.html#YEAR[java 9]) +* ++[[painless-api-reference-DateFormat-Field-ofCalendarField-1]]static <> link:{java8-javadoc}/java/text/DateFormat.Field.html#ofCalendarField%2Dint%2D[ofCalendarField](int)++ (link:{java9-javadoc}/java/text/DateFormat.Field.html#ofCalendarField%2Dint%2D[java 9]) +* ++[[painless-api-reference-DateFormat-Field-getCalendarField-0]]int link:{java8-javadoc}/java/text/DateFormat.Field.html#getCalendarField%2D%2D[getCalendarField]()++ (link:{java9-javadoc}/java/text/DateFormat.Field.html#getCalendarField%2D%2D[java 9]) * Inherits methods from ++<>++ diff --git a/docs/painless/painless-api-reference/DoubleStream.Builder.asciidoc b/docs/painless/painless-api-reference/DoubleStream.Builder.asciidoc index 278a3f99a9ab9..8f59249115f8f 100644 --- a/docs/painless/painless-api-reference/DoubleStream.Builder.asciidoc +++ b/docs/painless/painless-api-reference/DoubleStream.Builder.asciidoc @@ -4,6 +4,6 @@ Rebuild by running `gradle generatePainlessApi`. 
//// [[painless-api-reference-DoubleStream-Builder]]++DoubleStream.Builder++:: -* ++[[painless-api-reference-DoubleStream-Builder-add-1]]<> link:{java8-javadoc}/java/util/stream/DoubleStream$Builder.html#add%2Ddouble%2D[add](double)++ (link:{java9-javadoc}/java/util/stream/DoubleStream$Builder.html#add%2Ddouble%2D[java 9]) -* ++[[painless-api-reference-DoubleStream-Builder-build-0]]<> link:{java8-javadoc}/java/util/stream/DoubleStream$Builder.html#build%2D%2D[build]()++ (link:{java9-javadoc}/java/util/stream/DoubleStream$Builder.html#build%2D%2D[java 9]) +* ++[[painless-api-reference-DoubleStream-Builder-add-1]]<> link:{java8-javadoc}/java/util/stream/DoubleStream.Builder.html#add%2Ddouble%2D[add](double)++ (link:{java9-javadoc}/java/util/stream/DoubleStream.Builder.html#add%2Ddouble%2D[java 9]) +* ++[[painless-api-reference-DoubleStream-Builder-build-0]]<> link:{java8-javadoc}/java/util/stream/DoubleStream.Builder.html#build%2D%2D[build]()++ (link:{java9-javadoc}/java/util/stream/DoubleStream.Builder.html#build%2D%2D[java 9]) * Inherits methods from ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/Formatter.BigDecimalLayoutForm.asciidoc b/docs/painless/painless-api-reference/Formatter.BigDecimalLayoutForm.asciidoc index 92d8852e13c09..4fb9ce48062e6 100644 --- a/docs/painless/painless-api-reference/Formatter.BigDecimalLayoutForm.asciidoc +++ b/docs/painless/painless-api-reference/Formatter.BigDecimalLayoutForm.asciidoc @@ -4,6 +4,6 @@ Rebuild by running `gradle generatePainlessApi`. //// [[painless-api-reference-Formatter-BigDecimalLayoutForm]]++Formatter.BigDecimalLayoutForm++:: -** [[painless-api-reference-Formatter-BigDecimalLayoutForm-DECIMAL_FLOAT]]static <> link:{java8-javadoc}/java/util/Formatter$BigDecimalLayoutForm.html#DECIMAL_FLOAT[DECIMAL_FLOAT] (link:{java9-javadoc}/java/util/Formatter$BigDecimalLayoutForm.html#DECIMAL_FLOAT[java 9]) -** [[painless-api-reference-Formatter-BigDecimalLayoutForm-SCIENTIFIC]]static <> link:{java8-javadoc}/java/util/Formatter$BigDecimalLayoutForm.html#SCIENTIFIC[SCIENTIFIC] (link:{java9-javadoc}/java/util/Formatter$BigDecimalLayoutForm.html#SCIENTIFIC[java 9]) +** [[painless-api-reference-Formatter-BigDecimalLayoutForm-DECIMAL_FLOAT]]static <> link:{java8-javadoc}/java/util/Formatter.BigDecimalLayoutForm.html#DECIMAL_FLOAT[DECIMAL_FLOAT] (link:{java9-javadoc}/java/util/Formatter.BigDecimalLayoutForm.html#DECIMAL_FLOAT[java 9]) +** [[painless-api-reference-Formatter-BigDecimalLayoutForm-SCIENTIFIC]]static <> link:{java8-javadoc}/java/util/Formatter.BigDecimalLayoutForm.html#SCIENTIFIC[SCIENTIFIC] (link:{java9-javadoc}/java/util/Formatter.BigDecimalLayoutForm.html#SCIENTIFIC[java 9]) * Inherits methods from ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/IntStream.Builder.asciidoc b/docs/painless/painless-api-reference/IntStream.Builder.asciidoc index d99913881733a..0db51e7164b22 100644 --- a/docs/painless/painless-api-reference/IntStream.Builder.asciidoc +++ b/docs/painless/painless-api-reference/IntStream.Builder.asciidoc @@ -4,6 +4,6 @@ Rebuild by running `gradle generatePainlessApi`. 
//// [[painless-api-reference-IntStream-Builder]]++IntStream.Builder++:: -* ++[[painless-api-reference-IntStream-Builder-add-1]]<> link:{java8-javadoc}/java/util/stream/IntStream$Builder.html#add%2Dint%2D[add](int)++ (link:{java9-javadoc}/java/util/stream/IntStream$Builder.html#add%2Dint%2D[java 9]) -* ++[[painless-api-reference-IntStream-Builder-build-0]]<> link:{java8-javadoc}/java/util/stream/IntStream$Builder.html#build%2D%2D[build]()++ (link:{java9-javadoc}/java/util/stream/IntStream$Builder.html#build%2D%2D[java 9]) +* ++[[painless-api-reference-IntStream-Builder-add-1]]<> link:{java8-javadoc}/java/util/stream/IntStream.Builder.html#add%2Dint%2D[add](int)++ (link:{java9-javadoc}/java/util/stream/IntStream.Builder.html#add%2Dint%2D[java 9]) +* ++[[painless-api-reference-IntStream-Builder-build-0]]<> link:{java8-javadoc}/java/util/stream/IntStream.Builder.html#build%2D%2D[build]()++ (link:{java9-javadoc}/java/util/stream/IntStream.Builder.html#build%2D%2D[java 9]) * Inherits methods from ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/Locale.Builder.asciidoc b/docs/painless/painless-api-reference/Locale.Builder.asciidoc index 8fad8099de9d9..3677c81e60411 100644 --- a/docs/painless/painless-api-reference/Locale.Builder.asciidoc +++ b/docs/painless/painless-api-reference/Locale.Builder.asciidoc @@ -4,18 +4,18 @@ Rebuild by running `gradle generatePainlessApi`. //// [[painless-api-reference-Locale-Builder]]++Locale.Builder++:: -* ++[[painless-api-reference-Locale-Builder-Locale.Builder-0]]link:{java8-javadoc}/java/util/Locale$Builder.html#Locale.Builder%2D%2D[Locale.Builder]()++ (link:{java9-javadoc}/java/util/Locale$Builder.html#Locale.Builder%2D%2D[java 9]) -* ++[[painless-api-reference-Locale-Builder-addUnicodeLocaleAttribute-1]]<> link:{java8-javadoc}/java/util/Locale$Builder.html#addUnicodeLocaleAttribute%2Djava.lang.String%2D[addUnicodeLocaleAttribute](<>)++ (link:{java9-javadoc}/java/util/Locale$Builder.html#addUnicodeLocaleAttribute%2Djava.lang.String%2D[java 9]) -* ++[[painless-api-reference-Locale-Builder-build-0]]<> link:{java8-javadoc}/java/util/Locale$Builder.html#build%2D%2D[build]()++ (link:{java9-javadoc}/java/util/Locale$Builder.html#build%2D%2D[java 9]) -* ++[[painless-api-reference-Locale-Builder-clear-0]]<> link:{java8-javadoc}/java/util/Locale$Builder.html#clear%2D%2D[clear]()++ (link:{java9-javadoc}/java/util/Locale$Builder.html#clear%2D%2D[java 9]) -* ++[[painless-api-reference-Locale-Builder-clearExtensions-0]]<> link:{java8-javadoc}/java/util/Locale$Builder.html#clearExtensions%2D%2D[clearExtensions]()++ (link:{java9-javadoc}/java/util/Locale$Builder.html#clearExtensions%2D%2D[java 9]) -* ++[[painless-api-reference-Locale-Builder-removeUnicodeLocaleAttribute-1]]<> link:{java8-javadoc}/java/util/Locale$Builder.html#removeUnicodeLocaleAttribute%2Djava.lang.String%2D[removeUnicodeLocaleAttribute](<>)++ (link:{java9-javadoc}/java/util/Locale$Builder.html#removeUnicodeLocaleAttribute%2Djava.lang.String%2D[java 9]) -* ++[[painless-api-reference-Locale-Builder-setExtension-2]]<> link:{java8-javadoc}/java/util/Locale$Builder.html#setExtension%2Dchar%2Djava.lang.String%2D[setExtension](char, <>)++ (link:{java9-javadoc}/java/util/Locale$Builder.html#setExtension%2Dchar%2Djava.lang.String%2D[java 9]) -* ++[[painless-api-reference-Locale-Builder-setLanguage-1]]<> link:{java8-javadoc}/java/util/Locale$Builder.html#setLanguage%2Djava.lang.String%2D[setLanguage](<>)++ (link:{java9-javadoc}/java/util/Locale$Builder.html#setLanguage%2Djava.lang.String%2D[java 9]) 
-* ++[[painless-api-reference-Locale-Builder-setLanguageTag-1]]<> link:{java8-javadoc}/java/util/Locale$Builder.html#setLanguageTag%2Djava.lang.String%2D[setLanguageTag](<>)++ (link:{java9-javadoc}/java/util/Locale$Builder.html#setLanguageTag%2Djava.lang.String%2D[java 9]) -* ++[[painless-api-reference-Locale-Builder-setLocale-1]]<> link:{java8-javadoc}/java/util/Locale$Builder.html#setLocale%2Djava.util.Locale%2D[setLocale](<>)++ (link:{java9-javadoc}/java/util/Locale$Builder.html#setLocale%2Djava.util.Locale%2D[java 9]) -* ++[[painless-api-reference-Locale-Builder-setRegion-1]]<> link:{java8-javadoc}/java/util/Locale$Builder.html#setRegion%2Djava.lang.String%2D[setRegion](<>)++ (link:{java9-javadoc}/java/util/Locale$Builder.html#setRegion%2Djava.lang.String%2D[java 9]) -* ++[[painless-api-reference-Locale-Builder-setScript-1]]<> link:{java8-javadoc}/java/util/Locale$Builder.html#setScript%2Djava.lang.String%2D[setScript](<>)++ (link:{java9-javadoc}/java/util/Locale$Builder.html#setScript%2Djava.lang.String%2D[java 9]) -* ++[[painless-api-reference-Locale-Builder-setUnicodeLocaleKeyword-2]]<> link:{java8-javadoc}/java/util/Locale$Builder.html#setUnicodeLocaleKeyword%2Djava.lang.String%2Djava.lang.String%2D[setUnicodeLocaleKeyword](<>, <>)++ (link:{java9-javadoc}/java/util/Locale$Builder.html#setUnicodeLocaleKeyword%2Djava.lang.String%2Djava.lang.String%2D[java 9]) -* ++[[painless-api-reference-Locale-Builder-setVariant-1]]<> link:{java8-javadoc}/java/util/Locale$Builder.html#setVariant%2Djava.lang.String%2D[setVariant](<>)++ (link:{java9-javadoc}/java/util/Locale$Builder.html#setVariant%2Djava.lang.String%2D[java 9]) +* ++[[painless-api-reference-Locale-Builder-Locale.Builder-0]]link:{java8-javadoc}/java/util/Locale.Builder.html#Locale.Builder%2D%2D[Locale.Builder]()++ (link:{java9-javadoc}/java/util/Locale.Builder.html#Locale.Builder%2D%2D[java 9]) +* ++[[painless-api-reference-Locale-Builder-addUnicodeLocaleAttribute-1]]<> link:{java8-javadoc}/java/util/Locale.Builder.html#addUnicodeLocaleAttribute%2Djava.lang.String%2D[addUnicodeLocaleAttribute](<>)++ (link:{java9-javadoc}/java/util/Locale.Builder.html#addUnicodeLocaleAttribute%2Djava.lang.String%2D[java 9]) +* ++[[painless-api-reference-Locale-Builder-build-0]]<> link:{java8-javadoc}/java/util/Locale.Builder.html#build%2D%2D[build]()++ (link:{java9-javadoc}/java/util/Locale.Builder.html#build%2D%2D[java 9]) +* ++[[painless-api-reference-Locale-Builder-clear-0]]<> link:{java8-javadoc}/java/util/Locale.Builder.html#clear%2D%2D[clear]()++ (link:{java9-javadoc}/java/util/Locale.Builder.html#clear%2D%2D[java 9]) +* ++[[painless-api-reference-Locale-Builder-clearExtensions-0]]<> link:{java8-javadoc}/java/util/Locale.Builder.html#clearExtensions%2D%2D[clearExtensions]()++ (link:{java9-javadoc}/java/util/Locale.Builder.html#clearExtensions%2D%2D[java 9]) +* ++[[painless-api-reference-Locale-Builder-removeUnicodeLocaleAttribute-1]]<> link:{java8-javadoc}/java/util/Locale.Builder.html#removeUnicodeLocaleAttribute%2Djava.lang.String%2D[removeUnicodeLocaleAttribute](<>)++ (link:{java9-javadoc}/java/util/Locale.Builder.html#removeUnicodeLocaleAttribute%2Djava.lang.String%2D[java 9]) +* ++[[painless-api-reference-Locale-Builder-setExtension-2]]<> link:{java8-javadoc}/java/util/Locale.Builder.html#setExtension%2Dchar%2Djava.lang.String%2D[setExtension](char, <>)++ (link:{java9-javadoc}/java/util/Locale.Builder.html#setExtension%2Dchar%2Djava.lang.String%2D[java 9]) +* ++[[painless-api-reference-Locale-Builder-setLanguage-1]]<> 
link:{java8-javadoc}/java/util/Locale.Builder.html#setLanguage%2Djava.lang.String%2D[setLanguage](<>)++ (link:{java9-javadoc}/java/util/Locale.Builder.html#setLanguage%2Djava.lang.String%2D[java 9]) +* ++[[painless-api-reference-Locale-Builder-setLanguageTag-1]]<> link:{java8-javadoc}/java/util/Locale.Builder.html#setLanguageTag%2Djava.lang.String%2D[setLanguageTag](<>)++ (link:{java9-javadoc}/java/util/Locale.Builder.html#setLanguageTag%2Djava.lang.String%2D[java 9]) +* ++[[painless-api-reference-Locale-Builder-setLocale-1]]<> link:{java8-javadoc}/java/util/Locale.Builder.html#setLocale%2Djava.util.Locale%2D[setLocale](<>)++ (link:{java9-javadoc}/java/util/Locale.Builder.html#setLocale%2Djava.util.Locale%2D[java 9]) +* ++[[painless-api-reference-Locale-Builder-setRegion-1]]<> link:{java8-javadoc}/java/util/Locale.Builder.html#setRegion%2Djava.lang.String%2D[setRegion](<>)++ (link:{java9-javadoc}/java/util/Locale.Builder.html#setRegion%2Djava.lang.String%2D[java 9]) +* ++[[painless-api-reference-Locale-Builder-setScript-1]]<> link:{java8-javadoc}/java/util/Locale.Builder.html#setScript%2Djava.lang.String%2D[setScript](<>)++ (link:{java9-javadoc}/java/util/Locale.Builder.html#setScript%2Djava.lang.String%2D[java 9]) +* ++[[painless-api-reference-Locale-Builder-setUnicodeLocaleKeyword-2]]<> link:{java8-javadoc}/java/util/Locale.Builder.html#setUnicodeLocaleKeyword%2Djava.lang.String%2Djava.lang.String%2D[setUnicodeLocaleKeyword](<>, <>)++ (link:{java9-javadoc}/java/util/Locale.Builder.html#setUnicodeLocaleKeyword%2Djava.lang.String%2Djava.lang.String%2D[java 9]) +* ++[[painless-api-reference-Locale-Builder-setVariant-1]]<> link:{java8-javadoc}/java/util/Locale.Builder.html#setVariant%2Djava.lang.String%2D[setVariant](<>)++ (link:{java9-javadoc}/java/util/Locale.Builder.html#setVariant%2Djava.lang.String%2D[java 9]) * Inherits methods from ++<>++ diff --git a/docs/painless/painless-api-reference/Locale.Category.asciidoc b/docs/painless/painless-api-reference/Locale.Category.asciidoc index 37a57018963dc..96b1cd3fd1011 100644 --- a/docs/painless/painless-api-reference/Locale.Category.asciidoc +++ b/docs/painless/painless-api-reference/Locale.Category.asciidoc @@ -4,8 +4,8 @@ Rebuild by running `gradle generatePainlessApi`. 
//// [[painless-api-reference-Locale-Category]]++Locale.Category++:: -** [[painless-api-reference-Locale-Category-DISPLAY]]static <> link:{java8-javadoc}/java/util/Locale$Category.html#DISPLAY[DISPLAY] (link:{java9-javadoc}/java/util/Locale$Category.html#DISPLAY[java 9]) -** [[painless-api-reference-Locale-Category-FORMAT]]static <> link:{java8-javadoc}/java/util/Locale$Category.html#FORMAT[FORMAT] (link:{java9-javadoc}/java/util/Locale$Category.html#FORMAT[java 9]) -* ++[[painless-api-reference-Locale-Category-valueOf-1]]static <> link:{java8-javadoc}/java/util/Locale$Category.html#valueOf%2Djava.lang.String%2D[valueOf](<>)++ (link:{java9-javadoc}/java/util/Locale$Category.html#valueOf%2Djava.lang.String%2D[java 9]) -* ++[[painless-api-reference-Locale-Category-values-0]]static <>[] link:{java8-javadoc}/java/util/Locale$Category.html#values%2D%2D[values]()++ (link:{java9-javadoc}/java/util/Locale$Category.html#values%2D%2D[java 9]) +** [[painless-api-reference-Locale-Category-DISPLAY]]static <> link:{java8-javadoc}/java/util/Locale.Category.html#DISPLAY[DISPLAY] (link:{java9-javadoc}/java/util/Locale.Category.html#DISPLAY[java 9]) +** [[painless-api-reference-Locale-Category-FORMAT]]static <> link:{java8-javadoc}/java/util/Locale.Category.html#FORMAT[FORMAT] (link:{java9-javadoc}/java/util/Locale.Category.html#FORMAT[java 9]) +* ++[[painless-api-reference-Locale-Category-valueOf-1]]static <> link:{java8-javadoc}/java/util/Locale.Category.html#valueOf%2Djava.lang.String%2D[valueOf](<>)++ (link:{java9-javadoc}/java/util/Locale.Category.html#valueOf%2Djava.lang.String%2D[java 9]) +* ++[[painless-api-reference-Locale-Category-values-0]]static <>[] link:{java8-javadoc}/java/util/Locale.Category.html#values%2D%2D[values]()++ (link:{java9-javadoc}/java/util/Locale.Category.html#values%2D%2D[java 9]) * Inherits methods from ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/Locale.FilteringMode.asciidoc b/docs/painless/painless-api-reference/Locale.FilteringMode.asciidoc index e4399e146fcc0..f513e4c239668 100644 --- a/docs/painless/painless-api-reference/Locale.FilteringMode.asciidoc +++ b/docs/painless/painless-api-reference/Locale.FilteringMode.asciidoc @@ -4,11 +4,11 @@ Rebuild by running `gradle generatePainlessApi`. 
//// [[painless-api-reference-Locale-FilteringMode]]++Locale.FilteringMode++:: -** [[painless-api-reference-Locale-FilteringMode-AUTOSELECT_FILTERING]]static <> link:{java8-javadoc}/java/util/Locale$FilteringMode.html#AUTOSELECT_FILTERING[AUTOSELECT_FILTERING] (link:{java9-javadoc}/java/util/Locale$FilteringMode.html#AUTOSELECT_FILTERING[java 9]) -** [[painless-api-reference-Locale-FilteringMode-EXTENDED_FILTERING]]static <> link:{java8-javadoc}/java/util/Locale$FilteringMode.html#EXTENDED_FILTERING[EXTENDED_FILTERING] (link:{java9-javadoc}/java/util/Locale$FilteringMode.html#EXTENDED_FILTERING[java 9]) -** [[painless-api-reference-Locale-FilteringMode-IGNORE_EXTENDED_RANGES]]static <> link:{java8-javadoc}/java/util/Locale$FilteringMode.html#IGNORE_EXTENDED_RANGES[IGNORE_EXTENDED_RANGES] (link:{java9-javadoc}/java/util/Locale$FilteringMode.html#IGNORE_EXTENDED_RANGES[java 9]) -** [[painless-api-reference-Locale-FilteringMode-MAP_EXTENDED_RANGES]]static <> link:{java8-javadoc}/java/util/Locale$FilteringMode.html#MAP_EXTENDED_RANGES[MAP_EXTENDED_RANGES] (link:{java9-javadoc}/java/util/Locale$FilteringMode.html#MAP_EXTENDED_RANGES[java 9]) -** [[painless-api-reference-Locale-FilteringMode-REJECT_EXTENDED_RANGES]]static <> link:{java8-javadoc}/java/util/Locale$FilteringMode.html#REJECT_EXTENDED_RANGES[REJECT_EXTENDED_RANGES] (link:{java9-javadoc}/java/util/Locale$FilteringMode.html#REJECT_EXTENDED_RANGES[java 9]) -* ++[[painless-api-reference-Locale-FilteringMode-valueOf-1]]static <> link:{java8-javadoc}/java/util/Locale$FilteringMode.html#valueOf%2Djava.lang.String%2D[valueOf](<>)++ (link:{java9-javadoc}/java/util/Locale$FilteringMode.html#valueOf%2Djava.lang.String%2D[java 9]) -* ++[[painless-api-reference-Locale-FilteringMode-values-0]]static <>[] link:{java8-javadoc}/java/util/Locale$FilteringMode.html#values%2D%2D[values]()++ (link:{java9-javadoc}/java/util/Locale$FilteringMode.html#values%2D%2D[java 9]) +** [[painless-api-reference-Locale-FilteringMode-AUTOSELECT_FILTERING]]static <> link:{java8-javadoc}/java/util/Locale.FilteringMode.html#AUTOSELECT_FILTERING[AUTOSELECT_FILTERING] (link:{java9-javadoc}/java/util/Locale.FilteringMode.html#AUTOSELECT_FILTERING[java 9]) +** [[painless-api-reference-Locale-FilteringMode-EXTENDED_FILTERING]]static <> link:{java8-javadoc}/java/util/Locale.FilteringMode.html#EXTENDED_FILTERING[EXTENDED_FILTERING] (link:{java9-javadoc}/java/util/Locale.FilteringMode.html#EXTENDED_FILTERING[java 9]) +** [[painless-api-reference-Locale-FilteringMode-IGNORE_EXTENDED_RANGES]]static <> link:{java8-javadoc}/java/util/Locale.FilteringMode.html#IGNORE_EXTENDED_RANGES[IGNORE_EXTENDED_RANGES] (link:{java9-javadoc}/java/util/Locale.FilteringMode.html#IGNORE_EXTENDED_RANGES[java 9]) +** [[painless-api-reference-Locale-FilteringMode-MAP_EXTENDED_RANGES]]static <> link:{java8-javadoc}/java/util/Locale.FilteringMode.html#MAP_EXTENDED_RANGES[MAP_EXTENDED_RANGES] (link:{java9-javadoc}/java/util/Locale.FilteringMode.html#MAP_EXTENDED_RANGES[java 9]) +** [[painless-api-reference-Locale-FilteringMode-REJECT_EXTENDED_RANGES]]static <> link:{java8-javadoc}/java/util/Locale.FilteringMode.html#REJECT_EXTENDED_RANGES[REJECT_EXTENDED_RANGES] (link:{java9-javadoc}/java/util/Locale.FilteringMode.html#REJECT_EXTENDED_RANGES[java 9]) +* ++[[painless-api-reference-Locale-FilteringMode-valueOf-1]]static <> link:{java8-javadoc}/java/util/Locale.FilteringMode.html#valueOf%2Djava.lang.String%2D[valueOf](<>)++ 
(link:{java9-javadoc}/java/util/Locale.FilteringMode.html#valueOf%2Djava.lang.String%2D[java 9]) +* ++[[painless-api-reference-Locale-FilteringMode-values-0]]static <>[] link:{java8-javadoc}/java/util/Locale.FilteringMode.html#values%2D%2D[values]()++ (link:{java9-javadoc}/java/util/Locale.FilteringMode.html#values%2D%2D[java 9]) * Inherits methods from ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/Locale.LanguageRange.asciidoc b/docs/painless/painless-api-reference/Locale.LanguageRange.asciidoc index 0e0038d3dd68d..a0a76b3282021 100644 --- a/docs/painless/painless-api-reference/Locale.LanguageRange.asciidoc +++ b/docs/painless/painless-api-reference/Locale.LanguageRange.asciidoc @@ -4,13 +4,13 @@ Rebuild by running `gradle generatePainlessApi`. //// [[painless-api-reference-Locale-LanguageRange]]++Locale.LanguageRange++:: -** [[painless-api-reference-Locale-LanguageRange-MAX_WEIGHT]]static double link:{java8-javadoc}/java/util/Locale$LanguageRange.html#MAX_WEIGHT[MAX_WEIGHT] (link:{java9-javadoc}/java/util/Locale$LanguageRange.html#MAX_WEIGHT[java 9]) -** [[painless-api-reference-Locale-LanguageRange-MIN_WEIGHT]]static double link:{java8-javadoc}/java/util/Locale$LanguageRange.html#MIN_WEIGHT[MIN_WEIGHT] (link:{java9-javadoc}/java/util/Locale$LanguageRange.html#MIN_WEIGHT[java 9]) -* ++[[painless-api-reference-Locale-LanguageRange-mapEquivalents-2]]static <> link:{java8-javadoc}/java/util/Locale$LanguageRange.html#mapEquivalents%2Djava.util.List%2Djava.util.Map%2D[mapEquivalents](<>, <>)++ (link:{java9-javadoc}/java/util/Locale$LanguageRange.html#mapEquivalents%2Djava.util.List%2Djava.util.Map%2D[java 9]) -* ++[[painless-api-reference-Locale-LanguageRange-parse-1]]static <> link:{java8-javadoc}/java/util/Locale$LanguageRange.html#parse%2Djava.lang.String%2D[parse](<>)++ (link:{java9-javadoc}/java/util/Locale$LanguageRange.html#parse%2Djava.lang.String%2D[java 9]) -* ++[[painless-api-reference-Locale-LanguageRange-parse-2]]static <> link:{java8-javadoc}/java/util/Locale$LanguageRange.html#parse%2Djava.lang.String%2Djava.util.Map%2D[parse](<>, <>)++ (link:{java9-javadoc}/java/util/Locale$LanguageRange.html#parse%2Djava.lang.String%2Djava.util.Map%2D[java 9]) -* ++[[painless-api-reference-Locale-LanguageRange-Locale.LanguageRange-1]]link:{java8-javadoc}/java/util/Locale$LanguageRange.html#Locale.LanguageRange%2Djava.lang.String%2D[Locale.LanguageRange](<>)++ (link:{java9-javadoc}/java/util/Locale$LanguageRange.html#Locale.LanguageRange%2Djava.lang.String%2D[java 9]) -* ++[[painless-api-reference-Locale-LanguageRange-Locale.LanguageRange-2]]link:{java8-javadoc}/java/util/Locale$LanguageRange.html#Locale.LanguageRange%2Djava.lang.String%2Ddouble%2D[Locale.LanguageRange](<>, double)++ (link:{java9-javadoc}/java/util/Locale$LanguageRange.html#Locale.LanguageRange%2Djava.lang.String%2Ddouble%2D[java 9]) -* ++[[painless-api-reference-Locale-LanguageRange-getRange-0]]<> link:{java8-javadoc}/java/util/Locale$LanguageRange.html#getRange%2D%2D[getRange]()++ (link:{java9-javadoc}/java/util/Locale$LanguageRange.html#getRange%2D%2D[java 9]) -* ++[[painless-api-reference-Locale-LanguageRange-getWeight-0]]double link:{java8-javadoc}/java/util/Locale$LanguageRange.html#getWeight%2D%2D[getWeight]()++ (link:{java9-javadoc}/java/util/Locale$LanguageRange.html#getWeight%2D%2D[java 9]) +** [[painless-api-reference-Locale-LanguageRange-MAX_WEIGHT]]static double link:{java8-javadoc}/java/util/Locale.LanguageRange.html#MAX_WEIGHT[MAX_WEIGHT] 
(link:{java9-javadoc}/java/util/Locale.LanguageRange.html#MAX_WEIGHT[java 9]) +** [[painless-api-reference-Locale-LanguageRange-MIN_WEIGHT]]static double link:{java8-javadoc}/java/util/Locale.LanguageRange.html#MIN_WEIGHT[MIN_WEIGHT] (link:{java9-javadoc}/java/util/Locale.LanguageRange.html#MIN_WEIGHT[java 9]) +* ++[[painless-api-reference-Locale-LanguageRange-mapEquivalents-2]]static <> link:{java8-javadoc}/java/util/Locale.LanguageRange.html#mapEquivalents%2Djava.util.List%2Djava.util.Map%2D[mapEquivalents](<>, <>)++ (link:{java9-javadoc}/java/util/Locale.LanguageRange.html#mapEquivalents%2Djava.util.List%2Djava.util.Map%2D[java 9]) +* ++[[painless-api-reference-Locale-LanguageRange-parse-1]]static <> link:{java8-javadoc}/java/util/Locale.LanguageRange.html#parse%2Djava.lang.String%2D[parse](<>)++ (link:{java9-javadoc}/java/util/Locale.LanguageRange.html#parse%2Djava.lang.String%2D[java 9]) +* ++[[painless-api-reference-Locale-LanguageRange-parse-2]]static <> link:{java8-javadoc}/java/util/Locale.LanguageRange.html#parse%2Djava.lang.String%2Djava.util.Map%2D[parse](<>, <>)++ (link:{java9-javadoc}/java/util/Locale.LanguageRange.html#parse%2Djava.lang.String%2Djava.util.Map%2D[java 9]) +* ++[[painless-api-reference-Locale-LanguageRange-Locale.LanguageRange-1]]link:{java8-javadoc}/java/util/Locale.LanguageRange.html#Locale.LanguageRange%2Djava.lang.String%2D[Locale.LanguageRange](<>)++ (link:{java9-javadoc}/java/util/Locale.LanguageRange.html#Locale.LanguageRange%2Djava.lang.String%2D[java 9]) +* ++[[painless-api-reference-Locale-LanguageRange-Locale.LanguageRange-2]]link:{java8-javadoc}/java/util/Locale.LanguageRange.html#Locale.LanguageRange%2Djava.lang.String%2Ddouble%2D[Locale.LanguageRange](<>, double)++ (link:{java9-javadoc}/java/util/Locale.LanguageRange.html#Locale.LanguageRange%2Djava.lang.String%2Ddouble%2D[java 9]) +* ++[[painless-api-reference-Locale-LanguageRange-getRange-0]]<> link:{java8-javadoc}/java/util/Locale.LanguageRange.html#getRange%2D%2D[getRange]()++ (link:{java9-javadoc}/java/util/Locale.LanguageRange.html#getRange%2D%2D[java 9]) +* ++[[painless-api-reference-Locale-LanguageRange-getWeight-0]]double link:{java8-javadoc}/java/util/Locale.LanguageRange.html#getWeight%2D%2D[getWeight]()++ (link:{java9-javadoc}/java/util/Locale.LanguageRange.html#getWeight%2D%2D[java 9]) * Inherits methods from ++<>++ diff --git a/docs/painless/painless-api-reference/LongStream.Builder.asciidoc b/docs/painless/painless-api-reference/LongStream.Builder.asciidoc index 98c55b149ae68..a5d8d0a874cb0 100644 --- a/docs/painless/painless-api-reference/LongStream.Builder.asciidoc +++ b/docs/painless/painless-api-reference/LongStream.Builder.asciidoc @@ -4,6 +4,6 @@ Rebuild by running `gradle generatePainlessApi`. 
//// [[painless-api-reference-LongStream-Builder]]++LongStream.Builder++:: -* ++[[painless-api-reference-LongStream-Builder-add-1]]<> link:{java8-javadoc}/java/util/stream/LongStream$Builder.html#add%2Dlong%2D[add](long)++ (link:{java9-javadoc}/java/util/stream/LongStream$Builder.html#add%2Dlong%2D[java 9]) -* ++[[painless-api-reference-LongStream-Builder-build-0]]<> link:{java8-javadoc}/java/util/stream/LongStream$Builder.html#build%2D%2D[build]()++ (link:{java9-javadoc}/java/util/stream/LongStream$Builder.html#build%2D%2D[java 9]) +* ++[[painless-api-reference-LongStream-Builder-add-1]]<> link:{java8-javadoc}/java/util/stream/LongStream.Builder.html#add%2Dlong%2D[add](long)++ (link:{java9-javadoc}/java/util/stream/LongStream.Builder.html#add%2Dlong%2D[java 9]) +* ++[[painless-api-reference-LongStream-Builder-build-0]]<> link:{java8-javadoc}/java/util/stream/LongStream.Builder.html#build%2D%2D[build]()++ (link:{java9-javadoc}/java/util/stream/LongStream.Builder.html#build%2D%2D[java 9]) * Inherits methods from ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/Map.Entry.asciidoc b/docs/painless/painless-api-reference/Map.Entry.asciidoc index f40e61a70b060..bf40736759eea 100644 --- a/docs/painless/painless-api-reference/Map.Entry.asciidoc +++ b/docs/painless/painless-api-reference/Map.Entry.asciidoc @@ -4,13 +4,13 @@ Rebuild by running `gradle generatePainlessApi`. //// [[painless-api-reference-Map-Entry]]++Map.Entry++:: -* ++[[painless-api-reference-Map-Entry-comparingByKey-0]]static <> link:{java8-javadoc}/java/util/Map$Entry.html#comparingByKey%2D%2D[comparingByKey]()++ (link:{java9-javadoc}/java/util/Map$Entry.html#comparingByKey%2D%2D[java 9]) -* ++[[painless-api-reference-Map-Entry-comparingByKey-1]]static <> link:{java8-javadoc}/java/util/Map$Entry.html#comparingByKey%2Djava.util.Comparator%2D[comparingByKey](<>)++ (link:{java9-javadoc}/java/util/Map$Entry.html#comparingByKey%2Djava.util.Comparator%2D[java 9]) -* ++[[painless-api-reference-Map-Entry-comparingByValue-0]]static <> link:{java8-javadoc}/java/util/Map$Entry.html#comparingByValue%2D%2D[comparingByValue]()++ (link:{java9-javadoc}/java/util/Map$Entry.html#comparingByValue%2D%2D[java 9]) -* ++[[painless-api-reference-Map-Entry-comparingByValue-1]]static <> link:{java8-javadoc}/java/util/Map$Entry.html#comparingByValue%2Djava.util.Comparator%2D[comparingByValue](<>)++ (link:{java9-javadoc}/java/util/Map$Entry.html#comparingByValue%2Djava.util.Comparator%2D[java 9]) -* ++[[painless-api-reference-Map-Entry-equals-1]]boolean link:{java8-javadoc}/java/util/Map$Entry.html#equals%2Djava.lang.Object%2D[equals](<>)++ (link:{java9-javadoc}/java/util/Map$Entry.html#equals%2Djava.lang.Object%2D[java 9]) -* ++[[painless-api-reference-Map-Entry-getKey-0]]def link:{java8-javadoc}/java/util/Map$Entry.html#getKey%2D%2D[getKey]()++ (link:{java9-javadoc}/java/util/Map$Entry.html#getKey%2D%2D[java 9]) -* ++[[painless-api-reference-Map-Entry-getValue-0]]def link:{java8-javadoc}/java/util/Map$Entry.html#getValue%2D%2D[getValue]()++ (link:{java9-javadoc}/java/util/Map$Entry.html#getValue%2D%2D[java 9]) -* ++[[painless-api-reference-Map-Entry-hashCode-0]]int link:{java8-javadoc}/java/util/Map$Entry.html#hashCode%2D%2D[hashCode]()++ (link:{java9-javadoc}/java/util/Map$Entry.html#hashCode%2D%2D[java 9]) -* ++[[painless-api-reference-Map-Entry-setValue-1]]def link:{java8-javadoc}/java/util/Map$Entry.html#setValue%2Djava.lang.Object%2D[setValue](def)++ (link:{java9-javadoc}/java/util/Map$Entry.html#setValue%2Djava.lang.Object%2D[java 9]) 
+* ++[[painless-api-reference-Map-Entry-comparingByKey-0]]static <> link:{java8-javadoc}/java/util/Map.Entry.html#comparingByKey%2D%2D[comparingByKey]()++ (link:{java9-javadoc}/java/util/Map.Entry.html#comparingByKey%2D%2D[java 9]) +* ++[[painless-api-reference-Map-Entry-comparingByKey-1]]static <> link:{java8-javadoc}/java/util/Map.Entry.html#comparingByKey%2Djava.util.Comparator%2D[comparingByKey](<>)++ (link:{java9-javadoc}/java/util/Map.Entry.html#comparingByKey%2Djava.util.Comparator%2D[java 9]) +* ++[[painless-api-reference-Map-Entry-comparingByValue-0]]static <> link:{java8-javadoc}/java/util/Map.Entry.html#comparingByValue%2D%2D[comparingByValue]()++ (link:{java9-javadoc}/java/util/Map.Entry.html#comparingByValue%2D%2D[java 9]) +* ++[[painless-api-reference-Map-Entry-comparingByValue-1]]static <> link:{java8-javadoc}/java/util/Map.Entry.html#comparingByValue%2Djava.util.Comparator%2D[comparingByValue](<>)++ (link:{java9-javadoc}/java/util/Map.Entry.html#comparingByValue%2Djava.util.Comparator%2D[java 9]) +* ++[[painless-api-reference-Map-Entry-equals-1]]boolean link:{java8-javadoc}/java/util/Map.Entry.html#equals%2Djava.lang.Object%2D[equals](<>)++ (link:{java9-javadoc}/java/util/Map.Entry.html#equals%2Djava.lang.Object%2D[java 9]) +* ++[[painless-api-reference-Map-Entry-getKey-0]]def link:{java8-javadoc}/java/util/Map.Entry.html#getKey%2D%2D[getKey]()++ (link:{java9-javadoc}/java/util/Map.Entry.html#getKey%2D%2D[java 9]) +* ++[[painless-api-reference-Map-Entry-getValue-0]]def link:{java8-javadoc}/java/util/Map.Entry.html#getValue%2D%2D[getValue]()++ (link:{java9-javadoc}/java/util/Map.Entry.html#getValue%2D%2D[java 9]) +* ++[[painless-api-reference-Map-Entry-hashCode-0]]int link:{java8-javadoc}/java/util/Map.Entry.html#hashCode%2D%2D[hashCode]()++ (link:{java9-javadoc}/java/util/Map.Entry.html#hashCode%2D%2D[java 9]) +* ++[[painless-api-reference-Map-Entry-setValue-1]]def link:{java8-javadoc}/java/util/Map.Entry.html#setValue%2Djava.lang.Object%2D[setValue](def)++ (link:{java9-javadoc}/java/util/Map.Entry.html#setValue%2Djava.lang.Object%2D[java 9]) * Inherits methods from ++<>++ diff --git a/docs/painless/painless-api-reference/MessageFormat.Field.asciidoc b/docs/painless/painless-api-reference/MessageFormat.Field.asciidoc index 6b0e71e0a5f43..200d59df17de9 100644 --- a/docs/painless/painless-api-reference/MessageFormat.Field.asciidoc +++ b/docs/painless/painless-api-reference/MessageFormat.Field.asciidoc @@ -4,5 +4,5 @@ Rebuild by running `gradle generatePainlessApi`. //// [[painless-api-reference-MessageFormat-Field]]++MessageFormat.Field++:: -** [[painless-api-reference-MessageFormat-Field-ARGUMENT]]static <> link:{java8-javadoc}/java/text/MessageFormat$Field.html#ARGUMENT[ARGUMENT] (link:{java9-javadoc}/java/text/MessageFormat$Field.html#ARGUMENT[java 9]) +** [[painless-api-reference-MessageFormat-Field-ARGUMENT]]static <> link:{java8-javadoc}/java/text/MessageFormat.Field.html#ARGUMENT[ARGUMENT] (link:{java9-javadoc}/java/text/MessageFormat.Field.html#ARGUMENT[java 9]) * Inherits methods from ++<>++ diff --git a/docs/painless/painless-api-reference/Normalizer.Form.asciidoc b/docs/painless/painless-api-reference/Normalizer.Form.asciidoc index f74d4d243db95..5255435473004 100644 --- a/docs/painless/painless-api-reference/Normalizer.Form.asciidoc +++ b/docs/painless/painless-api-reference/Normalizer.Form.asciidoc @@ -4,10 +4,10 @@ Rebuild by running `gradle generatePainlessApi`. 
//// [[painless-api-reference-Normalizer-Form]]++Normalizer.Form++:: -** [[painless-api-reference-Normalizer-Form-NFC]]static <> link:{java8-javadoc}/java/text/Normalizer$Form.html#NFC[NFC] (link:{java9-javadoc}/java/text/Normalizer$Form.html#NFC[java 9]) -** [[painless-api-reference-Normalizer-Form-NFD]]static <> link:{java8-javadoc}/java/text/Normalizer$Form.html#NFD[NFD] (link:{java9-javadoc}/java/text/Normalizer$Form.html#NFD[java 9]) -** [[painless-api-reference-Normalizer-Form-NFKC]]static <> link:{java8-javadoc}/java/text/Normalizer$Form.html#NFKC[NFKC] (link:{java9-javadoc}/java/text/Normalizer$Form.html#NFKC[java 9]) -** [[painless-api-reference-Normalizer-Form-NFKD]]static <> link:{java8-javadoc}/java/text/Normalizer$Form.html#NFKD[NFKD] (link:{java9-javadoc}/java/text/Normalizer$Form.html#NFKD[java 9]) -* ++[[painless-api-reference-Normalizer-Form-valueOf-1]]static <> link:{java8-javadoc}/java/text/Normalizer$Form.html#valueOf%2Djava.lang.String%2D[valueOf](<>)++ (link:{java9-javadoc}/java/text/Normalizer$Form.html#valueOf%2Djava.lang.String%2D[java 9]) -* ++[[painless-api-reference-Normalizer-Form-values-0]]static <>[] link:{java8-javadoc}/java/text/Normalizer$Form.html#values%2D%2D[values]()++ (link:{java9-javadoc}/java/text/Normalizer$Form.html#values%2D%2D[java 9]) +** [[painless-api-reference-Normalizer-Form-NFC]]static <> link:{java8-javadoc}/java/text/Normalizer.Form.html#NFC[NFC] (link:{java9-javadoc}/java/text/Normalizer.Form.html#NFC[java 9]) +** [[painless-api-reference-Normalizer-Form-NFD]]static <> link:{java8-javadoc}/java/text/Normalizer.Form.html#NFD[NFD] (link:{java9-javadoc}/java/text/Normalizer.Form.html#NFD[java 9]) +** [[painless-api-reference-Normalizer-Form-NFKC]]static <> link:{java8-javadoc}/java/text/Normalizer.Form.html#NFKC[NFKC] (link:{java9-javadoc}/java/text/Normalizer.Form.html#NFKC[java 9]) +** [[painless-api-reference-Normalizer-Form-NFKD]]static <> link:{java8-javadoc}/java/text/Normalizer.Form.html#NFKD[NFKD] (link:{java9-javadoc}/java/text/Normalizer.Form.html#NFKD[java 9]) +* ++[[painless-api-reference-Normalizer-Form-valueOf-1]]static <> link:{java8-javadoc}/java/text/Normalizer.Form.html#valueOf%2Djava.lang.String%2D[valueOf](<>)++ (link:{java9-javadoc}/java/text/Normalizer.Form.html#valueOf%2Djava.lang.String%2D[java 9]) +* ++[[painless-api-reference-Normalizer-Form-values-0]]static <>[] link:{java8-javadoc}/java/text/Normalizer.Form.html#values%2D%2D[values]()++ (link:{java9-javadoc}/java/text/Normalizer.Form.html#values%2D%2D[java 9]) * Inherits methods from ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/NumberFormat.Field.asciidoc b/docs/painless/painless-api-reference/NumberFormat.Field.asciidoc index aec4fc0635790..279b0dbaaf5b7 100644 --- a/docs/painless/painless-api-reference/NumberFormat.Field.asciidoc +++ b/docs/painless/painless-api-reference/NumberFormat.Field.asciidoc @@ -4,15 +4,15 @@ Rebuild by running `gradle generatePainlessApi`. 
//// [[painless-api-reference-NumberFormat-Field]]++NumberFormat.Field++:: -** [[painless-api-reference-NumberFormat-Field-CURRENCY]]static <> link:{java8-javadoc}/java/text/NumberFormat$Field.html#CURRENCY[CURRENCY] (link:{java9-javadoc}/java/text/NumberFormat$Field.html#CURRENCY[java 9]) -** [[painless-api-reference-NumberFormat-Field-DECIMAL_SEPARATOR]]static <> link:{java8-javadoc}/java/text/NumberFormat$Field.html#DECIMAL_SEPARATOR[DECIMAL_SEPARATOR] (link:{java9-javadoc}/java/text/NumberFormat$Field.html#DECIMAL_SEPARATOR[java 9]) -** [[painless-api-reference-NumberFormat-Field-EXPONENT]]static <> link:{java8-javadoc}/java/text/NumberFormat$Field.html#EXPONENT[EXPONENT] (link:{java9-javadoc}/java/text/NumberFormat$Field.html#EXPONENT[java 9]) -** [[painless-api-reference-NumberFormat-Field-EXPONENT_SIGN]]static <> link:{java8-javadoc}/java/text/NumberFormat$Field.html#EXPONENT_SIGN[EXPONENT_SIGN] (link:{java9-javadoc}/java/text/NumberFormat$Field.html#EXPONENT_SIGN[java 9]) -** [[painless-api-reference-NumberFormat-Field-EXPONENT_SYMBOL]]static <> link:{java8-javadoc}/java/text/NumberFormat$Field.html#EXPONENT_SYMBOL[EXPONENT_SYMBOL] (link:{java9-javadoc}/java/text/NumberFormat$Field.html#EXPONENT_SYMBOL[java 9]) -** [[painless-api-reference-NumberFormat-Field-FRACTION]]static <> link:{java8-javadoc}/java/text/NumberFormat$Field.html#FRACTION[FRACTION] (link:{java9-javadoc}/java/text/NumberFormat$Field.html#FRACTION[java 9]) -** [[painless-api-reference-NumberFormat-Field-GROUPING_SEPARATOR]]static <> link:{java8-javadoc}/java/text/NumberFormat$Field.html#GROUPING_SEPARATOR[GROUPING_SEPARATOR] (link:{java9-javadoc}/java/text/NumberFormat$Field.html#GROUPING_SEPARATOR[java 9]) -** [[painless-api-reference-NumberFormat-Field-INTEGER]]static <> link:{java8-javadoc}/java/text/NumberFormat$Field.html#INTEGER[INTEGER] (link:{java9-javadoc}/java/text/NumberFormat$Field.html#INTEGER[java 9]) -** [[painless-api-reference-NumberFormat-Field-PERCENT]]static <> link:{java8-javadoc}/java/text/NumberFormat$Field.html#PERCENT[PERCENT] (link:{java9-javadoc}/java/text/NumberFormat$Field.html#PERCENT[java 9]) -** [[painless-api-reference-NumberFormat-Field-PERMILLE]]static <> link:{java8-javadoc}/java/text/NumberFormat$Field.html#PERMILLE[PERMILLE] (link:{java9-javadoc}/java/text/NumberFormat$Field.html#PERMILLE[java 9]) -** [[painless-api-reference-NumberFormat-Field-SIGN]]static <> link:{java8-javadoc}/java/text/NumberFormat$Field.html#SIGN[SIGN] (link:{java9-javadoc}/java/text/NumberFormat$Field.html#SIGN[java 9]) +** [[painless-api-reference-NumberFormat-Field-CURRENCY]]static <> link:{java8-javadoc}/java/text/NumberFormat.Field.html#CURRENCY[CURRENCY] (link:{java9-javadoc}/java/text/NumberFormat.Field.html#CURRENCY[java 9]) +** [[painless-api-reference-NumberFormat-Field-DECIMAL_SEPARATOR]]static <> link:{java8-javadoc}/java/text/NumberFormat.Field.html#DECIMAL_SEPARATOR[DECIMAL_SEPARATOR] (link:{java9-javadoc}/java/text/NumberFormat.Field.html#DECIMAL_SEPARATOR[java 9]) +** [[painless-api-reference-NumberFormat-Field-EXPONENT]]static <> link:{java8-javadoc}/java/text/NumberFormat.Field.html#EXPONENT[EXPONENT] (link:{java9-javadoc}/java/text/NumberFormat.Field.html#EXPONENT[java 9]) +** [[painless-api-reference-NumberFormat-Field-EXPONENT_SIGN]]static <> link:{java8-javadoc}/java/text/NumberFormat.Field.html#EXPONENT_SIGN[EXPONENT_SIGN] (link:{java9-javadoc}/java/text/NumberFormat.Field.html#EXPONENT_SIGN[java 9]) +** [[painless-api-reference-NumberFormat-Field-EXPONENT_SYMBOL]]static <> 
link:{java8-javadoc}/java/text/NumberFormat.Field.html#EXPONENT_SYMBOL[EXPONENT_SYMBOL] (link:{java9-javadoc}/java/text/NumberFormat.Field.html#EXPONENT_SYMBOL[java 9]) +** [[painless-api-reference-NumberFormat-Field-FRACTION]]static <> link:{java8-javadoc}/java/text/NumberFormat.Field.html#FRACTION[FRACTION] (link:{java9-javadoc}/java/text/NumberFormat.Field.html#FRACTION[java 9]) +** [[painless-api-reference-NumberFormat-Field-GROUPING_SEPARATOR]]static <> link:{java8-javadoc}/java/text/NumberFormat.Field.html#GROUPING_SEPARATOR[GROUPING_SEPARATOR] (link:{java9-javadoc}/java/text/NumberFormat.Field.html#GROUPING_SEPARATOR[java 9]) +** [[painless-api-reference-NumberFormat-Field-INTEGER]]static <> link:{java8-javadoc}/java/text/NumberFormat.Field.html#INTEGER[INTEGER] (link:{java9-javadoc}/java/text/NumberFormat.Field.html#INTEGER[java 9]) +** [[painless-api-reference-NumberFormat-Field-PERCENT]]static <> link:{java8-javadoc}/java/text/NumberFormat.Field.html#PERCENT[PERCENT] (link:{java9-javadoc}/java/text/NumberFormat.Field.html#PERCENT[java 9]) +** [[painless-api-reference-NumberFormat-Field-PERMILLE]]static <> link:{java8-javadoc}/java/text/NumberFormat.Field.html#PERMILLE[PERMILLE] (link:{java9-javadoc}/java/text/NumberFormat.Field.html#PERMILLE[java 9]) +** [[painless-api-reference-NumberFormat-Field-SIGN]]static <> link:{java8-javadoc}/java/text/NumberFormat.Field.html#SIGN[SIGN] (link:{java9-javadoc}/java/text/NumberFormat.Field.html#SIGN[java 9]) * Inherits methods from ++<>++ diff --git a/docs/painless/painless-api-reference/PrimitiveIterator.OfDouble.asciidoc b/docs/painless/painless-api-reference/PrimitiveIterator.OfDouble.asciidoc index bb26cf3b51477..678c1265f04c9 100644 --- a/docs/painless/painless-api-reference/PrimitiveIterator.OfDouble.asciidoc +++ b/docs/painless/painless-api-reference/PrimitiveIterator.OfDouble.asciidoc @@ -4,6 +4,6 @@ Rebuild by running `gradle generatePainlessApi`. //// [[painless-api-reference-PrimitiveIterator-OfDouble]]++PrimitiveIterator.OfDouble++:: -* ++[[painless-api-reference-PrimitiveIterator-OfDouble-next-0]]<> link:{java8-javadoc}/java/util/PrimitiveIterator$OfDouble.html#next%2D%2D[next]()++ (link:{java9-javadoc}/java/util/PrimitiveIterator$OfDouble.html#next%2D%2D[java 9]) -* ++[[painless-api-reference-PrimitiveIterator-OfDouble-nextDouble-0]]double link:{java8-javadoc}/java/util/PrimitiveIterator$OfDouble.html#nextDouble%2D%2D[nextDouble]()++ (link:{java9-javadoc}/java/util/PrimitiveIterator$OfDouble.html#nextDouble%2D%2D[java 9]) +* ++[[painless-api-reference-PrimitiveIterator-OfDouble-next-0]]<> link:{java8-javadoc}/java/util/PrimitiveIterator.OfDouble.html#next%2D%2D[next]()++ (link:{java9-javadoc}/java/util/PrimitiveIterator.OfDouble.html#next%2D%2D[java 9]) +* ++[[painless-api-reference-PrimitiveIterator-OfDouble-nextDouble-0]]double link:{java8-javadoc}/java/util/PrimitiveIterator.OfDouble.html#nextDouble%2D%2D[nextDouble]()++ (link:{java9-javadoc}/java/util/PrimitiveIterator.OfDouble.html#nextDouble%2D%2D[java 9]) * Inherits methods from ++<>++, ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/PrimitiveIterator.OfInt.asciidoc b/docs/painless/painless-api-reference/PrimitiveIterator.OfInt.asciidoc index 1cd90b8e13f02..0f7c1f31f0366 100644 --- a/docs/painless/painless-api-reference/PrimitiveIterator.OfInt.asciidoc +++ b/docs/painless/painless-api-reference/PrimitiveIterator.OfInt.asciidoc @@ -4,6 +4,6 @@ Rebuild by running `gradle generatePainlessApi`. 
//// [[painless-api-reference-PrimitiveIterator-OfInt]]++PrimitiveIterator.OfInt++:: -* ++[[painless-api-reference-PrimitiveIterator-OfInt-next-0]]<> link:{java8-javadoc}/java/util/PrimitiveIterator$OfInt.html#next%2D%2D[next]()++ (link:{java9-javadoc}/java/util/PrimitiveIterator$OfInt.html#next%2D%2D[java 9]) -* ++[[painless-api-reference-PrimitiveIterator-OfInt-nextInt-0]]int link:{java8-javadoc}/java/util/PrimitiveIterator$OfInt.html#nextInt%2D%2D[nextInt]()++ (link:{java9-javadoc}/java/util/PrimitiveIterator$OfInt.html#nextInt%2D%2D[java 9]) +* ++[[painless-api-reference-PrimitiveIterator-OfInt-next-0]]<> link:{java8-javadoc}/java/util/PrimitiveIterator.OfInt.html#next%2D%2D[next]()++ (link:{java9-javadoc}/java/util/PrimitiveIterator.OfInt.html#next%2D%2D[java 9]) +* ++[[painless-api-reference-PrimitiveIterator-OfInt-nextInt-0]]int link:{java8-javadoc}/java/util/PrimitiveIterator.OfInt.html#nextInt%2D%2D[nextInt]()++ (link:{java9-javadoc}/java/util/PrimitiveIterator.OfInt.html#nextInt%2D%2D[java 9]) * Inherits methods from ++<>++, ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/PrimitiveIterator.OfLong.asciidoc b/docs/painless/painless-api-reference/PrimitiveIterator.OfLong.asciidoc index bcfff0e7edc14..ce5b3f5b3a72a 100644 --- a/docs/painless/painless-api-reference/PrimitiveIterator.OfLong.asciidoc +++ b/docs/painless/painless-api-reference/PrimitiveIterator.OfLong.asciidoc @@ -4,6 +4,6 @@ Rebuild by running `gradle generatePainlessApi`. //// [[painless-api-reference-PrimitiveIterator-OfLong]]++PrimitiveIterator.OfLong++:: -* ++[[painless-api-reference-PrimitiveIterator-OfLong-next-0]]<> link:{java8-javadoc}/java/util/PrimitiveIterator$OfLong.html#next%2D%2D[next]()++ (link:{java9-javadoc}/java/util/PrimitiveIterator$OfLong.html#next%2D%2D[java 9]) -* ++[[painless-api-reference-PrimitiveIterator-OfLong-nextLong-0]]long link:{java8-javadoc}/java/util/PrimitiveIterator$OfLong.html#nextLong%2D%2D[nextLong]()++ (link:{java9-javadoc}/java/util/PrimitiveIterator$OfLong.html#nextLong%2D%2D[java 9]) +* ++[[painless-api-reference-PrimitiveIterator-OfLong-next-0]]<> link:{java8-javadoc}/java/util/PrimitiveIterator.OfLong.html#next%2D%2D[next]()++ (link:{java9-javadoc}/java/util/PrimitiveIterator.OfLong.html#next%2D%2D[java 9]) +* ++[[painless-api-reference-PrimitiveIterator-OfLong-nextLong-0]]long link:{java8-javadoc}/java/util/PrimitiveIterator.OfLong.html#nextLong%2D%2D[nextLong]()++ (link:{java9-javadoc}/java/util/PrimitiveIterator.OfLong.html#nextLong%2D%2D[java 9]) * Inherits methods from ++<>++, ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/Spliterator.OfDouble.asciidoc b/docs/painless/painless-api-reference/Spliterator.OfDouble.asciidoc index 7292d2bffca8a..4722a8e6e0005 100644 --- a/docs/painless/painless-api-reference/Spliterator.OfDouble.asciidoc +++ b/docs/painless/painless-api-reference/Spliterator.OfDouble.asciidoc @@ -4,5 +4,5 @@ Rebuild by running `gradle generatePainlessApi`. 
//// [[painless-api-reference-Spliterator-OfDouble]]++Spliterator.OfDouble++:: -* ++[[painless-api-reference-Spliterator-OfDouble-trySplit-0]]<> link:{java8-javadoc}/java/util/Spliterator$OfDouble.html#trySplit%2D%2D[trySplit]()++ (link:{java9-javadoc}/java/util/Spliterator$OfDouble.html#trySplit%2D%2D[java 9]) +* ++[[painless-api-reference-Spliterator-OfDouble-trySplit-0]]<> link:{java8-javadoc}/java/util/Spliterator.OfDouble.html#trySplit%2D%2D[trySplit]()++ (link:{java9-javadoc}/java/util/Spliterator.OfDouble.html#trySplit%2D%2D[java 9]) * Inherits methods from ++<>++, ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/Spliterator.OfInt.asciidoc b/docs/painless/painless-api-reference/Spliterator.OfInt.asciidoc index 1f19e7013aa6c..e70221d6bf1c7 100644 --- a/docs/painless/painless-api-reference/Spliterator.OfInt.asciidoc +++ b/docs/painless/painless-api-reference/Spliterator.OfInt.asciidoc @@ -4,5 +4,5 @@ Rebuild by running `gradle generatePainlessApi`. //// [[painless-api-reference-Spliterator-OfInt]]++Spliterator.OfInt++:: -* ++[[painless-api-reference-Spliterator-OfInt-trySplit-0]]<> link:{java8-javadoc}/java/util/Spliterator$OfInt.html#trySplit%2D%2D[trySplit]()++ (link:{java9-javadoc}/java/util/Spliterator$OfInt.html#trySplit%2D%2D[java 9]) +* ++[[painless-api-reference-Spliterator-OfInt-trySplit-0]]<> link:{java8-javadoc}/java/util/Spliterator.OfInt.html#trySplit%2D%2D[trySplit]()++ (link:{java9-javadoc}/java/util/Spliterator.OfInt.html#trySplit%2D%2D[java 9]) * Inherits methods from ++<>++, ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/Spliterator.OfLong.asciidoc b/docs/painless/painless-api-reference/Spliterator.OfLong.asciidoc index a1d56d5e1ee3c..8f709add01e17 100644 --- a/docs/painless/painless-api-reference/Spliterator.OfLong.asciidoc +++ b/docs/painless/painless-api-reference/Spliterator.OfLong.asciidoc @@ -4,5 +4,5 @@ Rebuild by running `gradle generatePainlessApi`. //// [[painless-api-reference-Spliterator-OfLong]]++Spliterator.OfLong++:: -* ++[[painless-api-reference-Spliterator-OfLong-trySplit-0]]<> link:{java8-javadoc}/java/util/Spliterator$OfLong.html#trySplit%2D%2D[trySplit]()++ (link:{java9-javadoc}/java/util/Spliterator$OfLong.html#trySplit%2D%2D[java 9]) +* ++[[painless-api-reference-Spliterator-OfLong-trySplit-0]]<> link:{java8-javadoc}/java/util/Spliterator.OfLong.html#trySplit%2D%2D[trySplit]()++ (link:{java9-javadoc}/java/util/Spliterator.OfLong.html#trySplit%2D%2D[java 9]) * Inherits methods from ++<>++, ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/Spliterator.OfPrimitive.asciidoc b/docs/painless/painless-api-reference/Spliterator.OfPrimitive.asciidoc index 00cd8b472618b..297c213b44f50 100644 --- a/docs/painless/painless-api-reference/Spliterator.OfPrimitive.asciidoc +++ b/docs/painless/painless-api-reference/Spliterator.OfPrimitive.asciidoc @@ -4,7 +4,7 @@ Rebuild by running `gradle generatePainlessApi`. 
//// [[painless-api-reference-Spliterator-OfPrimitive]]++Spliterator.OfPrimitive++:: -* ++[[painless-api-reference-Spliterator-OfPrimitive-forEachRemaining-1]]void link:{java8-javadoc}/java/util/Spliterator$OfPrimitive.html#forEachRemaining%2Djava.lang.Object%2D[forEachRemaining](def)++ (link:{java9-javadoc}/java/util/Spliterator$OfPrimitive.html#forEachRemaining%2Djava.lang.Object%2D[java 9]) -* ++[[painless-api-reference-Spliterator-OfPrimitive-tryAdvance-1]]boolean link:{java8-javadoc}/java/util/Spliterator$OfPrimitive.html#tryAdvance%2Djava.lang.Object%2D[tryAdvance](def)++ (link:{java9-javadoc}/java/util/Spliterator$OfPrimitive.html#tryAdvance%2Djava.lang.Object%2D[java 9]) -* ++[[painless-api-reference-Spliterator-OfPrimitive-trySplit-0]]<> link:{java8-javadoc}/java/util/Spliterator$OfPrimitive.html#trySplit%2D%2D[trySplit]()++ (link:{java9-javadoc}/java/util/Spliterator$OfPrimitive.html#trySplit%2D%2D[java 9]) +* ++[[painless-api-reference-Spliterator-OfPrimitive-forEachRemaining-1]]void link:{java8-javadoc}/java/util/Spliterator.OfPrimitive.html#forEachRemaining%2Djava.lang.Object%2D[forEachRemaining](def)++ (link:{java9-javadoc}/java/util/Spliterator.OfPrimitive.html#forEachRemaining%2Djava.lang.Object%2D[java 9]) +* ++[[painless-api-reference-Spliterator-OfPrimitive-tryAdvance-1]]boolean link:{java8-javadoc}/java/util/Spliterator.OfPrimitive.html#tryAdvance%2Djava.lang.Object%2D[tryAdvance](def)++ (link:{java9-javadoc}/java/util/Spliterator.OfPrimitive.html#tryAdvance%2Djava.lang.Object%2D[java 9]) +* ++[[painless-api-reference-Spliterator-OfPrimitive-trySplit-0]]<> link:{java8-javadoc}/java/util/Spliterator.OfPrimitive.html#trySplit%2D%2D[trySplit]()++ (link:{java9-javadoc}/java/util/Spliterator.OfPrimitive.html#trySplit%2D%2D[java 9]) * Inherits methods from ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/Stream.Builder.asciidoc b/docs/painless/painless-api-reference/Stream.Builder.asciidoc index aa9146ce8ed61..42d91e01a2703 100644 --- a/docs/painless/painless-api-reference/Stream.Builder.asciidoc +++ b/docs/painless/painless-api-reference/Stream.Builder.asciidoc @@ -4,6 +4,6 @@ Rebuild by running `gradle generatePainlessApi`. 
//// [[painless-api-reference-Stream-Builder]]++Stream.Builder++:: -* ++[[painless-api-reference-Stream-Builder-add-1]]<> link:{java8-javadoc}/java/util/stream/Stream$Builder.html#add%2Djava.lang.Object%2D[add](def)++ (link:{java9-javadoc}/java/util/stream/Stream$Builder.html#add%2Djava.lang.Object%2D[java 9]) -* ++[[painless-api-reference-Stream-Builder-build-0]]<> link:{java8-javadoc}/java/util/stream/Stream$Builder.html#build%2D%2D[build]()++ (link:{java9-javadoc}/java/util/stream/Stream$Builder.html#build%2D%2D[java 9]) +* ++[[painless-api-reference-Stream-Builder-add-1]]<> link:{java8-javadoc}/java/util/stream/Stream.Builder.html#add%2Djava.lang.Object%2D[add](def)++ (link:{java9-javadoc}/java/util/stream/Stream.Builder.html#add%2Djava.lang.Object%2D[java 9]) +* ++[[painless-api-reference-Stream-Builder-build-0]]<> link:{java8-javadoc}/java/util/stream/Stream.Builder.html#build%2D%2D[build]()++ (link:{java9-javadoc}/java/util/stream/Stream.Builder.html#build%2D%2D[java 9]) * Inherits methods from ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/ZoneOffsetTransitionRule.TimeDefinition.asciidoc b/docs/painless/painless-api-reference/ZoneOffsetTransitionRule.TimeDefinition.asciidoc index 45fb25e0f6e73..045f2a0188948 100644 --- a/docs/painless/painless-api-reference/ZoneOffsetTransitionRule.TimeDefinition.asciidoc +++ b/docs/painless/painless-api-reference/ZoneOffsetTransitionRule.TimeDefinition.asciidoc @@ -4,10 +4,10 @@ Rebuild by running `gradle generatePainlessApi`. //// [[painless-api-reference-ZoneOffsetTransitionRule-TimeDefinition]]++ZoneOffsetTransitionRule.TimeDefinition++:: -** [[painless-api-reference-ZoneOffsetTransitionRule-TimeDefinition-STANDARD]]static <> link:{java8-javadoc}/java/time/zone/ZoneOffsetTransitionRule$TimeDefinition.html#STANDARD[STANDARD] (link:{java9-javadoc}/java/time/zone/ZoneOffsetTransitionRule$TimeDefinition.html#STANDARD[java 9]) -** [[painless-api-reference-ZoneOffsetTransitionRule-TimeDefinition-UTC]]static <> link:{java8-javadoc}/java/time/zone/ZoneOffsetTransitionRule$TimeDefinition.html#UTC[UTC] (link:{java9-javadoc}/java/time/zone/ZoneOffsetTransitionRule$TimeDefinition.html#UTC[java 9]) -** [[painless-api-reference-ZoneOffsetTransitionRule-TimeDefinition-WALL]]static <> link:{java8-javadoc}/java/time/zone/ZoneOffsetTransitionRule$TimeDefinition.html#WALL[WALL] (link:{java9-javadoc}/java/time/zone/ZoneOffsetTransitionRule$TimeDefinition.html#WALL[java 9]) -* ++[[painless-api-reference-ZoneOffsetTransitionRule-TimeDefinition-valueOf-1]]static <> link:{java8-javadoc}/java/time/zone/ZoneOffsetTransitionRule$TimeDefinition.html#valueOf%2Djava.lang.String%2D[valueOf](<>)++ (link:{java9-javadoc}/java/time/zone/ZoneOffsetTransitionRule$TimeDefinition.html#valueOf%2Djava.lang.String%2D[java 9]) -* ++[[painless-api-reference-ZoneOffsetTransitionRule-TimeDefinition-values-0]]static <>[] link:{java8-javadoc}/java/time/zone/ZoneOffsetTransitionRule$TimeDefinition.html#values%2D%2D[values]()++ (link:{java9-javadoc}/java/time/zone/ZoneOffsetTransitionRule$TimeDefinition.html#values%2D%2D[java 9]) -* ++[[painless-api-reference-ZoneOffsetTransitionRule-TimeDefinition-createDateTime-3]]<> link:{java8-javadoc}/java/time/zone/ZoneOffsetTransitionRule$TimeDefinition.html#createDateTime%2Djava.time.LocalDateTime%2Djava.time.ZoneOffset%2Djava.time.ZoneOffset%2D[createDateTime](<>, <>, <>)++ 
(link:{java9-javadoc}/java/time/zone/ZoneOffsetTransitionRule$TimeDefinition.html#createDateTime%2Djava.time.LocalDateTime%2Djava.time.ZoneOffset%2Djava.time.ZoneOffset%2D[java 9]) +** [[painless-api-reference-ZoneOffsetTransitionRule-TimeDefinition-STANDARD]]static <> link:{java8-javadoc}/java/time/zone/ZoneOffsetTransitionRule.TimeDefinition.html#STANDARD[STANDARD] (link:{java9-javadoc}/java/time/zone/ZoneOffsetTransitionRule.TimeDefinition.html#STANDARD[java 9]) +** [[painless-api-reference-ZoneOffsetTransitionRule-TimeDefinition-UTC]]static <> link:{java8-javadoc}/java/time/zone/ZoneOffsetTransitionRule.TimeDefinition.html#UTC[UTC] (link:{java9-javadoc}/java/time/zone/ZoneOffsetTransitionRule.TimeDefinition.html#UTC[java 9]) +** [[painless-api-reference-ZoneOffsetTransitionRule-TimeDefinition-WALL]]static <> link:{java8-javadoc}/java/time/zone/ZoneOffsetTransitionRule.TimeDefinition.html#WALL[WALL] (link:{java9-javadoc}/java/time/zone/ZoneOffsetTransitionRule.TimeDefinition.html#WALL[java 9]) +* ++[[painless-api-reference-ZoneOffsetTransitionRule-TimeDefinition-valueOf-1]]static <> link:{java8-javadoc}/java/time/zone/ZoneOffsetTransitionRule.TimeDefinition.html#valueOf%2Djava.lang.String%2D[valueOf](<>)++ (link:{java9-javadoc}/java/time/zone/ZoneOffsetTransitionRule.TimeDefinition.html#valueOf%2Djava.lang.String%2D[java 9]) +* ++[[painless-api-reference-ZoneOffsetTransitionRule-TimeDefinition-values-0]]static <>[] link:{java8-javadoc}/java/time/zone/ZoneOffsetTransitionRule.TimeDefinition.html#values%2D%2D[values]()++ (link:{java9-javadoc}/java/time/zone/ZoneOffsetTransitionRule.TimeDefinition.html#values%2D%2D[java 9]) +* ++[[painless-api-reference-ZoneOffsetTransitionRule-TimeDefinition-createDateTime-3]]<> link:{java8-javadoc}/java/time/zone/ZoneOffsetTransitionRule.TimeDefinition.html#createDateTime%2Djava.time.LocalDateTime%2Djava.time.ZoneOffset%2Djava.time.ZoneOffset%2D[createDateTime](<>, <>, <>)++ (link:{java9-javadoc}/java/time/zone/ZoneOffsetTransitionRule.TimeDefinition.html#createDateTime%2Djava.time.LocalDateTime%2Djava.time.ZoneOffset%2Djava.time.ZoneOffset%2D[java 9]) * Inherits methods from ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/index.asciidoc b/docs/painless/painless-api-reference/index.asciidoc index 51e6271548410..d5deb8900caec 100644 --- a/docs/painless/painless-api-reference/index.asciidoc +++ b/docs/painless/painless-api-reference/index.asciidoc @@ -328,11 +328,17 @@ include::ZonedDateTime.asciidoc[] include::org.elasticsearch.common.geo.GeoPoint.asciidoc[] include::org.elasticsearch.index.fielddata.ScriptDocValues.Booleans.asciidoc[] include::org.elasticsearch.index.fielddata.ScriptDocValues.BytesRefs.asciidoc[] +include::org.elasticsearch.index.fielddata.ScriptDocValues.Dates.asciidoc[] include::org.elasticsearch.index.fielddata.ScriptDocValues.Doubles.asciidoc[] include::org.elasticsearch.index.fielddata.ScriptDocValues.GeoPoints.asciidoc[] include::org.elasticsearch.index.fielddata.ScriptDocValues.Longs.asciidoc[] include::org.elasticsearch.index.fielddata.ScriptDocValues.Strings.asciidoc[] include::org.elasticsearch.index.mapper.IpFieldMapper.IpFieldType.IpScriptDocValues.asciidoc[] +include::org.elasticsearch.index.similarity.ScriptedSimilarity.Doc.asciidoc[] +include::org.elasticsearch.index.similarity.ScriptedSimilarity.Field.asciidoc[] +include::org.elasticsearch.index.similarity.ScriptedSimilarity.Query.asciidoc[] +include::org.elasticsearch.index.similarity.ScriptedSimilarity.Term.asciidoc[] 
include::org.elasticsearch.painless.FeatureTest.asciidoc[] +include::org.elasticsearch.search.lookup.FieldLookup.asciidoc[] include::org.joda.time.ReadableDateTime.asciidoc[] include::org.joda.time.ReadableInstant.asciidoc[] diff --git a/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.Booleans.asciidoc b/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.Booleans.asciidoc index 12aaed4003d97..b3e77c6fbc5cb 100644 --- a/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.Booleans.asciidoc +++ b/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.Booleans.asciidoc @@ -4,7 +4,7 @@ Rebuild by running `gradle generatePainlessApi`. //// [[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Booleans]]++org.elasticsearch.index.fielddata.ScriptDocValues.Booleans++:: -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Booleans-get-1]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$Booleans.html#get%2Dint%2D[get](int)++ -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Booleans-getValue-0]]boolean link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$Booleans.html#getValue%2D%2D[getValue]()++ -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Booleans-getValues-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$Booleans.html#getValues%2D%2D[getValues]()++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Booleans-get-1]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.Booleans.html#get%2Dint%2D[get](int)++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Booleans-getValue-0]]boolean link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.Booleans.html#getValue%2D%2D[getValue]()++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Booleans-getValues-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.Booleans.html#getValues%2D%2D[getValues]()++ * Inherits methods from ++<>++, ++<>++, ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.BytesRefs.asciidoc b/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.BytesRefs.asciidoc index fe11f96874941..3b61905bd0ebf 100644 --- a/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.BytesRefs.asciidoc +++ b/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.BytesRefs.asciidoc @@ -4,7 +4,7 @@ Rebuild by running `gradle generatePainlessApi`. 
//// [[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-BytesRefs]]++org.elasticsearch.index.fielddata.ScriptDocValues.BytesRefs++:: -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-BytesRefs-get-1]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$BytesRefs.html#get%2Dint%2D[get](int)++ -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-BytesRefs-getValue-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$BytesRefs.html#getValue%2D%2D[getValue]()++ -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-BytesRefs-getValues-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$BytesRefs.html#getValues%2D%2D[getValues]()++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-BytesRefs-get-1]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.BytesRefs.html#get%2Dint%2D[get](int)++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-BytesRefs-getValue-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.BytesRefs.html#getValue%2D%2D[getValue]()++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-BytesRefs-getValues-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.BytesRefs.html#getValues%2D%2D[getValues]()++ * Inherits methods from ++<>++, ++<>++, ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.Dates.asciidoc b/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.Dates.asciidoc new file mode 100644 index 0000000000000..061f16b7352a9 --- /dev/null +++ b/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.Dates.asciidoc @@ -0,0 +1,12 @@ +//// +Automatically generated by PainlessDocGenerator. Do not edit. +Rebuild by running `gradle generatePainlessApi`. 
+//// + +[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Dates]]++org.elasticsearch.index.fielddata.ScriptDocValues.Dates++:: +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Dates-get-1]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.Dates.html#get%2Dint%2D[get](int)++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Dates-getDate-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.Dates.html#getDate%2D%2D[getDate]()++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Dates-getDates-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.Dates.html#getDates%2D%2D[getDates]()++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Dates-getValue-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.Dates.html#getValue%2D%2D[getValue]()++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Dates-getValues-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.Dates.html#getValues%2D%2D[getValues]()++ +* Inherits methods from ++<>++, ++<>++, ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.Doubles.asciidoc b/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.Doubles.asciidoc index 92d4307241483..53c43a9963514 100644 --- a/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.Doubles.asciidoc +++ b/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.Doubles.asciidoc @@ -4,7 +4,7 @@ Rebuild by running `gradle generatePainlessApi`. 
//// [[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Doubles]]++org.elasticsearch.index.fielddata.ScriptDocValues.Doubles++:: -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Doubles-get-1]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$Doubles.html#get%2Dint%2D[get](int)++ -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Doubles-getValue-0]]double link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$Doubles.html#getValue%2D%2D[getValue]()++ -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Doubles-getValues-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$Doubles.html#getValues%2D%2D[getValues]()++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Doubles-get-1]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.Doubles.html#get%2Dint%2D[get](int)++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Doubles-getValue-0]]double link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.Doubles.html#getValue%2D%2D[getValue]()++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Doubles-getValues-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.Doubles.html#getValues%2D%2D[getValues]()++ * Inherits methods from ++<>++, ++<>++, ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.GeoPoints.asciidoc b/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.GeoPoints.asciidoc index e55afb2559430..a345cd712cf0d 100644 --- a/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.GeoPoints.asciidoc +++ b/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.GeoPoints.asciidoc @@ -4,17 +4,17 @@ Rebuild by running `gradle generatePainlessApi`. 
//// [[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints]]++org.elasticsearch.index.fielddata.ScriptDocValues.GeoPoints++:: -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints-arcDistance-2]]double link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$GeoPoints.html#arcDistance%2Ddouble%2Ddouble%2D[arcDistance](double, double)++ -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints-arcDistanceWithDefault-3]]double link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$GeoPoints.html#arcDistanceWithDefault%2Ddouble%2Ddouble%2Ddouble%2D[arcDistanceWithDefault](double, double, double)++ -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints-geohashDistance-1]]double link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$GeoPoints.html#geohashDistance%2Djava.lang.String%2D[geohashDistance](<>)++ -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints-geohashDistanceWithDefault-2]]double link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$GeoPoints.html#geohashDistanceWithDefault%2Djava.lang.String%2Ddouble%2D[geohashDistanceWithDefault](<>, double)++ -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints-get-1]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$GeoPoints.html#get%2Dint%2D[get](int)++ -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints-getLat-0]]double link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$GeoPoints.html#getLat%2D%2D[getLat]()++ -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints-getLats-0]]double[] link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$GeoPoints.html#getLats%2D%2D[getLats]()++ -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints-getLon-0]]double link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$GeoPoints.html#getLon%2D%2D[getLon]()++ -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints-getLons-0]]double[] link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$GeoPoints.html#getLons%2D%2D[getLons]()++ -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints-getValue-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$GeoPoints.html#getValue%2D%2D[getValue]()++ -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints-getValues-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$GeoPoints.html#getValues%2D%2D[getValues]()++ -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints-planeDistance-2]]double link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$GeoPoints.html#planeDistance%2Ddouble%2Ddouble%2D[planeDistance](double, double)++ -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints-planeDistanceWithDefault-3]]double link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$GeoPoints.html#planeDistanceWithDefault%2Ddouble%2Ddouble%2Ddouble%2D[planeDistanceWithDefault](double, double, double)++ +* 
++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints-arcDistance-2]]double link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.GeoPoints.html#arcDistance%2Ddouble%2Ddouble%2D[arcDistance](double, double)++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints-arcDistanceWithDefault-3]]double link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.GeoPoints.html#arcDistanceWithDefault%2Ddouble%2Ddouble%2Ddouble%2D[arcDistanceWithDefault](double, double, double)++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints-geohashDistance-1]]double link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.GeoPoints.html#geohashDistance%2Djava.lang.String%2D[geohashDistance](<>)++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints-geohashDistanceWithDefault-2]]double link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.GeoPoints.html#geohashDistanceWithDefault%2Djava.lang.String%2Ddouble%2D[geohashDistanceWithDefault](<>, double)++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints-get-1]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.GeoPoints.html#get%2Dint%2D[get](int)++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints-getLat-0]]double link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.GeoPoints.html#getLat%2D%2D[getLat]()++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints-getLats-0]]double[] link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.GeoPoints.html#getLats%2D%2D[getLats]()++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints-getLon-0]]double link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.GeoPoints.html#getLon%2D%2D[getLon]()++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints-getLons-0]]double[] link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.GeoPoints.html#getLons%2D%2D[getLons]()++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints-getValue-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.GeoPoints.html#getValue%2D%2D[getValue]()++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints-getValues-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.GeoPoints.html#getValues%2D%2D[getValues]()++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints-planeDistance-2]]double link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.GeoPoints.html#planeDistance%2Ddouble%2Ddouble%2D[planeDistance](double, double)++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-GeoPoints-planeDistanceWithDefault-3]]double link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.GeoPoints.html#planeDistanceWithDefault%2Ddouble%2Ddouble%2Ddouble%2D[planeDistanceWithDefault](double, double, double)++ * Inherits methods from ++<>++, ++<>++, ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.Longs.asciidoc 
b/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.Longs.asciidoc index 85e6e9289ef5a..f8a55344f35ee 100644 --- a/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.Longs.asciidoc +++ b/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.Longs.asciidoc @@ -4,9 +4,9 @@ Rebuild by running `gradle generatePainlessApi`. //// [[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Longs]]++org.elasticsearch.index.fielddata.ScriptDocValues.Longs++:: -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Longs-get-1]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$Longs.html#get%2Dint%2D[get](int)++ -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Longs-getDate-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$Longs.html#getDate%2D%2D[getDate]()++ -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Longs-getDates-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$Longs.html#getDates%2D%2D[getDates]()++ -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Longs-getValue-0]]long link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$Longs.html#getValue%2D%2D[getValue]()++ -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Longs-getValues-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$Longs.html#getValues%2D%2D[getValues]()++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Longs-get-1]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.Longs.html#get%2Dint%2D[get](int)++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Longs-getDate-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.Longs.html#getDate%2D%2D[getDate]()++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Longs-getDates-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.Longs.html#getDates%2D%2D[getDates]()++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Longs-getValue-0]]long link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.Longs.html#getValue%2D%2D[getValue]()++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Longs-getValues-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.Longs.html#getValues%2D%2D[getValues]()++ * Inherits methods from ++<>++, ++<>++, ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.Strings.asciidoc b/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.Strings.asciidoc index c15c152cd65ba..3284662cb9008 100644 --- a/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.Strings.asciidoc +++ b/docs/painless/painless-api-reference/org.elasticsearch.index.fielddata.ScriptDocValues.Strings.asciidoc @@ -4,7 +4,7 @@ Rebuild by running `gradle generatePainlessApi`. 
//// [[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Strings]]++org.elasticsearch.index.fielddata.ScriptDocValues.Strings++:: -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Strings-get-1]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$Strings.html#get%2Dint%2D[get](int)++ -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Strings-getValue-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$Strings.html#getValue%2D%2D[getValue]()++ -* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Strings-getValues-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues$Strings.html#getValues%2D%2D[getValues]()++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Strings-get-1]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.Strings.html#get%2Dint%2D[get](int)++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Strings-getValue-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.Strings.html#getValue%2D%2D[getValue]()++ +* ++[[painless-api-reference-org-elasticsearch-index-fielddata-ScriptDocValues-Strings-getValues-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/fielddata/ScriptDocValues.Strings.html#getValues%2D%2D[getValues]()++ * Inherits methods from ++<>++, ++<>++, ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/org.elasticsearch.index.mapper.IpFieldMapper.IpFieldType.IpScriptDocValues.asciidoc b/docs/painless/painless-api-reference/org.elasticsearch.index.mapper.IpFieldMapper.IpFieldType.IpScriptDocValues.asciidoc index e6ce5d5a57a27..872f355f308c8 100644 --- a/docs/painless/painless-api-reference/org.elasticsearch.index.mapper.IpFieldMapper.IpFieldType.IpScriptDocValues.asciidoc +++ b/docs/painless/painless-api-reference/org.elasticsearch.index.mapper.IpFieldMapper.IpFieldType.IpScriptDocValues.asciidoc @@ -4,7 +4,7 @@ Rebuild by running `gradle generatePainlessApi`. 
//// [[painless-api-reference-org-elasticsearch-index-mapper-IpFieldMapper-IpFieldType-IpScriptDocValues]]++org.elasticsearch.index.mapper.IpFieldMapper.IpFieldType.IpScriptDocValues++:: -* ++[[painless-api-reference-org-elasticsearch-index-mapper-IpFieldMapper-IpFieldType-IpScriptDocValues-get-1]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/mapper/IpFieldMapper$IpFieldType$IpScriptDocValues.html#get%2Dint%2D[get](int)++ -* ++[[painless-api-reference-org-elasticsearch-index-mapper-IpFieldMapper-IpFieldType-IpScriptDocValues-getValue-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/mapper/IpFieldMapper$IpFieldType$IpScriptDocValues.html#getValue%2D%2D[getValue]()++ -* ++[[painless-api-reference-org-elasticsearch-index-mapper-IpFieldMapper-IpFieldType-IpScriptDocValues-getValues-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/mapper/IpFieldMapper$IpFieldType$IpScriptDocValues.html#getValues%2D%2D[getValues]()++ +* ++[[painless-api-reference-org-elasticsearch-index-mapper-IpFieldMapper-IpFieldType-IpScriptDocValues-get-1]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/mapper/IpFieldMapper.IpFieldType.IpScriptDocValues.html#get%2Dint%2D[get](int)++ +* ++[[painless-api-reference-org-elasticsearch-index-mapper-IpFieldMapper-IpFieldType-IpScriptDocValues-getValue-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/mapper/IpFieldMapper.IpFieldType.IpScriptDocValues.html#getValue%2D%2D[getValue]()++ +* ++[[painless-api-reference-org-elasticsearch-index-mapper-IpFieldMapper-IpFieldType-IpScriptDocValues-getValues-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/index/mapper/IpFieldMapper.IpFieldType.IpScriptDocValues.html#getValues%2D%2D[getValues]()++ * Inherits methods from ++<>++, ++<>++, ++<>++, ++<>++ diff --git a/docs/painless/painless-api-reference/org.elasticsearch.index.similarity.ScriptedSimilarity.Doc.asciidoc b/docs/painless/painless-api-reference/org.elasticsearch.index.similarity.ScriptedSimilarity.Doc.asciidoc new file mode 100644 index 0000000000000..dcddbf40368b8 --- /dev/null +++ b/docs/painless/painless-api-reference/org.elasticsearch.index.similarity.ScriptedSimilarity.Doc.asciidoc @@ -0,0 +1,9 @@ +//// +Automatically generated by PainlessDocGenerator. Do not edit. +Rebuild by running `gradle generatePainlessApi`. +//// + +[[painless-api-reference-org-elasticsearch-index-similarity-ScriptedSimilarity-Doc]]++org.elasticsearch.index.similarity.ScriptedSimilarity.Doc++:: +* ++[[painless-api-reference-org-elasticsearch-index-similarity-ScriptedSimilarity-Doc-getFreq-0]]float link:{elasticsearch-javadoc}/org/elasticsearch/index/similarity/ScriptedSimilarity.Doc.html#getFreq%2D%2D[getFreq]()++ +* ++[[painless-api-reference-org-elasticsearch-index-similarity-ScriptedSimilarity-Doc-getLength-0]]int link:{elasticsearch-javadoc}/org/elasticsearch/index/similarity/ScriptedSimilarity.Doc.html#getLength%2D%2D[getLength]()++ +* Inherits methods from ++<>++ diff --git a/docs/painless/painless-api-reference/org.elasticsearch.index.similarity.ScriptedSimilarity.Field.asciidoc b/docs/painless/painless-api-reference/org.elasticsearch.index.similarity.ScriptedSimilarity.Field.asciidoc new file mode 100644 index 0000000000000..339bb5d619493 --- /dev/null +++ b/docs/painless/painless-api-reference/org.elasticsearch.index.similarity.ScriptedSimilarity.Field.asciidoc @@ -0,0 +1,10 @@ +//// +Automatically generated by PainlessDocGenerator. Do not edit. +Rebuild by running `gradle generatePainlessApi`. 
+//// + +[[painless-api-reference-org-elasticsearch-index-similarity-ScriptedSimilarity-Field]]++org.elasticsearch.index.similarity.ScriptedSimilarity.Field++:: +* ++[[painless-api-reference-org-elasticsearch-index-similarity-ScriptedSimilarity-Field-getDocCount-0]]long link:{elasticsearch-javadoc}/org/elasticsearch/index/similarity/ScriptedSimilarity.Field.html#getDocCount%2D%2D[getDocCount]()++ +* ++[[painless-api-reference-org-elasticsearch-index-similarity-ScriptedSimilarity-Field-getSumDocFreq-0]]long link:{elasticsearch-javadoc}/org/elasticsearch/index/similarity/ScriptedSimilarity.Field.html#getSumDocFreq%2D%2D[getSumDocFreq]()++ +* ++[[painless-api-reference-org-elasticsearch-index-similarity-ScriptedSimilarity-Field-getSumTotalTermFreq-0]]long link:{elasticsearch-javadoc}/org/elasticsearch/index/similarity/ScriptedSimilarity.Field.html#getSumTotalTermFreq%2D%2D[getSumTotalTermFreq]()++ +* Inherits methods from ++<>++ diff --git a/docs/painless/painless-api-reference/org.elasticsearch.index.similarity.ScriptedSimilarity.Query.asciidoc b/docs/painless/painless-api-reference/org.elasticsearch.index.similarity.ScriptedSimilarity.Query.asciidoc new file mode 100644 index 0000000000000..b15a8476cdabf --- /dev/null +++ b/docs/painless/painless-api-reference/org.elasticsearch.index.similarity.ScriptedSimilarity.Query.asciidoc @@ -0,0 +1,8 @@ +//// +Automatically generated by PainlessDocGenerator. Do not edit. +Rebuild by running `gradle generatePainlessApi`. +//// + +[[painless-api-reference-org-elasticsearch-index-similarity-ScriptedSimilarity-Query]]++org.elasticsearch.index.similarity.ScriptedSimilarity.Query++:: +* ++[[painless-api-reference-org-elasticsearch-index-similarity-ScriptedSimilarity-Query-getBoost-0]]float link:{elasticsearch-javadoc}/org/elasticsearch/index/similarity/ScriptedSimilarity.Query.html#getBoost%2D%2D[getBoost]()++ +* Inherits methods from ++<>++ diff --git a/docs/painless/painless-api-reference/org.elasticsearch.index.similarity.ScriptedSimilarity.Term.asciidoc b/docs/painless/painless-api-reference/org.elasticsearch.index.similarity.ScriptedSimilarity.Term.asciidoc new file mode 100644 index 0000000000000..42ed2538f972e --- /dev/null +++ b/docs/painless/painless-api-reference/org.elasticsearch.index.similarity.ScriptedSimilarity.Term.asciidoc @@ -0,0 +1,9 @@ +//// +Automatically generated by PainlessDocGenerator. Do not edit. +Rebuild by running `gradle generatePainlessApi`. 
+//// + +[[painless-api-reference-org-elasticsearch-index-similarity-ScriptedSimilarity-Term]]++org.elasticsearch.index.similarity.ScriptedSimilarity.Term++:: +* ++[[painless-api-reference-org-elasticsearch-index-similarity-ScriptedSimilarity-Term-getDocFreq-0]]long link:{elasticsearch-javadoc}/org/elasticsearch/index/similarity/ScriptedSimilarity.Term.html#getDocFreq%2D%2D[getDocFreq]()++ +* ++[[painless-api-reference-org-elasticsearch-index-similarity-ScriptedSimilarity-Term-getTotalTermFreq-0]]long link:{elasticsearch-javadoc}/org/elasticsearch/index/similarity/ScriptedSimilarity.Term.html#getTotalTermFreq%2D%2D[getTotalTermFreq]()++ +* Inherits methods from ++<>++ diff --git a/docs/painless/painless-api-reference/org.elasticsearch.painless.FeatureTest.asciidoc b/docs/painless/painless-api-reference/org.elasticsearch.painless.FeatureTest.asciidoc index 40fdec50793a9..6687f0dd651bd 100644 --- a/docs/painless/painless-api-reference/org.elasticsearch.painless.FeatureTest.asciidoc +++ b/docs/painless/painless-api-reference/org.elasticsearch.painless.FeatureTest.asciidoc @@ -8,8 +8,11 @@ Rebuild by running `gradle generatePainlessApi`. * ++[[painless-api-reference-org-elasticsearch-painless-FeatureTest-overloadedStatic-1]]static boolean link:{painless-javadoc}/org/elasticsearch/painless/FeatureTest.html#overloadedStatic%2Dboolean%2D[overloadedStatic](boolean)++ * ++[[painless-api-reference-org-elasticsearch-painless-FeatureTest-org.elasticsearch.painless.FeatureTest-0]]link:{painless-javadoc}/org/elasticsearch/painless/FeatureTest.html#org.elasticsearch.painless.FeatureTest%2D%2D[org.elasticsearch.painless.FeatureTest]()++ * ++[[painless-api-reference-org-elasticsearch-painless-FeatureTest-org.elasticsearch.painless.FeatureTest-2]]link:{painless-javadoc}/org/elasticsearch/painless/FeatureTest.html#org.elasticsearch.painless.FeatureTest%2Dint%2Dint%2D[org.elasticsearch.painless.FeatureTest](int, int)++ +* ++[[painless-api-reference-org-elasticsearch-painless-FeatureTest-addToTotal-1]]int link:{painless-javadoc}/org/elasticsearch/painless/FeatureTestAugmentation.html#addToTotal%2Dorg.elasticsearch.painless.FeatureTest%2Dint%2D[addToTotal](int)++ +* ++[[painless-api-reference-org-elasticsearch-painless-FeatureTest-getTotal-0]]int link:{painless-javadoc}/org/elasticsearch/painless/FeatureTestAugmentation.html#getTotal%2Dorg.elasticsearch.painless.FeatureTest%2D[getTotal]()++ * ++[[painless-api-reference-org-elasticsearch-painless-FeatureTest-getX-0]]int link:{painless-javadoc}/org/elasticsearch/painless/FeatureTest.html#getX%2D%2D[getX]()++ * ++[[painless-api-reference-org-elasticsearch-painless-FeatureTest-getY-0]]int link:{painless-javadoc}/org/elasticsearch/painless/FeatureTest.html#getY%2D%2D[getY]()++ +* ++[[painless-api-reference-org-elasticsearch-painless-FeatureTest-listInput-1]]void link:{painless-javadoc}/org/elasticsearch/painless/FeatureTest.html#listInput%2Djava.util.List%2D[listInput](<>)++ * ++[[painless-api-reference-org-elasticsearch-painless-FeatureTest-setX-1]]void link:{painless-javadoc}/org/elasticsearch/painless/FeatureTest.html#setX%2Dint%2D[setX](int)++ * ++[[painless-api-reference-org-elasticsearch-painless-FeatureTest-setY-1]]void link:{painless-javadoc}/org/elasticsearch/painless/FeatureTest.html#setY%2Dint%2D[setY](int)++ * ++[[painless-api-reference-org-elasticsearch-painless-FeatureTest-twoFunctionsOfX-2]]<> 
link:{painless-javadoc}/org/elasticsearch/painless/FeatureTest.html#twoFunctionsOfX%2Djava.util.function.Function%2Djava.util.function.Function%2D[twoFunctionsOfX](<>, <>)++ diff --git a/docs/painless/painless-api-reference/org.elasticsearch.search.lookup.FieldLookup.asciidoc b/docs/painless/painless-api-reference/org.elasticsearch.search.lookup.FieldLookup.asciidoc new file mode 100644 index 0000000000000..a9735010ed6f6 --- /dev/null +++ b/docs/painless/painless-api-reference/org.elasticsearch.search.lookup.FieldLookup.asciidoc @@ -0,0 +1,10 @@ +//// +Automatically generated by PainlessDocGenerator. Do not edit. +Rebuild by running `gradle generatePainlessApi`. +//// + +[[painless-api-reference-org-elasticsearch-search-lookup-FieldLookup]]++org.elasticsearch.search.lookup.FieldLookup++:: +* ++[[painless-api-reference-org-elasticsearch-search-lookup-FieldLookup-getValue-0]]def link:{elasticsearch-javadoc}/org/elasticsearch/search/lookup/FieldLookup.html#getValue%2D%2D[getValue]()++ +* ++[[painless-api-reference-org-elasticsearch-search-lookup-FieldLookup-getValues-0]]<> link:{elasticsearch-javadoc}/org/elasticsearch/search/lookup/FieldLookup.html#getValues%2D%2D[getValues]()++ +* ++[[painless-api-reference-org-elasticsearch-search-lookup-FieldLookup-isEmpty-0]]boolean link:{elasticsearch-javadoc}/org/elasticsearch/search/lookup/FieldLookup.html#isEmpty%2D%2D[isEmpty]()++ +* Inherits methods from ++<>++ diff --git a/docs/painless/painless-operators.asciidoc b/docs/painless/painless-operators.asciidoc index 2ed0fb6d4be50..0d5135022ad90 100644 --- a/docs/painless/painless-operators.asciidoc +++ b/docs/painless/painless-operators.asciidoc @@ -660,7 +660,7 @@ Note that def types will be assumed to be of the boolean type. Any def type eva *Examples:* [source,Java] ---- -boolean x = !false; // declares the boolean variable x and sets it to the the opposite of the false value +boolean x = !false; // declares the boolean variable x and sets it to the opposite of the false value boolean y = !x; // declares the boolean variable y and sets it to the opposite of the boolean variable x def z = !y; // declares the def variable z and sets it to the opposite of the boolean variable y ---- diff --git a/docs/plugins/analysis-kuromoji.asciidoc b/docs/plugins/analysis-kuromoji.asciidoc index 7a702295dd90d..383df5afb485b 100644 --- a/docs/plugins/analysis-kuromoji.asciidoc +++ b/docs/plugins/analysis-kuromoji.asciidoc @@ -160,7 +160,6 @@ The above `analyze` request returns the following: [source,js] -------------------------------------------------- -# Result { "tokens" : [ { "token" : "東京", diff --git a/docs/plugins/discovery-ec2.asciidoc b/docs/plugins/discovery-ec2.asciidoc index 121e3adbc60f9..60ccaba00ba75 100644 --- a/docs/plugins/discovery-ec2.asciidoc +++ b/docs/plugins/discovery-ec2.asciidoc @@ -223,7 +223,7 @@ Prefer https://aws.amazon.com/amazon-linux-ami/[Amazon Linux AMIs] as since Elas * Networking throttling takes place on smaller instance types in both the form of https://lab.getbase.com/how-we-discovered-limitations-on-the-aws-tcp-stack/[bandwidth and number of connections]. Therefore if large number of connections are needed and networking is becoming a bottleneck, avoid https://aws.amazon.com/ec2/instance-types/[instance types] with networking labeled as `Moderate` or `Low`. * Multicast is not supported, even when in an VPC; the aws cloud plugin which joins by performing a security group lookup. 
* When running in multiple http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html[availability zones] be sure to leverage {ref}/allocation-awareness.html[shard allocation awareness] so that not all copies of shard data reside in the same availability zone. -* Do not span a cluster across regions. If necessary, use a tribe node. +* Do not span a cluster across regions. If necessary, use a cross cluster search. ===== Misc * If you have split your nodes into roles, consider https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html[tagging the EC2 instances] by role to make it easier to filter and view your EC2 instances in the AWS console. diff --git a/docs/plugins/ingest-geoip.asciidoc b/docs/plugins/ingest-geoip.asciidoc index 51274b109ad7a..a9a663dd99def 100644 --- a/docs/plugins/ingest-geoip.asciidoc +++ b/docs/plugins/ingest-geoip.asciidoc @@ -10,7 +10,7 @@ under the CCA-ShareAlike 3.0 license. For more details see, http://dev.maxmind.c The GeoIP processor can run with other geoip2 databases from Maxmind. The files must be copied into the geoip config directory, and the `database_file` option should be used to specify the filename of the custom database. Custom database files must be compressed -with gzip. The geoip config directory is located at `$ES_HOME/config/ingest/geoip` and holds the shipped databases too. +with gzip. The geoip config directory is located at `$ES_HOME/config/ingest-geoip` and holds the shipped databases too. :plugin_name: ingest-geoip include::install_remove.asciidoc[] @@ -187,4 +187,4 @@ The geoip processor supports the following setting: The maximum number of results that should be cached. Defaults to `1000`. -Note that these settings are node settings and apply to all geoip processors, i.e. there is one cache for all defined geoip processors. \ No newline at end of file +Note that these settings are node settings and apply to all geoip processors, i.e. there is one cache for all defined geoip processors. diff --git a/docs/plugins/ingest.asciidoc b/docs/plugins/ingest.asciidoc index 490d3d1362b59..fa316c3862fb1 100644 --- a/docs/plugins/ingest.asciidoc +++ b/docs/plugins/ingest.asciidoc @@ -25,6 +25,13 @@ under the CCA-ShareAlike 3.0 license. For more details see, http://dev.maxmind.c A processor that extracts details from the User-Agent header value. +[float] +=== Community contributed ingest plugins + +The following plugin has been contributed by our community: + +* https://github.com/johtani/elasticsearch-ingest-csv[Ingest CSV Processor Plugin] (by Jun Ohtani) + include::ingest-attachment.asciidoc[] include::ingest-geoip.asciidoc[] diff --git a/docs/plugins/integrations.asciidoc b/docs/plugins/integrations.asciidoc index 4cfc5ab7539a8..9ece2febde180 100644 --- a/docs/plugins/integrations.asciidoc +++ b/docs/plugins/integrations.asciidoc @@ -17,6 +17,9 @@ Integrations are not plugins, but are external tools or modules that make it eas * https://drupal.org/project/elasticsearch_connector[Drupal]: Drupal Elasticsearch integration. 
+* https://wordpress.org/plugins/wpsolr-search-engine/[WPSOLR]: + Elasticsearch (and Apache Solr) WordPress Plugin + * http://searchbox-io.github.com/wp-elasticsearch/[Wp-Elasticsearch]: Elasticsearch WordPress Plugin diff --git a/docs/plugins/plugin-script.asciidoc b/docs/plugins/plugin-script.asciidoc index e7f4d25802384..329a65afbc59a 100644 --- a/docs/plugins/plugin-script.asciidoc +++ b/docs/plugins/plugin-script.asciidoc @@ -64,21 +64,42 @@ sudo bin/elasticsearch-plugin install [url] <1> ----------------------------------- <1> must be a valid URL, the plugin name is determined from its descriptor. -For instance, to install a plugin from your local file system, you could run: - +-- +Unix:: +To install a plugin from your local file system at `/path/to/plugin.zip`, you could run: ++ [source,shell] ----------------------------------- sudo bin/elasticsearch-plugin install file:///path/to/plugin.zip ----------------------------------- +Windows:: +To install a plugin from your local file system at `C:\path\to\plugin.zip`, you could run: ++ +[source,shell] +----------------------------------- +bin\elasticsearch-plugin install file:///C:/path/to/plugin.zip +----------------------------------- ++ +NOTE: Any path that contains spaces must be wrapped in quotes! + +HTTP:: +To install a plugin from a HTTP URL: ++ +[source,shell] +----------------------------------- +sudo bin/elasticsearch-plugin install http://some.domain/path/to/plugin.zip +----------------------------------- ++ The plugin script will refuse to talk to an HTTPS URL with an untrusted certificate. To use a self-signed HTTPS cert, you will need to add the CA cert to a local Java truststore and pass the location to the script as follows: - ++ [source,shell] ----------------------------------- -sudo ES_JAVA_OPTS="-Djavax.net.ssl.trustStore=/path/to/trustStore.jks" bin/elasticsearch-plugin install https://.... +sudo ES_JAVA_OPTS="-Djavax.net.ssl.trustStore=/path/to/trustStore.jks" bin/elasticsearch-plugin install https://host/plugin.zip ----------------------------------- +-- [[listing-removing-updating]] === Listing, Removing and Updating Installed Plugins diff --git a/docs/plugins/repository-azure.asciidoc b/docs/plugins/repository-azure.asciidoc index a7b4dbaa7fbdf..583217324edda 100644 --- a/docs/plugins/repository-azure.asciidoc +++ b/docs/plugins/repository-azure.asciidoc @@ -44,15 +44,18 @@ The initial backoff period is defined by Azure SDK as `30s`. Which means `30s` o before retrying after a first timeout or failure. The maximum backoff period is defined by Azure SDK as `90s`. +`endpoint_suffix` can be used to specify Azure endpoint suffix explicitly. Defaults to `core.windows.net`. + [source,yaml] ---- -cloud.azure.storage.timeout: 10s +azure.client.default.timeout: 10s azure.client.default.max_retries: 7 +azure.client.default.endpoint_suffix: core.chinacloudapi.cn azure.client.secondary.timeout: 30s ---- In this example, timeout will be `10s` per try for `default` with `7` retries before failing -and `30s` per try for `secondary` with `3` retries. +and endpoint suffix will be `core.chinacloudapi.cn` and `30s` per try for `secondary` with `3` retries. [IMPORTANT] .Supported Azure Storage Account types @@ -67,6 +70,19 @@ The Azure Repository plugin works with all Standard storage accounts https://azure.microsoft.com/en-gb/documentation/articles/storage-premium-storage[Premium Locally Redundant Storage] (`Premium_LRS`) is **not supported** as it is only usable as VM disk storage, not as general storage. 
=============================================== +You can register a proxy per client using the following settings: + +[source,yaml] +---- +azure.client.default.proxy.host: proxy.host +azure.client.default.proxy.port: 8888 +azure.client.default.proxy.type: http +---- + +Supported values for `proxy.type` are `direct` (default), `http` or `socks`. +When `proxy.type` is set to `http` or `socks`, `proxy.host` and `proxy.port` must be provided. + + [[repository-azure-repository-settings]] ===== Repository settings diff --git a/docs/plugins/repository-gcs.asciidoc b/docs/plugins/repository-gcs.asciidoc index fa4e8ca39807f..90c6e8f0ba4eb 100644 --- a/docs/plugins/repository-gcs.asciidoc +++ b/docs/plugins/repository-gcs.asciidoc @@ -93,7 +93,7 @@ A service account file looks like this: // NOTCONSOLE This file must be stored in the {ref}/secure-settings.html[elasticsearch keystore], under a setting name -of the form `gcs.client.NAME.credentials_file`, where `NAME` is the name of the client congiguration. +of the form `gcs.client.NAME.credentials_file`, where `NAME` is the name of the client configuration. The default client name is `default`, but a different client name can be specified in repository settings using `client`. diff --git a/docs/reference/aggregations/bucket/datehistogram-aggregation.asciidoc b/docs/reference/aggregations/bucket/datehistogram-aggregation.asciidoc index 3029b28df3bd2..ea72e07e337b8 100644 --- a/docs/reference/aggregations/bucket/datehistogram-aggregation.asciidoc +++ b/docs/reference/aggregations/bucket/datehistogram-aggregation.asciidoc @@ -398,11 +398,11 @@ the `order` setting. Supports the same `order` functionality as the < @@ -22,15 +22,18 @@ price for the product. The mapping could look like: } } } + } } -------------------------------------------------- - +// CONSOLE +// TESTSETUP <1> The `resellers` is an array that holds nested documents under the `product` object. The following aggregations will return the minimum price products can be purchased in: [source,js] -------------------------------------------------- +GET /_search { "query" : { "match" : { "name" : "led tv" } @@ -47,6 +50,9 @@ The following aggregations will return the minimum price products can be purchas } } -------------------------------------------------- +// CONSOLE +// TEST[s/GET \/_search/GET \/_search\?filter_path=aggregations/] +// TEST[s/^/PUT index\/product\/0\?refresh\n{"name":"led", "resellers": [{"name": "foo", "price": 350.00}, {"name": "bar", "price": 500.00}]}\n/] As you can see above, the nested aggregation requires the `path` of the nested documents within the top level documents. Then one can define any type of aggregation over these nested documents. @@ -56,12 +62,16 @@ Response: [source,js] -------------------------------------------------- { - "aggregations": { - "resellers": { - "min_price": { - "value" : 350 - } - } + ... 
+ "aggregations": { + "resellers": { + "doc_count": 0, + "min_price": { + "value": 350 + } } + } } -------------------------------------------------- +// TESTRESPONSE[s/\.\.\.//] +// TESTRESPONSE[s/: [0-9]+/: $body.$_path/] diff --git a/docs/reference/aggregations/bucket/range-aggregation.asciidoc b/docs/reference/aggregations/bucket/range-aggregation.asciidoc index 7ce8ec699f0de..8ff26c7c92f5c 100644 --- a/docs/reference/aggregations/bucket/range-aggregation.asciidoc +++ b/docs/reference/aggregations/bucket/range-aggregation.asciidoc @@ -8,21 +8,25 @@ Example: [source,js] -------------------------------------------------- +GET /_search { "aggs" : { "price_ranges" : { "range" : { "field" : "price", "ranges" : [ - { "to" : 50 }, - { "from" : 50, "to" : 100 }, - { "from" : 100 } + { "to" : 100.0 }, + { "from" : 100.0, "to" : 200.0 }, + { "from" : 200.0 } ] } } } } -------------------------------------------------- +// CONSOLE +// TEST[setup:sales] +// TEST[s/GET \/_search/GET \/_search\?filter_path=aggregations/] Response: @@ -30,28 +34,31 @@ Response: -------------------------------------------------- { ... - "aggregations": { "price_ranges" : { "buckets": [ { - "to": 50, + "key": "*-100.0", + "to": 100.0, "doc_count": 2 }, { - "from": 50, - "to": 100, - "doc_count": 4 + "key": "100.0-200.0", + "from": 100.0, + "to": 200.0, + "doc_count": 2 }, { - "from": 100, - "doc_count": 4 + "key": "200.0-*", + "from": 200.0, + "doc_count": 3 } ] } } } -------------------------------------------------- +// TESTRESPONSE[s/\.\.\.//] ==== Keyed Response @@ -59,6 +66,7 @@ Setting the `keyed` flag to `true` will associate a unique string key with each [source,js] -------------------------------------------------- +GET /_search { "aggs" : { "price_ranges" : { @@ -66,15 +74,18 @@ Setting the `keyed` flag to `true` will associate a unique string key with each "field" : "price", "keyed" : true, "ranges" : [ - { "to" : 50 }, - { "from" : 50, "to" : 100 }, - { "from" : 100 } + { "to" : 100 }, + { "from" : 100, "to" : 200 }, + { "from" : 200 } ] } } } } -------------------------------------------------- +// CONSOLE +// TEST[setup:sales] +// TEST[s/GET \/_search/GET \/_search\?filter_path=aggregations/] Response: @@ -82,33 +93,34 @@ Response: -------------------------------------------------- { ... - "aggregations": { "price_ranges" : { "buckets": { - "*-50.0": { - "to": 50, + "*-100.0": { + "to": 100.0, "doc_count": 2 }, - "50.0-100.0": { - "from": 50, - "to": 100, - "doc_count": 4 + "100.0-200.0": { + "from": 100.0, + "to": 200.0, + "doc_count": 2 }, - "100.0-*": { - "from": 100, - "doc_count": 4 + "200.0-*": { + "from": 200.0, + "doc_count": 3 } } } } } -------------------------------------------------- +// TESTRESPONSE[s/\.\.\.//] It is also possible to customize the key for each range: [source,js] -------------------------------------------------- +GET /_search { "aggs" : { "price_ranges" : { @@ -116,20 +128,58 @@ It is also possible to customize the key for each range: "field" : "price", "keyed" : true, "ranges" : [ - { "key" : "cheap", "to" : 50 }, - { "key" : "average", "from" : 50, "to" : 100 }, - { "key" : "expensive", "from" : 100 } + { "key" : "cheap", "to" : 100 }, + { "key" : "average", "from" : 100, "to" : 200 }, + { "key" : "expensive", "from" : 200 } ] } } } } -------------------------------------------------- +// CONSOLE +// TEST[setup:sales] +// TEST[s/GET \/_search/GET \/_search\?filter_path=aggregations/] + +Response: + +[source,js] +-------------------------------------------------- +{ + ... 
+ "aggregations": { + "price_ranges" : { + "buckets": { + "cheap": { + "to": 100.0, + "doc_count": 2 + }, + "average": { + "from": 100.0, + "to": 200.0, + "doc_count": 2 + }, + "expensive": { + "from": 200.0, + "doc_count": 3 + } + } + } + } +} +-------------------------------------------------- +// TESTRESPONSE[s/\.\.\.//] ==== Script +Range aggregation accepts a `script` parameter. This parameter allows to defined an inline `script` that +will be executed during aggregation execution. + +The following example shows how to use an `inline` script with the `painless` script language and no script parameters: + [source,js] -------------------------------------------------- +GET /_search { "aggs" : { "price_ranges" : { @@ -139,33 +189,50 @@ It is also possible to customize the key for each range: "source": "doc['price'].value" }, "ranges" : [ - { "to" : 50 }, - { "from" : 50, "to" : 100 }, - { "from" : 100 } + { "to" : 100 }, + { "from" : 100, "to" : 200 }, + { "from" : 200 } ] } } } } -------------------------------------------------- +// CONSOLE + +It is also possible to use stored scripts. Here is a simple stored script: + +[source,js] +-------------------------------------------------- +POST /_scripts/convert_currency +{ + "script": { + "lang": "painless", + "source": "doc[params.field].value * params.conversion_rate" + } +} +-------------------------------------------------- +// CONSOLE +// TEST[setup:sales] -This will interpret the `script` parameter as an `inline` script with the `painless` script language and no script parameters. To use a stored script use the following syntax: +And this new stored script can be used in the range aggregation like this: [source,js] -------------------------------------------------- +GET /_search { "aggs" : { "price_ranges" : { "range" : { "script" : { - "id": "my_script", - "params": { - "field": "price" + "id": "convert_currency", <1> + "params": { <2> + "field": "price", + "conversion_rate": 0.835526591 } }, "ranges" : [ - { "to" : 50 }, - { "from" : 50, "to" : 100 }, + { "from" : 0, "to" : 100 }, { "from" : 100 } ] } @@ -173,6 +240,39 @@ This will interpret the `script` parameter as an `inline` script with the `painl } } -------------------------------------------------- +// CONSOLE +// TEST[s/GET \/_search/GET \/_search\?filter_path=aggregations/] +// TEST[continued] +<1> Id of the stored script +<2> Parameters to use when executing the stored script + +////////////////////////// + +[source,js] +-------------------------------------------------- +{ + "aggregations": { + "price_ranges" : { + "buckets": [ + { + "key" : "0.0-100.0", + "from" : 0.0, + "to" : 100.0, + "doc_count" : 2 + }, + { + "key" : "100.0-*", + "from" : 100.0, + "doc_count" : 5 + } + ] + } + } +} +-------------------------------------------------- +// TESTRESPONSE + +////////////////////////// ==== Value Script @@ -180,13 +280,13 @@ Lets say the product prices are in USD but we would like to get the price ranges [source,js] -------------------------------------------------- +GET /sales/_search { "aggs" : { "price_ranges" : { "range" : { "field" : "price", "script" : { - "lang": "painless", "source": "_value * params.conversion_rate", "params" : { "conversion_rate" : 0.8 @@ -202,6 +302,8 @@ Lets say the product prices are in USD but we would like to get the price ranges } } -------------------------------------------------- +// CONSOLE +// TEST[setup:sales] ==== Sub Aggregations @@ -209,15 +311,16 @@ The following example, not only "bucket" the documents to the different buckets 
[source,js] -------------------------------------------------- +GET /_search { "aggs" : { "price_ranges" : { "range" : { "field" : "price", "ranges" : [ - { "to" : 50 }, - { "from" : 50, "to" : 100 }, - { "from" : 100 } + { "to" : 100 }, + { "from" : 100, "to" : 200 }, + { "from" : 200 } ] }, "aggs" : { @@ -229,68 +332,77 @@ The following example, not only "bucket" the documents to the different buckets } } -------------------------------------------------- +// CONSOLE +// TEST[setup:sales] +// TEST[s/GET \/_search/GET \/_search\?filter_path=aggregations/] Response: [source,js] -------------------------------------------------- { - "aggregations": { - "price_ranges" : { - "buckets": [ - { - "to": 50, - "doc_count": 2, - "price_stats": { - "count": 2, - "min": 20, - "max": 47, - "avg": 33.5, - "sum": 67 - } - }, - { - "from": 50, - "to": 100, - "doc_count": 4, - "price_stats": { - "count": 4, - "min": 60, - "max": 98, - "avg": 82.5, - "sum": 330 - } - }, - { - "from": 100, - "doc_count": 4, - "price_stats": { - "count": 4, - "min": 134, - "max": 367, - "avg": 216, - "sum": 864 - } - } - ] + ... + "aggregations": { + "price_ranges": { + "buckets": [ + { + "key": "*-100.0", + "to": 100.0, + "doc_count": 2, + "price_stats": { + "count": 2, + "min": 10.0, + "max": 50.0, + "avg": 30.0, + "sum": 60.0 + } + }, + { + "key": "100.0-200.0", + "from": 100.0, + "to": 200.0, + "doc_count": 2, + "price_stats": { + "count": 2, + "min": 150.0, + "max": 175.0, + "avg": 162.5, + "sum": 325.0 + } + }, + { + "key": "200.0-*", + "from": 200.0, + "doc_count": 3, + "price_stats": { + "count": 3, + "min": 200.0, + "max": 200.0, + "avg": 200.0, + "sum": 600.0 + } } + ] } + } } -------------------------------------------------- +// TESTRESPONSE[s/\.\.\.//] If a sub aggregation is also based on the same value source as the range aggregation (like the `stats` aggregation in the example above) it is possible to leave out the value source definition for it. The following will return the same response as above: [source,js] -------------------------------------------------- +GET /_search { "aggs" : { "price_ranges" : { "range" : { "field" : "price", "ranges" : [ - { "to" : 50 }, - { "from" : 50, "to" : 100 }, - { "from" : 100 } + { "to" : 100 }, + { "from" : 100, "to" : 200 }, + { "from" : 200 } ] }, "aggs" : { @@ -302,5 +414,5 @@ If a sub aggregation is also based on the same value source as the range aggrega } } -------------------------------------------------- - +// CONSOLE <1> We don't need to specify the `price` as we "inherit" it by default from the parent `range` aggregation diff --git a/docs/reference/aggregations/bucket/reverse-nested-aggregation.asciidoc b/docs/reference/aggregations/bucket/reverse-nested-aggregation.asciidoc index b6074298e1c03..8797e6041d5f3 100644 --- a/docs/reference/aggregations/bucket/reverse-nested-aggregation.asciidoc +++ b/docs/reference/aggregations/bucket/reverse-nested-aggregation.asciidoc @@ -17,36 +17,48 @@ the issue documents as nested documents. The mapping could look like: [source,js] -------------------------------------------------- +PUT /issues { - ... 
- - "issue" : { - "properties" : { - "tags" : { "type" : "text" }, - "comments" : { <1> - "type" : "nested", - "properties" : { - "username" : { "type" : "keyword" }, - "comment" : { "type" : "text" } + "mappings": { + "issue" : { + "properties" : { + "tags" : { "type" : "keyword" }, + "comments" : { <1> + "type" : "nested", + "properties" : { + "username" : { "type" : "keyword" }, + "comment" : { "type" : "text" } + } } } } } } -------------------------------------------------- - +// CONSOLE <1> The `comments` is an array that holds nested documents under the `issue` object. The following aggregations will return the top commenters' username that have commented and per top commenter the top tags of the issues the user has commented on: +////////////////////////// + +[source,js] +-------------------------------------------------- +POST /issues/issue/0?refresh +{"tags": ["tag_1"], "comments": [{"username": "username_1"}]} +-------------------------------------------------- +// CONSOLE +// TEST[continued] + +////////////////////////// + [source,js] -------------------------------------------------- +GET /issues/_search { "query": { - "match": { - "name": "led tv" - } + "match_all": {} }, "aggs": { "comments": { @@ -76,6 +88,9 @@ tags of the issues the user has commented on: } } -------------------------------------------------- +// CONSOLE +// TEST[continued] +// TEST[s/_search/_search\?filter_path=aggregations/] As you can see above, the `reverse_nested` aggregation is put in to a `nested` aggregation as this is the only place in the dsl where the `reversed_nested` aggregation can be used. Its sole purpose is to join back to a parent doc higher @@ -92,23 +107,29 @@ Possible response snippet: { "aggregations": { "comments": { + "doc_count": 1, "top_usernames": { + "doc_count_error_upper_bound" : 0, + "sum_other_doc_count" : 0, "buckets": [ { "key": "username_1", - "doc_count": 12, + "doc_count": 1, "comment_to_issue": { + "doc_count": 1, "top_tags_per_comment": { + "doc_count_error_upper_bound" : 0, + "sum_other_doc_count" : 0, "buckets": [ { - "key": "tag1", - "doc_count": 9 - }, + "key": "tag_1", + "doc_count": 1 + } ... ] } } - }, + } ... ] } @@ -116,3 +137,4 @@ Possible response snippet: } } -------------------------------------------------- +// TESTRESPONSE[s/\.\.\.//] diff --git a/docs/reference/aggregations/bucket/significantterms-aggregation.asciidoc b/docs/reference/aggregations/bucket/significantterms-aggregation.asciidoc index 6b37eeedb0141..1db54611b31cf 100644 --- a/docs/reference/aggregations/bucket/significantterms-aggregation.asciidoc +++ b/docs/reference/aggregations/bucket/significantterms-aggregation.asciidoc @@ -15,6 +15,50 @@ They are the terms that have undergone a significant change in popularity measur If the term "H5N1" only exists in 5 documents in a 10 million document index and yet is found in 4 of the 100 documents that make up a user's search results that is significant and probably very relevant to their search. 5/10,000,000 vs 4/100 is a big swing in frequency. 
+////////////////////////// + +[source,js] +-------------------------------------------------- +PUT /reports +{ + "mappings": { + "report": { + "properties": { + "force": { + "type": "keyword" + }, + "crime_type": { + "type": "keyword" + } + } + } + } +} + +POST /reports/report/_bulk?refresh +{"index":{"_id":0}} +{"force": "British Transport Police", "crime_type": "Bicycle theft"} +{"index":{"_id":1}} +{"force": "British Transport Police", "crime_type": "Bicycle theft"} +{"index":{"_id":2}} +{"force": "British Transport Police", "crime_type": "Bicycle theft"} +{"index":{"_id":3}} +{"force": "British Transport Police", "crime_type": "Robbery"} +{"index":{"_id":4}} +{"force": "Metropolitan Police Service", "crime_type": "Robbery"} +{"index":{"_id":5}} +{"force": "Metropolitan Police Service", "crime_type": "Bicycle theft"} +{"index":{"_id":6}} +{"force": "Metropolitan Police Service", "crime_type": "Robbery"} +{"index":{"_id":7}} +{"force": "Metropolitan Police Service", "crime_type": "Robbery"} + +------------------------------------------------- +// NOTCONSOLE +// TESTSETUP + +////////////////////////// + ==== Single-set analysis In the simplest case, the _foreground_ set of interest is the search results matched by a query and the _background_ @@ -24,17 +68,20 @@ Example: [source,js] -------------------------------------------------- +GET /_search { "query" : { "terms" : {"force" : [ "British Transport Police" ]} }, "aggregations" : { - "significantCrimeTypes" : { + "significant_crime_types" : { "significant_terms" : { "field" : "crime_type" } } } } -------------------------------------------------- +// CONSOLE +// TEST[s/_search/_search\?filter_path=aggregations/] Response: @@ -42,9 +89,8 @@ Response: -------------------------------------------------- { ... - "aggregations" : { - "significantCrimeTypes" : { + "significant_crime_types" : { "doc_count": 47347, "bg_count": 5064554, "buckets" : [ @@ -60,6 +106,8 @@ Response: } } -------------------------------------------------- +// TESTRESPONSE[s/\.\.\.//] +// TESTRESPONSE[s/: (0\.)?[0-9]+/: $body.$_path/] When querying an index of all crimes from all police forces, what these results show is that the British Transport Police force stand out as a force dealing with a disproportionately large number of bicycle thefts. Ordinarily, bicycle thefts represent only 1% of crimes (66799/5064554) @@ -81,12 +129,13 @@ Example using a parent aggregation for segmentation: [source,js] -------------------------------------------------- +GET /_search { "aggregations": { "forces": { "terms": {"field": "force"}, "aggregations": { - "significantCrimeTypes": { + "significant_crime_types": { "significant_terms": {"field": "crime_type"} } } @@ -94,6 +143,8 @@ Example using a parent aggregation for segmentation: } } -------------------------------------------------- +// CONSOLE +// TEST[s/_search/_search\?filter_path=aggregations/] Response: @@ -101,14 +152,15 @@ Response: -------------------------------------------------- { ... - "aggregations": { "forces": { + "doc_count_error_upper_bound": 1375, + "sum_other_doc_count": 7879845, "buckets": [ { "key": "Metropolitan Police Service", "doc_count": 894038, - "significantCrimeTypes": { + "significant_crime_types": { "doc_count": 894038, "bg_count": 5064554, "buckets": [ @@ -117,7 +169,7 @@ Response: "doc_count": 27617, "score": 0.0599, "bg_count": 53182 - }, + } ... 
] } @@ -125,7 +177,7 @@ Response: { "key": "British Transport Police", "doc_count": 47347, - "significantCrimeTypes": { + "significant_crime_types": { "doc_count": 47347, "bg_count": 5064554, "buckets": [ @@ -134,16 +186,19 @@ Response: "doc_count": 3640, "score": 0.371, "bg_count": 66799 - }, + } ... ] } } ] } + } } - -------------------------------------------------- +// TESTRESPONSE[s/\.\.\.//] +// TESTRESPONSE[s/: (0\.)?[0-9]+/: $body.$_path/] +// TESTRESPONSE[s/: "[^"]*"/: $body.$_path/] Now we have anomaly detection for each of the police forces using a single request. @@ -152,15 +207,16 @@ area to identify unusual hot-spots of a particular crime type: [source,js] -------------------------------------------------- +GET /_search { "aggs": { "hotspots": { - "geohash_grid" : { - "field":"location", - "precision":5, + "geohash_grid": { + "field": "location", + "precision": 5 }, "aggs": { - "significantCrimeTypes": { + "significant_crime_types": { "significant_terms": {"field": "crime_type"} } } @@ -168,6 +224,7 @@ area to identify unusual hot-spots of a particular crime type: } } -------------------------------------------------- +// CONSOLE This example uses the `geohash_grid` aggregation to create result buckets that represent geographic areas, and inside each bucket we can identify anomalous levels of a crime type in these tightly-focused areas e.g. @@ -283,6 +340,7 @@ Mutual information as described in "Information Retrieval", Manning et al., Chap "include_negatives": true } -------------------------------------------------- +// NOTCONSOLE Mutual information does not differentiate between terms that are descriptive for the subset or for documents outside the subset. The significant terms therefore can contain terms that appear more or less frequent in the subset than outside the subset. To filter out the terms that appear less often in the subset than in documents outside the subset, `include_negatives` can be set to `false`. @@ -293,7 +351,7 @@ Per default, the assumption is that the documents in the bucket are also contain "background_is_superset": false -------------------------------------------------- - +// NOTCONSOLE ===== Chi square Chi square as described in "Information Retrieval", Manning et al., Chapter 13.5.2 can be used as significance score by adding the parameter @@ -304,7 +362,7 @@ Chi square as described in "Information Retrieval", Manning et al., Chapter 13.5 "chi_square": { } -------------------------------------------------- - +// NOTCONSOLE Chi square behaves like mutual information and can be configured with the same parameters `include_negatives` and `background_is_superset`. @@ -317,7 +375,7 @@ Google normalized distance as described in "The Google Similarity Distance", Ci "gnd": { } -------------------------------------------------- - +// NOTCONSOLE `gnd` also accepts the `background_is_superset` parameter. @@ -336,7 +394,7 @@ Multiple observations are typically required to reinforce a view so it is recomm "percentage": { } -------------------------------------------------- - +// NOTCONSOLE ===== Which one is best? @@ -360,7 +418,7 @@ Customized scores can be implemented via a script: } } -------------------------------------------------- - +// NOTCONSOLE Scripts can be inline (as in above example), indexed or stored on disk. For details on the options, see <>. 
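For instance, a complete request combining the earlier crime-data query with a custom heuristic might look like the sketch below. It is illustrative only and assumes the `script_heuristic` option with an inline painless script; the subset/superset frequency parameters it uses are the ones described next.

[source,js]
--------------------------------------------------
GET /_search
{
    "query" : {
        "terms" : {"force" : [ "British Transport Police" ]}
    },
    "aggregations" : {
        "significant_crime_types" : {
            "significant_terms" : {
                "field" : "crime_type",
                "script_heuristic" : {
                    "script" : {
                        "lang" : "painless",
                        "source" : "params._subset_freq / (params._superset_freq - params._subset_freq + 1)"
                    }
                }
            }
        }
    }
}
--------------------------------------------------
// NOTCONSOLE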
Available parameters in the script are
@@ -400,6 +458,7 @@ It is possible to only return terms that match more than a configured number of

[source,js]
--------------------------------------------------
+GET /_search
{
    "aggs" : {
        "tags" : {
@@ -411,7 +470,7 @@ It is possible to only return terms that match more than a configured number of
    }
}
--------------------------------------------------
-
+// CONSOLE

The above aggregation would only return tags which have been found in 10 hits or more. Default value is `3`.

@@ -442,9 +501,12 @@ context:

[source,js]
--------------------------------------------------
+GET /_search
{
    "query" : {
-        "match" : "madrid"
+        "match" : {
+            "city" : "madrid"
+        }
    },
    "aggs" : {
        "tags" : {
@@ -458,6 +520,7 @@ context:
    }
}
--------------------------------------------------
+// CONSOLE

The above filter would help focus in on terms that were peculiar to the city of Madrid rather than revealing
terms like "Spanish" that are unusual in the full index's worldwide context but commonplace in the subset of documents containing the

@@ -472,32 +535,26 @@ It is possible (although rarely required) to filter the values for which buckets
`exclude` parameters which are based on a regular expression string or arrays of exact terms. This functionality mirrors the features
described in the <> documentation.

-
-===== Execution hint
-
+==== Execution hint
There are different mechanisms by which terms aggregations can be executed:

 - by using field values directly in order to aggregate data per-bucket (`map`)
- - by using ordinals of the field and preemptively allocating one bucket per ordinal value (`global_ordinals`)
- - by using ordinals of the field and dynamically allocating one bucket per ordinal value (`global_ordinals_hash`)
+ - by using global ordinals of the field and allocating one bucket per global ordinal (`global_ordinals`)

Elasticsearch tries to have sensible defaults so this is something that generally doesn't need to be configured.

-`map` should only be considered when very few documents match a query. Otherwise the ordinals-based execution modes
-are significantly faster. By default, `map` is only used when running an aggregation on scripts, since they don't have
-ordinals.
-
-`global_ordinals` is the second fastest option, but the fact that it preemptively allocates buckets can be memory-intensive,
-especially if you have one or more sub aggregations. It is used by default on top-level terms aggregations.
+`global_ordinals` is the default option for `keyword` fields. It uses global ordinals to allocate buckets dynamically,
+so memory usage grows linearly with the number of values of the documents that are part of the aggregation scope.

-`global_ordinals_hash` on the contrary to `global_ordinals` and `global_ordinals_low_cardinality` allocates buckets dynamically
-so memory usage is linear to the number of values of the documents that are part of the aggregation scope. It is used by default
-in inner aggregations.
+`map` should only be considered when very few documents match a query. Otherwise the ordinals-based execution mode
+is significantly faster. By default, `map` is only used when running an aggregation on scripts, since they don't have
+ordinals.

[source,js]
--------------------------------------------------
+GET /_search
{
    "aggs" : {
        "tags" : {
@@ -509,7 +566,8 @@ in inner aggregations.
} } -------------------------------------------------- +// CONSOLE -<1> the possible values are `map`, `global_ordinals` and `global_ordinals_hash` +<1> the possible values are `map`, `global_ordinals` Please note that Elasticsearch will ignore this execution hint if it is not applicable. diff --git a/docs/reference/aggregations/bucket/terms-aggregation.asciidoc b/docs/reference/aggregations/bucket/terms-aggregation.asciidoc index 721b1e9eccaea..ba6b912780c9f 100644 --- a/docs/reference/aggregations/bucket/terms-aggregation.asciidoc +++ b/docs/reference/aggregations/bucket/terms-aggregation.asciidoc @@ -3,10 +3,61 @@ A multi-bucket value source based aggregation where buckets are dynamically built - one per unique value. +////////////////////////// + +[source,js] +-------------------------------------------------- +PUT /products +{ + "mappings": { + "product": { + "properties": { + "genre": { + "type": "keyword" + }, + "product": { + "type": "keyword" + } + } + } + } +} + +POST /products/product/_bulk?refresh +{"index":{"_id":0}} +{"genre": "rock", "product": "Product A"} +{"index":{"_id":1}} +{"genre": "rock"} +{"index":{"_id":2}} +{"genre": "rock"} +{"index":{"_id":3}} +{"genre": "jazz", "product": "Product Z"} +{"index":{"_id":4}} +{"genre": "jazz"} +{"index":{"_id":5}} +{"genre": "electronic"} +{"index":{"_id":6}} +{"genre": "electronic"} +{"index":{"_id":7}} +{"genre": "electronic"} +{"index":{"_id":8}} +{"genre": "electronic"} +{"index":{"_id":9}} +{"genre": "electronic"} +{"index":{"_id":10}} +{"genre": "electronic"} + +------------------------------------------------- +// NOTCONSOLE +// TESTSETUP + +////////////////////////// + Example: [source,js] -------------------------------------------------- +GET /_search { "aggs" : { "genres" : { @@ -15,6 +66,8 @@ Example: } } -------------------------------------------------- +// CONSOLE +// TEST[s/_search/_search\?filter_path=aggregations/] Response: @@ -22,30 +75,29 @@ Response: -------------------------------------------------- { ... 
- "aggregations" : { "genres" : { "doc_count_error_upper_bound": 0, <1> "sum_other_doc_count": 0, <2> "buckets" : [ <3> { - "key" : "jazz", - "doc_count" : 10 + "key" : "electronic", + "doc_count" : 6 }, { "key" : "rock", - "doc_count" : 10 + "doc_count" : 3 }, { - "key" : "electronic", - "doc_count" : 10 - }, + "key" : "jazz", + "doc_count" : 2 + } ] } } } -------------------------------------------------- - +// TESTRESPONSE[s/\.\.\.//] <1> an upper bound of the error on the document counts for each term, see <> <2> when there are lots of unique terms, elasticsearch only returns the top terms; this number is the sum of the document counts for all buckets that are not part of the response <3> the list of the top buckets, the meaning of `top` being defined by the <> @@ -74,6 +126,7 @@ A request is made to obtain the top 5 terms in the field product, ordered by des [source,js] -------------------------------------------------- +GET /_search { "aggs" : { "products" : { @@ -85,6 +138,8 @@ A request is made to obtain the top 5 terms in the field product, ordered by des } } -------------------------------------------------- +// CONSOLE +// TEST[s/_search/_search\?filter_path=aggregations/] The terms for each of the three shards are shown below with their respective document counts in brackets: @@ -108,7 +163,6 @@ respective document counts in brackets: The shards will return their top 5 terms so the results from the shards will be: - [width="100%",cols="^2,^2,^2,^2",options="header"] |========================================================= | | Shard A | Shard B | Shard C @@ -165,9 +219,9 @@ otherwise. ==== Calculating Document Count Error -There are two error values which can be shown on the terms aggregation. The first gives a value for the aggregation as +There are two error values which can be shown on the terms aggregation. The first gives a value for the aggregation as a whole which represents the maximum potential document count for a term which did not make it into the final list of -terms. This is calculated as the sum of the document count from the last term returned from each shard .For the example +terms. This is calculated as the sum of the document count from the last term returned from each shard. For the example given above the value would be 46 (2 + 15 + 29). This means that in the worst case scenario a term which was not returned could have the 4th highest document count. @@ -175,10 +229,10 @@ could have the 4th highest document count. -------------------------------------------------- { ... - "aggregations" : { "products" : { "doc_count_error_upper_bound" : 46, + "sum_other_doc_count" : 79, "buckets" : [ { "key" : "Product A", @@ -187,33 +241,55 @@ could have the 4th highest document count. { "key" : "Product Z", "doc_count" : 52 - }, + } ... ] } } } -------------------------------------------------- +// TESTRESPONSE[s/\.\.\.//] +// TESTRESPONSE[s/: (\-)?[0-9]+/: $body.$_path/] ==== Per bucket document count error -The second error value can be enabled by setting the `show_term_doc_count_error` parameter to true. This shows an error value -for each term returned by the aggregation which represents the 'worst case' error in the document count and can be useful when -deciding on a value for the `shard_size` parameter. This is calculated by summing the document counts for the last term returned -by all shards which did not return the term. 
In the example above the error in the document count for Product C would be 15 as -Shard B was the only shard not to return the term and the document count of the last term it did return was 15. The actual document -count of Product C was 54 so the document count was only actually off by 4 even though the worst case was that it would be off by -15. Product A, however has an error of 0 for its document count, since every shard returned it we can be confident that the count -returned is accurate. +The second error value can be enabled by setting the `show_term_doc_count_error` parameter to true: [source,js] -------------------------------------------------- +GET /_search { - ... + "aggs" : { + "products" : { + "terms" : { + "field" : "product", + "size" : 5, + "show_term_doc_count_error": true + } + } + } +} +-------------------------------------------------- +// CONSOLE +// TEST[s/_search/_search\?filter_path=aggregations/] + +This shows an error value for each term returned by the aggregation which represents the 'worst case' error in the document count +and can be useful when deciding on a value for the `shard_size` parameter. This is calculated by summing the document counts for +the last term returned by all shards which did not return the term. In the example above the error in the document count for Product C +would be 15 as Shard B was the only shard not to return the term and the document count of the last term it did return was 15. +The actual document count of Product C was 54 so the document count was only actually off by 4 even though the worst case was that +it would be off by 15. Product A, however has an error of 0 for its document count, since every shard returned it we can be confident +that the count returned is accurate. + +[source,js] +-------------------------------------------------- +{ + ... "aggregations" : { "products" : { "doc_count_error_upper_bound" : 46, + "sum_other_doc_count" : 79, "buckets" : [ { "key" : "Product A", @@ -224,13 +300,15 @@ returned is accurate. "key" : "Product Z", "doc_count" : 52, "doc_count_error_upper_bound" : 2 - }, + } ... ] } } } -------------------------------------------------- +// TESTRESPONSE[s/\.\.\.//] +// TESTRESPONSE[s/: (\-)?[0-9]+/: $body.$_path/] These errors can only be calculated in this way when the terms are ordered by descending document count. 
When the aggregation is ordered by the terms values themselves (either ascending or descending) there is no error in the document count since if a shard @@ -257,6 +335,7 @@ Ordering the buckets by their doc `_count` in an ascending manner: [source,js] -------------------------------------------------- +GET /_search { "aggs" : { "genres" : { @@ -268,11 +347,13 @@ Ordering the buckets by their doc `_count` in an ascending manner: } } -------------------------------------------------- +// CONSOLE Ordering the buckets alphabetically by their terms in an ascending manner: [source,js] -------------------------------------------------- +GET /_search { "aggs" : { "genres" : { @@ -284,6 +365,7 @@ Ordering the buckets alphabetically by their terms in an ascending manner: } } -------------------------------------------------- +// CONSOLE deprecated[6.0.0, Use `_key` instead of `_term` to order buckets by their term] @@ -291,6 +373,7 @@ Ordering the buckets by single value metrics sub-aggregation (identified by the [source,js] -------------------------------------------------- +GET /_search { "aggs" : { "genres" : { @@ -305,11 +388,13 @@ Ordering the buckets by single value metrics sub-aggregation (identified by the } } -------------------------------------------------- +// CONSOLE Ordering the buckets by multi value metrics sub-aggregation (identified by the aggregation name): [source,js] -------------------------------------------------- +GET /_search { "aggs" : { "genres" : { @@ -324,6 +409,7 @@ Ordering the buckets by multi value metrics sub-aggregation (identified by the a } } -------------------------------------------------- +// CONSOLE [NOTE] .Pipeline aggs cannot be used for sorting @@ -355,6 +441,7 @@ PATH = [ , ]* [ [ , ]* [ the possible values are `breadth_first` and `depth_first` @@ -729,28 +853,20 @@ collection mode need to replay the query on the second pass but only for the doc There are different mechanisms by which terms aggregations can be executed: - by using field values directly in order to aggregate data per-bucket (`map`) - - by using ordinals of the field and preemptively allocating one bucket per ordinal value (`global_ordinals`) - - by using ordinals of the field and dynamically allocating one bucket per ordinal value (`global_ordinals_hash`) - - by using per-segment ordinals to compute counts and remap these counts to global counts using global ordinals (`global_ordinals_low_cardinality`) + - by using global ordinals of the field and allocating one bucket per global ordinal (`global_ordinals`) Elasticsearch tries to have sensible defaults so this is something that generally doesn't need to be configured. -`map` should only be considered when very few documents match a query. Otherwise the ordinals-based execution modes -are significantly faster. By default, `map` is only used when running an aggregation on scripts, since they don't have -ordinals. - -`global_ordinals_low_cardinality` only works for leaf terms aggregations but is usually the fastest execution mode. Memory -usage is linear with the number of unique values in the field, so it is only enabled by default on low-cardinality fields. - -`global_ordinals` is the second fastest option, but the fact that it preemptively allocates buckets can be memory-intensive, -especially if you have one or more sub aggregations. It is used by default on top-level terms aggregations. 
+`global_ordinals` is the default option for `keyword` fields. It uses global ordinals to allocate buckets dynamically,
+so memory usage grows linearly with the number of values of the documents that are part of the aggregation scope.

-`global_ordinals_hash` on the contrary to `global_ordinals` and `global_ordinals_low_cardinality` allocates buckets dynamically
-so memory usage is linear to the number of values of the documents that are part of the aggregation scope. It is used by default
-in inner aggregations.
+`map` should only be considered when very few documents match a query. Otherwise the ordinals-based execution mode
+is significantly faster. By default, `map` is only used when running an aggregation on scripts, since they don't have
+ordinals.

[source,js]
--------------------------------------------------
+GET /_search
{
    "aggs" : {
        "tags" : {
@@ -762,8 +878,9 @@ in inner aggregations.
    }
}
--------------------------------------------------
+// CONSOLE

-<1> The possible values are `map`, `global_ordinals`, `global_ordinals_hash` and `global_ordinals_low_cardinality`
+<1> The possible values are `map` and `global_ordinals`

Please note that Elasticsearch will ignore this execution hint if it is not applicable and that there is no backward compatibility guarantee on these hints.

@@ -775,6 +892,7 @@ had a value.

[source,js]
--------------------------------------------------
+GET /_search
{
    "aggs" : {
        "tags" : {
@@ -786,6 +904,7 @@ had a value.
    }
}
--------------------------------------------------
+// CONSOLE

<1> Documents without a value in the `tags` field will fall into the same bucket as documents that have the value `N/A`.

diff --git a/docs/reference/aggregations/matrix/stats-aggregation.asciidoc b/docs/reference/aggregations/matrix/stats-aggregation.asciidoc
index bb66115ecd571..3cc207fef7d2a 100644
--- a/docs/reference/aggregations/matrix/stats-aggregation.asciidoc
+++ b/docs/reference/aggregations/matrix/stats-aggregation.asciidoc
@@ -13,13 +13,34 @@ The `matrix_stats` aggregation is a numeric aggregation that computes the follow

`correlation`:: The covariance matrix scaled to a range of -1 to 1, inclusive. Describes the relationship between field distributions.

+//////////////////////////
+
+[source,js]
+--------------------------------------------------
+PUT /statistics/doc/0
+{"poverty": 24.0, "income": 50000.0}
+
+PUT /statistics/doc/1
+{"poverty": 13.0, "income": 95687.0}
+
+PUT /statistics/doc/2
+{"poverty": 69.0, "income": 7890.0}
+
+POST /_refresh
+--------------------------------------------------
+// NOTCONSOLE
+// TESTSETUP
+
+//////////////////////////
+
The following example demonstrates the use of matrix stats to describe the relationship between income and poverty.

[source,js]
--------------------------------------------------
+GET /_search
{
  "aggs": {
-    "matrixstats": {
+    "statistics": {
      "matrix_stats": {
        "fields": ["poverty", "income"]
      }
@@ -27,6 +48,8 @@ The following example demonstrates the use of matrix stats to describe the relat
  }
}
--------------------------------------------------
+// CONSOLE
+// TEST[s/_search/_search\?filter_path=aggregations/]

The aggregation type is `matrix_stats` and the `fields` setting defines the set of fields (as an array) for computing
the statistics. The above request returns the following response:

@@ -36,7 +59,7 @@ the statistics. The above request returns the following response:
{
  ...
  "aggregations": {
-    "matrixstats": {
+    "statistics": {
      "doc_count": 50,
      "fields": [{
          "name": "income",
@@ -73,6 +96,8 @@ the statistics.
The above request returns the following response: } } -------------------------------------------------- +// TESTRESPONSE[s/\.\.\.//] +// TESTRESPONSE[s/: (\-)?[0-9\.E]+/: $body.$_path/] The `doc_count` field indicates the number of documents involved in the computation of the statistics. @@ -96,6 +121,7 @@ This is done by adding a set of fieldname : value mappings to specify default va [source,js] -------------------------------------------------- +GET /_search { "aggs": { "matrixstats": { @@ -107,6 +133,7 @@ This is done by adding a set of fieldname : value mappings to specify default va } } -------------------------------------------------- +// CONSOLE <1> Documents without a value in the `income` field will have the default value `50000`. diff --git a/docs/reference/aggregations/metrics/extendedstats-aggregation.asciidoc b/docs/reference/aggregations/metrics/extendedstats-aggregation.asciidoc index b71427ae9cb55..6eb2f18928a81 100644 --- a/docs/reference/aggregations/metrics/extendedstats-aggregation.asciidoc +++ b/docs/reference/aggregations/metrics/extendedstats-aggregation.asciidoc @@ -109,7 +109,7 @@ GET /exams/_search // CONSOLE // TEST[setup:exams] -This will interpret the `script` parameter as an `inline` script with the `painless` script language and no script parameters. To use a file script use the following syntax: +This will interpret the `script` parameter as an `inline` script with the `painless` script language and no script parameters. To use a stored script use the following syntax: [source,js] -------------------------------------------------- diff --git a/docs/reference/aggregations/metrics/tophits-aggregation.asciidoc b/docs/reference/aggregations/metrics/tophits-aggregation.asciidoc index 05fb51e6d8530..7668a0df792dc 100644 --- a/docs/reference/aggregations/metrics/tophits-aggregation.asciidoc +++ b/docs/reference/aggregations/metrics/tophits-aggregation.asciidoc @@ -177,6 +177,7 @@ relevancy order of the most relevant document in a bucket. [source,js] -------------------------------------------------- +POST /sales/_search { "query": { "match": { @@ -184,7 +185,7 @@ relevancy order of the most relevant document in a bucket. } }, "aggs": { - "top-sites": { + "top_sites": { "terms": { "field": "domain", "order": { @@ -207,6 +208,8 @@ relevancy order of the most relevant document in a bucket. } } -------------------------------------------------- +// CONSOLE +// TEST[setup:sales] At the moment the `max` (or `min`) aggregator is needed to make sure the buckets from the `terms` aggregator are ordered according to the score of the most relevant webpage per domain. Unfortunately the `top_hits` aggregator @@ -224,31 +227,127 @@ the same id. In order to determine the identity of a nested hit there is more ne nested hits also include their nested identity. The nested identity is kept under the `_nested` field in the search hit and includes the array field and the offset in the array field the nested hit belongs to. The offset is zero based. -Top hits response snippet with a nested hit, which resides in the third slot of array field `nested_field1` in document with id `1`: +Let's see how it works with a real sample. Considering the following mapping: [source,js] -------------------------------------------------- -... -"hits": { - "total": 25365, - "max_score": 1, - "hits": [ - { - "_index": "a", - "_type": "b", - "_id": "1", - "_score": 1, - "_nested" : { - "field" : "nested_field1", - "offset" : 2 - } - "_source": ... - }, - ... 
- ] +PUT /sales +{ + "mappings": { + "product" : { + "properties" : { + "tags" : { "type" : "keyword" }, + "comments" : { <1> + "type" : "nested", + "properties" : { + "username" : { "type" : "keyword" }, + "comment" : { "type" : "text" } + } + } + } + } + } +} +-------------------------------------------------- +// CONSOLE +<1> The `comments` is an array that holds nested documents under the `product` object. + +And some documents: + +[source,js] +-------------------------------------------------- +PUT /sales/product/1?refresh +{ + "tags": ["car", "auto"], + "comments": [ + {"username": "baddriver007", "comment": "This car could have better brakes"}, + {"username": "dr_who", "comment": "Where's the autopilot? Can't find it"}, + {"username": "ilovemotorbikes", "comment": "This car has two extra wheels"} + ] +} +-------------------------------------------------- +// CONSOLE +// TEST[continued] + +It's now possible to execute the following `top_hits` aggregation (wrapped in a `nested` aggregation): + +[source,js] +-------------------------------------------------- +POST /sales/_search +{ + "query": { + "term": { "tags": "car" } + }, + "aggs": { + "by_sale": { + "nested" : { + "path" : "comments" + }, + "aggs": { + "by_user": { + "terms": { + "field": "comments.username", + "size": 1 + }, + "aggs": { + "by_nested": { + "top_hits":{} + } + } + } + } + } + } +} +-------------------------------------------------- +// CONSOLE +// TEST[continued] +// TEST[s/_search/_search\?filter_path=aggregations.by_sale.by_user.buckets/] + +Top hits response snippet with a nested hit, which resides in the first slot of array field `comments`: + +[source,js] +-------------------------------------------------- +{ + ... + "aggregations": { + "by_sale": { + "by_user": { + "buckets": [ + { + "key": "baddriver007", + "doc_count": 1, + "by_nested": { + "hits": { + "total": 1, + "max_score": 0.2876821, + "hits": [ + { + "_nested": { + "field": "comments", <1> + "offset": 0 <2> + }, + "_score": 0.2876821, + "_source": { + "comment": "This car could have better brakes", <3> + "username": "baddriver007" + } + } + ] + } + } + } + ... + ] + } + } + } } -... -------------------------------------------------- +// TESTRESPONSE[s/\.\.\.//] +<1> Name of the array field containing the nested hit +<2> Position if the nested hit in the containing array +<3> Source of the nested hit If `_source` is requested then just the part of the source of the nested object is returned, not the entire source of the document. Also stored fields on the *nested* inner object level are accessible via `top_hits` aggregator residing in a `nested` or `reverse_nested` aggregator. @@ -290,3 +389,4 @@ the second slow of the `nested_child_field` field: } ... -------------------------------------------------- +// NOTCONSOLE \ No newline at end of file diff --git a/docs/reference/aggregations/pipeline/bucket-selector-aggregation.asciidoc b/docs/reference/aggregations/pipeline/bucket-selector-aggregation.asciidoc index 1dc44876c5361..cd0218e7c4353 100644 --- a/docs/reference/aggregations/pipeline/bucket-selector-aggregation.asciidoc +++ b/docs/reference/aggregations/pipeline/bucket-selector-aggregation.asciidoc @@ -6,7 +6,7 @@ in the parent multi-bucket aggregation. The specified metric must be numeric and If the script language is `expression` then a numeric return value is permitted. In this case 0.0 will be evaluated as `false` and all other values will evaluate to true. 
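As a quick illustration of how that return value drives filtering, a request along the following lines (a sketch only; the `sales` index and its `date` and `price` fields are placeholders) would keep only the monthly buckets whose total sales exceed 200, i.e. the buckets for which the script returns `true`:

[source,js]
--------------------------------------------------
POST /sales/_search
{
    "size": 0,
    "aggs" : {
        "sales_per_month" : {
            "date_histogram" : {
                "field" : "date",
                "interval" : "month"
            },
            "aggs": {
                "total_sales": {
                    "sum": {
                        "field": "price"
                    }
                },
                "sales_bucket_filter": {
                    "bucket_selector": {
                        "buckets_path": {
                            "totalSales": "total_sales"
                        },
                        "script": "params.totalSales > 200"
                    }
                }
            }
        }
    }
}
--------------------------------------------------
// NOTCONSOLE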
-Note: The bucket_selector aggregation, like all pipeline aggregations, executions after all other sibling aggregations. This means that
+NOTE: The bucket_selector aggregation, like all pipeline aggregations, executes after all other sibling aggregations. This means that
using the bucket_selector aggregation to filter the returned buckets in the response does not save on execution time running the aggregations.

==== Syntax

diff --git a/docs/reference/analysis/analyzers/custom-analyzer.asciidoc b/docs/reference/analysis/analyzers/custom-analyzer.asciidoc
index f14759856dd05..34572acaa9650 100644
--- a/docs/reference/analysis/analyzers/custom-analyzer.asciidoc
+++ b/docs/reference/analysis/analyzers/custom-analyzer.asciidoc
@@ -202,7 +202,7 @@ POST my_index/_analyze
--------------------------------------------------
// CONSOLE

-<1> The `emoticon` character filter, `punctuation` tokenizer and
+<1> The `emoticons` character filter, `punctuation` tokenizer and
    `english_stop` token filter are custom implementations which are defined
    in the same index settings.

diff --git a/docs/reference/analysis/analyzers/lang-analyzer.asciidoc b/docs/reference/analysis/analyzers/lang-analyzer.asciidoc
index 65cc30780b1a7..1ce44b6028db8 100644
--- a/docs/reference/analysis/analyzers/lang-analyzer.asciidoc
+++ b/docs/reference/analysis/analyzers/lang-analyzer.asciidoc
@@ -6,6 +6,7 @@ following types are supported:
<>,
<>,
<>,
+<>,
<>,
<>,
<>,
@@ -55,7 +56,7 @@ functionality is implemented by adding the
with the `keywords` set to the value of the `stem_exclusion` parameter.

The following analyzers support setting custom `stem_exclusion` list:
-`arabic`, `armenian`, `basque`, `bulgarian`, `catalan`, `czech`,
+`arabic`, `armenian`, `basque`, `bengali`, `bulgarian`, `catalan`, `czech`,
`dutch`, `english`, `finnish`, `french`, `galician`, `german`,
`hindi`, `hungarian`, `indonesian`, `irish`, `italian`, `latvian`,
`lithuanian`, `norwegian`, `portuguese`, `romanian`, `russian`, `sorani`,
@@ -209,6 +210,54 @@ PUT /armenian_example
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

+[[bengali-analyzer]]
+===== `bengali` analyzer
+
+The `bengali` analyzer could be reimplemented as a `custom` analyzer as follows:
+
+[source,js]
+----------------------------------------------------
+PUT /bengali_example
+{
+  "settings": {
+    "analysis": {
+      "filter": {
+        "bengali_stop": {
+          "type": "stop",
+          "stopwords": "_bengali_" <1>
+        },
+        "bengali_keywords": {
+          "type": "keyword_marker",
+          "keywords": ["উদাহরণ"] <2>
+        },
+        "bengali_stemmer": {
+          "type": "stemmer",
+          "language": "bengali"
+        }
+      },
+      "analyzer": {
+        "bengali": {
+          "tokenizer": "standard",
+          "filter": [
+            "lowercase",
+            "indic_normalization",
+            "bengali_normalization",
+            "bengali_stop",
+            "bengali_keywords",
+            "bengali_stemmer"
+          ]
+        }
+      }
+    }
+  }
+}
+----------------------------------------------------
+// CONSOLE
+<1> The default stopwords can be overridden with the `stopwords`
+    or `stopwords_path` parameters.
+<2> This filter should be removed unless there are words which should
+    be excluded from stemming.
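If you want to sanity-check the reimplemented analyzer, the `_analyze` API can be pointed at the new index; the snippet below is an illustrative sketch rather than a tested example:

[source,js]
--------------------------------------------------
POST /bengali_example/_analyze
{
  "analyzer": "bengali",
  "text": "উদাহরণ"
}
--------------------------------------------------
// NOTCONSOLE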
+ [[brazilian-analyzer]] ===== `brazilian` analyzer diff --git a/docs/reference/analysis/tokenfilters/compound-word-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/compound-word-tokenfilter.asciidoc index e790ed4c4b5b1..d200c0b988bc4 100644 --- a/docs/reference/analysis/tokenfilters/compound-word-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/compound-word-tokenfilter.asciidoc @@ -86,25 +86,27 @@ Here is an example: -------------------------------------------------- PUT /compound_word_example { - "index": { - "analysis": { - "analyzer": { - "my_analyzer": { - "type": "custom", - "tokenizer": "standard", - "filter": ["dictionary_decompounder", "hyphenation_decompounder"] - } - }, - "filter": { - "dictionary_decompounder": { - "type": "dictionary_decompounder", - "word_list": ["one", "two", "three"] + "settings": { + "index": { + "analysis": { + "analyzer": { + "my_analyzer": { + "type": "custom", + "tokenizer": "standard", + "filter": ["dictionary_decompounder", "hyphenation_decompounder"] + } }, - "hyphenation_decompounder": { - "type" : "hyphenation_decompounder", - "word_list_path": "analysis/example_word_list.txt", - "hyphenation_patterns_path": "analysis/hyphenation_patterns.xml", - "max_subword_size": 22 + "filter": { + "dictionary_decompounder": { + "type": "dictionary_decompounder", + "word_list": ["one", "two", "three"] + }, + "hyphenation_decompounder": { + "type" : "hyphenation_decompounder", + "word_list_path": "analysis/example_word_list.txt", + "hyphenation_patterns_path": "analysis/hyphenation_patterns.xml", + "max_subword_size": 22 + } } } } diff --git a/docs/reference/analysis/tokenfilters/length-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/length-tokenfilter.asciidoc index 2651980966e95..e53a198df5570 100644 --- a/docs/reference/analysis/tokenfilters/length-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/length-tokenfilter.asciidoc @@ -11,6 +11,6 @@ type: |=========================================================== |Setting |Description |`min` |The minimum number. Defaults to `0`. -|`max` |The maximum number. Defaults to `Integer.MAX_VALUE`. +|`max` |The maximum number. Defaults to `Integer.MAX_VALUE`, which is `2^31-1` or 2147483647. |=========================================================== diff --git a/docs/reference/analysis/tokenfilters/ngram-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/ngram-tokenfilter.asciidoc index 5f911360085a1..acc178a2741fa 100644 --- a/docs/reference/analysis/tokenfilters/ngram-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/ngram-tokenfilter.asciidoc @@ -13,3 +13,6 @@ type: |`max_gram` |Defaults to `2`. |============================ +The index level setting `index.max_ngram_diff` controls the maximum allowed +difference between `max_gram` and `min_gram`. + diff --git a/docs/reference/analysis/tokenfilters/pattern-capture-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/pattern-capture-tokenfilter.asciidoc index 4dac79b6571b1..5b935d31f1289 100644 --- a/docs/reference/analysis/tokenfilters/pattern-capture-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/pattern-capture-tokenfilter.asciidoc @@ -131,10 +131,12 @@ Multiple patterns are required to allow overlapping captures, but also means that patterns are less dense and easier to understand. 
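As a minimal sketch of that idea (the `pattern_capture_example` index, `parts` filter and `parts_analyzer` names are invented for illustration), two simple patterns, one capturing letter runs and one capturing digit runs, emit sub-tokens alongside the preserved original token:

[source,js]
--------------------------------------------------
PUT /pattern_capture_example
{
  "settings": {
    "analysis": {
      "filter": {
        "parts": {
          "type": "pattern_capture",
          "preserve_original": true,
          "patterns": [ "([a-z]+)", "(\\d+)" ]
        }
      },
      "analyzer": {
        "parts_analyzer": {
          "tokenizer": "keyword",
          "filter": [ "lowercase", "parts" ]
        }
      }
    }
  }
}
--------------------------------------------------
// NOTCONSOLE

Analyzing a value such as `abc123def` with `parts_analyzer` would then produce the original token plus `abc`, `123` and `def`, all emitted at the same position, which is exactly the behaviour the note below describes.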
*Note:* All tokens are emitted in the same position, and with the same -character offsets, so when combined with highlighting, the whole -original token will be highlighted, not just the matching subset. For -instance, querying the above email address for `"smith"` would -highlight: +character offsets. This means, for example, that a `match` query for +`john-smith_123@foo-bar.com` that uses this analyzer will return documents +containing any of these tokens, even when using the `and` operator. +Also, when combined with highlighting, the whole original token will +be highlighted, not just the matching subset. For instance, querying +the above email address for `"smith"` would highlight: [source,html] -------------------------------------------------- diff --git a/docs/reference/analysis/tokenfilters/shingle-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/shingle-tokenfilter.asciidoc index 5e3565cf83cb3..386b45559fdbf 100644 --- a/docs/reference/analysis/tokenfilters/shingle-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/shingle-tokenfilter.asciidoc @@ -38,3 +38,5 @@ used if the position increment is greater than one when a `stop` filter is used together with the `shingle` filter. Defaults to `"_"` |======================================================================= +The index level setting `index.max_shingle_diff` controls the maximum allowed +difference between `max_shingle_size` and `min_shingle_size`. diff --git a/docs/reference/analysis/tokenfilters/stemmer-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/stemmer-tokenfilter.asciidoc index a052a4a7a5877..a13c6746d74be 100644 --- a/docs/reference/analysis/tokenfilters/stemmer-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/stemmer-tokenfilter.asciidoc @@ -44,6 +44,10 @@ Basque:: http://snowball.tartarus.org/algorithms/basque/stemmer.html[*`basque`*] +Bengali:: +http://www.tandfonline.com/doi/abs/10.1080/02564602.1993.11437284[*`bengali`*] +http://members.unine.ch/jacques.savoy/clef/BengaliStemmerLight.java.txt[*`light_bengali`*] + Brazilian Portuguese:: http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/br/BrazilianStemmer.html[*`brazilian`*] diff --git a/docs/reference/analysis/tokenfilters/stop-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/stop-tokenfilter.asciidoc index b20f9c9418dc7..3167a4342ac2d 100644 --- a/docs/reference/analysis/tokenfilters/stop-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/stop-tokenfilter.asciidoc @@ -71,7 +71,7 @@ PUT /my_index Elasticsearch provides the following predefined list of languages: -`_arabic_`, `_armenian_`, `_basque_`, `_brazilian_`, `_bulgarian_`, +`_arabic_`, `_armenian_`, `_basque_`, `_bengali_`, `_brazilian_`, `_bulgarian_`, `_catalan_`, `_czech_`, `_danish_`, `_dutch_`, `_english_`, `_finnish_`, `_french_`, `_galician_`, `_german_`, `_greek_`, `_hindi_`, `_hungarian_`, `_indonesian_`, `_irish_`, `_italian_`, `_latvian_`, `_norwegian_`, `_persian_`, diff --git a/docs/reference/analysis/tokenizers/ngram-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/ngram-tokenizer.asciidoc index ae3183f0fd13c..c182ffacd1cfe 100644 --- a/docs/reference/analysis/tokenizers/ngram-tokenizer.asciidoc +++ b/docs/reference/analysis/tokenizers/ngram-tokenizer.asciidoc @@ -198,6 +198,9 @@ value. The smaller the length, the more documents will match but the lower the quality of the matches. The longer the length, the more specific the matches. 
A tri-gram (length `3`) is a good place to start. +The index level setting `index.max_ngram_diff` controls the maximum allowed +difference between `max_gram` and `min_gram`. + [float] === Example configuration diff --git a/docs/reference/analysis/tokenizers/whitespace-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/whitespace-tokenizer.asciidoc index 55ce590031f3e..149fe421c0a32 100644 --- a/docs/reference/analysis/tokenizers/whitespace-tokenizer.asciidoc +++ b/docs/reference/analysis/tokenizers/whitespace-tokenizer.asciidoc @@ -111,4 +111,10 @@ The above sentence would produce the following terms: [float] === Configuration -The `whitespace` tokenizer is not configurable. +The `whitespace` tokenizer accepts the following parameters: + +[horizontal] +`max_token_length`:: + + The maximum token length. If a token is seen that exceeds this length then + it is split at `max_token_length` intervals. Defaults to `255`. diff --git a/docs/reference/api-conventions.asciidoc b/docs/reference/api-conventions.asciidoc index b21eac7908192..472c48e523229 100644 --- a/docs/reference/api-conventions.asciidoc +++ b/docs/reference/api-conventions.asciidoc @@ -47,7 +47,7 @@ to. If `open` is specified then the wildcard expression is expanded to only open indices and if `closed` is specified then the wildcard expression is expanded only to closed indices. Also both values (`open,closed`) can be specified to expand to all indices. - ++ If `none` is specified then wildcard expansion will be disabled and if `all` is specified, wildcard expressions will expand to all indices (this is equivalent to specifying `open,closed`). @@ -577,7 +577,9 @@ the maximum allowed Levenshtein Edit Distance (or number of edits) `AUTO`:: + -- -generates an edit distance based on the length of the term. For lengths: +generates an edit distance based on the length of the term. 
+Low and high distance arguments may be optionally provided `AUTO:[low],[high]`, if not specified, +the default values are 3 and 6, equivalent to `AUTO:3,6` that make for lengths: `0..2`:: must match exactly `3..5`:: one edit allowed @@ -600,7 +602,7 @@ invalid `size` parameter to the `_search` API: POST /twitter/_search?size=surprise_me ---------------------------------------------------------------------- // CONSOLE -// TEST[s/surprise_me/surprise_me&error_trace=false/ catch:request] +// TEST[s/surprise_me/surprise_me&error_trace=false/ catch:bad_request] // Since the test system sends error_trace=true by default we have to override The response looks like: @@ -634,7 +636,7 @@ But if you set `error_trace=true`: POST /twitter/_search?size=surprise_me&error_trace=true ---------------------------------------------------------------------- // CONSOLE -// TEST[catch:request] +// TEST[catch:bad_request] The response looks like: diff --git a/docs/reference/cat.asciidoc b/docs/reference/cat.asciidoc index b201a9b1ca891..31e0bf61707e3 100644 --- a/docs/reference/cat.asciidoc +++ b/docs/reference/cat.asciidoc @@ -35,7 +35,7 @@ GET /_cat/master?v Might respond with: -[source,js] +[source,txt] -------------------------------------------------- id host ip node u_n93zwxThWHi1PDBJAGAg 127.0.0.1 127.0.0.1 u_n93zw @@ -57,7 +57,7 @@ GET /_cat/master?help Might respond respond with: -[source,js] +[source,txt] -------------------------------------------------- id | | node id host | h | host name @@ -81,7 +81,7 @@ GET /_cat/nodes?h=ip,port,heapPercent,name Responds with: -[source,js] +[source,txt] -------------------------------------------------- 127.0.0.1 9300 27 sLBaIGK -------------------------------------------------- @@ -197,7 +197,7 @@ GET _cat/templates?v&s=order:desc,index_patterns returns: -[source,sh] +[source,txt] -------------------------------------------------- name index_patterns order version pizza_pepperoni [*pepperoni*] 2 diff --git a/docs/reference/cat/allocation.asciidoc b/docs/reference/cat/allocation.asciidoc index ba702080e581a..3719758ff58e9 100644 --- a/docs/reference/cat/allocation.asciidoc +++ b/docs/reference/cat/allocation.asciidoc @@ -18,7 +18,7 @@ Might respond with: shards disk.indices disk.used disk.avail disk.total disk.percent host ip node 5 260b 47.3gb 43.4gb 100.7gb 46 127.0.0.1 127.0.0.1 CSUXak2 -------------------------------------------------- -// TESTRESPONSE[s/260b/\\d+b/ s/\d+(\.\d+)?[tgmk]?b/\\d+(\\.\\d+)?[tgmk]?b/ s/46/\\d+/] +// TESTRESPONSE[s/\d+(\.\d+)?[tgmk]?b/\\d+(\\.\\d+)?[tgmk]?b/ s/46/\\d+/] // TESTRESPONSE[s/CSUXak2/.+/ _cat] Here we can see that each node has been allocated a single shard and diff --git a/docs/reference/cat/indices.asciidoc b/docs/reference/cat/indices.asciidoc index 0c840071bb9b1..746d0b4bb58a2 100644 --- a/docs/reference/cat/indices.asciidoc +++ b/docs/reference/cat/indices.asciidoc @@ -97,7 +97,7 @@ GET /_cat/indices/twitter?pri&v&h=health,index,pri,rep,docs.count,mt Might look like: -[source,js] +[source,txt] -------------------------------------------------- health index pri rep docs.count mt pri.mt yellow twitter 1 1 1200 16 16 @@ -115,7 +115,7 @@ GET /_cat/indices?v&h=i,tm&s=tm:desc Might look like: -[source,js] +[source,txt] -------------------------------------------------- i tm twitter 8.1gb diff --git a/docs/reference/cat/nodeattrs.asciidoc b/docs/reference/cat/nodeattrs.asciidoc index 18feeba8d03f3..196f142cc35e1 100644 --- a/docs/reference/cat/nodeattrs.asciidoc +++ b/docs/reference/cat/nodeattrs.asciidoc @@ -49,7 +49,7 @@ 
GET /_cat/nodeattrs?v&h=name,pid,attr,value Might look like: -[source,js] +[source,txt] -------------------------------------------------- name pid attr value EK_AsJb 19566 testattr test diff --git a/docs/reference/cat/nodes.asciidoc b/docs/reference/cat/nodes.asciidoc index 60e204410c5be..151ce80196b50 100644 --- a/docs/reference/cat/nodes.asciidoc +++ b/docs/reference/cat/nodes.asciidoc @@ -27,6 +27,11 @@ The last (`node.role`, `master`, and `name`) columns provide ancillary information that can often be useful when looking at the cluster as a whole, particularly large ones. How many master-eligible nodes do I have? +The `nodes` API accepts an additional URL parameter `full_id` accepting `true` +or `false`. The purpose of this parameter is to format the ID field (if +requested with `id` or `nodeId`) in its full length or in abbreviated form (the +default). + [float] === Columns @@ -53,7 +58,7 @@ GET /_cat/nodes?v&h=id,ip,port,v,m Might look like: -["source","js",subs="attributes,callouts"] +["source","txt",subs="attributes,callouts"] -------------------------------------------------- id ip port v m veJR 127.0.0.1 59938 {version} * diff --git a/docs/reference/cat/recovery.asciidoc b/docs/reference/cat/recovery.asciidoc index 4c981f206c70e..c4288f882e21e 100644 --- a/docs/reference/cat/recovery.asciidoc +++ b/docs/reference/cat/recovery.asciidoc @@ -21,7 +21,7 @@ GET _cat/recovery?v The response of this request will be something like: -[source,js] +[source,txt] --------------------------------------------------------------------------- index shard time type stage source_host source_node target_host target_node repository snapshot files files_recovered files_percent files_total bytes bytes_recovered bytes_percent bytes_total translog_ops translog_ops_recovered translog_ops_percent twitter 0 13ms store done n/a n/a 127.0.0.1 node-0 n/a n/a 0 0 100% 13 0 0 100% 9928 0 0 100.0% @@ -48,7 +48,7 @@ GET _cat/recovery?v&h=i,s,t,ty,st,shost,thost,f,fp,b,bp This will return a line like: -[source,js] +[source,txt] ---------------------------------------------------------------------------- i s t ty st shost thost f fp b bp twitter 0 1252ms peer done 192.168.1.1 192.168.1.2 0 100.0% 0 100.0% @@ -76,7 +76,7 @@ GET _cat/recovery?v&h=i,s,t,ty,st,rep,snap,f,fp,b,bp This will show a recovery of type snapshot in the response -[source,js] +[source,txt] -------------------------------------------------------------------------------- i s t ty st rep snap f fp b bp twitter 0 1978ms snapshot done twitter snap_1 79 8.0% 12086 9.0% diff --git a/docs/reference/cat/shards.asciidoc b/docs/reference/cat/shards.asciidoc index 6d786a315c740..f63e37c6a3d69 100644 --- a/docs/reference/cat/shards.asciidoc +++ b/docs/reference/cat/shards.asciidoc @@ -16,7 +16,7 @@ GET _cat/shards This will return -[source,js] +[source,txt] --------------------------------------------------------------------------- twitter 0 p STARTED 3014 31.1mb 192.168.56.10 H5dfFeA --------------------------------------------------------------------------- @@ -42,7 +42,7 @@ GET _cat/shards/twitt* Which will return the following -[source,js] +[source,txt] --------------------------------------------------------------------------- twitter 0 p STARTED 3014 31.1mb 192.168.56.10 H5dfFeA --------------------------------------------------------------------------- @@ -68,7 +68,7 @@ GET _cat/shards A relocating shard will be shown as follows -[source,js] +[source,txt] --------------------------------------------------------------------------- twitter 0 p RELOCATING 
3014 31.1mb 192.168.56.10 H5dfFeA -> -> 192.168.56.30 bGG90GE --------------------------------------------------------------------------- @@ -90,7 +90,7 @@ GET _cat/shards You can get the initializing state in the response like this -[source,js] +[source,txt] --------------------------------------------------------------------------- twitter 0 p STARTED 3014 31.1mb 192.168.56.10 H5dfFeA twitter 0 r INITIALIZING 0 14.3mb 192.168.56.30 bGG90GE @@ -112,7 +112,7 @@ GET _cat/shards?h=index,shard,prirep,state,unassigned.reason The reason for an unassigned shard will be listed as the last field -[source,js] +[source,txt] --------------------------------------------------------------------------- twitter 0 p STARTED 3014 31.1mb 192.168.56.10 H5dfFeA twitter 0 r STARTED 3014 31.1mb 192.168.56.30 bGG90GE diff --git a/docs/reference/cat/thread_pool.asciidoc b/docs/reference/cat/thread_pool.asciidoc index 721d85e46a0cb..163a729e51cc3 100644 --- a/docs/reference/cat/thread_pool.asciidoc +++ b/docs/reference/cat/thread_pool.asciidoc @@ -92,7 +92,7 @@ GET /_cat/thread_pool/generic?v&h=id,name,active,rejected,completed which looks like: -[source,js] +[source,txt] -------------------------------------------------- id name active rejected completed 0EWUhXeBQtaVGlexUeVwMg generic 0 0 70 diff --git a/docs/reference/cluster/allocation-explain.asciidoc b/docs/reference/cluster/allocation-explain.asciidoc index 8749970aeb27c..615a8a0108427 100644 --- a/docs/reference/cluster/allocation-explain.asciidoc +++ b/docs/reference/cluster/allocation-explain.asciidoc @@ -19,6 +19,7 @@ To explain the allocation of a shard, first an index should exist: -------------------------------------------------- PUT /myindex -------------------------------------------------- +// CONSOLE // TESTSETUP And then the allocation for shards of that index can be explained: @@ -72,6 +73,24 @@ GET /_cluster/allocation/explain This section includes examples of the cluster allocation explain API response output under various scenarios. 
+////////////////////////// + +[source,js] +-------------------------------------------------- +PUT /idx?master_timeout=1s&timeout=1s +{"settings": {"index.routing.allocation.include._name": "non_existent_node"} } + +GET /_cluster/allocation/explain +{ + "index": "idx", + "shard": 0, + "primary": true +} +-------------------------------------------------- +// CONSOLE + +////////////////////////// + The API response for an unassigned shard: [source,js] @@ -91,8 +110,9 @@ The API response for an unassigned shard: "node_allocation_decisions" : [ { "node_id" : "8qt2rY-pT6KNZB3-hGfLnw", - "node_name" : "node_t1", + "node_name" : "node-0", "transport_address" : "127.0.0.1:9401", + "node_attributes" : {}, "node_decision" : "no", <4> "weight_ranking" : 1, "deciders" : [ @@ -102,24 +122,15 @@ The API response for an unassigned shard: "explanation" : "node does not match index setting [index.routing.allocation.include] filters [_name:\"non_existent_node\"]" <6> } ] - }, - { - "node_id" : "7Wr-QxLXRLKDxhzNm50pFA", - "node_name" : "node_t0", - "transport_address" : "127.0.0.1:9400", - "node_decision" : "no", - "weight_ranking" : 2, - "deciders" : [ - { - "decider" : "filter", - "decision" : "NO", - "explanation" : "node does not match index setting [index.routing.allocation.include] filters [_name:\"non_existent_node\"]" - } - ] } ] } -------------------------------------------------- +// TESTRESPONSE[s/"at" : "[^"]*"/"at" : $body.$_path/] +// TESTRESPONSE[s/"node_id" : "[^"]*"/"node_id" : $body.$_path/] +// TESTRESPONSE[s/"transport_address" : "[^"]*"/"transport_address" : $body.$_path/] +// TESTRESPONSE[s/"node_attributes" : \{\}/"node_attributes" : $body.$_path/] + <1> The current state of the shard <2> The reason for the shard originally becoming unassigned <3> Whether to allocate the shard @@ -171,6 +182,7 @@ allocated to a node in the cluster: "allocate_explanation" : "cannot allocate because a previous copy of the primary shard existed but can no longer be found on the nodes in the cluster" } -------------------------------------------------- +// NOTCONSOLE The API response output for a replica that is unassigned due to delayed allocation: @@ -220,6 +232,7 @@ The API response output for a replica that is unassigned due to delayed allocati ] } -------------------------------------------------- +// NOTCONSOLE <1> The configured delay before allocating a replica shard that does not exist due to the node holding it leaving the cluster <2> The remaining delay before allocating the replica shard <3> Information about the shard data found on a node @@ -267,6 +280,7 @@ remain on its current node and is required to move: ] } -------------------------------------------------- +// NOTCONSOLE <1> Whether the shard is allowed to remain on its current node <2> The deciders that factored into the decision of why the shard is not allowed to remain on its current node <3> Whether the shard is allowed to be allocated to another node @@ -302,6 +316,7 @@ because moving the shard to another node does not form a better cluster balance: ] } -------------------------------------------------- +// NOTCONSOLE <1> Whether rebalancing is allowed on the cluster <2> Whether the shard can be rebalanced to another node <3> The reason the shard cannot be rebalanced to the node, in this case indicating that it offers no better balance than the current node diff --git a/docs/reference/cluster/nodes-info.asciidoc b/docs/reference/cluster/nodes-info.asciidoc index 1d572eb31fdd1..2b91310da3a8e 100644 --- 
a/docs/reference/cluster/nodes-info.asciidoc +++ b/docs/reference/cluster/nodes-info.asciidoc @@ -6,9 +6,10 @@ the cluster nodes information. [source,js] -------------------------------------------------- -curl -XGET 'http://localhost:9200/_nodes' -curl -XGET 'http://localhost:9200/_nodes/nodeId1,nodeId2' +GET /_nodes +GET /_nodes/nodeId1,nodeId2 -------------------------------------------------- +// CONSOLE The first command retrieves information of all the nodes in the cluster. The second command selectively retrieves nodes information of only @@ -52,14 +53,22 @@ It also allows to get only information on `settings`, `os`, `process`, `jvm`, [source,js] -------------------------------------------------- -curl -XGET 'http://localhost:9200/_nodes/process' -curl -XGET 'http://localhost:9200/_nodes/_all/process' -curl -XGET 'http://localhost:9200/_nodes/nodeId1,nodeId2/jvm,process' +# return just process +GET /_nodes/process + +# same as above +GET /_nodes/_all/process + +# return just jvm and process of only nodeId1 and nodeId2 +GET /_nodes/nodeId1,nodeId2/jvm,process + # same as above -curl -XGET 'http://localhost:9200/_nodes/nodeId1,nodeId2/info/jvm,process' +GET /_nodes/nodeId1,nodeId2/info/jvm,process -curl -XGET 'http://localhost:9200/_nodes/nodeId1,nodeId2/_all +# return all the information of only nodeId1 and nodeId2 +GET /_nodes/nodeId1,nodeId2/_all -------------------------------------------------- +// CONSOLE The `_all` flag can be set to return all the information - or you can simply omit it. @@ -110,23 +119,36 @@ the current running process: [[plugins-info]] ==== Plugins information -`plugins` - if set, the result will contain details about the installed plugins - per node: +`plugins` - if set, the result will contain details about the installed plugins and modules per node: -* `name`: plugin name -* `version`: version of Elasticsearch the plugin was built for -* `description`: short description of the plugin's purpose -* `classname`: fully-qualified class name of the plugin's entry point -* `has_native_controller`: whether or not the plugin has a native controller process +[source,js] +-------------------------------------------------- +GET /_nodes/plugins +-------------------------------------------------- +// CONSOLE +// TEST[setup:node] The result will look similar to: [source,js] -------------------------------------------------- { + "_nodes": ... 
"cluster_name": "elasticsearch", "nodes": { - "O70_wBv6S9aPPcAKdSUBtw": { + "USpTGYaBSIKbgSUJR2Z9lg": { + "name": "node-0", + "transport_address": "192.168.17:9300", + "host": "node-0.elastic.co", + "ip": "192.168.17", + "version": "{version}", + "build_hash": "587409e", + "roles": [ + "master", + "data", + "ingest" + ], + "attributes": {}, "plugins": [ { "name": "analysis-icu", @@ -149,11 +171,41 @@ The result will look similar to: "classname": "org.elasticsearch.ingest.useragent.IngestUserAgentPlugin", "has_native_controller": false } + ], + "modules": [ + { + "name": "lang-painless", + "version": "{version}", + "description": "An easy, safe and fast scripting language for Elasticsearch", + "classname": "org.elasticsearch.painless.PainlessPlugin", + "has_native_controller": false + } ] } } } -------------------------------------------------- +// TESTRESPONSE[s/"_nodes": \.\.\./"_nodes": $body.$_path,/] +// TESTRESPONSE[s/"elasticsearch"/$body.cluster_name/] +// TESTRESPONSE[s/"USpTGYaBSIKbgSUJR2Z9lg"/\$node_name/] +// TESTRESPONSE[s/"name": "node-0"/"name": $body.$_path/] +// TESTRESPONSE[s/"transport_address": "192.168.17:9300"/"transport_address": $body.$_path/] +// TESTRESPONSE[s/"host": "node-0.elastic.co"/"host": $body.$_path/] +// TESTRESPONSE[s/"ip": "192.168.17"/"ip": $body.$_path/] +// TESTRESPONSE[s/"build_hash": "587409e"/"build_hash": $body.$_path/] +// TESTRESPONSE[s/"roles": \[[^\]]*\]/"roles": $body.$_path/] +// TESTRESPONSE[s/"attributes": \{[^\}]*\}/"attributes": $body.$_path/] +// TESTRESPONSE[s/"plugins": \[[^\]]*\]/"plugins": $body.$_path/] +// TESTRESPONSE[s/"modules": \[[^\]]*\]/"modules": $body.$_path/] + +The following information are available for each plugin and module: + +* `name`: plugin name +* `version`: version of Elasticsearch the plugin was built for +* `description`: short description of the plugin's purpose +* `classname`: fully-qualified class name of the plugin's entry point +* `has_native_controller`: whether or not the plugin has a native controller process + [float] [[ingest-info]] @@ -162,16 +214,30 @@ The result will look similar to: `ingest` - if set, the result will contain details about the available processors per node: -* `type`: the processor type +[source,js] +-------------------------------------------------- +GET /_nodes/ingest +-------------------------------------------------- +// CONSOLE +// TEST[setup:node] The result will look similar to: [source,js] -------------------------------------------------- { + "_nodes": ... 
"cluster_name": "elasticsearch", "nodes": { - "O70_wBv6S9aPPcAKdSUBtw": { + "USpTGYaBSIKbgSUJR2Z9lg": { + "name": "node-0", + "transport_address": "192.168.17:9300", + "host": "node-0.elastic.co", + "ip": "192.168.17", + "version": "{version}", + "build_hash": "587409e", + "roles": [], + "attributes": {}, "ingest": { "processors": [ { @@ -221,4 +287,19 @@ The result will look similar to: } } } --------------------------------------------------- \ No newline at end of file +-------------------------------------------------- +// TESTRESPONSE[s/"_nodes": \.\.\./"_nodes": $body.$_path,/] +// TESTRESPONSE[s/"elasticsearch"/$body.cluster_name/] +// TESTRESPONSE[s/"USpTGYaBSIKbgSUJR2Z9lg"/\$node_name/] +// TESTRESPONSE[s/"name": "node-0"/"name": $body.$_path/] +// TESTRESPONSE[s/"transport_address": "192.168.17:9300"/"transport_address": $body.$_path/] +// TESTRESPONSE[s/"host": "node-0.elastic.co"/"host": $body.$_path/] +// TESTRESPONSE[s/"ip": "192.168.17"/"ip": $body.$_path/] +// TESTRESPONSE[s/"build_hash": "587409e"/"build_hash": $body.$_path/] +// TESTRESPONSE[s/"roles": \[[^\]]*\]/"roles": $body.$_path/] +// TESTRESPONSE[s/"attributes": \{[^\}]*\}/"attributes": $body.$_path/] +// TESTRESPONSE[s/"processors": \[[^\]]*\]/"processors": $body.$_path/] + +The following information are available for each ingest processor: + +* `type`: the processor type diff --git a/docs/reference/cluster/nodes-stats.asciidoc b/docs/reference/cluster/nodes-stats.asciidoc index 793a056b3ebb9..40c02cf35aa09 100644 --- a/docs/reference/cluster/nodes-stats.asciidoc +++ b/docs/reference/cluster/nodes-stats.asciidoc @@ -96,7 +96,9 @@ information that concern the file system: Total number of unallocated bytes in all file stores `fs.total.available_in_bytes`:: - Total number of bytes available to this Java virtual machine on all file stores + Total number of bytes available to this Java virtual machine on all file stores. + Depending on OS or process level restrictions, this might appear less than `fs.total.free_in_bytes`. + This is the actual amount of free disk space the Elasticsearch node can utilise. `fs.data`:: List of all file stores @@ -253,6 +255,25 @@ the operating system: The total amount of time (in nanoseconds) for which all tasks in the same cgroup as the Elasticsearch process have been throttled. +`os.cgroup.memory.control_group` (Linux only):: + The `memory` control group to which the Elasticsearch process + belongs + +`os.cgroup.memory.limit_in_bytes` (Linux only):: + The maximum amount of user memory (including file cache) allowed + for all tasks in the same cgroup as the Elasticsearch process. + This value can be too big to store in a `long`, so is returned as + a string so that the value returned can exactly match what the + underlying operating system interface returns. Any value that is + too large to parse into a `long` almost certainly means no limit + has been set for the cgroup. + +`os.cgroup.memory.usage_in_bytes` (Linux only):: + The total current memory usage by processes in the cgroup (in bytes) + by all tasks in the same cgroup as the Elasticsearch process. + This value is stored as a string for consistency with + `os.cgroup.memory.limit_in_bytes`. 
+ NOTE: For the cgroup stats to be visible, cgroups must be compiled into the kernal, the `cpu` and `cpuacct` cgroup subsystems must be configured and stats must be readable from `/sys/fs/cgroup/cpu` @@ -360,4 +381,4 @@ The `ingest` flag can be set to retrieve statistics that concern ingest: `ingest.total.failed`:: The total number ingest preprocessing operations failed during the lifetime of this node -On top of these overall ingest statistics, these statistics are also provided on a per pipeline basis. \ No newline at end of file +On top of these overall ingest statistics, these statistics are also provided on a per pipeline basis. diff --git a/docs/reference/cluster/pending.asciidoc b/docs/reference/cluster/pending.asciidoc index 8431507312952..c64890cd31271 100644 --- a/docs/reference/cluster/pending.asciidoc +++ b/docs/reference/cluster/pending.asciidoc @@ -12,8 +12,9 @@ might be reported by both task api and pending cluster tasks API. [source,js] -------------------------------------------------- -$ curl -XGET 'http://localhost:9200/_cluster/pending_tasks' +GET /_cluster/pending_tasks -------------------------------------------------- +// CONSOLE Usually this will return an empty list as cluster-level changes are usually fast. However if there are tasks queued up, the output will look something @@ -47,3 +48,5 @@ like this: ] } -------------------------------------------------- +// NOTCONSOLE +// We can't test tasks output diff --git a/docs/reference/cluster/state.asciidoc b/docs/reference/cluster/state.asciidoc index 3235287941f0d..d0ff3290c74d3 100644 --- a/docs/reference/cluster/state.asciidoc +++ b/docs/reference/cluster/state.asciidoc @@ -6,8 +6,9 @@ the whole cluster. [source,js] -------------------------------------------------- -$ curl -XGET 'http://localhost:9200/_cluster/state' +GET /_cluster/state -------------------------------------------------- +// CONSOLE The response provides the cluster name, the total compressed size of the cluster state (its size when serialized for transmission over @@ -27,8 +28,9 @@ it is possible to filter the cluster state response specifying the parts in the [source,js] -------------------------------------------------- -$ curl -XGET 'http://localhost:9200/_cluster/state/{metrics}/{indices}' +GET /_cluster/state/{metrics}/{indices} -------------------------------------------------- +// CONSOLE `metrics` can be a comma-separated list of @@ -50,17 +52,27 @@ $ curl -XGET 'http://localhost:9200/_cluster/state/{metrics}/{indices}' `blocks`:: Shows the `blocks` part of the response -A couple of example calls: +The following example returns only `metadata` and `routing_table` data for the `foo` and `bar` indices: [source,js] -------------------------------------------------- -# return only metadata and routing_table data for specified indices -$ curl -XGET 'http://localhost:9200/_cluster/state/metadata,routing_table/foo,bar' +GET /_cluster/state/metadata,routing_table/foo,bar +-------------------------------------------------- +// CONSOLE + +The next example returns everything for the `foo` and `bar` indices: -# return everything for these two indices -$ curl -XGET 'http://localhost:9200/_cluster/state/_all/foo,bar' +[source,js] +-------------------------------------------------- +GET /_cluster/state/_all/foo,bar +-------------------------------------------------- +// CONSOLE -# Return only blocks data -$ curl -XGET 'http://localhost:9200/_cluster/state/blocks' +And this example return only `blocks` data: +[source,js] 
-------------------------------------------------- +GET /_cluster/state/blocks +-------------------------------------------------- +// CONSOLE + diff --git a/docs/reference/cluster/stats.asciidoc b/docs/reference/cluster/stats.asciidoc index e4bed585ead5e..6efb4dced8bb8 100644 --- a/docs/reference/cluster/stats.asciidoc +++ b/docs/reference/cluster/stats.asciidoc @@ -8,21 +8,28 @@ versions, memory usage, cpu and installed plugins). [source,js] -------------------------------------------------- -curl -XGET 'http://localhost:9200/_cluster/stats?human&pretty' +GET /_cluster/stats?human&pretty -------------------------------------------------- +// CONSOLE +// TEST[setup:twitter] Will return, for example: ["source","js",subs="attributes,callouts"] -------------------------------------------------- { - "timestamp": 1459427693515, + "_nodes" : { + "total" : 1, + "successful" : 1, + "failed" : 0 + }, "cluster_name": "elasticsearch", + "timestamp": 1459427693515, "status": "green", "indices": { - "count": 2, + "count": 1, "shards": { - "total": 10, - "primaries": 10, + "total": 5, + "primaries": 5, "replication": 0, "index": { "shards": { @@ -48,9 +55,7 @@ Will return, for example: }, "store": { "size": "16.2kb", - "size_in_bytes": 16684, - "throttle_time": "0s", - "throttle_time_in_millis": 0 + "size_in_bytes": 16684 }, "fielddata": { "memory_size": "0b", @@ -83,6 +88,8 @@ Will return, for example: "term_vectors_memory_in_bytes": 0, "norms_memory": "384b", "norms_memory_in_bytes": 384, + "points_memory" : "0b", + "points_memory_in_bytes" : 0, "doc_values_memory": "744b", "doc_values_memory_in_bytes": 744, "index_writer_memory": "0b", @@ -91,10 +98,8 @@ Will return, for example: "version_map_memory_in_bytes": 0, "fixed_bit_set": "0b", "fixed_bit_set_memory_in_bytes": 0, + "max_unsafe_auto_id_timestamp" : -9223372036854775808, "file_sizes": {} - }, - "percolator": { - "num_queries": 0 } }, "nodes": { @@ -188,8 +193,22 @@ Will return, for example: "classname": "org.elasticsearch.ingest.useragent.IngestUserAgentPlugin", "has_native_controller": false } - ] + ], + "network_types" : { + "transport_types" : { + "netty4" : 1 + }, + "http_types" : { + "netty4" : 1 + } + } } } -------------------------------------------------- - +// TESTRESPONSE[s/"plugins": \[[^\]]*\]/"plugins": $body.$_path/] +// TESTRESPONSE[s/: (\-)?[0-9]+/: $body.$_path/] +// TESTRESPONSE[s/: "[^"]*"/: $body.$_path/] +//// +The TESTRESPONSE above replace all the fields values by the expected ones in the test, +because we don't really care about the field values but we want to check the fields names. 
+//// \ No newline at end of file diff --git a/docs/reference/cluster/tasks.asciidoc b/docs/reference/cluster/tasks.asciidoc index f0a5b4f8eb9a8..ed73290883d23 100644 --- a/docs/reference/cluster/tasks.asciidoc +++ b/docs/reference/cluster/tasks.asciidoc @@ -57,6 +57,8 @@ The result will look similar to the following: } } -------------------------------------------------- +// NOTCONSOLE +// We can't test tasks output It is also possible to retrieve information for a particular task: @@ -117,6 +119,8 @@ might look like: } } -------------------------------------------------- +// NOTCONSOLE +// We can't test tasks output The new `description` field contains human readable text that identifies the particular request that the task is performing such as identifying the search @@ -180,7 +184,6 @@ POST _tasks/_cancel?nodes=nodeId1,nodeId2&actions=*reindex -------------------------------------------------- // CONSOLE - [float] === Task Grouping diff --git a/docs/reference/docs/delete-by-query.asciidoc b/docs/reference/docs/delete-by-query.asciidoc index 6db27698245c9..0aea249d899e4 100644 --- a/docs/reference/docs/delete-by-query.asciidoc +++ b/docs/reference/docs/delete-by-query.asciidoc @@ -187,45 +187,99 @@ starting the next set. This is "bursty" instead of "smooth". The default is `-1` [float] === Response body +////////////////////////// + +[source,js] +-------------------------------------------------- +POST /twitter/_delete_by_query +{ + "query": { <1> + "match": { + "message": "some message" + } + } +} +-------------------------------------------------- +// CONSOLE +// TEST[setup:big_twitter] + +////////////////////////// + The JSON response looks like this: [source,js] -------------------------------------------------- { - "took" : 639, - "deleted": 0, + "took" : 147, + "timed_out": false, + "total": 119, + "deleted": 119, "batches": 1, - "version_conflicts": 2, - "retries": 0, + "version_conflicts": 0, + "noops": 0, + "retries": { + "bulk": 0, + "search": 0 + }, "throttled_millis": 0, + "requests_per_second": -1.0, + "throttled_until_millis": 0, "failures" : [ ] } -------------------------------------------------- +// TESTRESPONSE[s/: [0-9]+/: $body.$_path/] `took`:: The number of milliseconds from start to end of the whole operation. +`timed_out`:: + +This flag is set to `true` if any of the requests executed during the +delete by query execution has timed out. + +`total`:: + +The number of documents that were successfully processed. + `deleted`:: The number of documents that were successfully deleted. `batches`:: -The number of scroll responses pulled back by the the delete by query. +The number of scroll responses pulled back by the delete by query. `version_conflicts`:: The number of version conflicts that the delete by query hit. +`noops`:: + +This field is always equal to zero for delete by query. It only exists +so that delete by query, update by query and reindex APIs return responses + with the same structure. + `retries`:: -The number of retries that the delete by query did in response to a full queue. +The number of retries attempted by delete by query. `bulk` is the number +of bulk actions retried and `search` is the number of search actions retried. `throttled_millis`:: Number of milliseconds the request slept to conform to `requests_per_second`. +`requests_per_second`:: + +The number of requests per second effectively executed during the delete by query. + +`throttled_until_millis`:: + +This field should always be equal to zero in a delete by query response. 
It only +has meaning when using the <>, where it +indicates the next time (in milliseconds since epoch) a throttled request will be +executed again in order to conform to `requests_per_second`. + `failures`:: Array of all indexing failures. If this is non-empty then the request aborted @@ -285,6 +339,8 @@ The responses looks like: } } -------------------------------------------------- +// NOTCONSOLE +// We can't test tasks output <1> this object contains the actual status. It is just like the response json with the important addition of the `total` field. `total` is the total number diff --git a/docs/reference/docs/get.asciidoc b/docs/reference/docs/get.asciidoc index 2a252595dd59a..11b2347e7f31b 100644 --- a/docs/reference/docs/get.asciidoc +++ b/docs/reference/docs/get.asciidoc @@ -275,10 +275,6 @@ replicas. The `preference` can be set to: -`_primary`:: - The operation will go and be executed only on the primary - shards. - `_local`:: The operation will prefer to be executed on a local allocated shard if possible. diff --git a/docs/reference/docs/index_.asciidoc b/docs/reference/docs/index_.asciidoc index 8e18f3034e82b..0ac58622d5154 100644 --- a/docs/reference/docs/index_.asciidoc +++ b/docs/reference/docs/index_.asciidoc @@ -70,7 +70,8 @@ type specified. Check out the <> section for more information on mapping definitions. Automatic index creation can be disabled by setting -`action.auto_create_index` to `false` in the config file of all nodes. +`action.auto_create_index` to `false` in the config file of all nodes, +or via the cluster update settings API. Automatic mapping creation can be disabled by setting `index.mapper.dynamic` to `false` per-index as an index setting. @@ -91,8 +92,7 @@ will control the version of the document the operation is intended to be executed against. A good example of a use case for versioning is performing a transactional read-then-update. Specifying a `version` from the document initially read ensures no changes have happened in the -meantime (when reading in order to update, it is recommended to set -`preference` to `_primary`). For example: +meantime. For example: [source,js] -------------------------------------------------- @@ -242,7 +242,7 @@ The result of the above index operation is: [[index-routing]] === Routing -By default, shard placement — or `routing` — is controlled by using a +By default, shard placement ? or `routing` ? is controlled by using a hash of the document's id value. For more explicit control, the value fed into the hash function used by the router can be directly specified on a per-operation basis using the `routing` parameter. For example: diff --git a/docs/reference/docs/reindex.asciidoc b/docs/reference/docs/reindex.asciidoc index 817c676a72c2c..e1876327504e2 100644 --- a/docs/reference/docs/reindex.asciidoc +++ b/docs/reference/docs/reindex.asciidoc @@ -558,29 +558,63 @@ starting the next set. This is "bursty" instead of "smooth". 
The default is `-1` [[docs-reindex-response-body]] === Response body +////////////////////////// +[source,js] +-------------------------------------------------- +POST /_reindex?wait_for_completion +{ + "source": { + "index": "twitter" + }, + "dest": { + "index": "new_twitter" + } +} +-------------------------------------------------- +// CONSOLE +// TEST[setup:twitter] + +////////////////////////// + The JSON response looks like this: [source,js] -------------------------------------------------- { - "took" : 639, + "took": 639, + "timed_out": false, + "total": 5, "updated": 0, - "created": 123, + "created": 5, + "deleted": 0, "batches": 1, + "noops": 0, "version_conflicts": 2, "retries": { "bulk": 0, "search": 0 - } + }, "throttled_millis": 0, - "failures" : [ ] + "requests_per_second": 1, + "throttled_until_millis": 0, + "failures": [ ] } -------------------------------------------------- +// TESTRESPONSE[s/: [0-9]+/: $body.$_path/] `took`:: The number of milliseconds from start to end of the whole operation. +`timed_out`:: + +This flag is set to `true` if any of the requests executed during the +reindex has timed out. + +`total`:: + +The number of documents that were successfully processed. + `updated`:: The number of documents that were successfully updated. @@ -589,9 +623,18 @@ The number of documents that were successfully updated. The number of documents that were successfully created. +`deleted`:: + +The number of documents that were successfully deleted. + `batches`:: -The number of scroll responses pulled back by the the reindex. +The number of scroll responses pulled back by the reindex. + +`noops`:: + +The number of documents that were ignored because the script used for +the reindex returned a `noop` value for `ctx.op`. `version_conflicts`:: @@ -606,6 +649,17 @@ actions retried and `search` is the number of search actions retried. Number of milliseconds the request slept to conform to `requests_per_second`. +`requests_per_second`:: + +The number of requests per second effectively executed during the reindex. + +`throttled_until_millis`:: + +This field should always be equal to zero in a delete by query response. It only +has meaning when using the <>, where it +indicates the next time (in milliseconds since epoch) a throttled request will be +executed again in order to conform to `requests_per_second`. + `failures`:: Array of all indexing failures. If this is non-empty then the request aborted @@ -667,6 +721,8 @@ The responses looks like: } } -------------------------------------------------- +// NOTCONSOLE +// We can't test tasks output <1> this object contains the actual status. It is just like the response json with the important addition of the `total` field. `total` is the total number diff --git a/docs/reference/docs/update-by-query.asciidoc b/docs/reference/docs/update-by-query.asciidoc index 6b25b693f10ca..59e82ff36c49a 100644 --- a/docs/reference/docs/update-by-query.asciidoc +++ b/docs/reference/docs/update-by-query.asciidoc @@ -98,8 +98,8 @@ parameter in the same way as the search api. So far we've only been updating documents without changing their source. That is genuinely useful for things like <> but it's only half the -fun. `_update_by_query` supports a `script` object to update the document. This -will increment the `likes` field on all of kimchy's tweets: +fun. `_update_by_query` <> to update +the document. 
This will increment the `likes` field on all of kimchy's tweets: [source,js] -------------------------------------------------- @@ -245,40 +245,75 @@ starting the next set. This is "bursty" instead of "smooth". The default is `-1` [[docs-update-by-query-response-body]] === Response body +////////////////////////// +[source,js] +-------------------------------------------------- +POST /twitter/_update_by_query?conflicts=proceed +-------------------------------------------------- +// CONSOLE +// TEST[setup:twitter] + +////////////////////////// + The JSON response looks like this: [source,js] -------------------------------------------------- { - "took" : 639, - "updated": 0, + "took" : 147, + "timed_out": false, + "total": 5, + "updated": 5, + "deleted": 0, "batches": 1, - "version_conflicts": 2, + "version_conflicts": 0, + "noops": 0, "retries": { "bulk": 0, "search": 0 - } + }, "throttled_millis": 0, + "requests_per_second": -1.0, + "throttled_until_millis": 0, "failures" : [ ] } -------------------------------------------------- +// TESTRESPONSE[s/"took" : 147/"took" : "$body.took"/] `took`:: The number of milliseconds from start to end of the whole operation. +`timed_out`:: + +This flag is set to `true` if any of the requests executed during the +update by query execution has timed out. + +`total`:: + +The number of documents that were successfully processed. + `updated`:: The number of documents that were successfully updated. +`deleted`:: + +The number of documents that were successfully deleted. + `batches`:: -The number of scroll responses pulled back by the the update by query. +The number of scroll responses pulled back by the update by query. `version_conflicts`:: The number of version conflicts that the update by query hit. +`noops`:: + +The number of documents that were ignored because the script used for +the update by query returned a `noop` value for `ctx.op`. + `retries`:: The number of retries attempted by update-by-query. `bulk` is the number of bulk @@ -288,6 +323,17 @@ actions retried and `search` is the number of search actions retried. Number of milliseconds the request slept to conform to `requests_per_second`. +`requests_per_second`:: + +The number of requests per second effectively executed during the update by query. + +`throttled_until_millis`:: + +This field should always be equal to zero in a delete by query response. It only +has meaning when using the <>, where it +indicates the next time (in milliseconds since epoch) a throttled request will be +executed again in order to conform to `requests_per_second`. + `failures`:: Array of all indexing failures. If this is non-empty then the request aborted @@ -350,6 +396,8 @@ The responses looks like: } } -------------------------------------------------- +// NOTCONSOLE +// We can't test tasks output <1> this object contains the actual status. It is just like the response json with the important addition of the `total` field. `total` is the total number diff --git a/docs/reference/docs/update.asciidoc b/docs/reference/docs/update.asciidoc index 37091c47e0b35..9e3a537e96b23 100644 --- a/docs/reference/docs/update.asciidoc +++ b/docs/reference/docs/update.asciidoc @@ -170,7 +170,7 @@ the request was ignored. 
"_type": "type1", "_id": "1", "_version": 6, - "result": noop + "result": "noop" } -------------------------------------------------- // TESTRESPONSE diff --git a/docs/reference/getting-started.asciidoc b/docs/reference/getting-started.asciidoc index 3b34a93a6d698..b738c1c6186a2 100755 --- a/docs/reference/getting-started.asciidoc +++ b/docs/reference/getting-started.asciidoc @@ -13,7 +13,7 @@ Here are a few sample use-cases that Elasticsearch could be used for: * You run a price alerting platform which allows price-savvy customers to specify a rule like "I am interested in buying a specific electronic gadget and I want to be notified if the price of gadget falls below $X from any vendor within the next month". In this case you can scrape vendor prices, push them into Elasticsearch and use its reverse-search (Percolator) capability to match price movements against customer queries and eventually push the alerts out to the customer once matches are found. * You have analytics/business-intelligence needs and want to quickly investigate, analyze, visualize, and ask ad-hoc questions on a lot of data (think millions or billions of records). In this case, you can use Elasticsearch to store your data and then use Kibana (part of the Elasticsearch/Logstash/Kibana stack) to build custom dashboards that can visualize aspects of your data that are important to you. Additionally, you can use the Elasticsearch aggregations functionality to perform complex business intelligence queries against your data. -For the rest of this tutorial, I will guide you through the process of getting Elasticsearch up and running, taking a peek inside it, and performing basic operations like indexing, searching, and modifying your data. At the end of this tutorial, you should have a good idea of what Elasticsearch is, how it works, and hopefully be inspired to see how you can use it to either build sophisticated search applications or to mine intelligence from your data. +For the rest of this tutorial, you will be guided through the process of getting Elasticsearch up and running, taking a peek inside it, and performing basic operations like indexing, searching, and modifying your data. At the end of this tutorial, you should have a good idea of what Elasticsearch is, how it works, and hopefully be inspired to see how you can use it to either build sophisticated search applications or to mine intelligence from your data. -- == Basic Concepts @@ -148,6 +148,16 @@ And now we are ready to start our node and single cluster: ./elasticsearch -------------------------------------------------- +[float] +=== Installation with Homebrew + +On macOS, Elasticsearch can also be installed via https://brew.sh[Homebrew]: + +["source","sh"] +-------------------------------------------------- +brew install elasticsearch +-------------------------------------------------- + [float] === Installation example with MSI Windows Installer @@ -288,7 +298,13 @@ epoch timestamp cluster status node.total node.data shards pri relo i We can see that our cluster named "elasticsearch" is up with a green status. -Whenever we ask for the cluster health, we either get green, yellow, or red. Green means everything is good (cluster is fully functional), yellow means all data is available but some replicas are not yet allocated (cluster is fully functional), and red means some data is not available for whatever reason. Note that even if a cluster is red, it still is partially functional (i.e. 
it will continue to serve search requests from the available shards) but you will likely need to fix it ASAP since you have missing data. +Whenever we ask for the cluster health, we either get green, yellow, or red. + + * Green - everything is good (cluster is fully functional) + * Yellow - all data is available but some replicas are not yet allocated (cluster is fully functional) + * Red - some data is not available for whatever reason (cluster is partially functional) + +**Note:** When a cluster is red, it will continue to serve search requests from the available shards but you will likely need to fix it ASAP since there are unassigned shards. Also from the above response, we can see a total of 1 node and that we have 0 shards since we have no data in it yet. Note that since we are using the default cluster name (elasticsearch) and since Elasticsearch uses unicast network discovery by default to find other nodes on the same machine, it is possible that you could accidentally start up more than one node on your computer and have them all join a single cluster. In this scenario, you may see more than 1 node in the above response. @@ -373,7 +389,7 @@ PUT /customer/doc/1?pretty And the response: -[source,sh] +[source,js] -------------------------------------------------- { "_index" : "customer", @@ -570,7 +586,7 @@ POST /customer/doc/1/_update?pretty In the above example, `ctx._source` refers to the current source document that is about to be updated. -Note that as of this writing, updates can only be performed on a single document at a time. In the future, Elasticsearch might provide the ability to update multiple documents given a query condition (like an `SQL UPDATE-WHERE` statement). +Elasticsearch provides the ability to update multiple documents given a query condition (like an `SQL UPDATE-WHERE` statement). See {ref}/docs-update-by-query.html[`docs-update-by-query` API] === Deleting Documents @@ -644,7 +660,7 @@ Now that we've gotten a glimpse of the basics, let's try to work on a more reali -------------------------------------------------- // NOTCONSOLE -For the curious, I generated this data from http://www.json-generator.com/[`www.json-generator.com/`] so please ignore the actual values and semantics of the data as these are all randomly generated. +For the curious, this data was generated using http://www.json-generator.com/[`www.json-generator.com/`], so please ignore the actual values and semantics of the data as these are all randomly generated. [float] === Loading the Sample Dataset @@ -672,7 +688,7 @@ GET /_cat/indices?v And the response: -[source,js] +[source,txt] -------------------------------------------------- health status index uuid pri rep docs.count docs.deleted store.size pri.store.size yellow open bank l7sSYV2cQXmu6_4rJWVIww 5 1 1000 0 128.6kb 128.6kb @@ -1268,4 +1284,4 @@ There are many other aggregations capabilities that we won't go into detail here == Conclusion -Elasticsearch is both a simple and complex product. We've so far learned the basics of what it is, how to look inside of it, and how to work with it using some of the REST APIs. I hope that this tutorial has given you a better understanding of what Elasticsearch is and more importantly, inspired you to further experiment with the rest of its great features! +Elasticsearch is both a simple and complex product. We've so far learned the basics of what it is, how to look inside of it, and how to work with it using some of the REST APIs. 
Hopefully this tutorial has given you a better understanding of what Elasticsearch is and more importantly, inspired you to further experiment with the rest of its great features! diff --git a/docs/reference/how-to/disk-usage.asciidoc b/docs/reference/how-to/disk-usage.asciidoc index a9dd7501a72c3..aa39f28b2fdd6 100644 --- a/docs/reference/how-to/disk-usage.asciidoc +++ b/docs/reference/how-to/disk-usage.asciidoc @@ -136,11 +136,16 @@ PUT index // CONSOLE [float] -=== Disable `_all` +=== Watch your shard size -The <> field indexes the value of all fields of a -document and can use significant space. If you never need to search against all -fields at the same time, it can be disabled. +Larger shards are going to be more efficient at storing data. To increase the size of your shards, you can decrease the number of primary shards in an index by <> with less primary shards, creating less indices (e.g. by leveraging the <>), or modifying an existing index using the <>. + +Keep in mind that large shard sizes come with drawbacks, such as long full recovery times. + +[float] +=== Disable `_source` + +The <> field stores the original JSON body of the document. If you don’t need access to it you can disable it. However, APIs that needs access to `_source` such as update and reindex won’t work. [float] === Use `best_compression` @@ -149,6 +154,18 @@ The `_source` and stored fields can easily take a non negligible amount of disk space. They can be compressed more aggressively by using the `best_compression` <>. +[float] +=== Force Merge + +Indices in Elasticsearch are stored in one or more shards. Each shard is a Lucene index and made up of one or more segments - the actual files on disk. Larger segments are more efficient for storing data. + +The <> can be used to reduce the number of segments per shard. In many cases, the number of segments can be reduced to one per shard by setting `max_num_segments=1`. + +[float] +=== Shrink Index + +The <> allows you to reduce the number of shards in an index. Together with the Force Merge API above, this can significantly reduce the number of shards and segments of an index. + [float] === Use the smallest numeric type that is sufficient diff --git a/docs/reference/how-to/indexing-speed.asciidoc b/docs/reference/how-to/indexing-speed.asciidoc index 668857ed41e4d..db7479f9f7d38 100644 --- a/docs/reference/how-to/indexing-speed.asciidoc +++ b/docs/reference/how-to/indexing-speed.asciidoc @@ -114,6 +114,13 @@ The default is `10%` which is often plenty: for example, if you give the JVM 10GB of memory, it will give 1GB to the index buffer, which is enough to host two shards that are heavily indexing. +[float] +=== Disable `_field_names` + +The <> introduces some +index-time overhead, so you might want to disable it if you never need to +run `exists` queries. + [float] === Additional optimizations diff --git a/docs/reference/index-modules.asciidoc b/docs/reference/index-modules.asciidoc index 51d2291c4d87d..bf93f62847fb6 100644 --- a/docs/reference/index-modules.asciidoc +++ b/docs/reference/index-modules.asciidoc @@ -121,6 +121,11 @@ specific index module: <> or <> for a more efficient alternative to raising this. +`index.max_inner_result_window`:: + + The maximum value of `from + size` for inner hits definition and top hits aggregations to this index. Defaults to + `100`. Inner hits and top hits aggregation take heap memory and time proportional to `from + size` and this limits that memory. 
+ `index.max_rescore_window`:: The maximum value of `window_size` for `rescore` requests in searches of this index. @@ -128,6 +133,27 @@ specific index module: requests take heap memory and time proportional to `max(window_size, from + size)` and this limits that memory. +`index.max_docvalue_fields_search`:: + + The maximum number of `docvalue_fields` that are allowed in a query. + Defaults to `100`. Doc-value fields are costly since they might incur + a per-field per-document seek. + +`index.max_script_fields`:: + + The maximum number of `script_fields` that are allowed in a query. + Defaults to `32`. + +`index.max_ngram_diff`:: + + The maximum allowed difference between min_gram and max_gram for NGramTokenizer and NGramTokenFilter. + Defaults to `1`. + +`index.max_shingle_diff`:: + + The maximum allowed difference between max_shingle_size and min_shingle_size for ShingleTokenFilter. + Defaults to `3`. + `index.blocks.read_only`:: Set to `true` to make the index and index metadata read only, `false` to diff --git a/docs/reference/index-modules/index-sorting.asciidoc b/docs/reference/index-modules/index-sorting.asciidoc index 9dfe3b9eeea29..8aede7492df97 100644 --- a/docs/reference/index-modules/index-sorting.asciidoc +++ b/docs/reference/index-modules/index-sorting.asciidoc @@ -196,16 +196,13 @@ as soon as N documents have been collected per segment. "hits" : [] }, "took": 20, - "terminated_early": true, <2> "timed_out": false } -------------------------------------------------- // TESTRESPONSE[s/"_shards": \.\.\./"_shards": "$body._shards",/] // TESTRESPONSE[s/"took": 20,/"took": "$body.took",/] -// TESTRESPONSE[s/"terminated_early": true,//] <1> The total number of hits matching the query is unknown because of early termination. -<2> Indicates whether the top docs retrieval has actually terminated_early. NOTE: Aggregations will collect all documents that match the query regardless of the value of `track_total_hits` diff --git a/docs/reference/index-modules/similarity.asciidoc b/docs/reference/index-modules/similarity.asciidoc index 5be6fa2ae7246..20bf9a51357e4 100644 --- a/docs/reference/index-modules/similarity.asciidoc +++ b/docs/reference/index-modules/similarity.asciidoc @@ -5,7 +5,7 @@ A similarity (scoring / ranking model) defines how matching documents are scored. Similarity is per field, meaning that via the mapping one can define a different similarity per field. -Configuring a custom similarity is considered a expert feature and the +Configuring a custom similarity is considered an expert feature and the builtin similarities are most likely sufficient as is described in <>. @@ -20,29 +20,39 @@ settings. 
[source,js] -------------------------------------------------- -"similarity" : { - "my_similarity" : { - "type" : "DFR", - "basic_model" : "g", - "after_effect" : "l", - "normalization" : "h2", - "normalization.h2.c" : "3.0" - } +PUT /index +{ + "settings" : { + "index" : { + "similarity" : { + "my_similarity" : { + "type" : "DFR", + "basic_model" : "g", + "after_effect" : "l", + "normalization" : "h2", + "normalization.h2.c" : "3.0" + } + } + } + } } -------------------------------------------------- +// CONSOLE Here we configure the DFRSimilarity so it can be referenced as `my_similarity` in mappings as is illustrate in the below example: [source,js] -------------------------------------------------- +PUT /index/_mapping/book { - "book" : { - "properties" : { - "title" : { "type" : "text", "similarity" : "my_similarity" } - } + "properties" : { + "title" : { "type" : "text", "similarity" : "my_similarity" } + } } -------------------------------------------------- +// CONSOLE +// TEST[continued] [float] === Available similarities @@ -173,7 +183,7 @@ TF-IDF: [source,js] -------------------------------------------------- -PUT index +PUT /index { "settings": { "number_of_shards": 1, @@ -198,19 +208,19 @@ PUT index } } -PUT index/doc/1 +PUT /index/doc/1 { "field": "foo bar foo" } -PUT index/doc/2 +PUT /index/doc/2 { "field": "bar baz" } -POST index/_refresh +POST /index/_refresh -GET index/_search?explain=true +GET /index/_search?explain=true { "query": { "query_string": { @@ -328,7 +338,7 @@ more efficient: [source,js] -------------------------------------------------- -PUT index +PUT /index { "settings": { "number_of_shards": 1, @@ -362,19 +372,19 @@ PUT index [source,js] -------------------------------------------------- -PUT index/doc/1 +PUT /index/doc/1 { "field": "foo bar foo" } -PUT index/doc/2 +PUT /index/doc/2 { "field": "bar baz" } -POST index/_refresh +POST /index/_refresh -GET index/_search?explain=true +GET /index/_search?explain=true { "query": { "query_string": { @@ -494,36 +504,41 @@ it is <>: [source,js] -------------------------------------------------- -PUT /my_index +PUT /index { "settings": { "index": { "similarity": { "default": { - "type": "boolean" + "type": "classic" } } } } } -------------------------------------------------- +// CONSOLE If you want to change the default similarity after creating the index -you must <> your index, send the follwing +you must <> your index, send the following request and <> it again afterwards: [source,js] -------------------------------------------------- -PUT /my_index/_settings +POST /index/_close + +PUT /index/_settings { - "settings": { - "index": { - "similarity": { - "default": { - "type": "boolean" - } + "index": { + "similarity": { + "default": { + "type": "classic" } } } } + +POST /index/_open -------------------------------------------------- +// CONSOLE +// TEST[continued] diff --git a/docs/reference/index-modules/store.asciidoc b/docs/reference/index-modules/store.asciidoc index 2722c05b32159..27b5b751233a9 100644 --- a/docs/reference/index-modules/store.asciidoc +++ b/docs/reference/index-modules/store.asciidoc @@ -31,6 +31,7 @@ PUT /my_index } } --------------------------------- +// CONSOLE WARNING: This is an expert-only setting and may be removed in the future. @@ -108,6 +109,7 @@ PUT /my_index } } --------------------------------- +// CONSOLE The default value is the empty array, which means that nothing will be loaded into the file-system cache eagerly. 
For indices that are actively searched, diff --git a/docs/reference/index-modules/translog.asciidoc b/docs/reference/index-modules/translog.asciidoc index 66919597d2c37..31d529b6c4436 100644 --- a/docs/reference/index-modules/translog.asciidoc +++ b/docs/reference/index-modules/translog.asciidoc @@ -105,7 +105,7 @@ In order to run the `elasticsearch-translog` tool, specify the `truncate` subcommand as well as the directory for the corrupted translog with the `-d` option: -[source,js] +[source,txt] -------------------------------------------------- $ bin/elasticsearch-translog truncate -d /var/lib/elasticsearchdata/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/ Checking existing translog files diff --git a/docs/reference/index-shared2.asciidoc b/docs/reference/index-shared2.asciidoc index 0a0e3aaf57d2d..e48948079cc9f 100644 --- a/docs/reference/index-shared2.asciidoc +++ b/docs/reference/index-shared2.asciidoc @@ -1,28 +1,2 @@ include::migration/index.asciidoc[] - -include::api-conventions.asciidoc[] - -include::docs.asciidoc[] - -include::search.asciidoc[] - -include::aggregations.asciidoc[] - -include::indices.asciidoc[] - -include::cat.asciidoc[] - -include::cluster.asciidoc[] - -include::query-dsl.asciidoc[] - -include::mapping.asciidoc[] - -include::analysis.asciidoc[] - -include::modules.asciidoc[] - -include::index-modules.asciidoc[] - -include::ingest.asciidoc[] diff --git a/docs/reference/index-shared3.asciidoc b/docs/reference/index-shared3.asciidoc index cf685c15253f2..4da338186b0c8 100644 --- a/docs/reference/index-shared3.asciidoc +++ b/docs/reference/index-shared3.asciidoc @@ -1,10 +1,26 @@ -include::how-to.asciidoc[] +include::api-conventions.asciidoc[] -include::testing.asciidoc[] +include::docs.asciidoc[] -include::glossary.asciidoc[] +include::search.asciidoc[] -include::release-notes.asciidoc[] +include::aggregations.asciidoc[] -include::redirects.asciidoc[] +include::indices.asciidoc[] + +include::cat.asciidoc[] + +include::cluster.asciidoc[] + +include::query-dsl.asciidoc[] + +include::mapping.asciidoc[] + +include::analysis.asciidoc[] + +include::modules.asciidoc[] + +include::index-modules.asciidoc[] + +include::ingest.asciidoc[] diff --git a/docs/reference/index-shared4.asciidoc b/docs/reference/index-shared4.asciidoc new file mode 100644 index 0000000000000..3d807dd98d39c --- /dev/null +++ b/docs/reference/index-shared4.asciidoc @@ -0,0 +1,8 @@ + +include::how-to.asciidoc[] + +include::testing.asciidoc[] + +include::glossary.asciidoc[] + +include::release-notes.asciidoc[] diff --git a/docs/reference/index-shared5.asciidoc b/docs/reference/index-shared5.asciidoc new file mode 100644 index 0000000000000..572522f6c8e74 --- /dev/null +++ b/docs/reference/index-shared5.asciidoc @@ -0,0 +1,2 @@ + +include::redirects.asciidoc[] diff --git a/docs/reference/index.asciidoc b/docs/reference/index.asciidoc index 5fa6e1d914d6a..8aa9eef32f8bf 100644 --- a/docs/reference/index.asciidoc +++ b/docs/reference/index.asciidoc @@ -8,3 +8,5 @@ include::../Versions.asciidoc[] include::index-shared1.asciidoc[] include::index-shared2.asciidoc[] include::index-shared3.asciidoc[] +include::index-shared4.asciidoc[] +include::index-shared5.asciidoc[] diff --git a/docs/reference/indices.asciidoc b/docs/reference/indices.asciidoc index 873021c420636..cda7c41cb42d1 100644 --- a/docs/reference/indices.asciidoc +++ b/docs/reference/indices.asciidoc @@ -16,6 +16,7 @@ index settings, aliases, mappings, and index templates. 
* <> * <> * <> +* <> * <> [float] @@ -70,6 +71,8 @@ include::indices/open-close.asciidoc[] include::indices/shrink-index.asciidoc[] +include::indices/split-index.asciidoc[] + include::indices/rollover-index.asciidoc[] include::indices/put-mapping.asciidoc[] diff --git a/docs/reference/indices/flush.asciidoc b/docs/reference/indices/flush.asciidoc index e9cac91d740cc..0c75fd011b418 100644 --- a/docs/reference/indices/flush.asciidoc +++ b/docs/reference/indices/flush.asciidoc @@ -96,6 +96,7 @@ which returns something similar to: "generation" : 2, "user_data" : { "translog_uuid" : "hnOG3xFcTDeoI_kvvvOdNA", + "history_uuid" : "XP7KDJGiS1a2fHYiFL5TXQ", "local_checkpoint" : "-1", "translog_generation" : "1", "max_seq_no" : "-1", @@ -117,6 +118,7 @@ which returns something similar to: -------------------------------------------------- // TESTRESPONSE[s/"id" : "3M3zkw2GHMo2Y4h4\/KFKCg=="/"id": $body.indices.twitter.shards.0.0.commit.id/] // TESTRESPONSE[s/"translog_uuid" : "hnOG3xFcTDeoI_kvvvOdNA"/"translog_uuid": $body.indices.twitter.shards.0.0.commit.user_data.translog_uuid/] +// TESTRESPONSE[s/"history_uuid" : "XP7KDJGiS1a2fHYiFL5TXQ"/"history_uuid": $body.indices.twitter.shards.0.0.commit.user_data.history_uuid/] // TESTRESPONSE[s/"sync_id" : "AVvFY-071siAOuFGEO9P"/"sync_id": $body.indices.twitter.shards.0.0.commit.user_data.sync_id/] // TESTRESPONSE[s/"1": \.\.\./"1": $body.indices.twitter.shards.1/] // TESTRESPONSE[s/"2": \.\.\./"2": $body.indices.twitter.shards.2/] diff --git a/docs/reference/indices/open-close.asciidoc b/docs/reference/indices/open-close.asciidoc index 59f36112b4ea2..6d0866d303b88 100644 --- a/docs/reference/indices/open-close.asciidoc +++ b/docs/reference/indices/open-close.asciidoc @@ -32,3 +32,10 @@ This setting can also be changed via the cluster update settings api. Closed indices consume a significant amount of disk-space which can cause problems in managed environments. Closing indices can be disabled via the cluster settings API by setting `cluster.indices.close.enable` to `false`. The default is `true`. + +[float] +=== Wait For Active Shards + +Because opening an index allocates its shards, the +<> setting on +index creation applies to the index opening action as well. diff --git a/docs/reference/indices/recovery.asciidoc b/docs/reference/indices/recovery.asciidoc index 448c423d0b646..49f58e645bcda 100644 --- a/docs/reference/indices/recovery.asciidoc +++ b/docs/reference/indices/recovery.asciidoc @@ -15,12 +15,60 @@ GET index1,index2/_recovery?human To see cluster-wide recovery status simply leave out the index names. +////////////////////////// + +Here we create a repository and snapshot index1 in +order to restore it right after and prints out the +indices recovery result. 
+ +[source,js] +-------------------------------------------------- +# create the index +PUT index1 +{"settings": {"index.number_of_shards": 1}} + +# create the repository +PUT /_snapshot/my_repository +{"type": "fs","settings": {"location": "recovery_asciidoc" }} + +# snapshot the index +PUT /_snapshot/my_repository/snap_1?wait_for_completion=true + +# delete the index +DELETE index1 + +# and restore the snapshot +POST /_snapshot/my_repository/snap_1/_restore?wait_for_completion=true + +-------------------------------------------------- +// CONSOLE + +[source,js] +-------------------------------------------------- +{ + "snapshot": { + "snapshot": "snap_1", + "indices": [ + "index1" + ], + "shards": { + "total": 1, + "failed": 0, + "successful": 1 + } + } +} +-------------------------------------------------- +// TESTRESPONSE + +////////////////////////// + [source,js] -------------------------------------------------- GET /_recovery?human -------------------------------------------------- // CONSOLE -// TEST[s/^/PUT index1\n{"settings": {"index.number_of_shards": 1}}\n/] +// TEST[continued] Response: [source,js] @@ -34,16 +82,20 @@ Response: "primary" : true, "start_time" : "2014-02-24T12:15:59.716", "start_time_in_millis": 1393244159716, + "stop_time" : "0s", + "stop_time_in_millis" : 0, "total_time" : "2.9m", "total_time_in_millis" : 175576, "source" : { "repository" : "my_repository", "snapshot" : "my_snapshot", - "index" : "index1" + "index" : "index1", + "version" : "{version}" }, "target" : { "id" : "ryqJ5lO5S4-lSFbGntkEkg", - "hostname" : "my.fqdn", + "host" : "my.fqdn", + "transport_address" : "my.fqdn", "ip" : "10.0.1.7", "name" : "my_es_node" }, @@ -64,7 +116,11 @@ Response: "percent" : "94.5%" }, "total_time" : "0s", - "total_time_in_millis" : 0 + "total_time_in_millis" : 0, + "source_throttle_time" : "0s", + "source_throttle_time_in_millis" : 0, + "target_throttle_time" : "0s", + "target_throttle_time_in_millis" : 0 }, "translog" : { "recovered" : 0, @@ -74,7 +130,7 @@ Response: "total_time" : "0s", "total_time_in_millis" : 0, }, - "start" : { + "verify_index" : { "check_index_time" : "0s", "check_index_time_in_millis" : 0, "total_time" : "0s", @@ -84,7 +140,12 @@ Response: } } -------------------------------------------------- -// We should really assert that this is up to date but that is hard! +// TESTRESPONSE[s/: (\-)?[0-9]+/: $body.$_path/] +// TESTRESPONSE[s/: "[^"]*"/: $body.$_path/] +//// +The TESTRESPONSE above replace all the fields values by the expected ones in the test, +because we don't really care about the field values but we want to check the fields names. +//// The above response shows a single index recovering a single shard. In this case, the source of the recovery is a snapshot repository and the target of the recovery is the node with name "my_es_node". @@ -97,6 +158,8 @@ In some cases a higher level of detail may be preferable. 
Setting "detailed=true -------------------------------------------------- GET _recovery?human&detailed=true -------------------------------------------------- +// CONSOLE +// TEST[s/^/PUT index1\n{"settings": {"index.number_of_shards": 1}}\n/] Response: @@ -117,13 +180,15 @@ Response: "total_time_in_millis" : 2115, "source" : { "id" : "RGMdRc-yQWWKIBM4DGvwqQ", - "hostname" : "my.fqdn", + "host" : "my.fqdn", + "transport_address" : "my.fqdn", "ip" : "10.0.1.7", "name" : "my_es_node" }, "target" : { "id" : "RGMdRc-yQWWKIBM4DGvwqQ", - "hostname" : "my.fqdn", + "host" : "my.fqdn", + "transport_address" : "my.fqdn", "ip" : "10.0.1.7", "name" : "my_es_node" }, @@ -154,20 +219,27 @@ Response: "name" : "segments_2", "length" : 251, "recovered" : 251 - }, - ... + } ] }, "total_time" : "2ms", - "total_time_in_millis" : 2 + "total_time_in_millis" : 2, + "source_throttle_time" : "0s", + "source_throttle_time_in_millis" : 0, + "target_throttle_time" : "0s", + "target_throttle_time_in_millis" : 0 }, "translog" : { "recovered" : 71, + "total" : 0, + "percent" : "100.0%", + "total_on_start" : 0, "total_time" : "2.0s", "total_time_in_millis" : 2025 }, - "start" : { + "verify_index" : { "check_index_time" : 0, + "check_index_time_in_millis" : 0, "total_time" : "88ms", "total_time_in_millis" : 88 } @@ -175,7 +247,15 @@ Response: } } -------------------------------------------------- -// We should really assert that this is up to date but that is hard! +// TESTRESPONSE[s/"source" : \{[^}]*\}/"source" : $body.$_path/] +// TESTRESPONSE[s/"details" : \[[^\]]*\]//] +// TESTRESPONSE[s/: (\-)?[0-9]+/: $body.$_path/] +// TESTRESPONSE[s/: "[^"]*"/: $body.$_path/] +//// +The TESTRESPONSE above replace all the fields values by the expected ones in the test, +because we don't really care about the field values but we want to check the fields names. +It also removes the "details" part which is important in this doc but really hard to test. +//// This response shows a detailed listing (truncated for brevity) of the actual files recovered and their sizes. diff --git a/docs/reference/indices/rollover-index.asciidoc b/docs/reference/indices/rollover-index.asciidoc index 9aec8243af3f9..33bb09a1ef662 100644 --- a/docs/reference/indices/rollover-index.asciidoc +++ b/docs/reference/indices/rollover-index.asciidoc @@ -25,7 +25,8 @@ POST /logs_write/_rollover <2> { "conditions": { "max_age": "7d", - "max_docs": 1000 + "max_docs": 1000, + "max_size": "5gb" } } -------------------------------------------------- @@ -34,7 +35,7 @@ POST /logs_write/_rollover <2> // TEST[s/# Add > 1000 documents to logs-000001/POST _reindex?refresh\n{"source":{"index":"twitter"},"dest":{"index":"logs-000001"}}/] <1> Creates an index called `logs-0000001` with the alias `logs_write`. <2> If the index pointed to by `logs_write` was created 7 or more days ago, or - contains 1,000 or more documents, then the `logs-000002` index is created + contains 1,000 or more documents, or has an index size at least around 5GB, then the `logs-000002` index is created and the `logs_write` alias is updated to point to `logs-000002`. 
The above request might return the following response: @@ -50,7 +51,8 @@ The above request might return the following response: "dry_run": false, <2> "conditions": { <3> "[max_age: 7d]": false, - "[max_docs: 1000]": true + "[max_docs: 1000]": true, + "[max_size: 5gb]": false, } } -------------------------------------------------- @@ -76,7 +78,8 @@ POST /my_alias/_rollover/my_new_index_name { "conditions": { "max_age": "7d", - "max_docs": 1000 + "max_docs": 1000, + "max_size": "5gb" } } -------------------------------------------------- @@ -186,7 +189,8 @@ POST /logs_write/_rollover { "conditions" : { "max_age": "7d", - "max_docs": 1000 + "max_docs": 1000, + "max_size": "5gb" }, "settings": { "index.number_of_shards": 2 @@ -214,7 +218,8 @@ POST /logs_write/_rollover?dry_run { "conditions" : { "max_age": "7d", - "max_docs": 1000 + "max_docs": 1000, + "max_size": "5gb" } } -------------------------------------------------- diff --git a/docs/reference/indices/segments.asciidoc b/docs/reference/indices/segments.asciidoc index e4e9792151d99..614bd8852b7ac 100644 --- a/docs/reference/indices/segments.asciidoc +++ b/docs/reference/indices/segments.asciidoc @@ -6,36 +6,78 @@ is built with. Allows to be used to provide more information on the state of a shard and an index, possibly optimization information, data "wasted" on deletes, and so on. -Endpoints include segments for a specific index, several indices, or -all: +Endpoints include segments for a specific index: [source,js] -------------------------------------------------- -curl -XGET 'http://localhost:9200/test/_segments' -curl -XGET 'http://localhost:9200/test1,test2/_segments' -curl -XGET 'http://localhost:9200/_segments' +GET /test/_segments -------------------------------------------------- +// CONSOLE +// TEST[s/^/PUT test\n{"settings":{"number_of_shards":1, "number_of_replicas": 0}}\nPOST test\/test\?refresh\n{"test": "test"}\n/] +// TESTSETUP + +For several indices: + +[source,js] +-------------------------------------------------- +GET /test1,test2/_segments +-------------------------------------------------- +// CONSOLE +// TEST[s/^/PUT test1\nPUT test2\n/] + +Or for all indices: + +[source,js] +-------------------------------------------------- +GET /_segments +-------------------------------------------------- +// CONSOLE Response: [source,js] -------------------------------------------------- { - ... - "_3": { - "generation": 3, - "num_docs": 1121, - "deleted_docs": 53, - "size_in_bytes": 228288, - "memory_in_bytes": 3211, - "committed": true, - "search": true, - "version": "4.6", - "compound": true - } - ... + "_shards": ... + "indices": { + "test": { + "shards": { + "0": [ + { + "routing": { + "state": "STARTED", + "primary": true, + "node": "zDC_RorJQCao9xf9pg3Fvw" + }, + "num_committed_segments": 0, + "num_search_segments": 1, + "segments": { + "_0": { + "generation": 0, + "num_docs": 1, + "deleted_docs": 0, + "size_in_bytes": 3800, + "memory_in_bytes": 1410, + "committed": false, + "search": true, + "version": "7.0.0", + "compound": true, + "attributes": { + } + } + } + } + ] + } + } + } } -------------------------------------------------- +// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards,/] +// TESTRESPONSE[s/"node": "zDC_RorJQCao9xf9pg3Fvw"/"node": $body.$_path/] +// TESTRESPONSE[s/"attributes": \{[^}]*\}/"attributes": $body.$_path/] +// TESTRESPONSE[s/: (\-)?[0-9]+/: $body.$_path/] +// TESTRESPONSE[s/7\.0\.0/$body.$_path/] _0:: The key of the JSON document is the name of the segment. 
This name is used to generate file names: all files starting with this @@ -74,6 +116,8 @@ compound:: Whether the segment is stored in a compound file. When true, this means that Lucene merged all files from the segment in a single one in order to save file descriptors. +attributes:: Contains information about whether high compression was enabled + [float] === Verbose mode @@ -83,8 +127,9 @@ NOTE: The format of the additional detail information is labelled as experimenta [source,js] -------------------------------------------------- -curl -XGET 'http://localhost:9200/test/_segments?verbose=true' +GET /test/_segments?verbose=true -------------------------------------------------- +// CONSOLE Response: @@ -92,7 +137,7 @@ Response: -------------------------------------------------- { ... - "_3": { + "_0": { ... "ram_tree": [ { @@ -114,3 +159,5 @@ Response: ... } -------------------------------------------------- +// NOTCONSOLE +//Response is too verbose to be fully shown in documentation, so we just show the relevant bit and don't test the response. diff --git a/docs/reference/indices/shard-stores.asciidoc b/docs/reference/indices/shard-stores.asciidoc index 16b64b40f6d6c..4788e66768f21 100644 --- a/docs/reference/indices/shard-stores.asciidoc +++ b/docs/reference/indices/shard-stores.asciidoc @@ -17,10 +17,17 @@ indices, or all: [source,js] -------------------------------------------------- -curl -XGET 'http://localhost:9200/test/_shard_stores' -curl -XGET 'http://localhost:9200/test1,test2/_shard_stores' -curl -XGET 'http://localhost:9200/_shard_stores' +# return information of only index test +GET /test/_shard_stores + +# return information of only test1 and test2 indices +GET /test1,test2/_shard_stores + +# return information of all indices +GET /_shard_stores -------------------------------------------------- +// CONSOLE +// TEST[s/^/PUT test\nPUT test1\nPUT test2\n/] The scope of shards to list store information can be changed through `status` param. Defaults to 'yellow' and 'red'. 'yellow' lists store information of @@ -30,8 +37,11 @@ Use 'green' to list store information for shards with all assigned copies. [source,js] -------------------------------------------------- -curl -XGET 'http://localhost:9200/_shard_stores?status=green' +GET /_shard_stores?status=green -------------------------------------------------- +// CONSOLE +// TEST[setup:node] +// TEST[s/^/PUT my-index\n{"settings":{"number_of_shards":1, "number_of_replicas": 0}}\nPOST my-index\/test\?refresh\n{"test": "test"}\n/] Response: @@ -40,27 +50,36 @@ The shard stores information is grouped by indices and shard ids. [source,js] -------------------------------------------------- { - ... - "0": { <1> - "stores": [ <2> - { - "sPa3OgxLSYGvQ4oPs-Tajw": { <3> - "name": "node_t0", - "transport_address": "local[1]", - "attributes": { - "mode": "local" + "indices": { + "my-index": { + "shards": { + "0": { <1> + "stores": [ <2> + { + "sPa3OgxLSYGvQ4oPs-Tajw": { <3> + "name": "node_t0", + "ephemeral_id" : "9NlXRFGCT1m8tkvYCMK-8A", + "transport_address": "local[1]", + "attributes": {} + }, + "allocation_id": "2iNySv_OQVePRX-yaRH_lQ", <4> + "allocation" : "primary|replica|unused" <5> + "store_exception": ... <6> } - }, - "allocation_id": "2iNySv_OQVePRX-yaRH_lQ", <4> - "allocation" : "primary" | "replica" | "unused", <5> - "store_exception": ... <6> - }, - ... - ] - }, - ... 
+          ]
+        }
+      }
+    }
+  }
 }
 --------------------------------------------------
+// TESTRESPONSE[s/"store_exception": \.\.\.//]
+// TESTRESPONSE[s/"sPa3OgxLSYGvQ4oPs-Tajw"/\$node_name/]
+// TESTRESPONSE[s/: "[^"]*"/: $body.$_path/]
+// TESTRESPONSE[s/"attributes": \{[^}]*\}/"attributes": $body.$_path/]
+
+
+
 <1> The key is the corresponding shard id for the store information
 <2> A list of store information for all copies of the shard
 <3> The node information that hosts a copy of the store, the key
diff --git a/docs/reference/indices/split-index.asciidoc b/docs/reference/indices/split-index.asciidoc
new file mode 100644
index 0000000000000..467c09baa2432
--- /dev/null
+++ b/docs/reference/indices/split-index.asciidoc
@@ -0,0 +1,165 @@
+[[indices-split-index]]
+== Split Index
+
+The split index API allows you to split an existing index into a new index
+with a multiple of its primary shards, similarly to the <>
+where the number of primary shards in the shrunk index must be a factor of the
+number of primary shards in the source index.
+The `_split` API requires the source index to be created with a specific number of routing shards
+in order to be split in the future. (Note: this requirement might be removed in future releases.)
+The number of routing shards specifies the hashing space that is used internally to distribute documents
+across shards, in order to have a consistent hashing that is compatible with the method Elasticsearch
+uses today.
+For example, an index with `8` primary shards and an `index.number_of_routing_shards` of `32`
+can be split into `16` and `32` primary shards. An index with `1` primary shard
+and `index.number_of_routing_shards` of `64` can be split into `2`, `4`, `8`, `16`, `32` or `64`.
+The same works for a non power of two number of routing shards, i.e. an index with `1` primary shard and
+`index.number_of_routing_shards` set to `15` can be split into `3` and `15`, or alternatively `5` and `15`.
+The number of shards in the split index must always be a factor of `index.number_of_routing_shards`
+in the source index. Before splitting, a (primary) copy of every shard in the index must be active in the cluster.
+
+Splitting works as follows:
+
+* First, it creates a new target index with the same definition as the source
+  index, but with a larger number of primary shards.
+
+* Then it hard-links segments from the source index into the target index. (If
+  the file system doesn't support hard-linking, then all segments are copied
+  into the new index, which is a much more time-consuming process.)
+
+* Once the low-level files are created, all documents will be `hashed` again to delete
+  documents that belong to a different shard.
+
+* Finally, it recovers the target index as though it were a closed index which
+  had just been re-opened.
+
+[float]
+=== Preparing an index for splitting
+
+Create an index with a routing shards factor:
+
+[source,js]
+--------------------------------------------------
+PUT my_source_index
+{
+  "settings": {
+    "index.number_of_shards" : 1,
+    "index.number_of_routing_shards" : 2 <1>
+  }
+}
+--------------------------------------------------
+// CONSOLE
+
+<1> Allows the index to be split into two shards, or in other words, it allows
+    for a single split operation.
+
+In order to split an index, the index must be marked as read-only,
+and have <> `green`.
+
+This can be achieved with the following request:
+
+[source,js]
+--------------------------------------------------
+PUT /my_source_index/_settings
+{
+  "settings": {
+    "index.blocks.write": true <1>
+  }
+}
+--------------------------------------------------
+// CONSOLE
+// TEST[continued]
+
+<1> Prevents write operations to this index while still allowing metadata
+    changes like deleting the index.
+
+[float]
+=== Splitting an index
+
+To split `my_source_index` into a new index called `my_target_index`, issue
+the following request:
+
+[source,js]
+--------------------------------------------------
+POST my_source_index/_split/my_target_index
+{
+  "settings": {
+    "index.number_of_shards": 2
+  }
+}
+--------------------------------------------------
+// CONSOLE
+// TEST[continued]
+
+The above request returns immediately once the target index has been added to
+the cluster state -- it doesn't wait for the split operation to start.
+
+[IMPORTANT]
+=====================================
+
+Indices can only be split if they satisfy the following requirements:
+
+* The target index must not exist.
+
+* The source index must have fewer primary shards than the target index.
+
+* The number of primary shards in the target index must be a multiple of the
+  number of primary shards in the source index.
+
+* The node handling the split process must have sufficient free disk space to
+  accommodate a second copy of the existing index.
+
+=====================================
+
+The `_split` API is similar to the <>
+and accepts `settings` and `aliases` parameters for the target index:
+
+[source,js]
+--------------------------------------------------
+POST my_source_index/_split/my_target_index
+{
+  "settings": {
+    "index.number_of_shards": 5 <1>
+  },
+  "aliases": {
+    "my_search_indices": {}
+  }
+}
+--------------------------------------------------
+// CONSOLE
+// TEST[s/^/PUT my_source_index\n{"settings": {"index.blocks.write": true, "index.number_of_routing_shards" : 5, "index.number_of_shards": "1"}}\n/]
+
+<1> The number of shards in the target index. This must be a multiple of the
+    number of shards in the source index.
+
+
+NOTE: Mappings may not be specified in the `_split` request, and all
+`index.analysis.*` and `index.similarity.*` settings will be overwritten with
+the settings from the source index.
+
+[float]
+=== Monitoring the split process
+
+The split process can be monitored with the <>, or the <>
+can be used to wait until all primary shards have been allocated by setting
+the `wait_for_status` parameter to `yellow`.
+
+The `_split` API returns as soon as the target index has been added to the
+cluster state, before any shards have been allocated. At this point, all
+shards are in the state `unassigned`. If, for any reason, the target index
+can't be allocated, its primary shard will remain `unassigned` until it
+can be allocated.
+
+Once the primary shard is allocated, it moves to state `initializing`, and the
+split process begins. When the split operation completes, the shard will
+become `active`. At that point, Elasticsearch will try to allocate any
+replicas and may decide to relocate the primary shard to another node.
+
+[float]
+=== Wait For Active Shards
+
+Because the split operation creates a new index to split the shards to,
+the <> setting
+on index creation applies to the split index action as well.
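+
+For example, assuming the `my_source_index` and `my_target_index` names used
+above, a split request that waits for at least two active copies of every
+shard before returning might look like this (the shard-copy count is only
+illustrative):
+
+[source,js]
+--------------------------------------------------
+POST my_source_index/_split/my_target_index?wait_for_active_shards=2
+{
+  "settings": {
+    "index.number_of_shards": 2
+  }
+}
+--------------------------------------------------
+// CONSOLE
+// TEST[skip:illustrative example only]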
diff --git a/docs/reference/indices/templates.asciidoc b/docs/reference/indices/templates.asciidoc index 86f02f850066e..6aefe3ccb4744 100644 --- a/docs/reference/indices/templates.asciidoc +++ b/docs/reference/indices/templates.asciidoc @@ -7,8 +7,10 @@ applied when new indices are created. The templates include both and a simple pattern template that controls whether the template should be applied to the new index. -NOTE: Templates are only applied at index creation time. Changing a template -will have no impact on existing indices. +NOTE: Templates are only applied at index creation time. Changing a template +will have no impact on existing indices. When using the create index API, the +settings/mappings defined as part of the create index call will take precedence +over any matching settings/mappings defined in the template. For example: diff --git a/docs/reference/ingest.asciidoc b/docs/reference/ingest.asciidoc index 7150bd32739c1..463e47dfde950 100644 --- a/docs/reference/ingest.asciidoc +++ b/docs/reference/ingest.asciidoc @@ -18,7 +18,8 @@ node.ingest: false To pre-process documents before indexing, you <> that specifies a series of <>. Each processor transforms the document in some way. For example, you may have a pipeline that consists of one processor that removes a field from -the document followed by another processor that renames a field. +the document followed by another processor that renames a field. Configured pipelines are then stored +in the <>. To use a pipeline, you simply specify the `pipeline` parameter on an index or bulk request to tell the ingest node which pipeline to use. For example: @@ -31,7 +32,7 @@ PUT my-index/my-type/my-id?pipeline=my_pipeline_id } -------------------------------------------------- // CONSOLE -// TEST[catch:request] +// TEST[catch:bad_request] See <> for more information about creating, adding, and deleting pipelines. diff --git a/docs/reference/ingest/ingest-node.asciidoc b/docs/reference/ingest/ingest-node.asciidoc index 9b8de759b6554..74cfabbff47a1 100644 --- a/docs/reference/ingest/ingest-node.asciidoc +++ b/docs/reference/ingest/ingest-node.asciidoc @@ -563,6 +563,20 @@ to set the index that the document will be indexed into: -------------------------------------------------- // NOTCONSOLE +Dynamic field names are also supported. This example sets the field named after the +value of `service` to the value of the field `code`: + +[source,js] +-------------------------------------------------- +{ + "set": { + "field": "{{service}}" + "value": "{{code}}" + } +} +-------------------------------------------------- +// NOTCONSOLE + [[handling-failure-in-pipelines]] == Handling Failures in Pipelines @@ -752,7 +766,7 @@ Accepts a single value or an array of values. -------------------------------------------------- { "append": { - "field": "field1" + "field": "field1", "value": ["item2", "item3", "item4"] } } @@ -2012,7 +2026,7 @@ into: // NOTCONSOLE If there is already a `bar` field nested under `foo` then -this processor merges the the `foo.bar` field into it. If the field is +this processor merges the `foo.bar` field into it. If the field is a scalar value then it will turn that field into an array field. For example, the following document: @@ -2054,7 +2068,7 @@ Consider the following document: -------------------------------------------------- // NOTCONSOLE -Then the the `foo` needs to be renamed first before the `dot_expander` +Then the `foo` needs to be renamed first before the `dot_expander` processor is applied. 
So in order for the `foo.bar` field to properly be expanded into the `bar` field under the `foo` field the following pipeline should be used: diff --git a/docs/reference/mapping.asciidoc b/docs/reference/mapping.asciidoc index 4a90fc6204469..bc5c1def32f68 100644 --- a/docs/reference/mapping.asciidoc +++ b/docs/reference/mapping.asciidoc @@ -9,8 +9,6 @@ are stored and indexed. For instance, use mappings to define: * which string fields should be treated as full text fields. * which fields contain numbers, dates, or geolocations. -* whether the values of all fields in the document should be - indexed into the catch-all <> field. * the <> of date values. * custom rules to control the mapping for <>. diff --git a/docs/reference/mapping/fields.asciidoc b/docs/reference/mapping/fields.asciidoc index 9361c6b26dd9d..dd88910607269 100644 --- a/docs/reference/mapping/fields.asciidoc +++ b/docs/reference/mapping/fields.asciidoc @@ -40,10 +40,6 @@ can be customised when a mapping type is created. [float] === Indexing meta-fields -<>:: - - A _catch-all_ field that indexes the values of all other fields. Disabled by default. - <>:: All fields in the document which contain non-null values. @@ -63,8 +59,6 @@ can be customised when a mapping type is created. Application specific metadata. -include::fields/all-field.asciidoc[] - include::fields/field-names-field.asciidoc[] include::fields/id-field.asciidoc[] diff --git a/docs/reference/mapping/fields/all-field.asciidoc b/docs/reference/mapping/fields/all-field.asciidoc deleted file mode 100644 index 2ed2f9e432fdb..0000000000000 --- a/docs/reference/mapping/fields/all-field.asciidoc +++ /dev/null @@ -1,360 +0,0 @@ -[[mapping-all-field]] -=== `_all` field - -deprecated[6.0.0, `_all` may no longer be enabled for indices created in 6.0+, use a custom field and the mapping `copy_to` parameter] - -The `_all` field is a special _catch-all_ field which concatenates the values -of all of the other fields into one big string, using space as a delimiter, which is then -<> and indexed, but not stored. This means that it can be -searched, but not retrieved. - -The `_all` field allows you to search for values in documents without knowing -which field contains the value. This makes it a useful option when getting -started with a new dataset. For instance: - -[source,js] --------------------------------- -PUT /my_index -{ - "mapping": { - "user": { - "_all": { - "enabled": true <1> - } - } - } -} - -PUT /my_index/user/1 <2> -{ - "first_name": "John", - "last_name": "Smith", - "date_of_birth": "1970-10-24" -} - -GET /my_index/_search -{ - "query": { - "match": { - "_all": "john smith 1970" - } - } -} --------------------------------- -// TEST[skip:_all is no longer allowed] -// CONSOLE -<1> Enabling the `_all` field -<2> The `_all` field will contain the terms: [ `"john"`, `"smith"`, `"1970"`, `"10"`, `"24"` ] - -[NOTE] -.All values treated as strings -============================================================================= - -The `date_of_birth` field in the above example is recognised as a `date` field -and so will index a single term representing `1970-10-24 00:00:00 UTC`. The -`_all` field, however, treats all values as strings, so the date value is -indexed as the three string terms: `"1970"`, `"24"`, `"10"`. - -It is important to note that the `_all` field combines the original values -from each field as a string. It does not combine the _terms_ from each field. 
- -============================================================================= - -The `_all` field is just a <> field, and accepts the same -parameters that other string fields accept, including `analyzer`, -`term_vectors`, `index_options`, and `store`. - -The `_all` field can be useful, especially when exploring new data using -simple filtering. However, by concatenating field values into one big string, -the `_all` field loses the distinction between short fields (more relevant) -and long fields (less relevant). For use cases where search relevance is -important, it is better to query individual fields specifically. - -The `_all` field is not free: it requires extra CPU cycles and uses more disk -space. For this reason, it is disabled by default. If needed, it can be -<>. - -[[querying-all-field]] -==== Using the `_all` field in queries - -The <> and -<> queries query the -`_all` field by default if it is enabled, unless another field is specified: - -[source,js] --------------------------------- -GET _search -{ - "query": { - "query_string": { - "query": "john smith new york" - } - } -} --------------------------------- -// CONSOLE - -The same goes for the `?q=` parameter in <> (which is rewritten to a `query_string` query internally): - -[source,js] --------------------------------- -GET _search?q=john+smith+new+york --------------------------------- -// TEST[skip:_all is no longer allowed] -// CONSOLE - -Other queries, such as the <> and -<> queries require you to specify the `_all` field -explicitly, as per the <>. - -[[enabling-all-field]] -==== Enabling the `_all` field - -The `_all` field can be enabled per-type by setting `enabled` to `true`: - -[source,js] --------------------------------- -PUT my_index -{ - "mappings": { - "type_1": { <1> - "properties": {...} - }, - "type_2": { <2> - "_all": { - "enabled": true - }, - "properties": {...} - } - } -} --------------------------------- -// TEST[s/\.\.\.//] -// TEST[skip:_all is no longer allowed] -// CONSOLE - -<1> The `_all` field in `type_1` is disabled. -<2> The `_all` field in `type_2` is enabled. - -If the `_all` field is enabled, then URI search requests and the `query_string` -and `simple_query_string` queries can automatically use it for queries (see -<>). You can configure them to use a different field with -the `index.query.default_field` setting: - -[source,js] --------------------------------- -PUT my_index -{ - "mappings": { - "my_type": { - "properties": { - "content": { - "type": "text" - } - } - } - }, - "settings": { - "index.query.default_field": "content" <1> - } -} --------------------------------- -// CONSOLE - -<1> The `query_string` query will default to querying the `content` field in this index. - -[[all-field-and-boosting]] -==== Index boosting and the `_all` field - -Individual fields can be _boosted_ at index time, with the <> -parameter. The `_all` field takes these boosts into account: - -[source,js] --------------------------------- -PUT myindex -{ - "mappings": { - "mytype": { - "_all": {"enabled": true}, - "properties": { - "title": { <1> - "type": "text", - "boost": 2 - }, - "content": { <1> - "type": "text" - } - } - } - } -} --------------------------------- -// TEST[skip:_all is no longer allowed] -// CONSOLE - -<1> When querying the `_all` field, words that originated in the - `title` field are twice as relevant as words that originated in - the `content` field. - -WARNING: Using index-time boosting with the `_all` field has a significant -impact on query performance. 
Usually the better solution is to query fields -individually, with optional query time boosting. - - -[[custom-all-fields]] -==== Custom `_all` fields - -While there is only a single `_all` field per index, the <> -parameter allows the creation of multiple __custom `_all` fields__. For -instance, `first_name` and `last_name` fields can be combined together into -the `full_name` field: - -[source,js] --------------------------------- -PUT myindex -{ - "mappings": { - "mytype": { - "properties": { - "first_name": { - "type": "text", - "copy_to": "full_name" <1> - }, - "last_name": { - "type": "text", - "copy_to": "full_name" <1> - }, - "full_name": { - "type": "text" - } - } - } - } -} - -PUT myindex/mytype/1 -{ - "first_name": "John", - "last_name": "Smith" -} - -GET myindex/_search -{ - "query": { - "match": { - "full_name": "John Smith" - } - } -} --------------------------------- -// CONSOLE - -<1> The `first_name` and `last_name` values are copied to the `full_name` field. - -[[highlighting-all-field]] -==== Highlighting and the `_all` field - -A field can only be used for <> if -the original string value is available, either from the -<> field or as a stored field. - -The `_all` field is not present in the `_source` field and it is not stored or -enabled by default, and so cannot be highlighted. There are two options. Either -<> or highlight the -<>. - -[[all-field-store]] -===== Store the `_all` field - -If `store` is set to `true`, then the original field value is retrievable and -can be highlighted: - -[source,js] --------------------------------- -PUT myindex -{ - "mappings": { - "mytype": { - "_all": { - "enabled": true, - "store": true - } - } - } -} - -PUT myindex/mytype/1 -{ - "first_name": "John", - "last_name": "Smith" -} - -GET _search -{ - "query": { - "match": { - "_all": "John Smith" - } - }, - "highlight": { - "fields": { - "_all": {} - } - } -} --------------------------------- -// TEST[skip:_all is no longer allowed] -// CONSOLE - -Of course, enabling and storing the `_all` field will use significantly more -disk space and, because it is a combination of other fields, it may result in -odd highlighting results. - -The `_all` field also accepts the `term_vector` and `index_options` -parameters, allowing highlighting to use it. - -[[all-highlight-fields]] -===== Highlight original fields - -You can query the `_all` field, but use the original fields for highlighting as follows: - -[source,js] --------------------------------- -PUT myindex -{ - "mappings": { - "mytype": { - "_all": {"enabled": true} - } - } -} - -PUT myindex/mytype/1 -{ - "first_name": "John", - "last_name": "Smith" -} - -GET _search -{ - "query": { - "match": { - "_all": "John Smith" <1> - } - }, - "highlight": { - "fields": { - "*_name": { <2> - "require_field_match": false <3> - } - } - } -} --------------------------------- -// TEST[skip:_all is no longer allowed] -// CONSOLE - -<1> The query inspects the `_all` field to find matching documents. -<2> Highlighting is performed on the two name fields, which are available from the `_source`. -<3> The query wasn't run against the name fields, so set `require_field_match` to `false`. 
diff --git a/docs/reference/mapping/fields/field-names-field.asciidoc b/docs/reference/mapping/fields/field-names-field.asciidoc index 45839ac55d950..9dd1f17cbb3a9 100644 --- a/docs/reference/mapping/fields/field-names-field.asciidoc +++ b/docs/reference/mapping/fields/field-names-field.asciidoc @@ -35,3 +35,25 @@ GET my_index/_search // CONSOLE <1> Querying on the `_field_names` field (also see the <> query) + + +==== Disabling `_field_names` + +Because `_field_names` introduce some index-time overhead, you might want to +disable this field if you want to optimize for indexing speed and do not need +`exists` queries. + +[source,js] +-------------------------------------------------- +PUT tweets +{ + "mappings": { + "tweet": { + "_field_names": { + "enabled": false + } + } + } +} +-------------------------------------------------- +// CONSOLE diff --git a/docs/reference/mapping/fields/routing-field.asciidoc b/docs/reference/mapping/fields/routing-field.asciidoc index 96a5de1c61605..5fd8545dece5c 100644 --- a/docs/reference/mapping/fields/routing-field.asciidoc +++ b/docs/reference/mapping/fields/routing-field.asciidoc @@ -96,7 +96,7 @@ PUT my_index2/my_type/1 <2> } ------------------------------ // CONSOLE -// TEST[catch:request] +// TEST[catch:bad_request] <1> Routing is required for `my_type` documents. <2> This index request throws a `routing_missing_exception`. diff --git a/docs/reference/mapping/fields/source-field.asciidoc b/docs/reference/mapping/fields/source-field.asciidoc index db005f6eb3739..ea9ed5d29b326 100644 --- a/docs/reference/mapping/fields/source-field.asciidoc +++ b/docs/reference/mapping/fields/source-field.asciidoc @@ -64,8 +64,6 @@ simple queries to filter the dataset by date or tags, and the results are returned as aggregations. In this case, disabling the `_source` field will save space and reduce I/O. -It is also advisable to disable the <> in the -metrics case. ************************************************** diff --git a/docs/reference/mapping/fields/type-field.asciidoc b/docs/reference/mapping/fields/type-field.asciidoc index f2ff9631634eb..d5b098245999a 100644 --- a/docs/reference/mapping/fields/type-field.asciidoc +++ b/docs/reference/mapping/fields/type-field.asciidoc @@ -1,7 +1,7 @@ [[mapping-type-field]] === `_type` field -deprecated::[6.0.0,See <>] +deprecated[6.0.0,See <>] Each document indexed is associated with a <> (see <>) and an <>. The `_type` field is diff --git a/docs/reference/mapping/params/boost.asciidoc b/docs/reference/mapping/params/boost.asciidoc index e6890f948bea1..1f9d83dbe50db 100644 --- a/docs/reference/mapping/params/boost.asciidoc +++ b/docs/reference/mapping/params/boost.asciidoc @@ -66,13 +66,6 @@ POST _search // CONSOLE -The boost is also applied when it is copied with the -value in the <> field. This means that, when -querying the `_all` field, words that originated from the `title` field will -have a higher score than words that originated in the `content` field. -This functionality comes at a cost: queries on the `_all` field are slower -when field boosting is used. - deprecated[5.0.0, index time boost is deprecated. Instead, the field mapping boost is applied at query time. For indices created before 5.0.0 the boost will still be applied at index time.] 
[WARNING] .Why index time boosting is a bad idea diff --git a/docs/reference/mapping/params/coerce.asciidoc b/docs/reference/mapping/params/coerce.asciidoc index dacdabaafc030..d3e158185b6ba 100644 --- a/docs/reference/mapping/params/coerce.asciidoc +++ b/docs/reference/mapping/params/coerce.asciidoc @@ -45,7 +45,7 @@ PUT my_index/my_type/2 } -------------------------------------------------- // CONSOLE -// TEST[catch:request] +// TEST[catch:bad_request] <1> The `number_one` field will contain the integer `10`. <2> This document will be rejected because coercion is disabled. @@ -88,6 +88,6 @@ PUT my_index/my_type/2 { "number_two": "10" } <2> -------------------------------------------------- // CONSOLE -// TEST[catch:request] +// TEST[catch:bad_request] <1> The `number_one` field overrides the index level setting to enable coercion. <2> This document will be rejected because the `number_two` field inherits the index-level coercion setting. diff --git a/docs/reference/mapping/params/copy-to.asciidoc b/docs/reference/mapping/params/copy-to.asciidoc index bc22c9238219a..599e93aa12e97 100644 --- a/docs/reference/mapping/params/copy-to.asciidoc +++ b/docs/reference/mapping/params/copy-to.asciidoc @@ -1,10 +1,9 @@ [[copy-to]] === `copy_to` -The `copy_to` parameter allows you to create custom -<> fields. In other words, the values of multiple -fields can be copied into a group field, which can then be queried as a single -field. For instance, the `first_name` and `last_name` fields can be copied to +The `copy_to` parameter allows you to copy the values of multiple +fields into a group field, which can then be queried as a single +field. For instance, the `first_name` and `last_name` fields can be copied to the `full_name` field as follows: [source,js] diff --git a/docs/reference/mapping/params/ignore-above.asciidoc b/docs/reference/mapping/params/ignore-above.asciidoc index 6a24ca626d981..2db12a33368a2 100644 --- a/docs/reference/mapping/params/ignore-above.asciidoc +++ b/docs/reference/mapping/params/ignore-above.asciidoc @@ -56,5 +56,5 @@ limit of `32766`. NOTE: The value for `ignore_above` is the _character count_, but Lucene counts bytes. If you use UTF-8 text with many non-ASCII characters, you may want to -set the limit to `32766 / 3 = 10922` since UTF-8 characters may occupy at most -3 bytes. +set the limit to `32766 / 4 = 8191` since UTF-8 characters may occupy at most +4 bytes. diff --git a/docs/reference/mapping/params/ignore-malformed.asciidoc b/docs/reference/mapping/params/ignore-malformed.asciidoc index 916b01b33c190..905a0f7d78a98 100644 --- a/docs/reference/mapping/params/ignore-malformed.asciidoc +++ b/docs/reference/mapping/params/ignore-malformed.asciidoc @@ -44,7 +44,7 @@ PUT my_index/my_type/2 } -------------------------------------------------- // CONSOLE -// TEST[catch:request] +// TEST[catch:bad_request] <1> This document will have the `text` field indexed, but not the `number_one` field. <2> This document will be rejected because `number_two` does not allow malformed values. diff --git a/docs/reference/mapping/params/index-options.asciidoc b/docs/reference/mapping/params/index-options.asciidoc index 8d180e4f98852..e2cd6ce20e16f 100644 --- a/docs/reference/mapping/params/index-options.asciidoc +++ b/docs/reference/mapping/params/index-options.asciidoc @@ -28,6 +28,8 @@ following settings: offsets (which map the term back to the original string) are indexed. Offsets are used by the <> to speed up highlighting. 
+NOTE: <> don't support the `index_options` parameter any longer. + <> string fields use `positions` as the default, and all other fields use `docs` as the default. diff --git a/docs/reference/mapping/params/index.asciidoc b/docs/reference/mapping/params/index.asciidoc index e097293d1422b..32916e98d358c 100644 --- a/docs/reference/mapping/params/index.asciidoc +++ b/docs/reference/mapping/params/index.asciidoc @@ -2,5 +2,5 @@ === `index` The `index` option controls whether field values are indexed. It accepts `true` -or `false`. Fields that are not indexed are not queryable. +or `false` and defaults to `true`. Fields that are not indexed are not queryable. diff --git a/docs/reference/mapping/removal_of_types.asciidoc b/docs/reference/mapping/removal_of_types.asciidoc index d51fba66dd10a..006bc789f3084 100644 --- a/docs/reference/mapping/removal_of_types.asciidoc +++ b/docs/reference/mapping/removal_of_types.asciidoc @@ -41,7 +41,7 @@ field, so documents of different types with the same `_id` could exist in a single index. Mapping types were also used to establish a -/guide/en/elasticsearch/reference/5.4/mapping-parent-field.html[parent-child relationship] +<> between documents, so documents of type `question` could be parents to documents of type `answer`. @@ -103,7 +103,7 @@ larger number of primary shards for `tweets`. ==== Custom type field Of course, there is a limit to how many primary shards can exist in a cluster -so you many not want to waste an entire shard for a collection of only a few +so you may not want to waste an entire shard for a collection of only a few thousand documents. In this case, you can implement your own custom `type` field which will work in a similar way to the old `_type`. diff --git a/docs/reference/mapping/types.asciidoc b/docs/reference/mapping/types.asciidoc index 8ef7c3b2bf2a4..2cbc3a5bc54ad 100644 --- a/docs/reference/mapping/types.asciidoc +++ b/docs/reference/mapping/types.asciidoc @@ -34,7 +34,7 @@ string:: <> and <> <>:: `completion` to provide auto-complete suggestions <>:: `token_count` to count the number of tokens in a string -{plugins}/mapper-size.html[`mapper-murmur3`]:: `murmur3` to compute hashes of values at index-time and store them in the index +{plugins}/mapper-murmur3.html[`mapper-murmur3`]:: `murmur3` to compute hashes of values at index-time and store them in the index <>:: Accepts queries from the query-dsl diff --git a/docs/reference/mapping/types/parent-join.asciidoc b/docs/reference/mapping/types/parent-join.asciidoc index 048e866d03b48..56396ce7584f1 100644 --- a/docs/reference/mapping/types/parent-join.asciidoc +++ b/docs/reference/mapping/types/parent-join.asciidoc @@ -32,7 +32,7 @@ PUT my_index To index a document with a join, the name of the relation and the optional parent of the document must be provided in the `source`. -For instance the following creates two parent documents in the `question` context: +For instance the following example creates two `parent` documents in the `question` context: [source,js] -------------------------------------------------- @@ -85,8 +85,7 @@ must be added in the `_source`. WARNING: It is required to index the lineage of a parent in the same shard so you must always route child documents using their greater parent id. 
-For instance the following index two children documents pointing to the same parent `1`
-with a `routing` value equals to the `id` of the parent:
+For instance the following example shows how to index two `child` documents:
 
 [source,js]
 --------------------------------------------------
@@ -111,10 +110,21 @@ PUT my_index/doc/4?routing=1&refresh
 // CONSOLE
 // TEST[continued]
 
-<1> This child document must be on the same shard than its parent
+<1> The routing value is mandatory because parent and child documents must be indexed on the same shard
 <2> `answer` is the name of the join for this document
 <3> The parent id of this child document
 
+==== Parent-join and performance
+
+The join field shouldn't be used like joins in a relational database. In Elasticsearch the key to good performance
+is to de-normalize your data into documents. Each `join` field, `has_child` query or `has_parent` query adds a
+significant tax to your query performance.
+
+The only case where the join field makes sense is if your data contains a one-to-many relationship where
+one entity significantly outnumbers the other entity. An example of such a case is a use case with products
+and offers for these products. If offers significantly outnumber the products, it makes sense
+to model the product as the parent document and the offer as the child document.
+
 ==== Parent-join restrictions
 
 * Only one `join` field mapping is allowed per index.
@@ -339,7 +349,7 @@ GET _nodes/stats/indices/fielddata?human&fields=my_join_field#question
 // CONSOLE
 // TEST[continued]
 
-==== Multiple levels of parent join
+==== Multiple children per parent
 
 It is also possible to define multiple children for a single parent:
 
@@ -364,62 +374,3 @@ PUT my_index
 // CONSOLE
 <1> `question` is parent of `answer` and `comment`.
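+
+For example, a `comment` document for the same question can then be indexed in
+the same way as an `answer`, by using the name of the relation it belongs to
+(the document id, routing value and text below are only illustrative):
+
+[source,js]
+--------------------------------------------------
+PUT my_index/doc/5?routing=1&refresh
+{
+  "text": "This is a comment",
+  "my_join_field": {
+    "name": "comment", <1>
+    "parent": "1" <2>
+  }
+}
+--------------------------------------------------
+// CONSOLE
+// TEST[skip:illustrative example only]
+
+<1> `comment` is the name of the join for this document
+<2> The parent id of this child document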
- -And multiple levels of parent/child: - -[source,js] --------------------------------------------------- -PUT my_index -{ - "mappings": { - "doc": { - "properties": { - "my_join_field": { - "type": "join", - "relations": { - "question": ["answer", "comment"], <1> - "answer": "vote" <2> - } - } - } - } - } -} --------------------------------------------------- -// CONSOLE - -<1> `question` is parent of `answer` and `comment` -<2> `answer` is parent of `vote` - -The mapping above represents the following tree: - - question - / \ - / \ - comment answer - | - | - vote - -Indexing a grand child document requires a `routing` value equals -to the grand-parent (the greater parent of the lineage): - - -[source,js] --------------------------------------------------- -PUT my_index/doc/3?routing=1&refresh <1> -{ - "text": "This is a vote", - "my_join_field": { - "name": "vote", - "parent": "2" <2> - } -} --------------------------------------------------- -// CONSOLE -// TEST[continued] - -<1> This child document must be on the same shard than its grandparent and parent -<2> The parent id of this document (must points to an `answer` document) - - diff --git a/docs/reference/mapping/types/percolator.asciidoc b/docs/reference/mapping/types/percolator.asciidoc index 47eb3efc89d45..cdf1c876d5156 100644 --- a/docs/reference/mapping/types/percolator.asciidoc +++ b/docs/reference/mapping/types/percolator.asciidoc @@ -277,6 +277,9 @@ now returns matches from the new index: "body": "quick brown fox" } } + }, + "fields" : { + "_percolator_document_slot" : [0] } } ] @@ -472,6 +475,9 @@ This results in a response like this: } } } + }, + "fields" : { + "_percolator_document_slot" : [0] } } ] @@ -495,9 +501,9 @@ Otherwise percolate queries can be parsed incorrectly. In certain cases it is unknown what kind of percolator queries do get registered, and if no field mapping exists for fields that are referred by percolator queries then adding a percolator query fails. This means the mapping needs to be updated to have the field with the appropriate settings, and then the percolator query can be added. But sometimes it is sufficient -if all unmapped fields are handled as if these were default string fields. In those cases one can configure the -`index.percolator.map_unmapped_fields_as_string` setting to `true` (default to `false`) and then if a field referred in -a percolator query does not exist, it will be handled as a default string field so that adding the percolator query doesn't +if all unmapped fields are handled as if these were default text fields. In those cases one can configure the +`index.percolator.map_unmapped_fields_as_text` setting to `true` (default to `false`) and then if a field referred in +a percolator query does not exist, it will be handled as a default text field so that adding the percolator query doesn't fail. [float] diff --git a/docs/reference/migration/index.asciidoc b/docs/reference/migration/index.asciidoc index daef700fc47d6..15db4b7a94a85 100644 --- a/docs/reference/migration/index.asciidoc +++ b/docs/reference/migration/index.asciidoc @@ -8,15 +8,15 @@ your application from one version of Elasticsearch to another. As a general rule: -* Migration between minor versions -- e.g. `6.x` to `6.y` -- can be +* Migration between minor versions -- e.g. `7.x` to `7.y` -- can be performed by <>. -* Migration between consecutive major versions -- e.g. `5.x` to `6.x` -- +* Migration between consecutive major versions -- e.g. `6.x` to `7.x` -- requires a <>. 
-* Migration between non-consecutive major versions -- e.g. `2.x` to `6.x` -- +* Migration between non-consecutive major versions -- e.g. `5.x` to `7.x` -- is not supported. See <> for more info. -- -include::migrate_6_0.asciidoc[] +include::migrate_7_0.asciidoc[] diff --git a/docs/reference/migration/migrate_6_0.asciidoc b/docs/reference/migration/migrate_6_0.asciidoc deleted file mode 100644 index ffebfd9cfe7c6..0000000000000 --- a/docs/reference/migration/migrate_6_0.asciidoc +++ /dev/null @@ -1,65 +0,0 @@ -[[breaking-changes-6.0]] -== Breaking changes in 6.0 - -This section discusses the changes that you need to be aware of when migrating -your application to Elasticsearch 6.0. - -[float] -=== Indices created before 6.0 - -Elasticsearch 6.0 can read indices created in version 5.0 or above. An -Elasticsearch 6.0 node will not start in the presence of indices created in a -version of Elasticsearch before 5.0. - -[IMPORTANT] -.Reindex indices from Elasticseach 2.x or before -========================================= - -Indices created in Elasticsearch 2.x or before will need to be reindexed with -Elasticsearch 5.x in order to be readable by Elasticsearch 6.x. The easiest -way to reindex old indices is to use the `reindex` API. - -========================================= - -[float] -=== Also see: - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - - -include::migrate_6_0/aggregations.asciidoc[] -include::migrate_6_0/analysis.asciidoc[] -include::migrate_6_0/cat.asciidoc[] -include::migrate_6_0/clients.asciidoc[] -include::migrate_6_0/cluster.asciidoc[] -include::migrate_6_0/docs.asciidoc[] -include::migrate_6_0/indices.asciidoc[] -include::migrate_6_0/ingest.asciidoc[] -include::migrate_6_0/java.asciidoc[] -include::migrate_6_0/mappings.asciidoc[] -include::migrate_6_0/packaging.asciidoc[] -include::migrate_6_0/percolator.asciidoc[] -include::migrate_6_0/plugins.asciidoc[] -include::migrate_6_0/reindex.asciidoc[] -include::migrate_6_0/rest.asciidoc[] -include::migrate_6_0/scripting.asciidoc[] -include::migrate_6_0/search.asciidoc[] -include::migrate_6_0/settings.asciidoc[] -include::migrate_6_0/stats.asciidoc[] diff --git a/docs/reference/migration/migrate_6_0/aggregations.asciidoc b/docs/reference/migration/migrate_6_0/aggregations.asciidoc deleted file mode 100644 index 4e10c6de80b1e..0000000000000 --- a/docs/reference/migration/migrate_6_0/aggregations.asciidoc +++ /dev/null @@ -1,59 +0,0 @@ -[[breaking_60_aggregations_changes]] -=== Aggregations changes - -==== Deprecated `pattern` element of include/exclude for terms aggregations has been removed - -The `include` and `exclude` options of `terms` aggregations used to accept a -sub `pattern` object which has been removed. The pattern should now be directly -put as a value of the `include` and `exclude` fields. 
For instance, the below -`terms` aggregation: - -[source,js] --------------------------------------------------- -POST /twitter/_search?size=0 -{ - "aggs" : { - "top_users" : { - "terms" : { - "field" : "user", - "include": { - "pattern": "foo.*" - }, - "exclude": { - "pattern": ".*bar" - } - } - } - } -} --------------------------------------------------- -// CONSOLE -// TEST[skip: uses old unsupported syntax] - -should be replaced with: - -[source,js] --------------------------------------------------- -POST /twitter/_search?size=0 -{ - "aggs" : { - "top_users" : { - "terms" : { - "field" : "user", - "include": "foo.*", - "exclude": ".*bar" - } - } - } -} --------------------------------------------------- -// CONSOLE -// TEST[setup:twitter] - -==== Numeric `to` and `from` parameters in `date_range` aggregation are interpreted according to `format` now - -Numeric `to` and `from` parameters in `date_range` aggregations used to always be interpreted as `epoch_millis`, -making other numeric formats like `epoch_seconds` unusable for numeric input values. -Now we interpret these parameters according to the `format` of the target field. -If the `format` in the mappings is not compatible with the numeric input value, a compatible -`format` (e.g. `epoch_millis`, `epoch_second`) must be specified in the `date_range` aggregation, otherwise an error is thrown. diff --git a/docs/reference/migration/migrate_6_0/analysis.asciidoc b/docs/reference/migration/migrate_6_0/analysis.asciidoc deleted file mode 100644 index 9a76fc23a9f86..0000000000000 --- a/docs/reference/migration/migrate_6_0/analysis.asciidoc +++ /dev/null @@ -1,13 +0,0 @@ -[[breaking_60_analysis_changes]] -=== Analysis changes - -==== Synonym Token Filter - -In 6.0, Synonym Token Filter tokenizes synonyms with whatever -tokenizer and token filters appear before it in the chain. - -The `tokenizer` and `ignore_case` parameters are deprecated -and will be ignored when used in new indices. These parameters -will continue to function as before when used in indices -created in 5.x. - diff --git a/docs/reference/migration/migrate_6_0/cat.asciidoc b/docs/reference/migration/migrate_6_0/cat.asciidoc deleted file mode 100644 index 013c0705991cb..0000000000000 --- a/docs/reference/migration/migrate_6_0/cat.asciidoc +++ /dev/null @@ -1,7 +0,0 @@ -[[breaking_60_cat_changes]] -=== Cat API changes - -==== Unbounded queue size in cat thread pool - -Previously if a queue size backing a thread pool was unbounded, the cat thread pool API would output an empty string in -the queue_size column. This has been changed to now output -1 so that the output is always present and always numeric. diff --git a/docs/reference/migration/migrate_6_0/clients.asciidoc b/docs/reference/migration/migrate_6_0/clients.asciidoc deleted file mode 100644 index 55d1675e5dda0..0000000000000 --- a/docs/reference/migration/migrate_6_0/clients.asciidoc +++ /dev/null @@ -1,11 +0,0 @@ -[[breaking_60_clients_changes]] -=== Clients changes - -==== Java High Level REST Client - -Starting from version 5.6.0 a new Java client has been released: the Java High Level REST Client. -This official high-level client (named like this to differentiate it from the existing low-level client) for -Elasticsearch can be used to execute search, index, delete, update and bulk operations using the same Core -Java classes as the `TransportClient` uses. - -This Java High Level REST Client is designed to replace the `TransportClient` in a near future. 
diff --git a/docs/reference/migration/migrate_6_0/cluster.asciidoc b/docs/reference/migration/migrate_6_0/cluster.asciidoc deleted file mode 100644 index bd070d8d1f483..0000000000000 --- a/docs/reference/migration/migrate_6_0/cluster.asciidoc +++ /dev/null @@ -1,27 +0,0 @@ -[[breaking_60_cluster_changes]] -=== Cluster changes - -==== Cluster name no longer allowed in path.data - -Previously the cluster name could be used in the `path.data` setting with a -warning. This is now no longer allowed. For instance, in the previous version -this was valid: - -[source,sh] --------------------------------------------------- -# Assuming path.data is /tmp/mydata -# No longer supported: -$ tree /tmp/mydata -/tmp/mydata -├── -│   └── nodes -│   └── 0 -│   └── - -# Should be changed to: -$ tree /tmp/mydata -/tmp/mydata -├── nodes -│   └── 0 -│   └── --------------------------------------------------- diff --git a/docs/reference/migration/migrate_6_0/docs.asciidoc b/docs/reference/migration/migrate_6_0/docs.asciidoc deleted file mode 100644 index a7e34090e9e41..0000000000000 --- a/docs/reference/migration/migrate_6_0/docs.asciidoc +++ /dev/null @@ -1,23 +0,0 @@ -[[breaking_60_docs_changes]] -=== Document API changes - -==== version type `force` removed - -Document modification operations may no longer specify the `version_type` of -`force` to override any previous version checks. - -==== <> no longer support versions - -Adding a `version` to an upsert request is no longer supported. - -==== `created` field removed in the Index API - -The `created` field has been removed in the Index API as in the `index` and -`create` bulk operations. `operation` field should be used instead. - - -==== `found` field removed in the Delete API - -The `found` field has been removed in the Delete API as in the `delete` bulk -operations. `operation` field should be used instead. - diff --git a/docs/reference/migration/migrate_6_0/indices.asciidoc b/docs/reference/migration/migrate_6_0/indices.asciidoc deleted file mode 100644 index b52a403ca2a60..0000000000000 --- a/docs/reference/migration/migrate_6_0/indices.asciidoc +++ /dev/null @@ -1,75 +0,0 @@ -[[breaking_60_indices_changes]] -=== Indices changes - -==== Index templates use `index_patterns` instead of `template` - -Previously templates expressed the indices that they should match using a glob -style pattern in the `template` field. They should now use the `index_patterns` -field instead. As the name implies you can define multiple glob style patterns -in an array but for convenience defining a single pattern as a bare string is -also supported. So both of these examples are valid: - -[source,js] --------------------------------------------------- -PUT _template/template_1 -{ - "index_patterns": ["te*", "bar*"], - "settings": { - "number_of_shards": 1 - } -} -PUT _template/template_2 -{ - "index_patterns": "te*", - "settings": { - "number_of_shards": 1 - } -} --------------------------------------------------- -// CONSOLE - - -==== Shadow Replicas have been removed - -Shadow replicas don't see enough usage, and have been removed. This includes the -following settings: - -- `index.shared_filesystem` -- `index.shadow_replicas` -- `node.add_lock_id_to_custom_path` - -==== Open/Close index API allows wildcard expressions that match no indices by default - -The default value of the `allow_no_indices` option for the Open/Close index API -has been changed from `false` to `true` so it is aligned with the behaviour of the -Delete index API. 
As a result, Open/Close index API don't return an error by -default when a provided wildcard expression doesn't match any closed/open index. - -==== Delete a document - -Delete a document from non-existing index has been modified to not create the index. -However if an external versioning is used the index will be created and the document -will be marked for deletion. - -==== Indices aliases api resolves indices expressions only against indices - -The index parameter in the update-aliases, put-alias, and delete-alias APIs no -longer accepts alias names. Instead, it accepts only index names (or wildcards -which will expand to matching indices). - -==== Delete index api resolves indices expressions only against indices - -The index parameter in the delete index API no longer accepts alias names. -Instead, it accepts only index names (or wildcards which will expand to -matching indices). - -==== Support for `+` has been removed in index expressions - -Omitting the `+` has the same effect as specifying it, hence support for `+` -has been removed in index expressions. - -==== Translog retention - -Translog files are now kept for up to 12 hours (by default), with a maximum size of `512mb` (default), and -are no longer deleted on `flush`. This is to increase the chance of doing an operation based recovery when -bringing up replicas up to speed. diff --git a/docs/reference/migration/migrate_6_0/ingest.asciidoc b/docs/reference/migration/migrate_6_0/ingest.asciidoc deleted file mode 100644 index eba1a19ec8788..0000000000000 --- a/docs/reference/migration/migrate_6_0/ingest.asciidoc +++ /dev/null @@ -1,14 +0,0 @@ -[[breaking_60_ingest_changes]] -=== Ingest changes - -==== Timestamp meta-data field type has changed - -The type of the "timestamp" meta-data field has changed from `java.lang.String` to `java.util.Date`. - -==== The format of the string-formatted ingest.timestamp field has changed - -Previously, since Elasticsearch 5.4.0, you needed to use `ingest.new_date_format` to have the -`ingest.timestamp` metadata field be formatted in such a way that ES can coerce it to a field -of type `date` without further transformation. This is not necessary anymore and this setting was -removed. You can now simply set a field to `{{ingest.timestamp}}` in a pipeline, and have that -field be of type `date` without any mapping errors. diff --git a/docs/reference/migration/migrate_6_0/java.asciidoc b/docs/reference/migration/migrate_6_0/java.asciidoc deleted file mode 100644 index b706b4abb14ad..0000000000000 --- a/docs/reference/migration/migrate_6_0/java.asciidoc +++ /dev/null @@ -1,64 +0,0 @@ -[[breaking_60_java_changes]] -=== Java API changes - -==== `setSource` methods require XContentType - -Previously the `setSource` methods and other methods that accepted byte/string representations of -an object source did not require the XContentType to be specified. The auto-detection of the content -type is no longer used, so these methods now require the XContentType as an additional argument when -providing the source in bytes or as a string. - -==== `DeleteByQueryRequest` requires an explicitly set query - -In previous versions of Elasticsearch, delete by query requests without an explicit query -were accepted, match_all was used as the default query and all documents were deleted -as a result. From version 6.0.0, a `DeleteByQueryRequest` requires an explicit query be set. 
- -==== `InternalStats` and `Stats` getCountAsString() method removed - -The `count` value in the stats aggregation represents a doc count that shouldn't require a formatted -version. This method was deprecated in 5.4 in favour of just using -`String.valueOf(getCount())` if needed - -==== `ActionRequestBuilder#execute` returns `ActionFuture` rather than `ListenableActionFuture` - -When sending a request through the request builders e.g. client.prepareSearch().execute(), it used to -be possible to call `addListener` against the returned `ListenableActionFuture`. With this change an -`ActionFuture` is returned instead, which is consistent with what the `Client` methods return, hence -it is not possible to associate the future with listeners. The `execute` method that accept a listener -as an argument can be used instead. - -==== `Terms.Order` and `Histogram.Order` classes replace by `BucketOrder` - -The `terms`, `histogram`, and `date_histogram` aggregation code has been refactored to use common -code for ordering buckets. The `BucketOrder` class must be used instead of `Terms.Order` and -`Histogram.Order`. The `static` methods in the `BucketOrder` class must be called instead of directly -accessing internal order instances, e.g. `BucketOrder.count(boolean)` and `BucketOrder.aggregation(String, boolean)`. -Use `BucketOrder.key(boolean)` to order the `terms` aggregation buckets by `_term`. - -=== `getTookInMillis()` removed in `BulkResponse`, `SearchResponse` and `TermVectorsResponse` - -In `BulkResponse`, `SearchResponse` and `TermVectorsResponse` `getTookInMiilis()` method -has been removed in favor of `getTook` method. `getTookInMiilis()` is easily replaced by -`getTook().getMillis()`. - -=== `GetField` and `SearchHitField` replaced by `DocumentField` - -As `GetField` and `SearchHitField` have the same members, they have been unified into -`DocumentField`. - -=== Some Aggregation classes have moved packages - -The classes for the variants of the range aggregation (geo_distance, date and ip) were moved into the `org.elasticsearch.search.aggregations.bucket.range` -package. - -The `org.elasticsearch.search.aggregations.bucket.terms.support` package was removed and the classes were moved to -`org.elasticsearch.search.aggregations.bucket.terms`. - -The filter aggregation classes were moved to `org.elasticsearch.search.aggregations.bucket.filter` - -=== Constructor for `PercentileRanksAggregationBuilder` has changed - -It is now required to include the desired ranks as a non-null, non-empty array of doubles to the builder's constructor, -rather than configuring them via a setter on the builder. The setter method `values()` has correspondingly -been removed. \ No newline at end of file diff --git a/docs/reference/migration/migrate_6_0/mappings.asciidoc b/docs/reference/migration/migrate_6_0/mappings.asciidoc deleted file mode 100644 index 43c61664d8710..0000000000000 --- a/docs/reference/migration/migrate_6_0/mappings.asciidoc +++ /dev/null @@ -1,32 +0,0 @@ -[[breaking_60_mappings_changes]] -=== Mapping changes - -==== Coercion of boolean fields - -Previously, Elasticsearch recognized the strings `true`, `false`, `on`, `off`, `yes`, `no`, `0`, `1` as booleans. Elasticsearch 6.0 -recognizes only the strings `true` and `false` as booleans and will throw an error otherwise. For backwards compatibility purposes, during the 6.x -series the previous coercion rules will continue to work on pre-6.0 indices. This means that you do not need to change affected existing -mappings immediately. 
However, it is not possible to create new indices from existing index templates that violate the strict `boolean` -coercion rules. - -==== The `_all` meta field is now disabled by default - -On new mappings, the `_all` meta field that contains a copy of the text from -each field is now disabled by default. The `query_string` and -`simple_query_string` queries that previously used `_all` to search will now -check if `_all` is enabled/disabled and switch to executing the query across all -fields if `_all` is disabled. `_all` can no longer be configured for indices -created with Elasticsearch version 6.0 or later. - -==== The `include_in_all` mapping parameter is now disallowed - -Since the ++_all++ field is now disabled by default and cannot be configured for -indices created with Elasticsearch 6.0 or later, the `include_in_all` setting is -now disallowed for these indices' mappings. - -==== Unrecognized `match_mapping_type` options not silently ignored - -Previously Elasticsearch would silently ignore any dynamic templates that -included a `match_mapping_type` type that was unrecognized. An exception is now -thrown on an unrecognized type. - diff --git a/docs/reference/migration/migrate_6_0/packaging.asciidoc b/docs/reference/migration/migrate_6_0/packaging.asciidoc deleted file mode 100644 index 1eec3d72000e9..0000000000000 --- a/docs/reference/migration/migrate_6_0/packaging.asciidoc +++ /dev/null @@ -1,78 +0,0 @@ -[[breaking_60_packaging_changes]] -=== Packaging changes - -==== Configuring custom user and group for package is no longer allowed - -Previously someone could configure the `$ES_USER` and `$ES_GROUP` variables to -change which user and group Elasticsearch was run as. This is no longer -possible, the DEB and RPM packages now exclusively use the user and group -`elasticsearch`. If a custom user or group is needed then a provisioning system -should use the tarball distribution instead of the provided RPM and DEB -packages. - -==== `path.conf` is no longer a configurable setting - -Previous versions of Elasticsearch enabled setting `path.conf` as a -setting. This was rather convoluted as it meant that you could start -Elasticsearch with a config file that specified via `path.conf` that -Elasticsearch should use another config file. Instead, to configure a custom -config directory, use the <>. - -==== `CONF_DIR` is no longer supported - -Previous versions of Elasticsearch enabled using the `CONF_DIR` environment -variable to specify a custom configuration directory for some configuration -files and some scripts (it was used inconsistently). Starting in Elasticsearch -6.0.0, the usage of this environment variable has been superceded by -`ES_PATH_CONF`, and this new environment variable is consistently used for all -configuration files and scripts. - -==== Default path settings are removed - -Previous versions of Elasticsearch enabled setting `default.path.data` and -`default.path.logs` to set the default data path and default logs path if they -were not otherwise set in the configuration file. These settings have been -removed and now data paths and log paths can be configured via settings -only. Related, this means that the environment variables `DATA_DIR` and -`LOG_DIR` no longer have any effect as these were used to set -`default.path.data` and `default.path.logs` in the packaging scripts. 
- -Additionally, this means that if you were using the package distributions (i.e., -you have installed Elasticsearch from the RPM or the DEB distributions), you had -not previously explicitly configured `path.data` or `path.logs`, and you carry -over your `elasticsearch.yml` file from 5.x, then you will need to add settings -for `path.data` and `path.logs`. To use the defaults that you were implicitly -using previously, you should add these lines to your `elasticsearch.yml`: - -[source,yaml] --------------------------------------------------- -path.data: /var/lib/elasticsearch -path.logs: /var/log/elasticsearch --------------------------------------------------- - -(If you already had explicit values for either of these settings, you should of -course preserve those). If you do not do this, Elasticsearch will refuse to -start. - -==== 32-bit is no longer maintained - -We previously attempted to ensure that Elasticsearch could be started on 32-bit -JVM (although a bootstrap check prevented using a 32-bit JVM in production). We -are no longer maintaining this attempt. - -==== `ES_JVM_OPTIONS`is no longer supported - -The environment variable `ES_JVM_OPTIONS` that enabled a custom location for the -`jvm.options` file has been removed in favor of using the environment variable -`ES_PATH_CONF`. This environment variable is already used in the packaging to -support relocating the configuration files so this change merely aligns the -other configuration files with the location of the `jvm.options` file. - -==== `ES_INCLUDE` is no longer supported - -The environment variable `ES_INCLUDE` could previously be used to establish the -environment used to start Elasticsearch (and various supporting scripts). This -legacy feature could be useful when there were several environment variables -useful for configuring JVM options; this functionality had previously been -replaced by <>. Therefore, `ES_INCLUDE` has been removed. diff --git a/docs/reference/migration/migrate_6_0/percolator.asciidoc b/docs/reference/migration/migrate_6_0/percolator.asciidoc deleted file mode 100644 index d31a1857ce926..0000000000000 --- a/docs/reference/migration/migrate_6_0/percolator.asciidoc +++ /dev/null @@ -1,6 +0,0 @@ -[[breaking_60_percolator_changes]] -=== Percolator changes - -==== Deprecated percolator and mpercolate apis have been removed - -Instead the `percolate` query should be used via either the search or msearch apis. \ No newline at end of file diff --git a/docs/reference/migration/migrate_6_0/plugins.asciidoc b/docs/reference/migration/migrate_6_0/plugins.asciidoc deleted file mode 100644 index efb7328030e8e..0000000000000 --- a/docs/reference/migration/migrate_6_0/plugins.asciidoc +++ /dev/null @@ -1,92 +0,0 @@ -[[breaking_60_plugins_changes]] -=== Plugins changes - -==== Mapper attachments plugin - -* The mapper attachments plugin has been deprecated in elasticsearch 5.0 and is now removed. -You can use {plugins}/ingest-attachment.html[ingest attachment plugin] instead. - -==== S3 Repository plugin - -* The bucket an s3 repository is configured with will no longer be created automatically. -It must exist before the s3 repository is created. - -* Support for specifying s3 credentials through environment variables and -system properties has been removed. Use the `elasticsearch-keystore` tool -to securely store the credentials. - -* Specifying region has been removed. 
This includes the settings `cloud.aws.region`, -`cloud.aws.s3.region`, `repositories.s3.region`, and specifying -region inside the repository settings. Instead, specify the full endpoint if a custom -s3 location is needed, or rely on the default behavior which automatically locates -the region of the configured bucket. - -* Specifying s3 signer type has been removed, including `cloud.aws.signer` and `cloud.aws.s3.signer`. - -* Global repositories settings have been removed. This includes `repositories.s3.bucket`, -`repositories.s3.server_side_encryption`, `repositories.s3.buffer_size`, -`repositories.s3.max_retries`, `repositories.s3.use_throttle_retries`, -`repositories.s3.chunk_size`, `repositories.s3.compress`, `repositories.s3.storage_class`, -`repositories.s3.canned_acl`, `repositories.s3.base_path`, and -`repositories.s3.path_style_access`. Instead, these settings should be set directly in the - settings per repository. - See {plugins}/repository-s3-repository.html[S3 Repository settings]. - -* Shared client settings have been removed. This includes `cloud.aws.access_key`, - `cloud.aws.secret_key`, `cloud.aws.protocol`, `cloud.aws.proxy.host`, - `cloud.aws.proxy.port`, `cloud.aws.proxy.username`, `cloud.aws.proxy.password`, - `cloud.aws.signer`, `cloud.aws.read_timeout`, `cloud.aws.s3.access_key`, - `cloud.aws.s3.secret_key`, `cloud.aws.s3.protocol`, `cloud.aws.s3.proxy.host`, - `cloud.aws.s3.proxy.port`, `cloud.aws.s3.proxy.username`, `cloud.aws.s3.proxy.password`, - `cloud.aws.s3.signer`, `cloud.aws.s3.read_timeout`, `repositories.s3.access_key`, - `repositories.s3.secret_key`, `repositories.s3.endpoint` and `repositories.s3.protocol`. -Instead, use the new named client settings under `s3.client.CLIENT_NAME.*`. - -==== Azure Repository plugin - -* The container an azure repository is configured with will no longer be created automatically. -It must exist before the azure repository is created. - -* Global repositories settings you are able to set in elasticsearch config file under `repositories.azure` -name space have been removed. This includes `repositories.azure.account`, `repositories.azure.container`, -`repositories.azure.base_path`, `repositories.azure.location_mode`, `repositories.azure.chunk_size` and -`repositories.azure.compress`. -You must set those settings per repository instead. Respectively `account`, `container`, `base_path`, -`location_mode`, `chunk_size` and `compress`. -See {plugins}/repository-azure-usage.html#repository-azure-repository-settings[Azure Repository settings]. - -==== GCS Repository plugin - -* The `service_account` setting has been removed. A service account json credential file must now be -specified in the <>. - -==== EC2 Discovery plugin - -* Specifying ec2 signer type has been removed, including `cloud.aws.signer` and `cloud.aws.ec2.signer`. - -* The region setting has been removed. This includes the settings `cloud.aws.region` -and `cloud.aws.ec2.region`. Instead, specify the full endpoint. - -* All `cloud.aws.*` and `cloud.aws.ec2.*` settings have been removed. Use `discovery.ec2.*` settings instead. - -==== Ignoring hidden folders - -Previous versions of Elasticsearch would skip hidden files and directories when -scanning the plugins folder. This leniency has been removed. - -==== ICU Analysis plugin - -The icu4j library has been upgraded to 59.1, -Indices created in the previous major version will need to be reindexed -in order to return correct (and correctly ordered) results, -and to take advantage of new characters. 
- -==== Plugins should not construct `Environment` instances from `Settings` - -Previously, plugins could construct an `Environment` instance from `Settings` to -discover the path to plugin-specific config files. This will no longer work in -all situations as the `Settings` object does not carry the necessary information -for the config path to be set correctly. Instead, plugins that need to know the -config path should have a single constructor that accepts a pair of `Settings` -and `Path` instances, and construct an `Environment` using the corresponding -constructor on `Environment`. diff --git a/docs/reference/migration/migrate_6_0/reindex.asciidoc b/docs/reference/migration/migrate_6_0/reindex.asciidoc deleted file mode 100644 index d74a5102d91b7..0000000000000 --- a/docs/reference/migration/migrate_6_0/reindex.asciidoc +++ /dev/null @@ -1,6 +0,0 @@ -[[breaking_60_reindex_changes]] -=== Reindex changes - -==== `size` parameter - -The `size` parameter can no longer be explicitly set to `-1`. If all documents are required then the `size` parameter should not be set. \ No newline at end of file diff --git a/docs/reference/migration/migrate_6_0/rest.asciidoc b/docs/reference/migration/migrate_6_0/rest.asciidoc deleted file mode 100644 index c9bdc8785f1c4..0000000000000 --- a/docs/reference/migration/migrate_6_0/rest.asciidoc +++ /dev/null @@ -1,121 +0,0 @@ -[[breaking_60_rest_changes]] -=== REST changes - -==== Unquoted JSON - -In previous versions of Elasticsearch, JSON documents were allowed to contain unquoted field names. -This feature was removed in the 5.x series, but a backwards-compatibility layer was added via the -system property `elasticsearch.json.allow_unquoted_field_names`. This backwards-compatibility layer -has been removed in Elasticsearch 6.0.0. - -==== Duplicate Keys in JSON, CBOR, Yaml and Smile - -In previous versions of Elasticsearch, documents were allowed to contain duplicate keys. Elasticsearch 6.0.0 - enforces that all keys are unique. This applies to all content types: JSON, CBOR, Yaml and Smile. - -==== Content-Type Auto-detection - -In previous versions of Elasticsearch, having a proper Content-Type for the data in a request was not enforced. -Elasticsearch 6.0.0 enforces that all requests with a body must have a supported Content-Type and this type will -be used when parsing the data. - -When using the `source` query string parameter, the `source_content_type` parameter must also be specified with -the media type of the source. - -==== Boolean API parameters - -All REST APIs parameters (both request parameters and JSON body) support providing boolean "false" as the -value `false` and boolean "true" as the value `true`. All other values will raise an error. - -==== Analyze API changes - -The deprecated request parameters and plain text in request body has been removed. Define parameters in request body. - -==== Support custom normalizer in Analyze API - -Analyze API can analyze normalizer and custom normalizer. -In previous versions of Elasticsearch, Analyze API is requiring a `tokenizer` or `analyzer` parameter. -In Elasticsearch 6.0.0, Analyze API can analyze a text as a keyword field with custom normalizer -or if `char_filter`/`filter` is set and `tokenizer`/`analyzer` is not set. - -==== Indices exists API - -The `ignore_unavailable` and `allow_no_indices` options are no longer accepted -as they could cause undesired results when their values differed from their -defaults. 
- -==== `timestamp` and `ttl` in index requests - -`timestamp` and `ttl` are not accepted anymore as parameters of index/update -requests. - -==== Refresh requests with one or more shard failures return HTTP 500 response instead of 200 - -Refresh requests that are broadcast to multiple shards that can have one or more -shards fail during the request now return a 500 response instead of a 200 -response in the event there is at least one failure. - -==== Delete by Query API requires an explicit query - -In previous versions of Elasticsearch, delete by query requests without an explicit query -were accepted, match_all was used as the default query and all documents were deleted -as a result. From version 6.0.0, delete by query requests require an explicit query. - -==== DELETE document calls now implicitly create the type - -Running `DELETE index/type/id` now implicitly creates `type` with a default -mapping if it did not exist yet. - -==== Indices information APIs - -Previously it was possible to execute `GET /_aliases,_mappings` or `GET -/myindex/_settings,_alias` by separating mulitple types of requests with commas -in order to retrieve multiple types of information about one or more indices. -This comma-separation for retrieving multiple pieces of information has been -removed.. `GET /_all` can be used to retrieve all aliases, settings, and -mappings for all indices. In order to retrieve only the mappings for an index, -`GET /myindex/_mappings` (or `_aliases`, or `_settings`). - -==== Requests to existing endpoints with incorrect HTTP verb now return 405 responses - -Issuing a request to an endpoint that exists, but with an incorrect HTTP verb -(such as a `POST` request to `/myindex/_settings`) now returns an HTTP 405 -response instead of a 404. An `Allow` header is added to the 405 responses -containing the allowed verbs. For example: - -[source,text] -------------------------------------------- -$ curl -v -XPOST 'localhost:9200/my_index/_settings' -* Trying 127.0.0.1... -* TCP_NODELAY set -* Connected to localhost (127.0.0.1) port 9200 (#0) -> POST /my_index/_settings HTTP/1.1 -> Host: localhost:9200 -> User-Agent: curl/7.51.0 -> Accept: */* -> -< HTTP/1.1 405 Method Not Allowed -< Allow: PUT,GET -< content-type: application/json; charset=UTF-8 -< content-length: 134 -< -{ - "error" : "Incorrect HTTP method for uri [/my_index/_settings] and method [POST], allowed: [PUT, GET]", - "status" : 405 -} -* Curl_http_done: called premature == 0 -* Connection #0 to host localhost left intact --------------------------------------------- - -==== Disallow using `_cache` and `_cache_key` - -The `_cache` and `_cache_key` options in queries have been deprecated since version 2.0.0 and -have been ignored since then, issuing a deprecation warning. These options have now been completely -removed, so using them now will throw an error. - -==== IndexClosedException to return 400 status code - -An `IndexClosedException` is returned whenever an api that doesn't support -closed indices (e.g. search) is called passing closed indices as parameters -and `ignore_unavailable` is set to `false`. 
The response status code returned -in such case changed from `403` to `400` diff --git a/docs/reference/migration/migrate_6_0/scripting.asciidoc b/docs/reference/migration/migrate_6_0/scripting.asciidoc deleted file mode 100644 index 5a474927c1a89..0000000000000 --- a/docs/reference/migration/migrate_6_0/scripting.asciidoc +++ /dev/null @@ -1,81 +0,0 @@ -[[breaking_60_scripting_changes]] -=== Scripting changes - -==== Groovy, JavaScript, and Python languages removed - -The Groovy, JavaScript, and Python scripting languages were deprecated in -elasticsearch 5.0 and have now been removed. Use painless instead. - -==== Native scripts removed - -Native scripts have been removed. Instead, -<>. - -==== Date fields now return dates - -`doc.some_date_field.value` now returns ++ReadableDateTime++s instead of -milliseconds since epoch as a `long`. The same is true for -`doc.some_date_field[some_number]`. Use `doc.some_date_field.value.millis` to -fetch the milliseconds since epoch if you need it. - -==== Removed access to index internal via the `_index` variable - -The `_index` variable has been removed. If you used it for advanced scoring, consider writing a `Similarity` plugin. - -==== Script Settings - -All of the existing scripting security settings have been removed. Instead -they are replaced with `script.allowed_types` and `script.allowed_contexts`. - -==== `lang` can no longer be specified when using a stored script as part of a request - -The `lang` variable can no longer be specified as part of a request that uses a stored -script otherwise an error will occur. Note that a request using a stored script is -different from a request that puts a stored script. The language of the script has -already been stored as part of the cluster state and an `id` is sufficient to access -all of the information necessary to execute a stored script. - -==== 'lang` can no longer be used when putting, getting, or deleting a stored script - -Stored scripts can no longer have the `lang` parameter specified as part of the url -when performing PUT, GET, and DELETE actions on the `_scripts/` path. All stored -scripts must have a unique `id` as the namespace is only `id` now and no longer `lang` -and `id`. - -==== Stored search template apis removed - -The PUT, GET and DELETE `_search/template` apis have been removed. Store search templates with the stored scripts apis instead. - -For example, previously one might have stored a search template with the following: - -[source,js] --------------------------------------------------- -PUT /_search/template/my_template -{ - "query": { - "match": { - "f1": "{{f1}}" - } - } -} --------------------------------------------------- - -And instead one would now use the following: - -[source,js] --------------------------------------------------- -PUT /_scripts/my_template -{ - "script": { - "lang": "mustache", - "source": { - "query": { - "match": { - "f1": "{{f1}}" - } - } - } - } -} --------------------------------------------------- - diff --git a/docs/reference/migration/migrate_6_0/search.asciidoc b/docs/reference/migration/migrate_6_0/search.asciidoc deleted file mode 100644 index 020ab625b76e5..0000000000000 --- a/docs/reference/migration/migrate_6_0/search.asciidoc +++ /dev/null @@ -1,143 +0,0 @@ -[[breaking_60_search_changes]] -=== Search and Query DSL changes - -==== Changes to queries - -* The `collect_payloads` parameter of the `span_near` query has been removed. Payloads will be - loaded when needed. 
- -* Queries on boolean fields now strictly parse boolean-like values. This means - only the strings `"true"` and `"false"` will be parsed into their boolean - counterparts. Other strings will cause an error to be thrown. - -* The `in` query (a synonym for the `terms` query) has been removed - -* The `geo_bbox` query (a synonym for the `geo_bounding_box` query) has been removed - -* The `mlt` query (a synonym for the `more_like_this` query) has been removed - -* The `fuzzy_match` and `match_fuzzy` query (synonyma for the `match` query) have been removed - -* The `terms` query now always returns scores equal to `1` and is not subject to - `indices.query.bool.max_clause_count` anymore. - -* The deprecated `indices` query has been removed. - -* Support for empty query objects (`{ }`) has been removed from the query DSL. - An error is thrown whenever an empty query object is provided. - -* The deprecated `minimum_number_should_match` parameter in the `bool` query has - been removed, use `minimum_should_match` instead. - -* The `query_string` query now correctly parses the maximum number of - states allowed when - "https://en.wikipedia.org/wiki/Powerset_construction#Complexity[determinizing]" - a regex as `max_determinized_states` instead of the typo - `max_determined_states`. - -* The `query_string` query no longer accepts `enable_position_increment`, use - `enable_position_increments` instead. - -* For `geo_distance` queries, sorting, and aggregations the `sloppy_arc` option - has been removed from the `distance_type` parameter. - -* The `geo_distance_range` query, which was deprecated in 5.0, has been removed. - -* The `optimize_bbox` parameter has been removed from `geo_distance` queries. - -* The `ignore_malformed` and `coerce` parameters have been removed from - `geo_bounding_box`, `geo_polygon`, and `geo_distance` queries. - -* The `disable_coord` parameter of the `bool` and `common_terms` queries has - been removed. If provided, it will be ignored and issue a deprecation warning. - -* The `template` query has been removed. This query was deprecated since 5.0 - -* The `percolate` query's `document_type` has been deprecated. From 6.0 and later - it is no longer required to specify the `document_type` parameter. - -* The `split_on_whitespace` parameter for the `query_string` query has been removed. - If provided, it will be ignored and issue a deprecation warning. - The `query_string` query now splits on operator only. - -* The `use_dismax` parameter for the `query_string` query has been removed. - If provided, it will be ignored and issue a deprecation warning. - The `tie_breaker` parameter must be used instead. - -* The `auto_generate_phrase_queries` parameter for the `query_string` query has been removed, - use an explicit quoted query instead. - If provided, it will be ignored and issue a deprecation warning. - -* The `all_fields` parameter for the `query_string` has been removed. - Set `default_field` to *` instead. - If provided, `default_field` will be automatically set to `*` - -* The `index` parameter in the terms filter, used to look up terms in a dedicated index is - now mandatory. Previously, the index defaulted to the index the query was executed on. Now this index - must be explicitly set in the request. - -==== Search shards API - -The search shards API no longer accepts the `type` url parameter, which didn't -have any effect in previous versions. 
- -==== Changes to the Profile API - -The `"time"` field showing human readable timing output has been replaced by the `"time_in_nanos"` -field which displays the elapsed time in nanoseconds. The `"time"` field can be turned on by adding -`"?human=true"` to the request url. It will display a rounded, human readable time value. - -==== Scoring changes - -===== Query normalization - -Query normalization has been removed. This means that the TF-IDF similarity no -longer tries to make scores comparable across queries and that boosts are now -integrated into scores as simple multiplicative factors. - -Other similarities are not affected as they did not normalize scores and -already integrated boosts into scores as multiplicative factors. - -See https://issues.apache.org/jira/browse/LUCENE-7347[`LUCENE-7347`] for more -information. - -===== Coordination factors - -Coordination factors have been removed from the scoring formula. This means that -boolean queries no longer score based on the number of matching clauses. -Instead, they always return the sum of the scores of the matching clauses. - -As a consequence, use of the TF-IDF similarity is now discouraged as this was -an important component of the quality of the scores that this similarity -produces. BM25 is recommended instead. - -See https://issues.apache.org/jira/browse/LUCENE-7347[`LUCENE-7347`] for more -information. - -==== Fielddata on `_uid` - -Fielddata on `_uid` is deprecated. It is possible to switch to `_id` instead -but the only reason why it has not been deprecated too is because it is used -for the `random_score` function. If you really need access to the id of -documents for sorting, aggregations or search scripts, the recommendation is -to duplicate the id as a field in the document. - -==== Highlighters - -The `unified` highlighter is the new default choice for highlighter. -The offset strategy for each field is picked internally by this highlighter depending on the -type of the field (`index_options`). -It is still possible to force the highlighter to `fvh` or `plain` types. - -The `postings` highlighter has been removed from Lucene and Elasticsearch. -The `unified` highlighter outputs the same highlighting when `index_options` is set - to `offsets`. - -==== `fielddata_fields` - -The deprecated `fielddata_fields` have now been removed. `docvalue_fields` should be used instead. - -==== Inner hits - -The source inside a hit of inner hits keeps its full path with respect to the entire source. -In prior versions the source field names were relative to the inner hit. diff --git a/docs/reference/migration/migrate_6_0/settings.asciidoc b/docs/reference/migration/migrate_6_0/settings.asciidoc deleted file mode 100644 index 289e1903fb81e..0000000000000 --- a/docs/reference/migration/migrate_6_0/settings.asciidoc +++ /dev/null @@ -1,88 +0,0 @@ -[[breaking_60_settings_changes]] -=== Settings changes - -==== Remove support for elasticsearch.json and elasticsearch.yaml configuration file - -The configuration file found in the Elasticsearch config directory could previously have -a `.yml`, `.yaml` or `.json` extension. Only `elasticsearch.yml` is now supported. - -==== Duplicate keys in configuration file - -In previous versions of Elasticsearch, the configuration file was allowed to -contain duplicate keys. 
For example: - -[source,yaml] --------------------------------------------------- -node: - name: my-node - -node - attr: - rack: my-rack --------------------------------------------------- - -In Elasticsearch 6.0.0, this is no longer permitted. Instead, this must be -specified in a single key as: - -[source,yaml] --------------------------------------------------- -node: - name: my-node - attr: - rack: my-rack --------------------------------------------------- - -==== Coercion of boolean settings - -Previously, Elasticsearch recognized the strings `true`, `false`, `on`, `off`, `yes`, `no`, `0`, `1` as booleans. Elasticsearch 6.0 -recognizes only `true` and `false` as boolean and will throw an error otherwise. For backwards compatibility purposes, during the 6.x series -index settings on pre-6.0 indices will continue to work. Note that this does not apply to node-level settings that are stored -in `elasticsearch.yml`. - -==== Snapshot settings - -The internal setting `cluster.routing.allocation.snapshot.relocation_enabled` that allowed shards with running snapshots to be reallocated to -different nodes has been removed. Enabling this setting could cause allocation issues if a shard got allocated off a node and then -reallocated back to this node while a snapshot was running. - -==== Store throttling settings - -Store throttling has been removed. As a consequence, the -`indices.store.throttle.type` and `indices.store.throttle.max_bytes_per_sec` -cluster settings and the `index.store.throttle.type` and -`index.store.throttle.max_bytes_per_sec` index settings are not -recognized anymore. - -==== Store settings - -The `default` `index.store.type` has been removed. If you were using it, we -advise that you simply remove it from your index settings and Elasticsearch -will use the best `store` implementation for your operating system. - -==== Network settings - -The blocking TCP client, blocking TCP server, and blocking HTTP server have been removed. -As a consequence, the `network.tcp.blocking_server`, `network.tcp.blocking_client`, -`network.tcp.blocking`,`transport.tcp.blocking_client`, `transport.tcp.blocking_server`, -and `http.tcp.blocking_server` settings are not recognized anymore. - -The previously unused settings `transport.netty.max_cumulation_buffer_capacity`, -`transport.netty.max_composite_buffer_components` and -`http.netty.max_cumulation_buffer_capacity` have been removed. - -==== Similarity settings - -The `base` similarity is now ignored as coords and query normalization have -been removed. If provided, this setting will be ignored and issue a -deprecation warning. - -==== Script Settings - -All of the existing scripting security settings have been removed. Instead -they are replaced with `script.allowed_types` and `script.allowed_contexts`. - -==== Discovery Settings - -The `discovery.type` settings no longer supports the values `gce`, `aws` and `ec2`. -Integration with these platforms should be done by setting the `discovery.zen.hosts_provider` setting to -one of those values. diff --git a/docs/reference/migration/migrate_6_0/stats.asciidoc b/docs/reference/migration/migrate_6_0/stats.asciidoc deleted file mode 100644 index ed70d1503c4c3..0000000000000 --- a/docs/reference/migration/migrate_6_0/stats.asciidoc +++ /dev/null @@ -1,17 +0,0 @@ -[[breaking_60_stats_changes]] -=== Stats and info changes - -==== Removal of `throttle_time` in the `store` stats - -Given that store throttling has been removed, the `store` stats do not report -`throttle_time` anymore. 
- -==== FS stats no longer reports if the disk spins - -Elasticsearch has defaulted to assuming that it is running on SSDs since -the 2.x series of Elasticsearch. As such, Elasticsearch no longer needs to -collect information from the operating system as to whether or not the -underlying disks of each data path spin or not. While this functionality was no -longer needed starting in the 2.x series of Elasticsearch, it was maintained in -the filesystem section of the nodes stats APIs. This information has now been -removed. diff --git a/docs/reference/migration/migrate_7_0.asciidoc b/docs/reference/migration/migrate_7_0.asciidoc new file mode 100644 index 0000000000000..043d62465be39 --- /dev/null +++ b/docs/reference/migration/migrate_7_0.asciidoc @@ -0,0 +1,39 @@ +[[breaking-changes-7.0]] +== Breaking changes in 7.0 + +This section discusses the changes that you need to be aware of when migrating +your application to Elasticsearch 7.0. + +[float] +=== Indices created before 7.0 + +Elasticsearch 7.0 can read indices created in version 6.0 or above. An +Elasticsearch 7.0 node will not start in the presence of indices created in a +version of Elasticsearch before 6.0. + +[IMPORTANT] +.Reindex indices from Elasticsearch 5.x or before +========================================= + +Indices created in Elasticsearch 5.x or before will need to be reindexed with +Elasticsearch 6.x in order to be readable by Elasticsearch 7.x. The easiest +way to reindex old indices is to use the `reindex` API. + +========================================= + +[float] +=== Also see: + +* <> +* <> +* <> +* <> +* <> +* <> + +include::migrate_7_0/aggregations.asciidoc[] +include::migrate_7_0/cluster.asciidoc[] +include::migrate_7_0/indices.asciidoc[] +include::migrate_7_0/mappings.asciidoc[] +include::migrate_7_0/search.asciidoc[] +include::migrate_7_0/plugins.asciidoc[] diff --git a/docs/reference/migration/migrate_7_0/aggregations.asciidoc b/docs/reference/migration/migrate_7_0/aggregations.asciidoc new file mode 100644 index 0000000000000..9f91497289372 --- /dev/null +++ b/docs/reference/migration/migrate_7_0/aggregations.asciidoc @@ -0,0 +1,6 @@ +[[breaking_70_aggregations_changes]] +=== Aggregations changes + +==== Deprecated `global_ordinals_hash` and `global_ordinals_low_cardinality` execution hints for terms aggregations have been removed + +These `execution_hint` values have been removed and should be replaced by `global_ordinals`. \ No newline at end of file diff --git a/docs/reference/migration/migrate_7_0/cluster.asciidoc b/docs/reference/migration/migrate_7_0/cluster.asciidoc new file mode 100644 index 0000000000000..e9584074d73d2 --- /dev/null +++ b/docs/reference/migration/migrate_7_0/cluster.asciidoc @@ -0,0 +1,16 @@ +[[breaking_70_cluster_changes]] +=== Cluster changes + +==== `:` is no longer allowed in cluster name + +Due to cross-cluster search using `:` to separate a cluster and index name, +cluster names may no longer contain `:`. + +==== New default for `wait_for_active_shards` parameter of the open index command + +The default value for the `wait_for_active_shards` parameter of the open index API +is changed from 0 to 1, which means that the command will now by default wait for all +primary shards of the opened index to be allocated. + +==== Shard preferences `_primary`, `_primary_first`, `_replica`, and `_replica_first` are removed +These shard preferences are removed in favour of the `_prefer_nodes` and `_only_nodes` preferences.
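+
+As an illustrative sketch only (the index and node names below are placeholders, not part of the original change), a search that previously relied on `_primary` or `_replica` can instead target specific nodes with the remaining preferences:
+
+[source,js]
+--------------------------------------------------
+GET /my_index/_search?preference=_only_nodes:node-1,node-2
+{
+  "query": {
+    "match_all": {}
+  }
+}
+--------------------------------------------------
+// NOTCONSOLE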
diff --git a/docs/reference/migration/migrate_7_0/indices.asciidoc b/docs/reference/migration/migrate_7_0/indices.asciidoc new file mode 100644 index 0000000000000..de16498c84a53 --- /dev/null +++ b/docs/reference/migration/migrate_7_0/indices.asciidoc @@ -0,0 +1,39 @@ +[[breaking_70_indices_changes]] +=== Indices changes + +==== `:` is no longer allowed in index name + +Due to cross-cluster search using `:` to separate a cluster and index name, +index names may no longer contain `:`. + +==== `index.unassigned.node_left.delayed_timeout` may no longer be negative + +Negative values were interpreted as zero in earlier versions but are no +longer accepted. + + +==== `_flush` and `_force_merge` will no longer refresh + +In previous versions issuing a `_flush` or `_force_merge` (with `flush=true`) +had the undocumented side-effect of refreshing the index which made new documents +visible to searches and non-realtime GET operations. From now on these operations +don't have this side-effect anymore. To make documents visible an explicit `_refresh` +call is needed unless the index is refreshed by the internal scheduler. + + +==== Limit to the difference between max_gram and min_gram in NGramTokenFilter and NGramTokenizer + +To safeguard against creating too many index terms, the difference between `max_gram` and +`min_gram` in `NGramTokenFilter` and `NGramTokenizer` has been limited to 1. This default +limit can be changed with the index setting `index.max_ngram_diff`. Note that if the limit is +exceeded an error is thrown only for new indices. For existing pre-7.0 indices, a deprecation +warning is logged. + + +==== Limit to the difference between max_shingle_size and min_shingle_size in ShingleTokenFilter + +To safeguard against creating too many tokens, the difference between `max_shingle_size` and +`min_shingle_size` in `ShingleTokenFilter` has been limited to 3. This default +limit can be changed with the index setting `index.max_shingle_diff`. Note that if the limit is +exceeded an error is thrown only for new indices. For existing pre-7.0 indices, a deprecation +warning is logged. \ No newline at end of file diff --git a/docs/reference/migration/migrate_7_0/mappings.asciidoc b/docs/reference/migration/migrate_7_0/mappings.asciidoc new file mode 100644 index 0000000000000..4f8893829f9c7 --- /dev/null +++ b/docs/reference/migration/migrate_7_0/mappings.asciidoc @@ -0,0 +1,10 @@ +[[breaking_70_mappings_changes]] +=== Mapping changes + +==== The `_all` meta field is removed + +The `_all` field, which was deprecated in 6.x, has now been removed. + +==== `index_options` for numeric fields has been removed + +The `index_options` field for numeric fields was deprecated in 6.x and has now been removed. \ No newline at end of file diff --git a/docs/reference/migration/migrate_7_0/plugins.asciidoc b/docs/reference/migration/migrate_7_0/plugins.asciidoc new file mode 100644 index 0000000000000..6bc9edec0dabc --- /dev/null +++ b/docs/reference/migration/migrate_7_0/plugins.asciidoc @@ -0,0 +1,14 @@ +[[breaking_70_plugins_changes]] +=== Plugins changes + +==== Azure Repository plugin + +* The legacy azure settings which started with the `cloud.azure.storage.` prefix have been removed. +This includes `account`, `key`, `default` and `timeout`. +You need to use settings which start with the `azure.client.` prefix instead. + +* Global timeout setting `cloud.azure.storage.timeout` has been removed. +You must set it per azure client instead, for example `azure.client.default.timeout: 10s`, as sketched below.
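+
+For illustration only, a minimal `elasticsearch.yml` sketch of per-client timeouts; the `secondary` client name is hypothetical:
+
+[source,yaml]
+--------------------------------------------------
+azure.client.default.timeout: 10s
+azure.client.secondary.timeout: 30s
+--------------------------------------------------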
+ +See {plugins}/repository-azure-usage.html#repository-azure-repository-settings[Azure Repository settings]. + diff --git a/docs/reference/migration/migrate_7_0/search.asciidoc b/docs/reference/migration/migrate_7_0/search.asciidoc new file mode 100644 index 0000000000000..a2e5d1ccf8563 --- /dev/null +++ b/docs/reference/migration/migrate_7_0/search.asciidoc @@ -0,0 +1,35 @@ +[[breaking_70_search_changes]] +=== Search and Query DSL changes + +==== Changes to queries +* The default value for the `transpositions` parameter of the `fuzzy` query + has been changed to `true`. + +==== Adaptive replica selection enabled by default + +Adaptive replica selection has been enabled by default. If you wish to return to +the older round robin of search requests, you can use the +`cluster.routing.use_adaptive_replica_selection` setting: + +[source,js] +-------------------------------------------------- +PUT /_cluster/settings +{ + "transient": { + "cluster.routing.use_adaptive_replica_selection": false + } +} +-------------------------------------------------- +// CONSOLE + +==== Search API returns `400` for invalid requests + +The Search API returns `400 - Bad request` while it would previously return +`500 - Internal Server Error` in the following cases of invalid request: + +* the result window is too large +* sort is used in combination with rescore +* the rescore window is too large +* the number of slices is too large +* keep alive for scroll is too large +* number of filters in the adjacency matrix aggregation is too large diff --git a/docs/reference/modules.asciidoc b/docs/reference/modules.asciidoc index ec6bd4593f5d0..548f29e57a5cd 100644 --- a/docs/reference/modules.asciidoc +++ b/docs/reference/modules.asciidoc @@ -99,6 +99,7 @@ include::modules/network.asciidoc[] include::modules/node.asciidoc[] +:edit_url: include::modules/plugins.asciidoc[] include::modules/scripting.asciidoc[] @@ -112,4 +113,3 @@ include::modules/transport.asciidoc[] include::modules/tribe.asciidoc[] include::modules/cross-cluster-search.asciidoc[] - diff --git a/docs/reference/modules/cross-cluster-search.asciidoc b/docs/reference/modules/cross-cluster-search.asciidoc index 62b2f375ee6fb..315149941f444 100644 --- a/docs/reference/modules/cross-cluster-search.asciidoc +++ b/docs/reference/modules/cross-cluster-search.asciidoc @@ -1,8 +1,6 @@ [[modules-cross-cluster-search]] == Cross Cluster Search -beta[] - The _cross cluster search_ feature allows any node to act as a federated client across multiple clusters. In contrast to the <> feature, a cross cluster search node won't join the remote cluster, instead it connects to a remote cluster in a light fashion in order to execute @@ -206,3 +204,11 @@ will be prefixed with their remote cluster name: to `false` (defaults to `true`) to prevent certain nodes from connecting to remote clusters. Cross-cluster search requests must be sent to a node that is allowed to act as a cross-cluster client. + +[float] +[[retrieve-remote-clusters-info]] +=== Retrieving remote clusters info + +The <> allows you to retrieve +information about the configured remote clusters, as well as the remote +nodes that the Cross Cluster Search node is connected to.
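+
+For example, a minimal sketch of querying the remote cluster info endpoint directly:
+
+[source,js]
+--------------------------------------------------
+GET /_remote/info
+--------------------------------------------------
+// NOTCONSOLE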
diff --git a/docs/reference/modules/discovery/zen.asciidoc b/docs/reference/modules/discovery/zen.asciidoc index cbe8ad13dd2ed..e68350515eabf 100644 --- a/docs/reference/modules/discovery/zen.asciidoc +++ b/docs/reference/modules/discovery/zen.asciidoc @@ -56,9 +56,15 @@ The unicast discovery uses the <> module to perform As part of the ping process a master of the cluster is either elected or joined to. This is done automatically. The -`discovery.zen.ping_timeout` (which defaults to `3s`) allows for the -tweaking of election time to handle cases of slow or congested networks -(higher values assure less chance of failure). Once a node joins, it +`discovery.zen.ping_timeout` (which defaults to `3s`) determines how long the node +will wait before deciding on starting an election or joining an existing cluster. +Three pings will be sent over this timeout interval. In case where no decision can be +reached after the timeout, the pinging process restarts. +In slow or congested networks, three seconds might not be enough for a node to become +aware of the other nodes in its environment before making an election decision. +Increasing the timeout should be done with care in that case, as it will slow down the +election process. +Once a node decides to join an existing formed cluster, it will send a join request to the master (`discovery.zen.join_timeout`) with a timeout defaulting at 20 times the ping timeout. diff --git a/docs/reference/modules/gateway.asciidoc b/docs/reference/modules/gateway.asciidoc index cad05d1baaa0c..0af0d31fba2c5 100644 --- a/docs/reference/modules/gateway.asciidoc +++ b/docs/reference/modules/gateway.asciidoc @@ -4,9 +4,9 @@ The local gateway module stores the cluster state and shard data across full cluster restarts. - The following _static_ settings, which must be set on every master node, - control how long a freshly elected master should wait before it tries to - recover the cluster state and the cluster's data: +The following _static_ settings, which must be set on every master node, +control how long a freshly elected master should wait before it tries to +recover the cluster state and the cluster's data: `gateway.expected_nodes`:: diff --git a/docs/reference/modules/indices/circuit_breaker.asciidoc b/docs/reference/modules/indices/circuit_breaker.asciidoc index 04ffaf7a0cd34..7792e10c68469 100644 --- a/docs/reference/modules/indices/circuit_breaker.asciidoc +++ b/docs/reference/modules/indices/circuit_breaker.asciidoc @@ -83,7 +83,8 @@ within a period of time. See the "prefer-parameters" section of the <> documentation for more information. -`script.max_compilations_per_minute`:: +`script.max_compilations_rate`:: - Limit for the number of unique dynamic scripts within a minute that are - allowed to be compiled. Defaults to 15. + Limit for the number of unique dynamic scripts within a certain interval + that are allowed to be compiled. Defaults to 75/5m, meaning 75 every 5 + minutes. diff --git a/docs/reference/modules/node.asciidoc b/docs/reference/modules/node.asciidoc index 4b8f8f4a02581..1e2f7a6e3dd97 100644 --- a/docs/reference/modules/node.asciidoc +++ b/docs/reference/modules/node.asciidoc @@ -104,10 +104,12 @@ To create a dedicated master-eligible node, set: node.master: true <1> node.data: false <2> node.ingest: false <3> +search.remote.connect: false <4> ------------------- <1> The `node.master` role is enabled by default. <2> Disable the `node.data` role (enabled by default). <3> Disable the `node.ingest` role (enabled by default). 
+<4> Disable cross-cluster search (enabled by default). ifdef::include-xpack[] NOTE: These settings apply only when {xpack} is not installed. To create a @@ -154,6 +156,13 @@ discovery.zen.minimum_master_nodes: 2 <1> ---------------------------- <1> Defaults to `1`. +To be able to remain available when one of the master-eligible nodes fails, +clusters should have at least three master-eligible nodes, with +`minimum_master_nodes` set accordingly. A <>, +performed without any downtime, also requires at least three master-eligible +nodes to avoid the possibility of data loss if a network split occurs while the +upgrade is in progress. + This setting can also be changed dynamically on a live cluster with the <>: @@ -194,10 +203,12 @@ To create a dedicated data node, set: node.master: false <1> node.data: true <2> node.ingest: false <3> +search.remote.connect: false <4> ------------------- <1> Disable the `node.master` role (enabled by default). <2> The `node.data` role is enabled by default. <3> Disable the `node.ingest` role (enabled by default). +<4> Disable cross-cluster search (enabled by default). ifdef::include-xpack[] NOTE: These settings apply only when {xpack} is not installed. To create a @@ -325,5 +336,6 @@ the <>, the <> and the <>. ifdef::include-xpack[] +:edit_url!: include::{xes-repo-dir}/node.asciidoc[] endif::include-xpack[] diff --git a/docs/reference/modules/plugins.asciidoc b/docs/reference/modules/plugins.asciidoc index 240f984091345..ad708e88024cd 100644 --- a/docs/reference/modules/plugins.asciidoc +++ b/docs/reference/modules/plugins.asciidoc @@ -6,7 +6,7 @@ Plugins are a way to enhance the basic elasticsearch functionality in a custom manner. They range from adding custom mapping types, custom -analyzers (in a more built in fashion), native scripts, custom discovery +analyzers (in a more built in fashion), custom script engines, custom discovery and more. See the {plugins}/index.html[Plugins documentation] for more. diff --git a/docs/reference/modules/scripting/engine.asciidoc b/docs/reference/modules/scripting/engine.asciidoc index 37baa0801c9aa..da3b4529daacc 100644 --- a/docs/reference/modules/scripting/engine.asciidoc +++ b/docs/reference/modules/scripting/engine.asciidoc @@ -21,7 +21,7 @@ include-tagged::{plugins-examples-dir}/script-expert-scoring/src/main/java/org/e -------------------------------------------------- You can execute the script by specifying its `lang` as `expert_scripts`, and the name -of the script as the the script source: +of the script as the script source: [source,js] diff --git a/docs/reference/modules/scripting/security.asciidoc b/docs/reference/modules/scripting/security.asciidoc index 7c1c2c41f387a..37168f56b8f14 100644 --- a/docs/reference/modules/scripting/security.asciidoc +++ b/docs/reference/modules/scripting/security.asciidoc @@ -47,21 +47,6 @@ Bad: * Users can write arbitrary scripts, queries, `_search` requests. * User actions make documents with structure defined by users. -[float] -[[modules-scripting-security-do-no-weaken]] -=== Do not weaken script security settings -By default Elasticsearch will run inline, stored, and filesystem scripts for -the builtin languages, namely the scripting language Painless, the template -language Mustache, and the expression language Expressions. These *ought* to be -safe to expose to trusted users and to your application servers because they -have strong security sandboxes. 
The Elasticsearch committers do not support any -non-sandboxed scripting languages and using any would be a poor choice because: -1. This drops a layer of security, leaving only Elasticsearch's builtin -<>. -2. Non-sandboxed scripts have unchecked access to Elasticsearch's internals and -can cause all kinds of trouble if misused. - - [float] [[modules-scripting-other-layers]] === Other security layers @@ -93,7 +78,7 @@ security of the Elasticsearch deployment. By default all script types are allowed to be executed. This can be modified using the setting `script.allowed_types`. Only the types specified as part of the setting will be -allowed to be executed. To specify no types are allowed, set `scripts.allowed_types` to +allowed to be executed. To specify no types are allowed, set `script.allowed_types` to be `none`. [source,yaml] @@ -109,7 +94,7 @@ script.allowed_types: inline <1> By default all script contexts are allowed to be executed. This can be modified using the setting `script.allowed_contexts`. Only the contexts specified as part of the setting will -be allowed to be executed. To specify no contexts are allowed, set `scripts.allowed_contexts` +be allowed to be executed. To specify no contexts are allowed, set `script.allowed_contexts` to be `none`. [source,yaml] diff --git a/docs/reference/modules/scripting/using.asciidoc b/docs/reference/modules/scripting/using.asciidoc index 031023a1d3491..0393472e684dd 100644 --- a/docs/reference/modules/scripting/using.asciidoc +++ b/docs/reference/modules/scripting/using.asciidoc @@ -49,10 +49,7 @@ GET my_index/_search `lang`:: - Specifies the language the script is written in. Defaults to `painless` but - may be set to any of languages listed in <>. The - default language may be changed in the `elasticsearch.yml` config file by - setting `script.default_lang` to the appropriate language. + Specifies the language the script is written in. Defaults to `painless`. `source`, `id`:: @@ -104,10 +101,34 @@ If you compile too many unique scripts within a small amount of time, Elasticsearch will reject the new dynamic scripts with a `circuit_breaking_exception` error. By default, up to 15 inline scripts per minute will be compiled. You can change this setting dynamically by setting -`script.max_compilations_per_minute`. +`script.max_compilations_rate`. ======================================== +[float] +[[modules-scripting-short-script-form]] +=== Short Script Form +A short script form can be used for brevity. In the short form, `script` is represented +by a string instead of an object. This string contains the source of the script. + +Short form: + +[source,js] +---------------------- + "script": "ctx._source.likes++" +---------------------- +// NOTCONSOLE + +The same script in the normal form: + +[source,js] +---------------------- + "script": { + "source": "ctx._source.likes++" + } +---------------------- +// NOTCONSOLE + [float] [[modules-scripting-stored-scripts]] === Stored Scripts @@ -178,14 +199,12 @@ DELETE _scripts/calculate-score === Script Caching All scripts are cached by default so that they only need to be recompiled -when updates occur. File scripts keep a static cache and will always reside -in memory. Both inline and stored scripts are stored in a cache that can evict -residing scripts. By default, scripts do not have a time-based expiration, but +when updates occur. By default, scripts do not have a time-based expiration, but you can change this behavior by using the `script.cache.expire` setting. 
You can configure the size of this cache by using the `script.cache.max_size` setting. By default, the cache size is `100`. NOTE: The size of stored scripts is limited to 65,535 bytes. This can be changed by setting `script.max_size_in_bytes` setting to increase that soft -limit, but if scripts are really large then alternatives like -<> scripts should be considered instead. +limit, but if scripts are really large then a +<> should be considered. diff --git a/docs/reference/modules/snapshots.asciidoc b/docs/reference/modules/snapshots.asciidoc index 64e9e2e1663aa..f5b561600492b 100644 --- a/docs/reference/modules/snapshots.asciidoc +++ b/docs/reference/modules/snapshots.asciidoc @@ -1,39 +1,55 @@ [[modules-snapshots]] == Snapshot And Restore -The snapshot and restore module allows to create snapshots of individual -indices or an entire cluster into a remote repository like shared file system, -S3, or HDFS. These snapshots are great for backups because they can be restored -relatively quickly but they are not archival because they can only be restored -to versions of Elasticsearch that can read the index. That means that: +You can store snapshots of individual indices or an entire cluster in +a remote repository like a shared file system, S3, or HDFS. These snapshots +are great for backups because they can be restored relatively quickly. However, +snapshots can only be restored to versions of Elasticsearch that can read the +indices: +* A snapshot of an index created in 5.x can be restored to 6.x. * A snapshot of an index created in 2.x can be restored to 5.x. * A snapshot of an index created in 1.x can be restored to 2.x. -* A snapshot of an index created in 1.x can **not** be restored to 5.x. - -To restore a snapshot of an index created in 1.x to 5.x you can restore it to -a 2.x cluster and use <> to rebuild -the index in a 5.x cluster. This is as time consuming as restoring from -archival copies of the original data. - -Note: If a repository is connected to a 2.x cluster, and you want to connect -a 5.x cluster to the same repository, you will have to either first set the 2.x -repository to `readonly` mode (see below for details on `readonly` mode) or create -the 5.x repository in `readonly` mode. A 5.x cluster will update the repository -to conform to 5.x specific formats, which will mean that any new snapshots written -via the 2.x cluster will not be visible to the 5.x cluster, and vice versa. -In fact, as a general rule, only one cluster should connect to the same repository -location with write access; all other clusters connected to the same repository -should be set to `readonly` mode. While setting all but one repositories to -`readonly` should work with multiple clusters differing by one major version, -it is not a supported configuration. +Conversely, snapshots of indices created in 1.x **cannot** be restored to +5.x or 6.x, and snapshots of indices created in 2.x **cannot** be restored +to 6.x. + +Snapshots are incremental and can contain indices created in various +versions of Elasticsearch. If any indices in a snapshot were created in an +incompatible version, you will not be able to restore the snapshot. + +IMPORTANT: When backing up your data prior to an upgrade, keep in mind that you +won't be able to restore snapshots after you upgrade if they contain indices +created in a version that's incompatible with the upgrade version.
+ +If you end up in a situation where you need to restore a snapshot of an index +that is incompatible with the version of the cluster you are currently running, +you can restore it on the latest compatible version and use +<> to rebuild the index on the current +version. Reindexing from remote is only possible if the original index has +source enabled. Retrieving and reindexing the data can take significantly longer +than simply restoring a snapshot. If you have a large amount of data, we +recommend testing the reindex from remote process with a subset of your data to +understand the time requirements before proceeding. [float] === Repositories -Before any snapshot or restore operation can be performed, a snapshot repository should be registered in -Elasticsearch. The repository settings are repository-type specific. See below for details. +You must register a snapshot repository before you can perform snapshot and +restore operations. We recommend creating a new snapshot repository for each +major version. The valid repository settings depend on the repository type. + +If you register the same snapshot repository with multiple clusters, only +one cluster should have write access to the repository. All other clusters +connected to that repository should set the repository to `readonly` mode. + +NOTE: The snapshot format can change across major versions, so if you have +clusters on different major versions trying to write to the same repository, +new snapshots written by one version will not be visible to the other. While +setting the repository to `readonly` on all but one of the clusters should work +with multiple clusters differing by one major version, it is not a supported +configuration. [source,js] ----------------------------------- @@ -48,7 +64,7 @@ PUT /_snapshot/my_backup // CONSOLE // TESTSETUP -Once a repository is registered, its information can be obtained using the following command: +To retrieve information about a registered repository, use a GET request: [source,js] ----------------------------------- @@ -71,9 +87,11 @@ which returns: ----------------------------------- // TESTRESPONSE -Information about multiple repositories can be fetched in one go by using a comma-delimited list of repository names. -Star wildcards are supported as well. For example, information about repositories that start with `repo` or that contain `backup` -can be obtained using the following command: +To retrieve information about multiple repositories, specify a +comma-delimited list of repositories. You can also use the * wildcard when +specifying repository names. For example, the following request retrieves +information about all of the snapshot repositories that start with `repo` or +contain `backup`: [source,js] ----------------------------------- @@ -81,8 +99,8 @@ GET /_snapshot/repo*,*backup* ----------------------------------- // CONSOLE -If a repository name is not specified, or `_all` is used as repository name Elasticsearch will return information about -all repositories currently registered in the cluster: +To retrieve information about all registered snapshot repositories, omit the +repository name or specify `_all`: [source,js] ----------------------------------- @@ -264,7 +282,7 @@ PUT /_snapshot/my_backup/snapshot_2?wait_for_completion=true // TEST[continued] The list of indices that should be included into the snapshot can be specified using the `indices` parameter that -supports <>.
The snapshot request also supports the `ignore_unavailable` option. Setting it to `true` will cause indices that do not exist to be ignored during snapshot creation. By default, when `ignore_unavailable` option is not set and an index is missing the snapshot request will fail. By setting `include_global_state` to false it's possible to prevent the cluster global state to be stored as part of @@ -411,7 +429,7 @@ By default, all indices in the snapshot are restored, and the cluster state is *not* restored. It's possible to select indices that should be restored as well as to allow the global cluster state from being restored by using `indices` and `include_global_state` options in the restore request body. The list of indices -supports <>. The `rename_pattern` +supports <>. The `rename_pattern` and `rename_replacement` options can be also used to rename indices on restore using regular expression that supports referencing the original text as explained @@ -487,16 +505,18 @@ same size or topology. However, the version of the new cluster should be the sa If the new cluster has a smaller size additional considerations should be made. First of all it's necessary to make sure that new cluster have enough capacity to store all indices in the snapshot. It's possible to change indices settings during restore to reduce the number of replicas, which can help with restoring snapshots into smaller cluster. It's also -possible to select only subset of the indices using the `indices` parameter. Prior to version 1.5.0 elasticsearch -didn't check restored persistent settings making it possible to accidentally restore an incompatible -`discovery.zen.minimum_master_nodes` setting, and as a result disable a smaller cluster until the required number of -master eligible nodes is added. Starting with version 1.5.0 incompatible settings are ignored. +possible to select only a subset of the indices using the `indices` parameter. If indices in the original cluster were assigned to particular nodes using <>, the same rules will be enforced in the new cluster. Therefore if the new cluster doesn't contain nodes with appropriate attributes that a restored index can be allocated on, such index will not be successfully restored unless these index allocation settings are changed during restore operation. +The restore operation also checks that restored persistent settings are compatible with the current cluster to avoid accidentally +restoring an incompatible setting such as `discovery.zen.minimum_master_nodes` and as a result disabling a smaller cluster until the +required number of master eligible nodes is added. If you need to restore a snapshot with incompatible persistent settings, try +restoring it without the global cluster state. + [float] === Snapshot status @@ -578,7 +598,7 @@ state. Once recovery of primary shards is completed Elasticsearch is switching t creates the required number of replicas at this moment cluster switches to the `yellow` state. Once all required replicas are created, the cluster switches to the `green` states. -The cluster health operation provides only a high level status of the restore process. It’s possible to get more +The cluster health operation provides only a high level status of the restore process. It's possible to get more detailed insight into the current state of the recovery process by using <> and <> APIs.
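+
+For example, a minimal sketch of checking the recovery of a restored index (the index name is a placeholder):
+
+[source,js]
+-----------------------------------
+GET /restored_index/_recovery
+-----------------------------------
+// NOTCONSOLE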
diff --git a/docs/reference/query-dsl/common-terms-query.asciidoc b/docs/reference/query-dsl/common-terms-query.asciidoc index a0c58597f7a5a..41034f357ce4c 100644 --- a/docs/reference/query-dsl/common-terms-query.asciidoc +++ b/docs/reference/query-dsl/common-terms-query.asciidoc @@ -76,7 +76,7 @@ GET /_search "common": { "body": { "query": "this is bonsai cool", - "cutoff_frequency": 0.001 + "cutoff_frequency": 0.001 } } } @@ -100,8 +100,8 @@ GET /_search "common": { "body": { "query": "nelly the elephant as a cartoon", - "cutoff_frequency": 0.001, - "low_freq_operator": "and" + "cutoff_frequency": 0.001, + "low_freq_operator": "and" } } } @@ -200,11 +200,11 @@ GET /_search "common": { "body": { "query": "nelly the elephant not as a cartoon", - "cutoff_frequency": 0.001, - "minimum_should_match": { - "low_freq" : 2, - "high_freq" : 3 - } + "cutoff_frequency": 0.001, + "minimum_should_match": { + "low_freq" : 2, + "high_freq" : 3 + } } } } @@ -261,11 +261,11 @@ GET /_search "common": { "body": { "query": "how not to be", - "cutoff_frequency": 0.001, - "minimum_should_match": { - "low_freq" : 2, - "high_freq" : 3 - } + "cutoff_frequency": 0.001, + "minimum_should_match": { + "low_freq" : 2, + "high_freq" : 3 + } } } } diff --git a/docs/reference/query-dsl/fuzzy-query.asciidoc b/docs/reference/query-dsl/fuzzy-query.asciidoc index 70f7eb48ada7f..4139ab4a0e65a 100644 --- a/docs/reference/query-dsl/fuzzy-query.asciidoc +++ b/docs/reference/query-dsl/fuzzy-query.asciidoc @@ -36,7 +36,8 @@ GET /_search "boost" : 1.0, "fuzziness" : 2, "prefix_length" : 0, - "max_expansions": 100 + "max_expansions": 100, + "transpositions": false } } } @@ -63,6 +64,10 @@ GET /_search The maximum number of terms that the `fuzzy` query will expand to. Defaults to `50`. +`transpositions`:: + + Whether fuzzy transpositions (`ab` -> `ba`) are supported. + Default is `true`. WARNING: This query can be very heavy if `prefix_length` is set to `0` and if `max_expansions` is set to a high number. It could result in every term in the diff --git a/docs/reference/query-dsl/has-child-query.asciidoc b/docs/reference/query-dsl/has-child-query.asciidoc index bfe7eff4c2f73..d13ae326fb7fe 100644 --- a/docs/reference/query-dsl/has-child-query.asciidoc +++ b/docs/reference/query-dsl/has-child-query.asciidoc @@ -23,6 +23,14 @@ GET /_search -------------------------------------------------- // CONSOLE +Note that the `has_child` query is slow compared to other queries in the +query dsl due to the fact that it performs a join. The performance degrades +as the number of matching child documents pointing to unique parent documents +increases. If you care about query performance, you should not use this query. +However, if you do happen to use this query, use it as little as possible. Each +`has_child` query that gets added to a search request can increase query time +significantly. + [float] ==== Scoring capabilities diff --git a/docs/reference/query-dsl/has-parent-query.asciidoc b/docs/reference/query-dsl/has-parent-query.asciidoc index 9840bf957a513..4065a9d99fe2e 100644 --- a/docs/reference/query-dsl/has-parent-query.asciidoc +++ b/docs/reference/query-dsl/has-parent-query.asciidoc @@ -25,6 +25,13 @@ GET /_search -------------------------------------------------- // CONSOLE +Note that the `has_parent` query is slow compared to other queries in the +query dsl due to the fact that it performs a join. The performance degrades +as the number of matching parent documents increases.
If you care about query +performance you should not use this query. However if you do happen to use +this query then use it as less as possible. Each `has_parent` query that gets +added to a search request can increase query time significantly. + [float] ==== Scoring capabilities @@ -69,7 +76,7 @@ is not mapped. Child documents can't be sorted by fields in matching parent documents via the regular sort options. If you need to sort child documents by field in the parent -documents then you can should use the `function_score` query and then just sort +documents then you should use the `function_score` query and then just sort by `_score`. Sorting tags by parent document' `view_count` field: diff --git a/docs/reference/query-dsl/match-query.asciidoc b/docs/reference/query-dsl/match-query.asciidoc index ed47a1c8f1a76..56874f25a8514 100644 --- a/docs/reference/query-dsl/match-query.asciidoc +++ b/docs/reference/query-dsl/match-query.asciidoc @@ -19,7 +19,7 @@ GET /_search // CONSOLE Note, `message` is the name of a field, you can substitute the name of -any field (including `_all`) instead. +any field instead. [[query-dsl-match-query-boolean]] ==== match diff --git a/docs/reference/query-dsl/mlt-query.asciidoc b/docs/reference/query-dsl/mlt-query.asciidoc index e36025980d65c..68714b8b7fe56 100644 --- a/docs/reference/query-dsl/mlt-query.asciidoc +++ b/docs/reference/query-dsl/mlt-query.asciidoc @@ -111,9 +111,10 @@ analyzes it, usually using the same analyzer at the field, then selects the top K terms with highest tf-idf to form a disjunctive query of these terms. IMPORTANT: The fields on which to perform MLT must be indexed and of type -`string`. Additionally, when using `like` with documents, either `_source` -must be enabled or the fields must be `stored` or store `term_vector`. In -order to speed up analysis, it could help to store term vectors at index time. +`text` or `keyword``. Additionally, when using `like` with documents, either +`_source` must be enabled or the fields must be `stored` or store +`term_vector`. In order to speed up analysis, it could help to store term +vectors at index time. For example, if we wish to perform MLT on the "title" and "tags.raw" fields, we can explicitly store their `term_vector` at index time. We can still @@ -181,8 +182,7 @@ for documents `like: "Apple"`, but `unlike: "cake crumble tree"`. The syntax is the same as `like`. `fields`:: -A list of fields to fetch and analyze the text from. Defaults to the `_all` -field for free text and to all possible fields for document inputs. +A list of fields to fetch and analyze the text from. `like_text`:: The text to find documents like it. diff --git a/docs/reference/query-dsl/multi-match-query.asciidoc b/docs/reference/query-dsl/multi-match-query.asciidoc index 48c3f77d3cb9f..217d3a7d21165 100644 --- a/docs/reference/query-dsl/multi-match-query.asciidoc +++ b/docs/reference/query-dsl/multi-match-query.asciidoc @@ -58,6 +58,8 @@ GET /_search <1> The `subject` field is three times as important as the `message` field. +WARNING: There is a limit of no more than 1024 fields being queried at once. + [[multi-match-types]] [float] ==== Types of `multi_match` query: @@ -137,7 +139,8 @@ follows: Also, accepts `analyzer`, `boost`, `operator`, `minimum_should_match`, `fuzziness`, `lenient`, `prefix_length`, `max_expansions`, `rewrite`, `zero_terms_query`, - `cutoff_frequency` and `auto_generate_synonyms_phrase_query`, as explained in <>. 
+ `cutoff_frequency`, `auto_generate_synonyms_phrase_query` and `fuzzy_transpositions`, + as explained in <>. [IMPORTANT] [[operator-min]] diff --git a/docs/reference/query-dsl/multi-term-rewrite.asciidoc b/docs/reference/query-dsl/multi-term-rewrite.asciidoc index 27f31340a1957..0d327a40fdea3 100644 --- a/docs/reference/query-dsl/multi-term-rewrite.asciidoc +++ b/docs/reference/query-dsl/multi-term-rewrite.asciidoc @@ -19,7 +19,7 @@ boost. into a should clause in a boolean query, and keeps the scores as computed by the query. Note that typically such scores are meaningless to the user, and require non-trivial CPU to compute, so it's almost -always better to use `constant_score_auto`. This rewrite method will hit +always better to use `constant_score`. This rewrite method will hit too many clauses failure if it exceeds the boolean query limit (defaults to `1024`). * `constant_score_boolean`: Similar to `scoring_boolean` except scores diff --git a/docs/reference/query-dsl/percolate-query.asciidoc b/docs/reference/query-dsl/percolate-query.asciidoc index c1f539cbd3ab3..f5d779340754d 100644 --- a/docs/reference/query-dsl/percolate-query.asciidoc +++ b/docs/reference/query-dsl/percolate-query.asciidoc @@ -103,6 +103,9 @@ The above request will yield the following response: "message": "bonsai tree" } } + }, + "fields" : { + "_percolator_document_slot" : [0] <2> } } ] @@ -112,6 +115,8 @@ The above request will yield the following response: // TESTRESPONSE[s/"took": 13,/"took": "$body.took",/] <1> The query with id `1` matches our document. +<2> The `_percolator_document_slot` field indicates which document has matched with this query. + Useful when percolating multiple document simultaneously. [float] ==== Parameters @@ -120,7 +125,10 @@ The following parameters are required when percolating a document: [horizontal] `field`:: The field of type `percolator` that holds the indexed queries. This is a required parameter. +`name`:: The suffix to be used for the `_percolator_document_slot` field in case multiple `percolate` queries have been specified. + This is an optional parameter. `document`:: The source of the document being percolated. +`documents`:: Like the `document` parameter, but accepts multiple documents via a json array. `document_type`:: The type / mapping of the document being percolated. This setting is deprecated and only required for indices created before 6.0 Instead of specifying the source of the document being percolated, the source can also be retrieved from an already @@ -136,6 +144,87 @@ In that case the `document` parameter can be substituted with the following para `preference`:: Optionally, preference to be used to fetch document to percolate. `version`:: Optionally, the expected version of the document to be fetched. +[float] +==== Percolating multiple documents + +The `percolate` query can match multiple documents simultaneously with the indexed percolator queries. +Percolating multiple documents in a single request can improve performance as queries only need to be parsed and +matched once instead of multiple times. + +The `_percolator_document_slot` field that is being returned with each matched percolator query is important when percolating +multiple documents simultaneously. It indicates which documents matched with a particular percolator query. The numbers +correlate with the slot in the `documents` array specified in the `percolate` query. 
+ +[source,js] +-------------------------------------------------- +GET /my-index/_search +{ + "query" : { + "percolate" : { + "field" : "query", + "documents" : [ <1> + { + "message" : "bonsai tree" + }, + { + "message" : "new tree" + }, + { + "message" : "the office" + }, + { + "message" : "office tree" + } + ] + } + } +} +-------------------------------------------------- +// CONSOLE +// TEST[continued] + +<1> The documents array contains 4 documents that are going to be percolated at the same time. + +[source,js] +-------------------------------------------------- +{ + "took": 13, + "timed_out": false, + "_shards": { + "total": 5, + "successful": 5, + "skipped" : 0, + "failed": 0 + }, + "hits": { + "total": 1, + "max_score": 1.5606477, + "hits": [ + { + "_index": "my-index", + "_type": "doc", + "_id": "1", + "_score": 1.5606477, + "_source": { + "query": { + "match": { + "message": "bonsai tree" + } + } + }, + "fields" : { + "_percolator_document_slot" : [0, 1, 3] <1> + } + } + ] + } +} +-------------------------------------------------- +// TESTRESPONSE[s/"took": 13,/"took": "$body.took",/] + +<1> The `_percolator_document_slot` indicates that the first, second and last documents specified in the `percolate` query + are matching with this query. + [float] ==== Percolating an Existing Document @@ -202,7 +291,7 @@ GET /my-index/_search <1> The version is optional, but useful in certain cases. We can ensure that we are trying to percolate the document we just have indexed. A change may be made after we have indexed, and if that is the -case the then the search request would fail with a version conflict error. +case the search request would fail with a version conflict error. The search response returned is identical as in the previous example. @@ -307,6 +396,9 @@ This will yield the following response. "message": [ "The quick brown fox jumps over the lazy dog" <1> ] + }, + "fields" : { + "_percolator_document_slot" : [0] } }, { @@ -325,6 +417,9 @@ This will yield the following response. "message": [ "The quick brown fox jumps over the lazy dog" <1> ] + }, + "fields" : { + "_percolator_document_slot" : [0] } } ] @@ -338,6 +433,179 @@ This will yield the following response. Instead of the query in the search request highlighting the percolator hits, the percolator queries are highlighting the document defined in the `percolate` query. 
+When percolating multiple documents at the same time like the request below then the highlight response is different: + +[source,js] +-------------------------------------------------- +GET /my-index/_search +{ + "query" : { + "percolate" : { + "field": "query", + "documents" : [ + { + "message" : "bonsai tree" + }, + { + "message" : "new tree" + }, + { + "message" : "the office" + }, + { + "message" : "office tree" + } + ] + } + }, + "highlight": { + "fields": { + "message": {} + } + } +} +-------------------------------------------------- +// CONSOLE +// TEST[continued] + +The slightly different response: + +[source,js] +-------------------------------------------------- +{ + "took": 13, + "timed_out": false, + "_shards": { + "total": 5, + "successful": 5, + "skipped" : 0, + "failed": 0 + }, + "hits": { + "total": 1, + "max_score": 1.5606477, + "hits": [ + { + "_index": "my-index", + "_type": "doc", + "_id": "1", + "_score": 1.5606477, + "_source": { + "query": { + "match": { + "message": "bonsai tree" + } + } + }, + "fields" : { + "_percolator_document_slot" : [0, 1, 3] + }, + "highlight" : { <1> + "0_message" : [ + "bonsai tree" + ], + "3_message" : [ + "office tree" + ], + "1_message" : [ + "new tree" + ] + } + } + ] + } +} +-------------------------------------------------- +// TESTRESPONSE[s/"took": 13,/"took": "$body.took",/] + +<1> The highlight fields have been prefixed with the document slot they belong to, + in order to know which highlight field belongs to what document. + +[float] +==== Specifying multiple percolate queries + +It is possible to specify multiple `percolate` queries in a single search request: + +[source,js] +-------------------------------------------------- +GET /my-index/_search +{ + "query" : { + "bool" : { + "should" : [ + { + "percolate" : { + "field" : "query", + "document" : { + "message" : "bonsai tree" + }, + "name": "query1" <1> + } + }, + { + "percolate" : { + "field" : "query", + "document" : { + "message" : "tulip flower" + }, + "name": "query2" <1> + } + } + ] + } + } +} +-------------------------------------------------- +// CONSOLE +// TEST[continued] + +<1> The `name` parameter will be used to identify which percolator document slots belong to what `percolate` query. + +The `_percolator_document_slot` field name will be suffixed with what is specified in the `_name` parameter. +If that isn't specified then the `field` parameter will be used, which in this case will result in ambiguity. + +The above search request returns a response similar to this: + +[source,js] +-------------------------------------------------- +{ + "took": 13, + "timed_out": false, + "_shards": { + "total": 5, + "successful": 5, + "skipped" : 0, + "failed": 0 + }, + "hits": { + "total": 1, + "max_score": 0.5753642, + "hits": [ + { + "_index": "my-index", + "_type": "doc", + "_id": "1", + "_score": 0.5753642, + "_source": { + "query": { + "match": { + "message": "bonsai tree" + } + } + }, + "fields" : { + "_percolator_document_slot_query1" : [0] <1> + } + } + ] + } +} +-------------------------------------------------- +// TESTRESPONSE[s/"took": 13,/"took": "$body.took",/] + +<1> The `_percolator_document_slot_query1` percolator slot field indicates that these matched slots are from the `percolate` + query with `_name` parameter set to `query1`. 
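As a further illustration (a hypothetical sketch, not taken from the change above, and assuming the `documents` and `name` parameters can be combined in one `percolate` clause): each named clause would percolate its own set of documents, and matching slots would then be reported under the correspondingly suffixed fields, e.g. `_percolator_document_slot_query1` and `_percolator_document_slot_query2`:

[source,js]
--------------------------------------------------
GET /my-index/_search
{
  "query" : {
    "bool" : {
      "should" : [
        {
          "percolate" : {
            "field" : "query",
            "documents" : [
              { "message" : "bonsai tree" },
              { "message" : "new tree" }
            ],
            "name" : "query1"
          }
        },
        {
          "percolate" : {
            "field" : "query",
            "documents" : [
              { "message" : "tulip flower" }
            ],
            "name" : "query2"
          }
        }
      ]
    }
  }
}
--------------------------------------------------

Inspecting the suffixed slot fields in each hit would then show which documents matched which named `percolate` query.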
+ [float] ==== How it Works Under the Hood diff --git a/docs/reference/query-dsl/prefix-query.asciidoc b/docs/reference/query-dsl/prefix-query.asciidoc index 270fc925f0c50..54d69583e990c 100644 --- a/docs/reference/query-dsl/prefix-query.asciidoc +++ b/docs/reference/query-dsl/prefix-query.asciidoc @@ -28,19 +28,6 @@ GET /_search -------------------------------------------------- // CONSOLE -Or with the `prefix` deprecated[5.0.0, Use `value`] syntax: - -[source,js] --------------------------------------------------- -GET /_search -{ "query": { - "prefix" : { "user" : { "prefix" : "ki", "boost" : 2.0 } } - } -} --------------------------------------------------- -// CONSOLE -// TEST[warning:Deprecated field [prefix] used, expected [value] instead] - This multi term query allows you to control how it gets rewritten using the <> parameter. diff --git a/docs/reference/query-dsl/query-string-query.asciidoc b/docs/reference/query-dsl/query-string-query.asciidoc index 992c7f5e2e87b..29fe70adb2e10 100644 --- a/docs/reference/query-dsl/query-string-query.asciidoc +++ b/docs/reference/query-dsl/query-string-query.asciidoc @@ -48,12 +48,12 @@ The `query_string` top level parameters include: |Parameter |Description |`query` |The actual query to be parsed. See <>. -|`default_field` |The default field for query terms if no prefix field -is specified. Defaults to the `index.query.default_field` index -settings, which in turn defaults to `*`. -`*` extracts all fields in the mapping that are eligible to term queries -and filters the metadata fields. All extracted fields are then combined -to build a query when no prefix field is provided. +|`default_field` |The default field for query terms if no prefix field is +specified. Defaults to the `index.query.default_field` index settings, which in +turn defaults to `*`. `*` extracts all fields in the mapping that are eligible +to term queries and filters the metadata fields. All extracted fields are then +combined to build a query when no prefix field is provided. There is a limit of +no more than 1024 fields being queried at once. |`default_operator` |The default operator used if no explicit operator is specified. For example, with a default operator of `OR`, the query @@ -63,6 +63,11 @@ with default operator of `AND`, the same query is translated to |`analyzer` |The analyzer name used to analyze the query string. +|`quote_analyzer` |The name of the analyzer that is used to analyze +quoted phrases in the query string. For those parts, it overrides other +analyzers that are set using the `analyzer` parameter or the +<> setting. + |`allow_leading_wildcard` |When set, `*` or `?` are allowed as the first character. Defaults to `true`. @@ -73,11 +78,14 @@ increments in result queries. Defaults to `true`. expand to. Defaults to `50` |`fuzziness` |Set the fuzziness for fuzzy queries. Defaults -to `AUTO`. See <> for allowed settings. +to `AUTO`. See <> for allowed settings. |`fuzzy_prefix_length` |Set the prefix length for fuzzy queries. Default is `0`. +|`fuzzy_transpositions` |Set to `false` to disable fuzzy transpositions (`ab` -> `ba`). +Default is `true`. + |`phrase_slop` |Sets the default slop for phrases. If zero, then exact phrase matches are required. Default value is `0`. @@ -115,9 +123,7 @@ Defaults to `true`. |`all_fields` | deprecated[6.0.0, set `default_field` to `*` instead] Perform the query on all fields detected in the mapping that can -be queried. 
Will be used by default when the `_all` field is disabled and no -`default_field` is specified (either in the index settings or in the request -body) and no `fields` are specified. +be queried. |======================================================================= diff --git a/docs/reference/query-dsl/query-string-syntax.asciidoc b/docs/reference/query-dsl/query-string-syntax.asciidoc index 8a76d686b623a..8a7b394b2e870 100644 --- a/docs/reference/query-dsl/query-string-syntax.asciidoc +++ b/docs/reference/query-dsl/query-string-syntax.asciidoc @@ -53,6 +53,25 @@ Be aware that wildcard queries can use an enormous amount of memory and perform very badly -- just think how many terms need to be queried to match the query string `"a* b* c*"`. +[WARNING] +======= +Pure wildcards `\*` are rewritten to <> queries for efficiency. +As a consequence, the wildcard `"field:*"` would match documents with an emtpy value + like the following: +``` +{ + "field": "" +} +``` +\... and would **not** match if the field is missing or set with an explicit null +value like the following: +``` +{ + "field": null +} +``` +======= + [WARNING] ======= Allowing a wildcard at the beginning of a word (eg `"*ing"`) is particularly diff --git a/docs/reference/query-dsl/simple-query-string-query.asciidoc b/docs/reference/query-dsl/simple-query-string-query.asciidoc index 1251803fca95c..99fbc131c1be3 100644 --- a/docs/reference/query-dsl/simple-query-string-query.asciidoc +++ b/docs/reference/query-dsl/simple-query-string-query.asciidoc @@ -13,8 +13,7 @@ GET /_search "query": { "simple_query_string" : { "query": "\"fried eggs\" +(eggplant | potato) -frittata", - "analyzer": "snowball", - "fields": ["body^5","_all"], + "fields": ["title^5", "body"], "default_operator": "and" } } @@ -30,7 +29,10 @@ The `simple_query_string` top level parameters include: |`query` |The actual query to be parsed. See below for syntax. |`fields` |The fields to perform the parsed query against. Defaults to the -`index.query.default_field` index settings, which in turn defaults to `_all`. +`index.query.default_field` index settings, which in turn defaults to `*`. `*` +extracts all fields in the mapping that are eligible to term queries and filters +the metadata fields. There is a limit of no more than 1024 fields being queried +at once. |`default_operator` |The default operator used if no explicit operator is specified. For example, with a default operator of `OR`, the query @@ -38,7 +40,7 @@ is specified. For example, with a default operator of `OR`, the query with default operator of `AND`, the same query is translated to `capital AND of AND Hungary`. The default value is `OR`. -|`analyzer` |The analyzer used to analyze each term of the query when +|`analyzer` |Force the analyzer to use to analyze each term of the query when creating composite queries. |`flags` |Flags specifying which features of the `simple_query_string` to @@ -65,9 +67,18 @@ comprehensive example. |`auto_generate_synonyms_phrase_query` |Whether phrase queries should be automatically generated for multi terms synonyms. Defaults to `true`. -|`all_fields` | Perform the query on all fields detected in the mapping that can -be queried. Will be used by default when the `_all` field is disabled and no -`default_field` is specified index settings, and no `fields` are specified. +|`all_fields` | deprecated[6.0.0, set `fields` to `*` instead] +Perform the query on all fields detected in the mapping that can +be queried. 
+ +|`fuzzy_prefix_length` |Set the prefix length for fuzzy queries. Default +is `0`. + +|`fuzzy_max_expansions` |Controls the number of terms fuzzy queries will +expand to. Defaults to `50` + +|`fuzzy_transpositions` |Set to `false` to disable fuzzy transpositions (`ab` -> `ba`). +Default is `true`. |======================================================================= [float] @@ -114,12 +125,9 @@ documents that contain "baz". ==== Default Field When not explicitly specifying the field to search on in the query string syntax, the `index.query.default_field` will be used to derive -which field to search on. It defaults to `_all` field. - -If the `_all` field is disabled and no `fields` are specified in the request`, -the `simple_query_string` query will automatically attempt to determine the -existing fields in the index's mapping that are queryable, and perform the -search on those fields. +which fields to search on. It defaults to `*` and the query will automatically +attempt to determine the existing fields in the index's mapping that are queryable, +and perform the search on those fields. [float] ==== Multi Field diff --git a/docs/reference/query-dsl/span-not-query.asciidoc b/docs/reference/query-dsl/span-not-query.asciidoc index 5a648bd4b0e90..1632ee03b2fb8 100644 --- a/docs/reference/query-dsl/span-not-query.asciidoc +++ b/docs/reference/query-dsl/span-not-query.asciidoc @@ -1,7 +1,9 @@ [[query-dsl-span-not-query]] === Span Not Query -Removes matches which overlap with another span query. The span not +Removes matches which overlap with another span query or which are +within x tokens before (controlled by the parameter `pre`) or y tokens +after (controled by the parameter `post`) another SpanQuery. The span not query maps to Lucene `SpanNotQuery`. Here is an example: [source,js] @@ -39,7 +41,7 @@ In the above example all documents with the term hoya are filtered except the on Other top level options: [horizontal] -`pre`:: If set the amount of tokens before the include span can't have overlap with the exclude span. -`post`:: If set the amount of tokens after the include span can't have overlap with the exclude span. +`pre`:: If set the amount of tokens before the include span can't have overlap with the exclude span. Defaults to 0. +`post`:: If set the amount of tokens after the include span can't have overlap with the exclude span. Defaults to 0. `dist`:: If set the amount of tokens from within the include span can't have overlap with the exclude span. Equivalent of setting both `pre` and `post`. diff --git a/docs/reference/query-dsl/term-level-queries.asciidoc b/docs/reference/query-dsl/term-level-queries.asciidoc index a6aae4896681b..883fd4c36b5ea 100644 --- a/docs/reference/query-dsl/term-level-queries.asciidoc +++ b/docs/reference/query-dsl/term-level-queries.asciidoc @@ -21,6 +21,12 @@ The queries in this group are: Find documents which contain any of the exact terms specified in the field specified. +<>:: + + Find documents which match with one or more of the specified terms. The + number of terms that must match depend on the specified minimum should + match field or script. 
+ <>:: Find documents where the field specified contains values (dates, numbers, @@ -66,6 +72,8 @@ include::term-query.asciidoc[] include::terms-query.asciidoc[] +include::terms-set-query.asciidoc[] + include::range-query.asciidoc[] include::exists-query.asciidoc[] diff --git a/docs/reference/query-dsl/terms-set-query.asciidoc b/docs/reference/query-dsl/terms-set-query.asciidoc new file mode 100644 index 0000000000000..659f840cccbb9 --- /dev/null +++ b/docs/reference/query-dsl/terms-set-query.asciidoc @@ -0,0 +1,122 @@ +[[query-dsl-terms-set-query]] +=== Terms Set Query + +experimental[The terms_set query is a new query and its syntax may change in the future] + +Returns any documents that match with at least one or more of the +provided terms. The terms are not analyzed and thus must match exactly. +The number of terms that must match varies per document and is either +controlled by a minimum should match field or computed per document in +a minimum should match script. + +The field that controls the number of required terms that must match must +be a number field: + +[source,js] +-------------------------------------------------- +PUT /my-index +{ + "mappings": { + "doc": { + "properties": { + "required_matches": { + "type": "long" + } + } + } + } +} + +PUT /my-index/doc/1?refresh +{ + "codes": ["ghi", "jkl"], + "required_matches": 2 +} + +PUT /my-index/doc/2?refresh +{ + "codes": ["def", "ghi"], + "required_matches": 2 +} +-------------------------------------------------- +// CONSOLE +// TESTSETUP + +An example that uses the minimum should match field: + +[source,js] +-------------------------------------------------- +GET /my-index/_search +{ + "query": { + "terms_set": { + "codes" : { + "terms" : ["abc", "def", "ghi"], + "minimum_should_match_field": "required_matches" + } + } + } +} +-------------------------------------------------- +// CONSOLE + +Response: + +[source,js] +-------------------------------------------------- +{ + "took": 13, + "timed_out": false, + "_shards": { + "total": 5, + "successful": 5, + "skipped" : 0, + "failed": 0 + }, + "hits": { + "total": 1, + "max_score": 0.5753642, + "hits": [ + { + "_index": "my-index", + "_type": "doc", + "_id": "2", + "_score": 0.5753642, + "_source": { + "codes": ["def", "ghi"], + "required_matches": 2 + } + } + ] + } +} +-------------------------------------------------- +// TESTRESPONSE[s/"took": 13,/"took": "$body.took",/] + +Scripts can also be used to control how many terms are required to match +in a more dynamic way. For example a create date or a popularity field +can be used as basis for the number of required terms to match. + +Also the `params.num_terms` parameter is available in the script to indicate the +number of terms that have been specified. 
+ +An example that always limits the number of required terms to match to never +become larger than the number of terms specified: + +[source,js] +-------------------------------------------------- +GET /my-index/_search +{ + "query": { + "terms_set": { + "codes" : { + "terms" : ["abc", "def", "ghi"], + "minimum_should_match_script": { + "source": "Math.min(params.num_terms, doc['required_matches'].value)" + } + } + } + } +} +-------------------------------------------------- +// CONSOLE diff --git a/docs/reference/release-notes.asciidoc b/docs/reference/release-notes.asciidoc index a69fb622ad73b..490249461e5d1 100644 --- a/docs/reference/release-notes.asciidoc +++ b/docs/reference/release-notes.asciidoc @@ -3,13 +3,11 @@ [partintro] -- + This section summarizes the changes in each release. -* <> -* <> -* <> +* <> -- -include::release-notes/6.0.0-alpha2.asciidoc[] -include::release-notes/6.0.0-alpha1.asciidoc[] -include::release-notes/6.0.0-alpha1-5x.asciidoc[] + +include::release-notes/7.0.0-alpha1.asciidoc[] diff --git a/docs/reference/release-notes/6.0.0-alpha1-5x.asciidoc b/docs/reference/release-notes/6.0.0-alpha1-5x.asciidoc deleted file mode 100644 index c44fd17b69057..0000000000000 --- a/docs/reference/release-notes/6.0.0-alpha1-5x.asciidoc +++ /dev/null @@ -1,1108 +0,0 @@ -[[release-notes-6.0.0-alpha1-5x]] -== 6.0.0-alpha1 Release Notes (Changes previously released in 5.x) - -The changes listed below were first released in the 5.x series. Changes -released for the first time in Elasticsearch 6.0.0-alpha1 are listed in -<>. - -[[breaking-6.0.0-alpha1-5x]] -[float] -=== Breaking changes - -Aliases:: -* Validate alias names the same as index names {pull}20771[#20771] (issue: {issue}20748[#20748]) - -CRUD:: -* Fixed naming inconsistency for fields/stored_fields in the APIs {pull}20166[#20166] (issues: {issue}18943[#18943], {issue}20155[#20155]) - -Core:: -* Add system call filter bootstrap check {pull}21940[#21940] -* Remove ignore system bootstrap checks {pull}20511[#20511] - -Internal:: -* `_flush` should block by default {pull}20597[#20597] (issue: {issue}20569[#20569]) - -Packaging:: -* Rename service.bat to elasticsearch-service.bat {pull}20496[#20496] (issue: {issue}17528[#17528]) - -Plugin Lang Painless:: -* Remove all date 'now' methods from Painless {pull}20766[#20766] (issue: {issue}20762[#20762]) - -Query DSL:: -* Fix name of `enabled_position_increments` {pull}22895[#22895] - -REST:: -* Change separator for shards preference {pull}20786[#20786] (issues: {issue}20722[#20722], {issue}20769[#20769]) - -Search:: -* Remove DFS_QUERY_AND_FETCH as a search type {pull}22787[#22787] - -Settings:: -* Remove support for default settings {pull}24093[#24093] (issues: {issue}23981[#23981], {issue}24052[#24052], {issue}24074[#24074]) - - - -[[breaking-java-6.0.0-alpha1-5x]] -[float] -=== Breaking Java changes - -Aggregations:: -* Move getProperty method out of MultiBucketsAggregation.Bucket interface {pull}23988[#23988] -* Remove getProperty method from Aggregations interface and impl {pull}23972[#23972] -* Move getProperty method out of Aggregation interface {pull}23949[#23949] - -Allocation:: -* Cluster Explain API uses the allocation process to explain shard allocation decisions {pull}22182[#22182] (issues: {issue}20347[#20347], {issue}20634[#20634], {issue}21103[#21103], {issue}21662[#21662], {issue}21691[#21691]) - -Cluster:: -* Remove PROTO-based custom cluster state components {pull}22336[#22336] (issue: {issue}21868[#21868]) - -Core:: -* Remove ability to plug-in TransportService 
{pull}20505[#20505] - -Discovery:: -* Remove pluggability of ElectMasterService {pull}21031[#21031] - -Exceptions:: -* Remove `IndexTemplateAlreadyExistsException` and `IndexShardAlreadyExistsException` {pull}21539[#21539] (issue: {issue}21494[#21494]) -* Replace IndexAlreadyExistsException with ResourceAlreadyExistsException {pull}21494[#21494] - -Ingest:: -* Change type of ingest doc meta-data field 'TIMESTAMP' to `Date` {pull}22234[#22234] (issue: {issue}22074[#22074]) - -Internal:: -* Replace SearchExtRegistry with namedObject {pull}22492[#22492] -* Replace Suggesters with namedObject {pull}22491[#22491] -* Consolidate the last easy parser construction {pull}22095[#22095] -* Introduce XContentParser#namedObject {pull}22003[#22003] -* Pass executor name to request interceptor to support async intercept calls {pull}21089[#21089] -* Remove TransportService#registerRequestHandler leniency {pull}20469[#20469] (issue: {issue}20468[#20468]) - -Java API:: -* Fold InternalSearchHits and friends into their interfaces {pull}23042[#23042] - -Network:: -* Remove HttpServer and HttpServerAdapter in favor of a simple dispatch method {pull}22636[#22636] (issue: {issue}18482[#18482]) -* Unguice Transport and friends {pull}20526[#20526] - -Plugins:: -* Deguice rest handlers {pull}22575[#22575] -* Plugins: Replace Rest filters with RestHandler wrapper {pull}21905[#21905] -* Plugins: Remove support for onModule {pull}21416[#21416] -* Cleanup sub fetch phase extension point {pull}20382[#20382] - -Query DSL:: -* Resolve index names in indices_boost {pull}21393[#21393] (issue: {issue}4756[#4756]) - -Scripting:: -* Refactor ScriptType to be a Top-Level Class {pull}21136[#21136] - -Search:: -* Remove QUERY_AND_FETCH search type {pull}22996[#22996] -* Cluster search shards improvements: expose ShardId, adjust visibility of some members {pull}21752[#21752] - - - -[[deprecation-6.0.0-alpha1-5x]] -[float] -=== Deprecations - -Java API:: -* Add BulkProcessor methods with XContentType parameter {pull}23078[#23078] (issue: {issue}22691[#22691]) -* Deprecate and remove "minimumNumberShouldMatch" in BoolQueryBuilder {pull}22403[#22403] - -Plugin Repository S3:: -* S3 Repository: Deprecate remaining `repositories.s3.*` settings {pull}24144[#24144] (issue: {issue}24143[#24143]) -* Deprecate specifying credentials through env vars, sys props, and remove profile files {pull}22567[#22567] (issues: {issue}21041[#21041], {issue}22479[#22479]) - -Query DSL:: -* Add deprecation logging message for 'fuzzy' query {pull}20993[#20993] (issue: {issue}15760[#15760]) - -REST:: -* Optionally require a valid content type for all rest requests with content {pull}22691[#22691] (issue: {issue}19388[#19388]) - -Scripting:: -* Change Namespace for Stored Script to Only Use Id {pull}22206[#22206] - -Shadow Replicas:: -* Add a deprecation notice to shadow replicas {pull}22647[#22647] (issue: {issue}22024[#22024]) - -Stats:: -* Deprecate _field_stats endpoint {pull}23914[#23914] - - - -[[feature-6.0.0-alpha1-5x]] -[float] -=== New features - -Aggregations:: -* Initial version of an adjacency matrix using the Filters aggregation {pull}22239[#22239] (issue: {issue}22169[#22169]) - -Analysis:: -* Adds pattern keyword marker filter support {pull}23600[#23600] (issue: {issue}4877[#4877]) -* Expose WordDelimiterGraphTokenFilter {pull}23327[#23327] (issue: {issue}23104[#23104]) -* Synonym Graph Support (LUCENE-6664) {pull}21517[#21517] -* Expose Lucenes Ukrainian analyzer {pull}21176[#21176] (issue: {issue}19433[#19433]) - -CAT API:: -* Provides a cat 
api endpoint for templates. {pull}20545[#20545] (issue: {issue}20467[#20467]) - -CRUD:: -* Allow an index to be partitioned with custom routing {pull}22274[#22274] (issue: {issue}21585[#21585]) - -Highlighting:: -* Integrate UnifiedHighlighter {pull}21621[#21621] (issue: {issue}21376[#21376]) - -Index APIs:: -* Add FieldCapabilities (_field_caps) API {pull}23007[#23007] (issue: {issue}22438[#22438]) - -Ingest:: -* introduce KV Processor in Ingest Node {pull}22272[#22272] (issue: {issue}22222[#22222]) - -Mapping:: -* Add the ability to set a normalizer on keyword fields. {pull}21919[#21919] (issue: {issue}18064[#18064]) -* Add RangeFieldMapper for numeric and date range types {pull}21002[#21002] (issue: {issue}20999[#20999]) - -Plugin Discovery File:: -* File-based discovery plugin {pull}20394[#20394] (issue: {issue}20323[#20323]) - -Query DSL:: -* Add "all fields" execution mode to simple_query_string query {pull}21341[#21341] (issues: {issue}19784[#19784], {issue}20925[#20925]) -* Add support for `quote_field_suffix` to `simple_query_string`. {pull}21060[#21060] (issue: {issue}18641[#18641]) -* Add "all field" execution mode to query_string query {pull}20925[#20925] (issue: {issue}19784[#19784]) - -Reindex API:: -* Add automatic parallelization support to reindex and friends {pull}20767[#20767] (issue: {issue}20624[#20624]) - -Search:: -* Introduce incremental reduction of TopDocs {pull}23946[#23946] -* Add federated cross cluster search capabilities {pull}22502[#22502] (issue: {issue}21473[#21473]) -* Add field collapsing for search request {pull}22337[#22337] (issue: {issue}21833[#21833]) - -Settings:: -* Add infrastructure for elasticsearch keystore {pull}22335[#22335] - -Similarities:: -* Adds boolean similarity to Elasticsearch {pull}23637[#23637] (issue: {issue}6731[#6731]) - - - -[[enhancement-6.0.0-alpha1-5x]] -[float] -=== Enhancements - -Aggregations:: -* Add `count` to rest output of `geo_centroid` {pull}24387[#24387] (issue: {issue}24366[#24366]) -* Allow scripted metric agg to access `_score` {pull}24295[#24295] -* Add BucketMetricValue interface {pull}24188[#24188] -* Move aggs CommonFields and TYPED_KEYS_DELIMITER from InternalAggregation to Aggregation {pull}23987[#23987] -* Use ParseField for aggs CommonFields rather than String {pull}23717[#23717] -* Share XContent rendering code in terms aggs {pull}23680[#23680] -* Add unit tests for ParentToChildAggregator {pull}23305[#23305] (issue: {issue}22278[#22278]) -* First step towards incremental reduction of query responses {pull}23253[#23253] -* `value_type` is useful regardless of scripting. 
{pull}22160[#22160] (issue: {issue}20163[#20163]) -* Support for partitioning set of terms {pull}21626[#21626] (issue: {issue}21487[#21487]) -* Rescorer should be applied in the TopHits aggregation {pull}20978[#20978] (issue: {issue}19317[#19317]) - -Aliases:: -* Handle multiple aliases in _cat/aliases api {pull}23698[#23698] (issue: {issue}23661[#23661]) - -Allocation:: -* Trigger replica recovery restarts by master when primary relocation completes {pull}23926[#23926] (issue: {issue}23904[#23904]) -* Makes the same_shard host dynamically updatable {pull}23397[#23397] (issue: {issue}22992[#22992]) -* Include stale replica shard info when explaining an unassigned primary {pull}22826[#22826] -* Adds setting level to allocation decider explanations {pull}22268[#22268] (issue: {issue}21771[#21771]) -* Improves allocation decider decision explanation messages {pull}21771[#21771] -* Prepares allocator decision objects for use with the allocation explain API {pull}21691[#21691] -* Balance step in BalancedShardsAllocator for a single shard {pull}21103[#21103] -* Process more expensive allocation deciders last {pull}20724[#20724] (issue: {issue}12815[#12815]) -* Separates decision making from decision application in BalancedShardsAllocator {pull}20634[#20634] - -Analysis:: -* Support Keyword type in Analyze API {pull}23161[#23161] -* Expose FlattenGraphTokenFilter {pull}22643[#22643] -* Analyze API Position Length Support {pull}22574[#22574] -* Remove AnalysisService and reduce it to a simple name to analyzer mapping {pull}20627[#20627] (issues: {issue}19827[#19827], {issue}19828[#19828]) - -CAT API:: -* Adding built-in sorting capability to _cat apis. {pull}20658[#20658] (issue: {issue}16975[#16975]) -* Add health status parameter to cat indices API {pull}20393[#20393] - -CRUD:: -* Use correct block levels for TRA subclasses {pull}22224[#22224] -* Make index and delete operation execute as a single bulk item {pull}21964[#21964] - -Cache:: -* Do not cache term queries. {pull}21566[#21566] (issues: {issue}16031[#16031], {issue}20116[#20116]) -* Parse alias filters on the coordinating node {pull}20916[#20916] - -Circuit Breakers:: -* Closing a ReleasableBytesStreamOutput closes the underlying BigArray {pull}23941[#23941] -* Add used memory amount to CircuitBreakingException message (#22521) {pull}22693[#22693] (issue: {issue}22521[#22521]) -* Cluster Settings Updates should not trigger circuit breakers. 
{pull}20827[#20827] - -Cluster:: -* Extract a common base class to allow services to listen to remote cluster config updates {pull}24367[#24367] -* Prevent nodes from joining if newer indices exist in the cluster {pull}23843[#23843] -* Connect to new nodes concurrently {pull}22984[#22984] (issue: {issue}22828[#22828]) -* Keep NodeConnectionsService in sync with current nodes in the cluster state {pull}22509[#22509] -* Add a generic way of checking version before serializing custom cluster object {pull}22376[#22376] (issue: {issue}22313[#22313]) -* Add validation for supported index version on node join, restore, upgrade & open index {pull}21830[#21830] (issue: {issue}21670[#21670]) -* Let ClusterStateObserver only hold onto state that's needed for change detection {pull}21631[#21631] (issue: {issue}21568[#21568]) -* Cache successful shard deletion checks {pull}21438[#21438] -* Remove mutable status field from cluster state {pull}21379[#21379] -* Skip shard management code when updating cluster state on client/tribe nodes {pull}20731[#20731] -* Add clusterUUID to RestMainAction output {pull}20503[#20503] - -Core:: -* Regex upgrades {pull}24316[#24316] (issue: {issue}24226[#24226]) -* Detect remnants of path.data/default.path.data bug {pull}24099[#24099] (issues: {issue}23981[#23981], {issue}24052[#24052], {issue}24074[#24074], {issue}24093[#24093]) -* Await termination after shutting down executors {pull}23889[#23889] -* Add early-access check {pull}23743[#23743] (issue: {issue}23668[#23668]) -* Adapter action future should restore interrupts {pull}23618[#23618] (issue: {issue}23617[#23617]) -* Disable bootstrap checks for single-node discovery {pull}23598[#23598] (issues: {issue}23585[#23585], {issue}23595[#23595]) -* Enable explicitly enforcing bootstrap checks {pull}23585[#23585] (issue: {issue}21864[#21864]) -* Add equals/hashcode method to ReplicationResponse {pull}23215[#23215] -* Simplify ElasticsearchException rendering as a XContent {pull}22611[#22611] -* Remove setLocalNode from ClusterService and TransportService {pull}22608[#22608] -* Rename bootstrap.seccomp to bootstrap.system_call_filter {pull}22226[#22226] (issue: {issue}21940[#21940]) -* Cleanup random stats serialization code {pull}22223[#22223] -* Avoid corruption when deserializing booleans {pull}22152[#22152] -* Reduce memory pressure when sending large terms queries. 
{pull}21776[#21776] -* Install a security manager on startup {pull}21716[#21716] -* Log node ID on startup {pull}21673[#21673] -* Ensure source filtering automatons are only compiled once {pull}20857[#20857] (issue: {issue}20839[#20839]) -* Improve scheduling fairness when batching cluster state changes with equal priority {pull}20775[#20775] (issue: {issue}20768[#20768]) -* Add production warning for pre-release builds {pull}20674[#20674] -* Add serial collector bootstrap check {pull}20558[#20558] -* Do not log full bootstrap checks exception {pull}19989[#19989] - -Dates:: -* Improve error handling for epoch format parser with time zone (#22621) {pull}23689[#23689] - -Discovery:: -* Introduce single-node discovery {pull}23595[#23595] -* UnicastZenPing shouldn't ping the address of the local node {pull}23567[#23567] -* MasterFaultDetection can start after the initial cluster state has been processed {pull}23037[#23037] (issue: {issue}22828[#22828]) -* Simplify Unicast Zen Ping {pull}22277[#22277] (issues: {issue}19370[#19370], {issue}21739[#21739], {issue}22120[#22120], {issue}22194[#22194]) -* Prefer joining node with conflicting transport address when becoming master {pull}22134[#22134] (issues: {issue}22049[#22049], {issue}22120[#22120]) - -Engine:: -* Engine: store maxUnsafeAutoIdTimestamp in commit {pull}24149[#24149] -* Replace EngineClosedException with AlreadyClosedExcpetion {pull}22631[#22631] - -Exceptions:: -* Add BWC layer for Exceptions {pull}21694[#21694] (issue: {issue}21656[#21656]) - -Geo:: -* Optimize geo-distance sorting. {pull}20596[#20596] (issue: {issue}20450[#20450]) - -Highlighting:: -* Add support for fragment_length in the unified highlighter {pull}23431[#23431] -* Add BreakIteratorBoundaryScanner support {pull}23248[#23248] - -Index APIs:: -* Open and close index to honour allow_no_indices option {pull}24222[#24222] (issue: {issue}24031[#24031]) -* Wildcard cluster names for cross cluster search {pull}23985[#23985] (issue: {issue}23893[#23893]) -* Indexing: Add shard id to indexing operation listener {pull}22606[#22606] -* Better error when can't auto create index {pull}22488[#22488] (issues: {issue}21448[#21448], {issue}22435[#22435]) -* Add date-math support to `_rollover` {pull}20709[#20709] - -Ingest:: -* Lazy load the geoip databases {pull}23337[#23337] -* add `ignore_missing` flag to ingest plugins {pull}22273[#22273] -* Added ability to remove pipelines via wildcards (#22149) {pull}22191[#22191] (issue: {issue}22149[#22149]) -* Enables the ability to inject serialized json fields into root of document {pull}22179[#22179] (issue: {issue}21898[#21898]) -* compile ScriptProcessor inline scripts when creating ingest pipelines {pull}21858[#21858] (issue: {issue}21842[#21842]) -* add `ignore_missing` option to SplitProcessor {pull}20982[#20982] (issues: {issue}19995[#19995], {issue}20840[#20840]) -* add ignore_missing option to convert,trim,lowercase,uppercase,grok,rename {pull}20194[#20194] (issue: {issue}19995[#19995]) -* introduce the JSON Processor {pull}20128[#20128] (issue: {issue}20052[#20052]) - -Internal:: -* Add cross cluster support to `_field_caps` {pull}24463[#24463] (issue: {issue}24334[#24334]) -* Log JVM arguments on startup {pull}24451[#24451] -* Preserve cluster alias throughout search execution to lookup nodes by cluster and ID {pull}24438[#24438] -* Move RemoteClusterService into TransportService {pull}24424[#24424] -* Enum related performance additions. 
{pull}24274[#24274] (issue: {issue}24226[#24226]) -* Add a dedicated TransportRemoteInfoAction for consistency {pull}24040[#24040] (issue: {issue}23969[#23969]) -* Simplify sorted top docs merging in SearchPhaseController {pull}23881[#23881] -* Synchronized CollapseTopFieldDocs with lucenes relatives {pull}23854[#23854] -* Cleanup SearchPhaseController interface {pull}23844[#23844] -* Do not create String instances in 'Strings' methods accepting StringBuilder {pull}22907[#22907] -* Improve connection closing in `RemoteClusterConnection` {pull}22804[#22804] (issue: {issue}22803[#22803]) -* Remove some more usages of ParseFieldMatcher {pull}22437[#22437] (issues: {issue}19552[#19552], {issue}22130[#22130]) -* Remove some more usages of ParseFieldMatcher {pull}22398[#22398] (issues: {issue}19552[#19552], {issue}22130[#22130]) -* Remove some more usages of ParseFieldMatcher {pull}22395[#22395] (issues: {issue}19552[#19552], {issue}22130[#22130]) -* Remove some ParseFieldMatcher usages {pull}22389[#22389] (issues: {issue}19552[#19552], {issue}22130[#22130]) -* Introduce ToXContentObject interface {pull}22387[#22387] (issue: {issue}16347[#16347]) -* Add infrastructure to manage network connections outside of Transport/TransportService {pull}22194[#22194] -* Replace strict parsing mode with response headers assertions {pull}22130[#22130] (issues: {issue}11859[#11859], {issue}19552[#19552], {issue}20993[#20993]) -* Start using `ObjectParser` for aggs. {pull}22048[#22048] (issue: {issue}22009[#22009]) -* Don't output null source node in RecoveryFailedException {pull}21963[#21963] -* ClusterService should expose "applied" cluster states (i.e., remove ClusterStateStatus) {pull}21817[#21817] -* Rename ClusterState#lookupPrototypeSafe to `lookupPrototype` and remove "unsafe" unused variant {pull}21686[#21686] -* ShardActiveResponseHandler shouldn't hold to an entire cluster state {pull}21470[#21470] (issue: {issue}21394[#21394]) -* Remove unused ClusterService dependency from SearchPhaseController {pull}21421[#21421] -* Remove special case in case no action filters are registered {pull}21251[#21251] -* Use TimveValue instead of long for CacheBuilder methods {pull}20887[#20887] -* Remove SearchContext#current and all it's threadlocals {pull}20778[#20778] (issue: {issue}19341[#19341]) -* Remove poor-mans compression in InternalSearchHit and friends {pull}20472[#20472] - -Java API:: -* Added types options to DeleteByQueryRequest {pull}23265[#23265] (issue: {issue}21984[#21984]) -* prevent NPE when trying to uncompress a null BytesReference {pull}22386[#22386] - -Java High Level REST Client:: -* Add utility method to parse named XContent objects with typed prefix {pull}24240[#24240] (issue: {issue}22965[#22965]) -* Convert suggestion response parsing to use NamedXContentRegistry {pull}23355[#23355] -* UpdateRequest implements ToXContent {pull}23289[#23289] -* Add javadoc for DocWriteResponse.Builders {pull}23267[#23267] -* Expose WriteRequest.RefreshPolicy string representation {pull}23106[#23106] -* Use `typed_keys` parameter to prefix suggester names by type in search responses {pull}23080[#23080] (issue: {issue}22965[#22965]) -* Add parsing from xContent to MainResponse {pull}22934[#22934] -* Parse elasticsearch exception's root causes {pull}22924[#22924] -* Add parsing method to BytesRestResponse's error {pull}22873[#22873] -* Add parsing methods to BulkItemResponse {pull}22859[#22859] -* Add parsing method for ElasticsearchException.generateFailureXContent() {pull}22815[#22815] -* Add parsing method 
for ElasticsearchException.generateThrowableXContent() {pull}22783[#22783] -* Add parsing methods for UpdateResponse {pull}22586[#22586] -* Add parsing from xContent to InternalSearchHit and InternalSearchHits {pull}22429[#22429] -* Add fromxcontent methods to index response {pull}22229[#22229] -* Add fromXContent() methods for ReplicationResponse {pull}22196[#22196] (issue: {issue}22082[#22082]) -* Add parsing method for ElasticsearchException {pull}22143[#22143] -* Add fromXContent method to GetResponse {pull}22082[#22082] - -Java REST Client:: -* move ignore parameter support from yaml test client to low level rest client {pull}22637[#22637] -* Warn log deprecation warnings received from server {pull}21895[#21895] -* Support Preemptive Authentication with RestClient {pull}21336[#21336] -* Provide error message when rest request path is null {pull}21233[#21233] (issue: {issue}21232[#21232]) - -Logging:: -* Log deleting indices at info level {pull}22627[#22627] (issue: {issue}22605[#22605]) -* Expose logs base path {pull}22625[#22625] -* Log failure to connect to node at info instead of debug {pull}21809[#21809] (issue: {issue}6468[#6468]) -* Truncate log messages from the end {pull}21609[#21609] (issue: {issue}21602[#21602]) -* Ensure logging is initialized in CLI tools {pull}20575[#20575] -* Give useful error message if log config is missing {pull}20493[#20493] -* Complete Elasticsearch logger names {pull}20457[#20457] (issue: {issue}20326[#20326]) -* Logging shutdown hack {pull}20389[#20389] (issue: {issue}20304[#20304]) -* Disable console logging {pull}20387[#20387] -* Warn on not enough masters during election {pull}20063[#20063] (issue: {issue}8362[#8362]) - -Mapping:: -* Do not index `_type` when there is at most one type. {pull}24363[#24363] -* Only allow one type on 6.0 indices {pull}24317[#24317] (issue: {issue}15613[#15613]) -* token_count type : add an option to count tokens (fix #23227) {pull}24175[#24175] (issue: {issue}23227[#23227]) -* Atomic mapping updates across types {pull}22220[#22220] -* Only update DocumentMapper if field type changes {pull}22165[#22165] -* Better error message when _parent isn't an object {pull}21987[#21987] -* Create the QueryShardContext lazily in DocumentMapperParser. {pull}21287[#21287] - -Nested Docs:: -* Avoid adding unnecessary nested filters when ranges are used. 
{pull}23427[#23427] - -Network:: -* Set available processors for Netty {pull}24420[#24420] (issue: {issue}6224[#6224]) -* Adjust default Netty receive predictor size to 64k {pull}23542[#23542] (issue: {issue}23185[#23185]) -* Keep the pipeline handler queue small initially {pull}23335[#23335] -* Set network receive predictor size to 32kb {pull}23284[#23284] (issue: {issue}23185[#23185]) -* TransportService.connectToNode should validate remote node ID {pull}22828[#22828] (issue: {issue}22194[#22194]) -* Disable the Netty recycler {pull}22452[#22452] (issues: {issue}22189[#22189], {issue}22360[#22360], {issue}22406[#22406], {issue}5904[#5904]) -* Tell Netty not to be unsafe in transport client {pull}22284[#22284] -* Introduce a low level protocol handshake {pull}22094[#22094] -* Detach handshake from connect to node {pull}22037[#22037] -* Reduce number of connections per node depending on the nodes role {pull}21849[#21849] -* Add a connect timeout to the ConnectionProfile to allow per node connect timeouts {pull}21847[#21847] (issue: {issue}19719[#19719]) -* Grant Netty permission to read system somaxconn {pull}21840[#21840] -* Remove connectToNodeLight and replace it with a connection profile {pull}21799[#21799] -* Lazy resolve unicast hosts {pull}21630[#21630] (issues: {issue}14441[#14441], {issue}16412[#16412]) -* Fix handler name on message not fully read {pull}21478[#21478] -* Handle rejected pings on shutdown gracefully {pull}20842[#20842] -* Network: Allow to listen on virtual interfaces. {pull}19568[#19568] (issues: {issue}17473[#17473], {issue}19537[#19537]) - -Packaging:: -* Introduce Java version check {pull}23194[#23194] (issue: {issue}21102[#21102]) -* Improve the out-of-the-box experience {pull}21920[#21920] (issues: {issue}18317[#18317], {issue}21783[#21783]) -* Add empty plugins dir for archive distributions {pull}21204[#21204] (issue: {issue}20342[#20342]) -* Make explicit missing settings for Windows service {pull}21200[#21200] (issue: {issue}18317[#18317]) -* Change permissions on config files {pull}20966[#20966] -* Add quiet option to disable console logging {pull}20422[#20422] (issues: {issue}15315[#15315], {issue}16159[#16159], {issue}17220[#17220]) - -Percolator:: -* Allowing range queries with now ranges inside percolator queries {pull}23921[#23921] (issue: {issue}23859[#23859]) -* Add term extraction support for MultiPhraseQuery {pull}23176[#23176] - -Plugin Discovery EC2:: -* Settings: Migrate ec2 discovery sensitive settings to elasticsearch keystore {pull}23961[#23961] (issue: {issue}22475[#22475]) -* Add support for ca-central-1 region to EC2 and S3 plugins {pull}22458[#22458] (issue: {issue}22454[#22454]) -* Support for eu-west-2 (London) cloud-aws plugin {pull}22308[#22308] (issue: {issue}22306[#22306]) -* Add us-east-2 AWS region {pull}21961[#21961] (issue: {issue}21881[#21881]) -* Add setting to set read timeout for EC2 discovery and S3 repository plugins {pull}21956[#21956] (issue: {issue}19078[#19078]) - -Plugin Ingest GeoIp:: -* Cache results of geoip lookups {pull}22231[#22231] (issue: {issue}22074[#22074]) - -Plugin Lang Painless:: -* Allow painless to load stored fields {pull}24290[#24290] -* Start on custom whitelists for Painless {pull}23563[#23563] -* Fix Painless's implementation of interfaces returning primitives {pull}23298[#23298] (issue: {issue}22983[#22983]) -* Allow painless to implement more interfaces {pull}22983[#22983] -* Generate reference links for painless API {pull}22775[#22775] -* Painless: Add augmentation to String for base 64 
{pull}22665[#22665] (issue: {issue}22648[#22648]) -* Improve painless's ScriptException generation {pull}21762[#21762] (issue: {issue}21733[#21733]) -* Add Debug.explain to painless {pull}21723[#21723] (issue: {issue}20263[#20263]) -* Implement the ?: operator in painless {pull}21506[#21506] -* In painless suggest a long constant if int won't do {pull}21415[#21415] (issue: {issue}21313[#21313]) -* Support decimal constants with trailing [dD] in painless {pull}21412[#21412] (issue: {issue}21116[#21116]) -* Implement reading from null safe dereferences {pull}21239[#21239] -* Painless negative offsets {pull}21080[#21080] (issue: {issue}20870[#20870]) -* Remove more equivalents of the now method from the Painless whitelist. {pull}21047[#21047] -* Disable regexes by default in painless {pull}20427[#20427] (issue: {issue}20397[#20397]) - -Plugin Repository Azure:: -* Add Backoff policy to azure repository {pull}23387[#23387] (issue: {issue}22728[#22728]) - -Plugin Repository S3:: -* Removes the retry mechanism from the S3 blob store {pull}23952[#23952] (issue: {issue}22845[#22845]) -* S3 Repository: Eagerly load static settings {pull}23910[#23910] -* S3 repository: Add named configurations {pull}22762[#22762] (issues: {issue}22479[#22479], {issue}22520[#22520]) -* Make the default S3 buffer size depend on the available memory. {pull}21299[#21299] - -Plugins:: -* Plugins: Add support for platform specific plugins {pull}24265[#24265] -* Plugins: Remove leniency for missing plugins dir {pull}24173[#24173] -* Modify permissions dialog for plugins {pull}23742[#23742] -* Plugins: Add plugin cli specific exit codes {pull}23599[#23599] (issue: {issue}15295[#15295]) -* Plugins: Output better error message when existing plugin is incompatible {pull}23562[#23562] (issue: {issue}20691[#20691]) -* Add the ability to define search response listeners in search plugin {pull}22682[#22682] -* Pass ThreadContext to transport interceptors to allow header modification {pull}22618[#22618] (issue: {issue}22585[#22585]) -* Provide helpful error message if a plugin exists {pull}22305[#22305] (issue: {issue}22084[#22084]) -* Add shutdown hook for closing CLI commands {pull}22126[#22126] (issue: {issue}22111[#22111]) -* Allow plugins to install bootstrap checks {pull}22110[#22110] -* Clarify that plugins can be closed {pull}21669[#21669] -* Plugins: Convert custom discovery to pull based plugin {pull}21398[#21398] -* Removing plugin that isn't installed shouldn't trigger usage information {pull}21272[#21272] (issue: {issue}21250[#21250]) -* Remove pluggability of ZenPing {pull}21049[#21049] -* Make UnicastHostsProvider extension pull based {pull}21036[#21036] -* Revert "Display plugins versions" {pull}20807[#20807] (issues: {issue}18683[#18683], {issue}20668[#20668]) -* Provide error message when plugin id is missing {pull}20660[#20660] - -Query DSL:: -* Make it possible to validate a query on all shards instead of a single random shard {pull}23697[#23697] (issue: {issue}18254[#18254]) -* QueryString and SimpleQueryString Graph Support {pull}22541[#22541] -* Additional Graph Support in Match Query {pull}22503[#22503] (issue: {issue}22490[#22490]) -* RangeQuery WITHIN case now normalises query {pull}22431[#22431] (issue: {issue}22412[#22412]) -* Un-deprecate fuzzy query {pull}22088[#22088] (issue: {issue}15760[#15760]) -* support numeric bounds with decimal parts for long/integer/short/byte datatypes {pull}21972[#21972] (issue: {issue}21600[#21600]) -* Using ObjectParser in MatchAllQueryBuilder and IdsQueryBuilder 
{pull}21273[#21273] -* Expose splitOnWhitespace in `Query String Query` {pull}20965[#20965] (issue: {issue}20841[#20841]) -* Throw error if query element doesn't end with END_OBJECT {pull}20528[#20528] (issue: {issue}20515[#20515]) -* Remove `lowercase_expanded_terms` and `locale` from query-parser options. {pull}20208[#20208] (issue: {issue}9978[#9978]) - -REST:: -* Allow passing single scrollID in clear scroll API body {pull}24242[#24242] (issue: {issue}24233[#24233]) -* Validate top-level keys when parsing mget requests {pull}23746[#23746] (issue: {issue}23720[#23720]) -* Cluster stats should not render empty http/transport types {pull}23735[#23735] -* Add parameter to prefix aggs name with type in search responses {pull}22965[#22965] -* Add a REST spec for the create API {pull}20924[#20924] -* Add response params to REST params did you mean {pull}20753[#20753] (issues: {issue}20722[#20722], {issue}20747[#20747]) -* Add did you mean to strict REST params {pull}20747[#20747] (issue: {issue}20722[#20722]) - -Reindex API:: -* Increase visibility of doExecute so it can be used directly {pull}22614[#22614] -* Improve error message when reindex-from-remote gets bad json {pull}22536[#22536] (issue: {issue}22330[#22330]) -* Reindex: Better error message for pipeline in wrong place {pull}21985[#21985] -* Timeout improvements for rest client and reindex {pull}21741[#21741] (issue: {issue}21707[#21707]) -* Add "simple match" support for reindex-from-remote whitelist {pull}21004[#21004] -* Make reindex-from-remote ignore unknown fields {pull}20591[#20591] (issue: {issue}20504[#20504]) - -Scripting:: -* Expose multi-valued dates to scripts and document painless's date functions {pull}22875[#22875] (issue: {issue}22162[#22162]) -* Wrap VerifyError in ScriptException {pull}21769[#21769] -* Log ScriptException's xcontent if file script compilation fails {pull}21767[#21767] (issue: {issue}21733[#21733]) -* Support binary field type in script values {pull}21484[#21484] (issue: {issue}14469[#14469]) -* Mustache: Add {{#url}}{{/url}} function to URL encode strings {pull}20838[#20838] -* Expose `ctx._now` in update scripts {pull}20835[#20835] (issue: {issue}17895[#17895]) - -Search:: -* Remove leniency when merging fetched hits in a search response phase {pull}24158[#24158] -* Set shard count limit to unlimited {pull}24012[#24012] -* Streamline shard index availability in all SearchPhaseResults {pull}23788[#23788] -* Search took time should use a relative clock {pull}23662[#23662] -* Prevent negative `from` parameter in SearchSourceBuilder {pull}23358[#23358] (issue: {issue}23324[#23324]) -* Remove unnecessary result sorting in SearchPhaseController {pull}23321[#23321] -* Expose `batched_reduce_size` via `_search` {pull}23288[#23288] (issue: {issue}23253[#23253]) -* Adding fromXContent to Suggest and Suggestion class {pull}23226[#23226] (issue: {issue}23202[#23202]) -* Adding fromXContent to Suggestion.Entry and subclasses {pull}23202[#23202] -* Add CollapseSearchPhase as a successor for the FetchSearchPhase {pull}23165[#23165] -* Integrate IndexOrDocValuesQuery. {pull}23119[#23119] -* Detach SearchPhases from AbstractSearchAsyncAction {pull}23118[#23118] -* Fix GraphQuery expectation after Lucene upgrade to 6.5 {pull}23117[#23117] (issue: {issue}23102[#23102]) -* Nested queries should avoid adding unnecessary filters when possible. 
{pull}23079[#23079] (issue: {issue}20797[#20797]) -* Add xcontent parsing to completion suggestion option {pull}23071[#23071] -* Add xcontent parsing to suggestion options {pull}23018[#23018] -* Separate reduce (aggs, suggest and profile) from merging fetched hits {pull}23017[#23017] -* Add a setting to disable remote cluster connections on a node {pull}23005[#23005] -* First step towards separating individual search phases {pull}22802[#22802] -* Add parsing from xContent to SearchProfileShardResults and nested classes {pull}22649[#22649] -* Move SearchTransportService and SearchPhaseController creation outside of TransportSearchAction constructor {pull}21754[#21754] -* Don't carry ShardRouting around when not needed in AbstractSearchAsyncAction {pull}21753[#21753] -* ShardSearchRequest to take ShardId constructor argument rather than the whole ShardRouting {pull}21750[#21750] -* Use index uuid as key in the alias filter map rather than the index name {pull}21749[#21749] -* Add indices and filter information to search shards api output {pull}21738[#21738] (issue: {issue}20916[#20916]) -* remove pointless catch exception in TransportSearchAction {pull}21689[#21689] -* Optimize query with types filter in the URL (t/t/_search) {pull}20979[#20979] -* Makes search action cancelable by task management API {pull}20405[#20405] - -Search Templates:: -* Add profile and explain parameters to template API {pull}20451[#20451] - -Settings:: -* Add secure file setting to keystore {pull}24001[#24001] -* Add a property to mark setting as final {pull}23872[#23872] -* Remove obsolete index setting `index.version.minimum_compatible`. {pull}23593[#23593] -* Provide a method to retrieve a closeable char[] from a SecureString {pull}23389[#23389] -* Update indices settings api to support CBOR and SMILE format {pull}23309[#23309] (issues: {issue}23242[#23242], {issue}23245[#23245]) -* Improve setting deprecation message {pull}23156[#23156] (issue: {issue}22849[#22849]) -* Add secure settings validation on startup {pull}22894[#22894] -* Allow comma delimited array settings to have a space after each entry {pull}22591[#22591] (issue: {issue}22297[#22297]) -* Allow affix settings to be dynamic / updatable {pull}22526[#22526] -* Allow affix settings to delegate to actual settings {pull}22523[#22523] -* Make s3 repository sensitive settings use secure settings {pull}22479[#22479] -* Speed up filter and prefix settings operations {pull}22249[#22249] -* Add precise logging on unknown or invalid settings {pull}20951[#20951] (issue: {issue}20946[#20946]) - -Snapshot/Restore:: -* Ensure every repository has an incompatible-snapshots blob {pull}24403[#24403] (issue: {issue}22267[#22267]) -* Change snapshot status error to use generic SnapshotException {pull}24355[#24355] (issue: {issue}24225[#24225]) -* Duplicate snapshot name throws InvalidSnapshotNameException {pull}22921[#22921] (issue: {issue}18228[#18228]) -* Fixes retrieval of the latest snapshot index blob {pull}22700[#22700] -* Use general cluster state batching mechanism for snapshot state updates {pull}22528[#22528] (issue: {issue}14899[#14899]) -* Synchronize snapshot deletions on the cluster state {pull}22313[#22313] (issue: {issue}19957[#19957]) -* Abort snapshots on a node that leaves the cluster {pull}21084[#21084] (issue: {issue}20876[#20876]) - -Stats:: -* Show JVM arguments {pull}24450[#24450] -* Add cross-cluster search remote cluster info API {pull}23969[#23969] (issue: {issue}23925[#23925]) -* Add geo_point to FieldStats {pull}21947[#21947] (issue: 
{issue}20707[#20707]) -* Include unindexed field in FieldStats response {pull}21821[#21821] (issue: {issue}21952[#21952]) -* Remove load average leniency {pull}21380[#21380] -* Strengthen handling of unavailable cgroup stats {pull}21094[#21094] (issue: {issue}21029[#21029]) -* Add basic cgroup CPU metrics {pull}21029[#21029] - -Suggesters:: -* Provide informative error message in case of unknown suggestion context. {pull}24241[#24241] -* Allow different data types for category in Context suggester {pull}23491[#23491] (issue: {issue}22358[#22358]) - -Task Manager:: -* Limit IndexRequest toString() length {pull}22832[#22832] -* Improve the error message if task and node isn't found {pull}22062[#22062] (issue: {issue}22027[#22027]) -* Add descriptions to create snapshot and restore snapshot tasks. {pull}21901[#21901] (issue: {issue}21768[#21768]) -* Add proper descriptions to reindex, update-by-query and delete-by-query tasks. {pull}21841[#21841] (issue: {issue}21768[#21768]) -* Add search task descriptions {pull}21740[#21740] - -Tribe Node:: -* Add support for merging custom meta data in tribe node {pull}21552[#21552] (issues: {issue}20544[#20544], {issue}20791[#20791], {issue}9372[#9372]) - -Upgrade API:: -* Allow plugins to upgrade templates and index metadata on startup {pull}24379[#24379] - - -[[bug-6.0.0-alpha1-5x]] -[float] -=== Bug fixes - -Aggregations:: -* InternalPercentilesBucket should not rely on ordered percents array {pull}24336[#24336] (issue: {issue}24331[#24331]) -* Align behavior HDR percentiles iterator with percentile() method {pull}24206[#24206] -* The `filter` and `significant_terms` aggregations should parse the `filter` as a filter, not a query. {pull}23797[#23797] -* Completion suggestion should also consider text if prefix/regex is missing {pull}23451[#23451] (issue: {issue}23340[#23340]) -* Fixes the per term error in the terms aggregation {pull}23399[#23399] -* Fixes terms error count for multiple reduce phases {pull}23291[#23291] (issue: {issue}23286[#23286]) -* Fix scaled_float numeric type in aggregations {pull}22351[#22351] (issue: {issue}22350[#22350]) -* Allow terms aggregations on pure boolean scripts. {pull}22201[#22201] (issue: {issue}20941[#20941]) -* Fix numeric terms aggregations with includes/excludes and minDocCount=0 {pull}22141[#22141] (issue: {issue}22140[#22140]) -* Fix `missing` on aggs on `boolean` fields. {pull}22135[#22135] (issue: {issue}22009[#22009]) -* IP range masks exclude the maximum address of the range. {pull}22018[#22018] (issue: {issue}22005[#22005]) -* Fix `other_bucket` on the `filters` agg to be enabled if a key is set. {pull}21994[#21994] (issue: {issue}21951[#21951]) -* Rewrite Queries/Filter in FilterAggregationBuilder and ensure client usage marks query as non-cachable {pull}21303[#21303] (issue: {issue}21301[#21301]) -* Percentiles bucket fails for 100th percentile {pull}21218[#21218] -* Thread safety for scripted significance heuristics {pull}21113[#21113] (issue: {issue}18120[#18120]) -* `ip_range` aggregation should accept null bounds. {pull}21043[#21043] (issue: {issue}21006[#21006]) -* Fixes bug preventing script sort working on top_hits aggregation {pull}21023[#21023] (issue: {issue}21022[#21022]) -* Fixed writeable name from range to geo_distance {pull}20860[#20860] -* Fix date_range aggregation to not cache if now is used {pull}20740[#20740] -* The `top_hits` aggregation should compile scripts only once. 
{pull}20738[#20738] - -Allocation:: -* Discard stale node responses from async shard fetching {pull}24434[#24434] (issue: {issue}24007[#24007]) -* Cannot force allocate primary to a node where the shard already exists {pull}22031[#22031] (issue: {issue}22021[#22021]) -* Promote shadow replica to primary when initializing primary fails {pull}22021[#22021] -* Trim in-sync allocations set only when it grows {pull}21976[#21976] (issue: {issue}21719[#21719]) -* Allow master to assign primary shard to node that has shard store locked during shard state fetching {pull}21656[#21656] (issue: {issue}19416[#19416]) -* Keep a shadow replicas' allocation id when it is promoted to primary {pull}20863[#20863] (issue: {issue}20650[#20650]) -* IndicesClusterStateService should clean local started when re-assigns an initializing shard with the same aid {pull}20687[#20687] -* IndexRoutingTable.initializeEmpty shouldn't override supplied primary RecoverySource {pull}20638[#20638] (issue: {issue}20637[#20637]) -* Update incoming recoveries stats when shadow replica is reinitialized {pull}20612[#20612] -* `index.routing.allocation.initial_recovery` limits replica allocation {pull}20589[#20589] - -Analysis:: -* AsciiFoldingFilter's multi-term component should never preserve the original token. {pull}21982[#21982] -* Pre-built analysis factories do not implement MultiTermAware correctly. {pull}21981[#21981] -* Can load non-PreBuiltTokenFilter in Analyze API {pull}20396[#20396] -* Named analyzer should close the analyzer that it wraps {pull}20197[#20197] - -Bulk:: -* Reject empty IDs {pull}24118[#24118] (issue: {issue}24116[#24116]) - -CAT API:: -* Consume `full_id` request parameter early {pull}21270[#21270] (issue: {issue}21266[#21266]) - -CRUD:: -* Reject external versioning and explicit version numbers on create {pull}21998[#21998] -* MultiGet should not fail entirely if alias resolves to many indices {pull}20858[#20858] (issue: {issue}20845[#20845]) -* Fixed date math expression support in multi get requests. {pull}20659[#20659] (issue: {issue}17957[#17957]) - -Cache:: -* Invalidate cached query results if query timed out {pull}22807[#22807] (issue: {issue}22789[#22789]) -* Fix the request cache keys to not hold references to the SearchContext. 
{pull}21284[#21284] -* Prevent requests that use scripts or now() from being cached {pull}20750[#20750] (issue: {issue}20645[#20645]) - -Circuit Breakers:: -* ClusterState publishing shouldn't trigger circuit breakers {pull}20986[#20986] (issues: {issue}20827[#20827], {issue}20960[#20960]) - -Cluster:: -* Don't set local node on cluster state used for node join validation {pull}23311[#23311] (issues: {issue}21830[#21830], {issue}3[#3], {issue}4[#4], {issue}6[#6], {issue}9[#9]) -* Allow a cluster state applier to create an observer and wait for a better state {pull}23132[#23132] (issue: {issue}21817[#21817]) -* Cluster allocation explain to never return empty response body {pull}23054[#23054] -* IndicesService handles all exceptions during index deletion {pull}22433[#22433] -* Remove cluster update task when task times out {pull}21578[#21578] (issue: {issue}21568[#21568]) - -Core:: -* Check for default.path.data included in path.data {pull}24285[#24285] (issue: {issue}24283[#24283]) -* Improve performance of extracting warning value {pull}24114[#24114] (issue: {issue}24018[#24018]) -* Reject duplicate settings on the command line {pull}24053[#24053] -* Restrict build info loading to ES jar, not any jar {pull}24049[#24049] (issue: {issue}21955[#21955]) -* Streamline foreign stored context restore and allow to perserve response headers {pull}22677[#22677] (issue: {issue}22647[#22647]) -* Support negative numbers in readVLong {pull}22314[#22314] -* Add a StreamInput#readArraySize method that ensures sane array sizes {pull}21697[#21697] -* Use a buffer to do character to byte conversion in StreamOutput#writeString {pull}21680[#21680] (issue: {issue}21660[#21660]) -* Fix ShardInfo#toString {pull}21319[#21319] -* Protect BytesStreamOutput against overflows of the current number of written bytes. {pull}21174[#21174] (issue: {issue}21159[#21159]) -* Return target index name even if _rollover conditions are not met {pull}21138[#21138] -* .es_temp_file remains after system crash, causing it not to start again {pull}21007[#21007] (issue: {issue}20992[#20992]) -* StoreStatsCache should also ignore AccessDeniedException when checking file size {pull}20790[#20790] (issue: {issue}17580[#17580]) - -Dates:: -* Fix time zone rounding edge case for DST overlaps {pull}21550[#21550] (issue: {issue}20833[#20833]) - -Discovery:: -* ZenDiscovery - only validate min_master_nodes values if local node is master {pull}23915[#23915] (issue: {issue}23695[#23695]) -* Close InputStream when receiving cluster state in PublishClusterStateAction {pull}22711[#22711] -* Do not reply to pings from another cluster {pull}21894[#21894] (issue: {issue}21874[#21874]) -* Add current cluster state version to zen pings and use them in master election {pull}20384[#20384] (issue: {issue}20348[#20348]) - -Engine:: -* Close and flush refresh listeners on shard close {pull}22342[#22342] -* Die with dignity on the Lucene layer {pull}21721[#21721] (issue: {issue}19272[#19272]) -* Fix `InternalEngine#isThrottled` to not always return `false`. {pull}21592[#21592] -* Retrying replication requests on replica doesn't call `onRetry` {pull}21189[#21189] (issue: {issue}20211[#20211]) -* Take refresh IOExceptions into account when catching ACE in InternalEngine {pull}20546[#20546] (issue: {issue}19975[#19975]) - -Exceptions:: -* Stop returning "es." 
internal exception headers as http response headers {pull}22703[#22703] (issue: {issue}17593[#17593]) -* Fixing shard recovery error message to report the number of docs correctly for each node {pull}22515[#22515] (issue: {issue}21893[#21893]) - -Highlighting:: -* Fix FiltersFunctionScoreQuery highlighting {pull}21827[#21827] -* Fix highlighting on a stored keyword field {pull}21645[#21645] (issue: {issue}21636[#21636]) -* Fix highlighting of MultiTermQuery within a FunctionScoreQuery {pull}20400[#20400] (issue: {issue}20392[#20392]) - -Index APIs:: -* Fixes restore of a shrunken index when initial recovery node is gone {pull}24322[#24322] (issue: {issue}24257[#24257]) -* Honor update request timeout {pull}23825[#23825] -* Ensure shrunk indices carry over version information from its source {pull}22469[#22469] (issue: {issue}22373[#22373]) -* Validate the `_rollover` target index name early to also fail if dry_run=true {pull}21330[#21330] (issue: {issue}21149[#21149]) -* Only negate index expression on all indices with preceding wildcard {pull}20898[#20898] (issues: {issue}19800[#19800], {issue}20033[#20033]) -* Fix IndexNotFoundException in multi index search request. {pull}20188[#20188] (issue: {issue}3839[#3839]) - -Index Templates:: -* Fix integer overflows when dealing with templates. {pull}21628[#21628] (issue: {issue}21622[#21622]) - -Ingest:: -* Improve missing ingest processor error {pull}23379[#23379] (issue: {issue}23392[#23392]) -* update _ingest.timestamp to use new ZonedDateTime {pull}23174[#23174] (issue: {issue}23168[#23168]) -* fix date-processor to a new default year for every new pipeline execution {pull}22601[#22601] (issue: {issue}22547[#22547]) -* fix index out of bounds error in KV Processor {pull}22288[#22288] (issue: {issue}22272[#22272]) -* Fixes GrokProcessor's ignorance of named-captures with same name. {pull}22131[#22131] (issue: {issue}22117[#22117]) -* fix trace_match behavior for when there is only one grok pattern {pull}21413[#21413] (issue: {issue}21371[#21371]) -* Stored scripts and ingest node configurations should be included into a snapshot {pull}21227[#21227] (issue: {issue}21184[#21184]) -* make painless the default scripting language for ScriptProcessor {pull}20981[#20981] (issue: {issue}20943[#20943]) -* no null values in ingest configuration error messages {pull}20616[#20616] -* JSON Processor was not properly added {pull}20613[#20613] - -Inner Hits:: -* Replace NestedChildrenQuery with ParentChildrenBlockJoinQuery {pull}24016[#24016] (issue: {issue}24009[#24009]) -* Changed DisMaxQueryBuilder to extract inner hits from leaf queries {pull}23512[#23512] (issue: {issue}23482[#23482]) -* Inner hits and ignore unmapped {pull}21693[#21693] (issue: {issue}21620[#21620]) -* Skip adding a parent field to nested documents. 
{pull}21522[#21522] (issue: {issue}21503[#21503]) - -Internal:: -* Fix NPE if field caps request has a field that exists not in all indices {pull}24504[#24504] -* Add infrastructure to mark contexts as system contexts {pull}23830[#23830] -* Always restore the ThreadContext for operations delayed due to a block {pull}23349[#23349] -* Index creation and setting update may not return deprecation logging {pull}22702[#22702] -* Rethrow ExecutionException from the loader to concurrent callers of Cache#computeIfAbsent {pull}21549[#21549] -* Restore thread's original context before returning to the ThreadPool {pull}21411[#21411] -* Fix NPE in SearchContext.toString() {pull}21069[#21069] -* Prevent AbstractArrays from release bytes more than once {pull}20819[#20819] -* Source filtering should treat dots in field names as sub objects. {pull}20736[#20736] (issue: {issue}20719[#20719]) -* IndicesAliasesRequest should not implement CompositeIndicesRequest {pull}20726[#20726] -* Ensure elasticsearch doesn't start with unuspported indices {pull}20514[#20514] (issue: {issue}20512[#20512]) - -Java API:: -* Don't output empty ext object in SearchSourceBuilder#toXContent {pull}22093[#22093] (issue: {issue}20969[#20969]) -* Transport client: Fix remove address to actually work {pull}21743[#21743] -* Add a HostFailureListener to notify client code if a node got disconnected {pull}21709[#21709] (issue: {issue}21424[#21424]) -* Fix InternalSearchHit#hasSource to return the proper boolean value {pull}21441[#21441] (issue: {issue}21419[#21419]) -* Null checked for source when calling sourceRef {pull}21431[#21431] (issue: {issue}19279[#19279]) -* ClusterAdminClient.prepareDeletePipeline method should accept pipeline id to delete {pull}21228[#21228] -* fix IndexResponse#toString to print out shards info {pull}20562[#20562] - -Java High Level REST Client:: -* Correctly parse BulkItemResponse.Failure's status {pull}23432[#23432] - -Java REST Client:: -* Make buffer limit configurable in HeapBufferedConsumerFactory {pull}23970[#23970] (issue: {issue}23958[#23958]) -* RestClient asynchronous execution should not throw exceptions {pull}23307[#23307] -* Don't use null charset in RequestLogger {pull}22197[#22197] (issue: {issue}22190[#22190]) -* Rest client: don't reuse the same HttpAsyncResponseConsumer across multiple retries {pull}21378[#21378] - -Logging:: -* Do not prematurely shutdown Log4j {pull}21519[#21519] (issue: {issue}21514[#21514]) -* Assert status logger does not warn on Log4j usage {pull}21339[#21339] -* Fix logger names for Netty {pull}21223[#21223] (issue: {issue}20457[#20457]) -* Fix logger when you can not create an azure storage client {pull}20670[#20670] (issues: {issue}20633[#20633], {issue}20669[#20669]) -* Avoid unnecessary creation of prefix loggers {pull}20571[#20571] (issue: {issue}20570[#20570]) -* Fix logging hierarchy configs {pull}20463[#20463] -* Fix prefix logging {pull}20429[#20429] - -Mapping:: -* Preserve response headers when creating an index {pull}23950[#23950] (issue: {issue}23947[#23947]) -* Improves disabled fielddata error message {pull}23841[#23841] (issue: {issue}22768[#22768]) -* Fix MapperService StackOverflowError {pull}23605[#23605] (issue: {issue}23604[#23604]) -* Fix NPE with scaled floats stats when field is not indexed {pull}23528[#23528] (issue: {issue}23487[#23487]) -* Range types causing `GetFieldMappingsIndexRequest` to fail due to `NullPointerException` in `RangeFieldMapper.doXContentBody` when `include_defaults=true` is on the query string {pull}22925[#22925] -* 
Disallow introducing illegal object mappings (double '..') {pull}22891[#22891] (issue: {issue}22794[#22794]) -* The `_all` default mapper is not completely configured. {pull}22236[#22236] -* Fix MapperService.allEnabled(). {pull}22227[#22227] -* Dynamic `date` fields should use the `format` that was used to detect it is a date. {pull}22174[#22174] (issue: {issue}9410[#9410]) -* Sub-fields should not accept `include_in_all` parameter {pull}21971[#21971] (issue: {issue}21710[#21710]) -* Mappings: Fix get mapping when no indexes exist to not fail in response generation {pull}21924[#21924] (issue: {issue}21916[#21916]) -* Fail to index fields with dots in field names when one of the intermediate objects is nested. {pull}21787[#21787] (issue: {issue}21726[#21726]) -* Uncommitted mapping updates should not efect existing indices {pull}21306[#21306] (issue: {issue}21189[#21189]) - -Nested Docs:: -* Fix bug in query builder rewrite that ignores the ignore_unmapped option {pull}22456[#22456] - -Network:: -* Respect promises on pipelined responses {pull}23317[#23317] (issues: {issue}23310[#23310], {issue}23322[#23322]) -* Ensure that releasing listener is called {pull}23310[#23310] -* Pass `forceExecution` flag to transport interceptor {pull}22739[#22739] -* Ensure new connections won't be opened if transport is closed or closing {pull}22589[#22589] (issue: {issue}22554[#22554]) -* Prevent open channel leaks if handshake times out or is interrupted {pull}22554[#22554] -* Execute low level handshake in #openConnection {pull}22440[#22440] -* Handle connection close / reset events gracefully during handshake {pull}22178[#22178] -* Do not lose host information when pinging {pull}21939[#21939] (issue: {issue}21828[#21828]) -* DiscoveryNode and TransportAddress should preserve host information {pull}21828[#21828] -* Die with dignity on the network layer {pull}21720[#21720] (issue: {issue}19272[#19272]) -* Fix connection close header handling {pull}20956[#20956] (issue: {issue}20938[#20938]) -* Ensure port range is readable in the exception message {pull}20893[#20893] -* Prevent double release in TcpTransport if send listener throws an exception {pull}20880[#20880] - -Packaging:: -* Fall back to non-atomic move when removing plugins {pull}23548[#23548] (issue: {issue}35[#35]) -* Another fix for handling of paths on Windows {pull}22132[#22132] (issue: {issue}21921[#21921]) -* Fix handling of spaces in Windows paths {pull}21921[#21921] (issues: {issue}20809[#20809], {issue}21525[#21525]) -* Add option to skip kernel parameters on install {pull}21899[#21899] (issue: {issue}21877[#21877]) -* Set vm.max_map_count on systemd package install {pull}21507[#21507] -* Export ES_JVM_OPTIONS for SysV init {pull}21445[#21445] (issue: {issue}21255[#21255]) -* Debian: configure start-stop-daemon to not go into background {pull}21343[#21343] (issues: {issue}12716[#12716], {issue}21300[#21300]) -* Generate POM files with non-wildcard excludes {pull}21234[#21234] (issue: {issue}21170[#21170]) -* [Packaging] Do not remove scripts directory on upgrade {pull}20452[#20452] -* [Package] Remove bin/lib/modules directories on RPM uninstall/upgrade {pull}20448[#20448] - -Parent/Child:: -* Add null check in case of orphan child document {pull}22772[#22772] (issue: {issue}22770[#22770]) - -Percolator:: -* Fix memory leak when percolator uses bitset or field data cache {pull}24115[#24115] (issue: {issue}24108[#24108]) -* Fix NPE in percolator's 'now' range check for percolator queries with range queries {pull}22356[#22356] (issue: 
{issue}22355[#22355]) - -Plugin Analysis Stempel:: -* Fix thread safety of Stempel's token filter factory {pull}22610[#22610] (issue: {issue}21911[#21911]) - -Plugin Discovery EC2:: -* Fix ec2 discovery when used with IAM profiles. {pull}21048[#21048] (issue: {issue}21039[#21039]) - -Plugin Ingest GeoIp:: -* [ingest-geoip] update geoip to not include null-valued results from {pull}20455[#20455] - -Plugin Lang Painless:: -* painless: Fix method references to ctor with the new LambdaBootstrap and cleanup code {pull}24406[#24406] -* Fix Painless Lambdas for Java 9 {pull}24070[#24070] (issue: {issue}23473[#23473]) -* Fix painless's regex lexer and error messages {pull}23634[#23634] -* Replace Painless's Cast with casting strategies {pull}23369[#23369] -* Fix Bad Casts In Painless {pull}23282[#23282] (issue: {issue}23238[#23238]) -* Don't allow casting from void to def in painless {pull}22969[#22969] (issue: {issue}22908[#22908]) -* Fix def invoked qualified method refs {pull}22918[#22918] -* Whitelist some ScriptDocValues in painless {pull}22600[#22600] (issue: {issue}22584[#22584]) -* Update Painless Loop Counter to be Higher {pull}22560[#22560] (issue: {issue}22508[#22508]) -* Fix some issues with painless's strings {pull}22393[#22393] (issue: {issue}22372[#22372]) -* Test fix for def equals in Painless {pull}21945[#21945] (issue: {issue}21801[#21801]) -* Fix a VerifyError bug in Painless {pull}21765[#21765] -* Fix Lambdas in Painless to be Able to Use Top-Level Variables Such as params and doc {pull}21635[#21635] (issues: {issue}20869[#20869], {issue}21479[#21479]) -* Fix String Concatenation Bug In Painless {pull}20623[#20623] - -Plugin Repository Azure:: -* Azure blob store's readBlob() method first checks if the blob exists {pull}23483[#23483] (issue: {issue}23480[#23480]) -* Fixes default chunk size for Azure repositories {pull}22577[#22577] (issue: {issue}22513[#22513]) -* readonly on azure repository must be taken into account {pull}22055[#22055] (issues: {issue}22007[#22007], {issue}22053[#22053]) - -Plugin Repository HDFS:: -* Fixing permission errors for `KERBEROS` security mode for HDFS Repository {pull}23439[#23439] (issue: {issue}22156[#22156]) - -Plugin Repository S3:: -* Handle BlobPath's trailing separator case. 
Add test cases to BlobPathTests.java {pull}23091[#23091] -* Fixes leading forward slash in S3 repository base_path {pull}20861[#20861] - -Plugins:: -* Fix delete of plugin directory on remove plugin {pull}24266[#24266] (issue: {issue}24252[#24252]) -* Use a marker file when removing a plugin {pull}24252[#24252] (issue: {issue}24231[#24231]) -* Remove hidden file leniency from plugin service {pull}23982[#23982] (issue: {issue}12465[#12465]) -* Add check for null pluginName in remove command {pull}22930[#22930] (issue: {issue}22922[#22922]) -* Use sysprop like with es.path.home to pass conf dir {pull}18870[#18870] (issue: {issue}18689[#18689]) - -Query DSL:: -* FuzzyQueryBuilder should error when parsing array of values {pull}23762[#23762] (issue: {issue}23759[#23759]) -* Fix parsing for `max_determinized_states` {pull}22749[#22749] (issue: {issue}22722[#22722]) -* Fix script score function that combines _score and weight {pull}22713[#22713] (issue: {issue}21483[#21483]) -* Fixes date range query using epoch with timezone {pull}21542[#21542] (issue: {issue}21501[#21501]) -* Allow overriding all-field leniency when `lenient` option is specified {pull}21504[#21504] (issues: {issue}20925[#20925], {issue}21341[#21341]) -* Max score should be updated when a rescorer is used {pull}20977[#20977] (issue: {issue}20651[#20651]) -* Fixes MultiMatchQuery so that it doesn't provide a null context {pull}20882[#20882] -* Fix silently accepting malformed queries {pull}20515[#20515] (issue: {issue}20500[#20500]) -* Fix match_phrase_prefix query with single term on _all field {pull}20471[#20471] (issue: {issue}20470[#20470]) - -REST:: -* [API] change wait_for_completion default according to docs {pull}23672[#23672] -* Deprecate request_cache for clear-cache {pull}23638[#23638] (issue: {issue}22748[#22748]) -* HTTP transport stashes the ThreadContext instead of the RestController {pull}23456[#23456] -* Fix date format in warning headers {pull}23418[#23418] (issue: {issue}23275[#23275]) -* Align REST specs for HEAD requests {pull}23313[#23313] (issue: {issue}21125[#21125]) -* Correct warning header to be compliant {pull}23275[#23275] (issue: {issue}22986[#22986]) -* Fix get HEAD requests {pull}23186[#23186] (issue: {issue}21125[#21125]) -* Handle bad HTTP requests {pull}23153[#23153] (issue: {issue}23034[#23034]) -* Fix get source HEAD requests {pull}23151[#23151] (issue: {issue}21125[#21125]) -* Properly encode location header {pull}23133[#23133] (issues: {issue}21057[#21057], {issue}23115[#23115]) -* Fix template HEAD requests {pull}23130[#23130] (issue: {issue}21125[#21125]) -* Fix index HEAD requests {pull}23112[#23112] (issue: {issue}21125[#21125]) -* Fix alias HEAD requests {pull}23094[#23094] (issue: {issue}21125[#21125]) -* Strict level parsing for indices stats {pull}21577[#21577] (issue: {issue}21024[#21024]) -* The routing query string param is supported by mget but was missing from the rest spec {pull}21357[#21357] -* fix thread_pool_patterns path variable definition {pull}21332[#21332] -* Read indices options in indices upgrade API {pull}21281[#21281] (issue: {issue}21099[#21099]) -* ensure the XContentBuilder is always closed in RestBuilderListener {pull}21124[#21124] -* Add correct Content-Length on HEAD requests {pull}21123[#21123] (issue: {issue}21077[#21077]) -* Make sure HEAD / has 0 Content-Length {pull}21077[#21077] (issue: {issue}21075[#21075]) -* Adds percent-encoding for Location headers {pull}21057[#21057] (issue: {issue}21016[#21016]) -* Whitelist node stats indices level parameter 
{pull}21024[#21024] (issue: {issue}20722[#20722]) -* Remove lenient URL parameter parsing {pull}20722[#20722] (issue: {issue}14719[#14719]) -* XContentBuilder: Avoid building self-referencing objects {pull}20550[#20550] (issues: {issue}19475[#19475], {issue}20540[#20540]) - -Recovery:: -* Provide target allocation id as part of start recovery request {pull}24333[#24333] (issue: {issue}24167[#24167]) -* Fix primary relocation for shadow replicas {pull}22474[#22474] (issue: {issue}20300[#20300]) -* Don't close store under CancellableThreads {pull}22434[#22434] (issue: {issue}22325[#22325]) -* Use a fresh recovery id when retrying recoveries {pull}22325[#22325] (issue: {issue}22043[#22043]) -* Allow flush/force_merge/upgrade on shard marked as relocated {pull}22078[#22078] (issue: {issue}22043[#22043]) -* Fix concurrency issues between cancelling a relocation and marking shard as relocated {pull}20443[#20443] - -Reindex API:: -* Fix throttled reindex_from_remote {pull}23953[#23953] (issues: {issue}23828[#23828], {issue}23945[#23945]) -* Fix reindex with a remote source on a version before 2.0.0 {pull}23805[#23805] -* Make reindex wait for cleanup before responding {pull}23677[#23677] (issue: {issue}23653[#23653]) -* Reindex: do not log when can't clear old scroll {pull}22942[#22942] (issue: {issue}22937[#22937]) -* Fix reindex-from-remote from <2.0 {pull}22931[#22931] (issue: {issue}22893[#22893]) -* Fix reindex from remote clearing scroll {pull}22525[#22525] (issue: {issue}22514[#22514]) -* Fix source filtering in reindex-from-remote {pull}22514[#22514] (issue: {issue}22507[#22507]) -* Remove content type detection from reindex-from-remote {pull}22504[#22504] (issue: {issue}22329[#22329]) -* Don't close rest client from its callback {pull}22061[#22061] (issue: {issue}22027[#22027]) -* Keep context during reindex's retries {pull}21941[#21941] -* Ignore IllegalArgumentException with assertVersionSerializable {pull}21409[#21409] (issues: {issue}20767[#20767], {issue}21350[#21350]) -* Bump reindex-from-remote's buffer to 200mb {pull}21222[#21222] (issue: {issue}21185[#21185]) -* Fix reindex-from-remote for parent/child from <2.0 {pull}21070[#21070] (issue: {issue}21044[#21044]) - -Scripting:: -* Convert script/template objects to json format internally {pull}23308[#23308] (issue: {issue}23245[#23245]) -* Script: Fix value of `ctx._now` to be current epoch time in milliseconds {pull}23175[#23175] (issue: {issue}23169[#23169]) -* Expose `ip` fields as strings in scripts. {pull}21997[#21997] (issue: {issue}21977[#21977]) -* Add support for booleans in scripts {pull}20950[#20950] (issue: {issue}20949[#20949]) -* Native scripts should be created once per index, not per segment. {pull}20609[#20609] - -Search:: -* Include all aliases including non-filtering in `_search_shards` response {pull}24489[#24489] -* Cross Cluster Search: propagate original indices per cluster {pull}24328[#24328] -* Query string default field {pull}24214[#24214] -* Speed up parsing of large `terms` queries. {pull}24210[#24210] -* IndicesQueryCache should delegate the scorerSupplier method. 
{pull}24209[#24209] -* Disable graph analysis at query time for shingle and cjk filters producing tokens of different size {pull}23920[#23920] (issue: {issue}23918[#23918]) -* Fix cross-cluster remote node gateway attributes {pull}23863[#23863] -* Use a fixed seed for computing term hashCode in TermsSliceQuery {pull}23795[#23795] -* Honor max concurrent searches in multi-search {pull}23538[#23538] (issue: {issue}23527[#23527]) -* Avoid stack overflow in multi-search {pull}23527[#23527] (issue: {issue}23523[#23523]) -* Fix query_string_query to transform "foo:*" in an exists query on the field name {pull}23433[#23433] (issue: {issue}23356[#23356]) -* Factor out filling of TopDocs in SearchPhaseController {pull}23380[#23380] (issues: {issue}19356[#19356], {issue}23357[#23357]) -* Replace blocking calls in ExpandCollapseSearchResponseListener by asynchronous requests {pull}23053[#23053] (issue: {issue}23048[#23048]) -* Ensure fixed serialization order of InnerHitBuilder {pull}22820[#22820] (issue: {issue}22808[#22808]) -* Improve concurrency of ShardCoreKeyMap. {pull}22316[#22316] -* Make `-0` compare less than `+0` consistently. {pull}22173[#22173] (issue: {issue}22167[#22167]) -* Fix boost_mode propagation when the function score query builder is rewritten {pull}22172[#22172] (issue: {issue}22138[#22138]) -* FiltersAggregationBuilder: rewriting filter queries, the same way as in FilterAggregationBuilder {pull}22076[#22076] -* Fix cross_fields type on multi_match query with synonyms {pull}21638[#21638] (issue: {issue}21633[#21633]) -* Fix match_phrase_prefix on boosted fields {pull}21623[#21623] (issue: {issue}21613[#21613]) -* Respect default search timeout {pull}21599[#21599] (issues: {issue}12211[#12211], {issue}21595[#21595]) -* Remove LateParsingQuery to prevent timestamp access after context is frozen {pull}21328[#21328] (issue: {issue}21295[#21295]) -* Make range queries round up upper bounds again. 
{pull}20582[#20582] (issues: {issue}20579[#20579], {issue}8889[#8889]) -* Throw error when trying to fetch fields from source and source is disabled {pull}20424[#20424] (issues: {issue}20093[#20093], {issue}20408[#20408]) - -Search Templates:: -* No longer add illegal content type option to stored search templates {pull}24251[#24251] (issue: {issue}24227[#24227]) -* SearchTemplateRequest to implement CompositeIndicesRequest {pull}21865[#21865] (issue: {issue}21747[#21747]) - -Settings:: -* Do not set path.data in environment if not set {pull}24132[#24132] (issue: {issue}24099[#24099]) -* Correct handling of default and array settings {pull}24074[#24074] (issues: {issue}23981[#23981], {issue}24052[#24052]) -* Fix merge scheduler config settings {pull}23391[#23391] -* Settings: Fix keystore cli prompting for yes/no to handle console returning null {pull}23320[#23320] -* Expose `search.highlight.term_vector_multi_value` as a node level setting {pull}22999[#22999] -* NPE when no setting name passed to elasticsearch-keystore {pull}22609[#22609] -* Handle spaces in `action.auto_create_index` gracefully {pull}21790[#21790] (issue: {issue}21449[#21449]) -* Fix settings diff generation for affix and group settings {pull}21788[#21788] -* Don't reset non-dynamic settings unless explicitly requested {pull}21646[#21646] (issue: {issue}21593[#21593]) -* Fix Setting.timeValue() method {pull}20696[#20696] (issue: {issue}20662[#20662]) -* Add a hard limit for `index.number_of_shard` {pull}20682[#20682] -* Include complex settings in settings requests {pull}20622[#20622] - -Snapshot/Restore:: -* Fixes maintaining the shards a snapshot is waiting on {pull}24289[#24289] -* Fixes snapshot status on failed snapshots {pull}23833[#23833] (issue: {issue}23716[#23716]) -* Fixes snapshot deletion handling on in-progress snapshot failure {pull}23703[#23703] (issue: {issue}23663[#23663]) -* Prioritize listing index-N blobs over index.latest in reading snapshots {pull}23333[#23333] -* Gracefully handles pre 2.x compressed snapshots {pull}22267[#22267] -* URLRepository should throw NoSuchFileException to correctly adhere to readBlob contract {pull}22069[#22069] (issue: {issue}22004[#22004]) -* Fixes shard level snapshot metadata loading when index-N file is missing {pull}21813[#21813] -* Ensures cleanup of temporary index-* generational blobs during snapshotting {pull}21469[#21469] (issue: {issue}21462[#21462]) -* Fixes get snapshot duplicates when asking for _all {pull}21340[#21340] (issue: {issue}21335[#21335]) - -Stats:: -* Avoid overflow when computing total FS stats {pull}23641[#23641] -* Handle existence of cgroup version 2 hierarchy {pull}23493[#23493] (issue: {issue}23486[#23486]) -* Handle long overflow when adding paths' totals {pull}23293[#23293] (issue: {issue}23093[#23093]) -* Fix control group pattern {pull}23219[#23219] (issue: {issue}23218[#23218]) -* Fix total disk bytes returning negative value {pull}23093[#23093] -* Implement stats for geo_point and geo_shape field {pull}22391[#22391] (issue: {issue}22384[#22384]) -* Use reader for doc stats {pull}22317[#22317] (issue: {issue}22285[#22285]) -* Avoid NPE in NodeService#stats if HTTP is disabled {pull}22060[#22060] (issue: {issue}22058[#22058]) -* Add support for "include_segment_file_sizes" in indices stats REST handler {pull}21879[#21879] (issue: {issue}21878[#21878]) -* Remove output_uuid parameter from cluster stats {pull}21020[#21020] (issue: {issue}20722[#20722]) -* Fix FieldStats deserialization of `ip` field {pull}20522[#20522] (issue: 
{issue}20516[#20516]) - -Task Manager:: -* Task Management: Make TaskInfo parsing forwards compatible {pull}24073[#24073] (issue: {issue}23250[#23250]) -* Fix hanging cancelling task with no children {pull}22796[#22796] -* Fix broken TaskInfo.toString() {pull}22698[#22698] (issue: {issue}22387[#22387]) -* Task cancellation command should wait for all child nodes to receive cancellation request before returning {pull}21397[#21397] (issue: {issue}21126[#21126]) - -Term Vectors:: -* Fix _termvectors with preference to not hit NPE {pull}21959[#21959] -* Return correct term statistics when a field is not found in a shard {pull}21922[#21922] (issue: {issue}21906[#21906]) - -Tribe Node:: -* Add socket permissions for tribe nodes {pull}21546[#21546] (issues: {issue}16392[#16392], {issue}21122[#21122]) - - - -[[regression-6.0.0-alpha1-5x]] -[float] -=== Regressions - -Bulk:: -* Fix _bulk response when it can't create an index {pull}24048[#24048] (issues: {issue}22488[#22488], {issue}24028[#24028]) - -Core:: -* Source filtering: only accept array items if the previous include pattern matches {pull}22593[#22593] (issue: {issue}22557[#22557]) - -Highlighting:: -* Handle SynonymQuery extraction for the FastVectorHighlighter {pull}20829[#20829] (issue: {issue}20781[#20781]) - -Logging:: -* Restores the original default format of search slow log {pull}21770[#21770] (issue: {issue}21711[#21711]) - -Network:: -* You had one job Netty logging guard {pull}24469[#24469] (issues: {issue}5624[#5624], {issue}6568[#6568]) - -Plugin Discovery EC2:: -* Fix ec2 discovery when used with IAM profiles. {pull}21042[#21042] (issue: {issue}21039[#21039]) - -Plugin Repository S3:: -* Fix s3 repository when used with IAM profiles {pull}21058[#21058] (issue: {issue}21048[#21048]) - -Plugins:: -* Plugins: Add back user agent when downloading plugins {pull}20872[#20872] - -Search:: -* Handle specialized term queries in MappedFieldType.extractTerm(Query) {pull}21889[#21889] (issue: {issue}21882[#21882]) - - - -[[upgrade-6.0.0-alpha1-5x]] -[float] -=== Upgrades - -Aggregations:: -* Upgrade HDRHistogram to 2.1.9 {pull}23254[#23254] (issue: {issue}23239[#23239]) - -Core:: -* Upgrade to Lucene 6.5.0 {pull}23750[#23750] -* Upgrade from JNA 4.2.2 to JNA 4.4.0 {pull}23636[#23636] -* Upgrade to lucene-6.5.0-snapshot-d00c5ca {pull}23385[#23385] -* Upgrade to lucene-6.5.0-snapshot-f919485. {pull}23087[#23087] -* Upgrade to Lucene 6.4.0 {pull}22724[#22724] -* Update Jackson to 2.8.6 {pull}22596[#22596] (issue: {issue}22266[#22266]) -* Upgrade to lucene-6.4.0-snapshot-084f7a0. {pull}22413[#22413] -* Upgrade to lucene-6.4.0-snapshot-ec38570 {pull}21853[#21853] -* Upgrade to lucene-6.3.0. {pull}21464[#21464] - -Dates:: -* Update Joda Time to version 2.9.5 {pull}21468[#21468] (issues: {issue}20911[#20911], {issue}332[#332], {issue}373[#373], {issue}378[#378], {issue}379[#379], {issue}386[#386], {issue}394[#394], {issue}396[#396], {issue}397[#397], {issue}404[#404], {issue}69[#69]) - -Internal:: -* Upgrade to Lucene 6.4.1. 
{pull}22978[#22978] - -Logging:: -* Upgrade to Log4j 2.8.2 {pull}23995[#23995] -* Upgrade Log4j 2 to version 2.7 {pull}20805[#20805] (issue: {issue}20304[#20304]) - -Network:: -* Upgrade Netty to 4.1.10.Final {pull}24414[#24414] -* Upgrade to Netty 4.1.9 {pull}23540[#23540] (issues: {issue}23172[#23172], {issue}6308[#6308], {issue}6374[#6374]) -* Upgrade to Netty 4.1.8 {pull}23055[#23055] -* Upgrade to Netty 4.1.7 {pull}22587[#22587] -* Upgrade to Netty 4.1.6 {pull}21051[#21051] - -Plugin Repository Azure:: -* Update to Azure Storage 5.0.0 {pull}23517[#23517] (issue: {issue}23448[#23448]) - diff --git a/docs/reference/release-notes/6.0.0-alpha1.asciidoc b/docs/reference/release-notes/6.0.0-alpha1.asciidoc deleted file mode 100644 index a2001af7a2edf..0000000000000 --- a/docs/reference/release-notes/6.0.0-alpha1.asciidoc +++ /dev/null @@ -1,312 +0,0 @@ -[[release-notes-6.0.0-alpha1]] -== 6.0.0-alpha1 Release Notes - -The changes listed below have been released for the first time in Elasticsearch 6.0.0-alpha1. Changes in this release which were first released in the 5.x series are listed in <>. - - -Also see <>. - -[[breaking-6.0.0-alpha1]] -[float] -=== Breaking changes - -Allocation:: -* Remove `cluster.routing.allocation.snapshot.relocation_enabled` setting {pull}20994[#20994] - -Analysis:: -* Removing query-string parameters in `_analyze` API {pull}20704[#20704] (issue: {issue}20246[#20246]) - -CAT API:: -* Write -1 on unbounded queue in cat thread pool {pull}21342[#21342] (issue: {issue}21187[#21187]) - -CRUD:: -* Disallow `VersionType.FORCE` for GetRequest {pull}21079[#21079] (issue: {issue}20995[#20995]) -* Disallow `VersionType.FORCE` versioning for 6.x indices {pull}20995[#20995] (issue: {issue}20377[#20377]) - -Cluster:: -* No longer allow cluster name in data path {pull}20433[#20433] (issue: {issue}20391[#20391]) - -Core:: -* Simplify file store {pull}24402[#24402] (issue: {issue}24390[#24390]) -* Make boolean conversion strict {pull}22200[#22200] -* Remove the `default` store type. {pull}21616[#21616] -* Remove store throttling. {pull}21573[#21573] - -Geo:: -* Remove deprecated geo search features {pull}22876[#22876] -* Reduce GeoDistance Insanity {pull}19846[#19846] - -Index APIs:: -* Open/Close index api to allow_no_indices by default {pull}24401[#24401] (issues: {issue}24031[#24031], {issue}24341[#24341]) -* Remove support for controversial `ignore_unavailable` and `allow_no_indices` from indices exists api {pull}20712[#20712] - -Index Templates:: -* Allows multiple patterns to be specified for index templates {pull}21009[#21009] (issue: {issue}20690[#20690]) - -Java API:: -* Enforce Content-Type requirement on the rest layer and remove deprecated methods {pull}23146[#23146] (issue: {issue}19388[#19388]) - -Mapping:: -* Enforce at most one type. 
{pull}24428[#24428] (issue: {issue}24317[#24317]) -* Disallow `include_in_all` for 6.0+ indices {pull}22970[#22970] (issue: {issue}22923[#22923]) -* Disable _all by default, disallow configuring _all on 6.0+ indices {pull}22144[#22144] (issues: {issue}19784[#19784], {issue}20925[#20925], {issue}21341[#21341]) -* Throw an exception on unrecognized "match_mapping_type" {pull}22090[#22090] (issue: {issue}17285[#17285]) - -Network:: -* Remove blocking TCP clients and servers {pull}22639[#22639] -* Remove `modules/transport_netty_3` in favor of `netty_4` {pull}21590[#21590] -* Remove LocalTransport in favor of MockTcpTransport {pull}20695[#20695] - -Packaging:: -* Remove customization of ES_USER and ES_GROUP {pull}23989[#23989] (issue: {issue}23848[#23848]) - -Percolator:: -* Remove deprecated percolate and mpercolate apis {pull}22331[#22331] - -Plugin Delete By Query:: -* Require explicit query in _delete_by_query API {pull}23632[#23632] (issue: {issue}23629[#23629]) - -Plugin Discovery EC2:: -* Ec2 Discovery: Cleanup deprecated settings {pull}24150[#24150] -* Discovery EC2: Remove region setting {pull}23991[#23991] (issue: {issue}22758[#22758]) -* AWS Plugins: Remove signer type setting {pull}23984[#23984] (issue: {issue}22599[#22599]) - -Plugin Lang JS:: -* Remove lang-python and lang-javascript {pull}20734[#20734] (issue: {issue}20698[#20698]) - -Plugin Mapper Attachment:: -* Remove mapper attachments plugin {pull}20416[#20416] (issue: {issue}18837[#18837]) - -Plugin Repository Azure:: -* Remove global `repositories.azure` settings {pull}23262[#23262] (issues: {issue}22800[#22800], {issue}22856[#22856]) -* Remove auto creation of container for azure repository {pull}22858[#22858] (issue: {issue}22857[#22857]) - -Plugin Repository S3:: -* S3 Repository: Cleanup deprecated settings {pull}24097[#24097] -* S3 Repository: Remove region setting {pull}22853[#22853] (issue: {issue}22758[#22758]) -* S3 Repository: Remove bucket auto create {pull}22846[#22846] (issue: {issue}22761[#22761]) -* S3 Repository: Remove env var and sysprop credentials support {pull}22842[#22842] - -Query DSL:: -* Remove deprecated `minimum_number_should_match` in BoolQueryBuilder {pull}22416[#22416] -* Remove support for empty queries {pull}22092[#22092] (issue: {issue}17624[#17624]) -* Remove deprecated query names: in, geo_bbox, mlt, fuzzy_match and match_fuzzy {pull}21852[#21852] -* The `terms` query should always map to a Lucene `TermsQuery`. 
{pull}21786[#21786] -* Be strict when parsing values searching for booleans {pull}21555[#21555] (issue: {issue}21545[#21545]) -* Remove collect payloads parameter {pull}20385[#20385] - -REST:: -* Remove ldjson support and document ndjson for bulk/msearch {pull}23049[#23049] (issue: {issue}23025[#23025]) -* Enable strict duplicate checks for all XContent types {pull}22225[#22225] (issues: {issue}19614[#19614], {issue}22073[#22073]) -* Enable strict duplicate checks for JSON content {pull}22073[#22073] (issue: {issue}19614[#19614]) -* Remove lenient stats parsing {pull}21417[#21417] (issues: {issue}20722[#20722], {issue}21410[#21410]) -* Remove allow unquoted JSON {pull}20388[#20388] (issues: {issue}17674[#17674], {issue}17801[#17801]) -* Remove FORCE version_type {pull}20377[#20377] (issue: {issue}19769[#19769]) - -Scripting:: -* Make dates be ReadableDateTimes in scripts {pull}22948[#22948] (issue: {issue}22875[#22875]) -* Remove groovy scripting language {pull}21607[#21607] - -Search:: -* ProfileResult and CollectorResult should print machine readable timing information {pull}22561[#22561] -* Remove indices query {pull}21837[#21837] (issue: {issue}17710[#17710]) -* Remove ignored type parameter in search_shards api {pull}21688[#21688] - -Sequence IDs:: -* Change certain replica failures not to fail the replica shard {pull}22874[#22874] (issue: {issue}10708[#10708]) - -Shadow Replicas:: -* Remove shadow replicas {pull}23906[#23906] (issue: {issue}22024[#22024]) - - - -[[breaking-java-6.0.0-alpha1]] -[float] -=== Breaking Java changes - -Java API:: -* Java api: ActionRequestBuilder#execute to return a PlainActionFuture {pull}24415[#24415] (issues: {issue}24412[#24412], {issue}9201[#9201]) - -Network:: -* Simplify TransportAddress {pull}20798[#20798] - - - -[[deprecation-6.0.0-alpha1]] -[float] -=== Deprecations - -Index Templates:: -* Restore deprecation warning for invalid match_mapping_type values {pull}22304[#22304] - -Internal:: -* Deprecate XContentType auto detection methods in XContentFactory {pull}22181[#22181] (issue: {issue}19388[#19388]) - - - -[[feature-6.0.0-alpha1]] -[float] -=== New features - -Core:: -* Enable index-time sorting {pull}24055[#24055] (issue: {issue}6720[#6720]) - - - -[[enhancement-6.0.0-alpha1]] -[float] -=== Enhancements - -Aggregations:: -* Agg builder accessibility fixes {pull}24323[#24323] -* Remove support for the include/pattern syntax. 
{pull}23141[#23141] (issue: {issue}22933[#22933]) -* Promote longs to doubles when a terms agg mixes decimal and non-decimal numbers {pull}22449[#22449] (issue: {issue}22232[#22232]) - -Analysis:: -* Match- and MultiMatchQueryBuilder should only allow setting analyzer on string values {pull}23684[#23684] (issue: {issue}21665[#21665]) - -Bulk:: -* Simplify bulk request execution {pull}20109[#20109] - -CRUD:: -* Added validation for upsert request {pull}24282[#24282] (issue: {issue}16671[#16671]) - -Cluster:: -* Separate publishing from applying cluster states {pull}24236[#24236] -* Adds cluster state size to /_cluster/state response {pull}23440[#23440] (issue: {issue}3415[#3415]) - -Core:: -* Remove connect SocketPermissions from core {pull}22797[#22797] -* Add repository-url module and move URLRepository {pull}22752[#22752] (issue: {issue}22116[#22116]) -* Remove accept SocketPermissions from core {pull}22622[#22622] (issue: {issue}22116[#22116]) -* Move IfConfig.logIfNecessary call into bootstrap {pull}22455[#22455] (issue: {issue}22116[#22116]) -* Remove artificial default processors limit {pull}20874[#20874] (issue: {issue}20828[#20828]) -* Simplify write failure handling {pull}19105[#19105] (issue: {issue}20109[#20109]) - -Engine:: -* Fill missing sequence IDs up to max sequence ID when recovering from store {pull}24238[#24238] (issue: {issue}10708[#10708]) -* Use sequence numbers to identify out of order delivery in replicas & recovery {pull}24060[#24060] (issue: {issue}10708[#10708]) -* Add replica ops with version conflict to translog {pull}22626[#22626] -* Clarify global checkpoint recovery {pull}21934[#21934] (issue: {issue}21254[#21254]) - -Internal:: -* Try to convince the JVM not to lose stacktraces {pull}24426[#24426] (issue: {issue}24376[#24376]) -* Make document write requests immutable {pull}23038[#23038] - -Java High Level REST Client:: -* Add info method to High Level Rest client {pull}23350[#23350] -* Add support for named xcontent parsers to high level REST client {pull}23328[#23328] -* Add BulkRequest support to High Level Rest client {pull}23312[#23312] -* Add UpdateRequest support to High Level Rest client {pull}23266[#23266] -* Add delete API to the High Level Rest Client {pull}23187[#23187] -* Add Index API to High Level Rest Client {pull}23040[#23040] -* Add get/exists method to RestHighLevelClient {pull}22706[#22706] -* Add fromxcontent methods to delete response {pull}22680[#22680] (issue: {issue}22229[#22229]) -* Add REST high level client gradle submodule and first simple method {pull}22371[#22371] - -Java REST Client:: -* Wrap rest httpclient with doPrivileged blocks {pull}22603[#22603] (issue: {issue}22116[#22116]) - -Mapping:: -* Date detection should not rely on a hardcoded set of characters. 
{pull}22171[#22171] (issue: {issue}1694[#1694]) - -Network:: -* Isolate SocketPermissions to Netty {pull}23057[#23057] -* Wrap netty accept/connect ops with doPrivileged {pull}22572[#22572] (issue: {issue}22116[#22116]) -* Replace Socket, ServerSocket, and HttpServer usages in tests with mocksocket versions {pull}22287[#22287] (issue: {issue}22116[#22116]) - -Plugin Discovery EC2:: -* Read ec2 discovery address from aws instance tags {pull}22743[#22743] (issue: {issue}22566[#22566]) - -Plugin Repository HDFS:: -* Add doPrivilege blocks for socket connect ops in repository-hdfs {pull}22793[#22793] (issue: {issue}22116[#22116]) - -Plugins:: -* Add doPrivilege blocks for socket connect operations in plugins {pull}22534[#22534] (issue: {issue}22116[#22116]) - -Recovery:: -* Peer Recovery: remove maxUnsafeAutoIdTimestamp hand off {pull}24243[#24243] (issue: {issue}24149[#24149]) -* Introduce sequence-number-based recovery {pull}22484[#22484] (issue: {issue}10708[#10708]) - -Search:: -* Add parsing from xContent to Suggest {pull}22903[#22903] -* Add parsing from xContent to ShardSearchFailure {pull}22699[#22699] - -Sequence IDs:: -* Block global checkpoint advances when recovering {pull}24404[#24404] (issue: {issue}10708[#10708]) -* Add primary term to doc write response {pull}24171[#24171] (issue: {issue}10708[#10708]) -* Preserve multiple translog generations {pull}24015[#24015] (issue: {issue}10708[#10708]) -* Introduce translog generation rolling {pull}23606[#23606] (issue: {issue}10708[#10708]) -* Replicate write failures {pull}23314[#23314] -* Introduce sequence-number-aware translog {pull}22822[#22822] (issue: {issue}10708[#10708]) -* Introduce translog no-op {pull}22291[#22291] (issue: {issue}10708[#10708]) -* Tighten sequence numbers recovery {pull}22212[#22212] (issue: {issue}10708[#10708]) -* Add BWC layer to seq no infra and enable BWC tests {pull}22185[#22185] (issue: {issue}21670[#21670]) -* Add internal _primary_term doc values field, fix _seq_no indexing {pull}21637[#21637] (issues: {issue}10708[#10708], {issue}21480[#21480]) -* Add global checkpoint to translog checkpoints {pull}21254[#21254] -* Sequence numbers commit data for Lucene uses Iterable interface {pull}20793[#20793] (issue: {issue}10708[#10708]) -* Simplify GlobalCheckpointService and properly hook it for cluster state updates {pull}20720[#20720] - -Stats:: -* Expose disk usage estimates in nodes stats {pull}22081[#22081] (issue: {issue}8686[#8686]) - -Store:: -* Remote support for lucene versions without checksums {pull}24021[#24021] - -Suggesters:: -* Remove deprecated _suggest endpoint {pull}22203[#22203] (issue: {issue}20305[#20305]) - -Task Manager:: -* Add descriptions to bulk tasks {pull}22059[#22059] (issue: {issue}21768[#21768]) - - - -[[bug-6.0.0-alpha1]] -[float] -=== Bug fixes - -Ingest:: -* Remove support for Visio and potm files {pull}22079[#22079] (issue: {issue}22077[#22077]) - -Inner Hits:: -* If size / offset are out of bounds just do a plain count {pull}20556[#20556] (issue: {issue}20501[#20501]) - -Internal:: -* Fix handling of document failure exception in InternalEngine {pull}22718[#22718] - -Plugin Ingest Attachment:: -* Add missing mime4j library {pull}22764[#22764] (issue: {issue}22077[#22077]) - -Plugin Repository S3:: -* Wrap getCredentials() in a doPrivileged() block {pull}23297[#23297] (issues: {issue}22534[#22534], {issue}23271[#23271]) - -Sequence IDs:: -* Avoid losing ops in file-based recovery {pull}22945[#22945] (issue: {issue}22484[#22484]) - -Snapshot/Restore:: -* Keep snapshot 
restore state and routing table in sync {pull}20836[#20836] (issue: {issue}19774[#19774]) - -Translog:: -* Fix Translog.Delete serialization for sequence numbers {pull}22543[#22543] - - - -[[regression-6.0.0-alpha1]] -[float] -=== Regressions - -Bulk:: -* Only re-parse operation if a mapping update was needed {pull}23832[#23832] (issue: {issue}23665[#23665]) - - - -[[upgrade-6.0.0-alpha1]] -[float] -=== Upgrades - -Core:: -* Upgrade to a Lucene 7 snapshot {pull}24089[#24089] (issues: {issue}23966[#23966], {issue}24086[#24086], {issue}24087[#24087], {issue}24088[#24088]) - -Plugin Ingest Attachment:: -* Update to Tika 1.14 {pull}21591[#21591] (issue: {issue}20390[#20390]) - diff --git a/docs/reference/release-notes/6.0.0-alpha2.asciidoc b/docs/reference/release-notes/6.0.0-alpha2.asciidoc deleted file mode 100644 index c89ddf9cb37bc..0000000000000 --- a/docs/reference/release-notes/6.0.0-alpha2.asciidoc +++ /dev/null @@ -1,180 +0,0 @@ -[[release-notes-6.0.0-alpha2]] -== 6.0.0-alpha2 Release Notes - -Also see <>. - -[[breaking-6.0.0-alpha2]] -[float] -=== Breaking changes - -CRUD:: -* Deleting a document from a non-existing index creates the indexIf the index does not exist, delete document will not auto create it {pull}24518[#24518] (issue: {issue}15425[#15425]) - -Plugin Analysis ICU:: -* Upgrade icu4j to latest version {pull}24821[#24821] - -Plugin Repository S3:: -* Remove deprecated S3 settings {pull}24445[#24445] - -Scripting:: -* Remove script access to term statistics {pull}19462[#19462] (issue: {issue}19359[#19359]) - - - -[[breaking-java-6.0.0-alpha2]] -[float] -=== Breaking Java changes - -Aggregations:: -* Make Terms.Bucket an interface rather than an abstract class {pull}24492[#24492] -* Compound order for histogram aggregations {pull}22343[#22343] (issues: {issue}14771[#14771], {issue}20003[#20003], {issue}23613[#23613]) - -Plugins:: -* Drop name from TokenizerFactory {pull}24869[#24869] - - - -[[deprecation-6.0.0-alpha2]] -[float] -=== Deprecations - -Settings:: -* Deprecate settings in .yml and .json {pull}24059[#24059] (issue: {issue}19391[#19391]) - - - -[[feature-6.0.0-alpha2]] -[float] -=== New features - -Aggregations:: -* SignificantText aggregation - like significant_terms, but for text {pull}24432[#24432] (issue: {issue}23674[#23674]) - -Internal:: -* Automatically adjust search threadpool queue_size {pull}23884[#23884] (issue: {issue}3890[#3890]) - -Mapping:: -* Add new ip_range field type {pull}24433[#24433] - -Plugin Analysis ICU:: -* Add ICUCollationFieldMapper {pull}24126[#24126] - - - -[[enhancement-6.0.0-alpha2]] -[float] -=== Enhancements - -Core:: -* Improve bootstrap checks error messages {pull}24548[#24548] - -Engine:: -* Move the IndexDeletionPolicy to be engine internal {pull}24930[#24930] (issue: {issue}10708[#10708]) - -Internal:: -* Add assertions enabled helper {pull}24834[#24834] - -Java High Level REST Client:: -* Add doc_count to ParsedMatrixStats {pull}24952[#24952] (issue: {issue}24776[#24776]) -* Add fromXContent method to ClearScrollResponse {pull}24909[#24909] -* ClearScrollRequest to implement ToXContentObject {pull}24907[#24907] -* SearchScrollRequest to implement ToXContentObject {pull}24906[#24906] (issue: {issue}3889[#3889]) -* Add aggs parsers for high level REST Client {pull}24824[#24824] (issues: {issue}23965[#23965], {issue}23973[#23973], {issue}23974[#23974], {issue}24085[#24085], {issue}24160[#24160], {issue}24162[#24162], {issue}24182[#24182], {issue}24183[#24183], {issue}24208[#24208], {issue}24213[#24213], {issue}24239[#24239], 
{issue}24284[#24284], {issue}24312[#24312], {issue}24330[#24330], {issue}24365[#24365], {issue}24371[#24371], {issue}24442[#24442], {issue}24521[#24521], {issue}24524[#24524], {issue}24564[#24564], {issue}24583[#24583], {issue}24589[#24589], {issue}24648[#24648], {issue}24667[#24667], {issue}24675[#24675], {issue}24682[#24682], {issue}24700[#24700], {issue}24706[#24706], {issue}24717[#24717], {issue}24720[#24720], {issue}24738[#24738], {issue}24746[#24746], {issue}24789[#24789], {issue}24791[#24791], {issue}24794[#24794], {issue}24796[#24796], {issue}24822[#24822]) - -Mapping:: -* Identify documents by their `_id`. {pull}24460[#24460] - -Packaging:: -* Set number of processes in systemd unit file {pull}24970[#24970] (issue: {issue}20874[#20874]) - -Plugin Lang Painless:: -* Make Painless Compiler Use an Instance Per Context {pull}24972[#24972] -* Make PainlessScript An Interface {pull}24966[#24966] - -Recovery:: -* Introduce primary context {pull}25031[#25031] (issue: {issue}10708[#10708]) - -Scripting:: -* Add StatefulFactoryType as optional intermediate factory in script contexts {pull}24974[#24974] (issue: {issue}20426[#20426]) -* Make contexts available to ScriptEngine construction {pull}24896[#24896] -* Make ScriptEngine.compile generic on the script context {pull}24873[#24873] -* Add instance and compiled classes to script contexts {pull}24868[#24868] - -Search:: -* Eliminate array access in tight loops when profiling is enabled. {pull}24959[#24959] -* Support Multiple Inner Hits on a Field Collapse Request {pull}24517[#24517] -* Expand cross cluster search indices for search requests to the concrete index or to it's aliases {pull}24502[#24502] - -Search Templates:: -* Add max concurrent searches to multi template search {pull}24255[#24255] (issues: {issue}20912[#20912], {issue}21907[#21907]) - -Sequence IDs:: -* Fill gaps on primary promotion {pull}24945[#24945] (issue: {issue}10708[#10708]) -* Introduce clean transition on primary promotion {pull}24925[#24925] (issue: {issue}10708[#10708]) -* Guarantee that translog generations are seqNo conflict free {pull}24825[#24825] (issues: {issue}10708[#10708], {issue}24779[#24779]) -* Inline global checkpoints {pull}24513[#24513] (issue: {issue}10708[#10708]) - -Snapshot/Restore:: -* Enhances get snapshots API to allow retrieving repository index only {pull}24477[#24477] (issue: {issue}24288[#24288]) - - - -[[bug-6.0.0-alpha2]] -[float] -=== Bug fixes - -Aggregations:: -* Terms aggregation should remap global ordinal buckets when a sub-aggregator is used to sort the terms {pull}24941[#24941] (issue: {issue}24788[#24788]) -* Correctly set doc_count when MovAvg "predicts" values on existing buckets {pull}24892[#24892] (issue: {issue}24327[#24327]) -* DateHistogram: Fix `extended_bounds` with `offset` {pull}23789[#23789] (issue: {issue}23776[#23776]) -* Fix ArrayIndexOutOfBoundsException when no ranges are specified in the query {pull}23241[#23241] (issue: {issue}22881[#22881]) - -Analysis:: -* PatternAnalyzer should lowercase wildcard queries when `lowercase` is true. 
{pull}24967[#24967] - -Cache:: -* fix bug of weight computation {pull}24856[#24856] - -Core:: -* Fix cache expire after access {pull}24546[#24546] - -Index APIs:: -* Validates updated settings on closed indices {pull}24487[#24487] (issue: {issue}23787[#23787]) - -Ingest:: -* Fix floating-point error when DateProcessor parses UNIX {pull}24947[#24947] -* add option for _ingest.timestamp to use new ZonedDateTime (5.x backport) {pull}24030[#24030] (issues: {issue}23168[#23168], {issue}23174[#23174]) - -Inner Hits:: -* Fix Source filtering in new field collapsing feature {pull}24068[#24068] (issue: {issue}24063[#24063]) - -Internal:: -* Ensure remote cluster is connected before fetching `_field_caps` {pull}24845[#24845] (issue: {issue}24763[#24763]) - -Network:: -* Fix error message if an incompatible node connects {pull}24884[#24884] - -Plugins:: -* Fix plugin installation permissions {pull}24527[#24527] (issue: {issue}24480[#24480]) - -Scroll:: -* Fix single shard scroll within a cluster with nodes in version `>= 5.3` and `<= 5.3` {pull}24512[#24512] - -Search:: -* Fix script field sort returning Double.MAX_VALUE for all documents {pull}24942[#24942] (issue: {issue}24940[#24940]) -* Compute the took time of the query after the expand phase of field collapsing {pull}24902[#24902] (issue: {issue}24900[#24900]) - -Sequence IDs:: -* Handle primary failure handling replica response {pull}24926[#24926] (issue: {issue}24935[#24935]) - -Snapshot/Restore:: -* Fix inefficient (worst case exponential) loading of snapshot repository {pull}24510[#24510] (issue: {issue}24509[#24509]) - -Stats:: -* Avoid double decrement on current query counter {pull}24922[#24922] (issues: {issue}22996[#22996], {issue}24872[#24872]) -* Adjust available and free bytes to be non-negative on huge FSes {pull}24911[#24911] (issues: {issue}23093[#23093], {issue}24453[#24453]) - -Suggesters:: -* Fix context suggester to read values from keyword type field {pull}24200[#24200] (issue: {issue}24129[#24129]) - - diff --git a/docs/reference/release-notes/7.0.0-alpha1.asciidoc b/docs/reference/release-notes/7.0.0-alpha1.asciidoc new file mode 100644 index 0000000000000..cd2ed51789cf7 --- /dev/null +++ b/docs/reference/release-notes/7.0.0-alpha1.asciidoc @@ -0,0 +1,10 @@ +[[release-notes-7.0.0-alpha1]] +== 7.0.0-alpha1 Release Notes + +The changes listed below have been released for the first time in Elasticsearch 7.0.0-alpha1. + +[[breaking-7.0.0-alpha1]] +[float] +=== Breaking changes + +No breaking changes have been made (yet) diff --git a/docs/reference/search.asciidoc b/docs/reference/search.asciidoc index fcf047bfb17bd..d69175fec3f7c 100644 --- a/docs/reference/search.asciidoc +++ b/docs/reference/search.asciidoc @@ -11,10 +11,11 @@ exception of the <> endpoints. [[search-routing]] == Routing -When executing a search, it will be broadcast to all the index/indices -shards (round robin between replicas). Which shards will be searched on -can be controlled by providing the `routing` parameter. For example, -when indexing tweets, the routing value can be the user name: +When executing a search, Elasticsearch will pick the "best" copy of the data +based on the <> formula. +Which shards will be searched on can also be controlled by providing the +`routing` parameter. For example, when indexing tweets, the routing value can be +the user name: [source,js] -------------------------------------------------- @@ -56,6 +57,37 @@ The routing parameter can be multi valued represented as a comma separated string. 
This will result in hitting the relevant shards where the routing values match to. +[float] +[[search-adaptive-replica]] +== Adaptive Replica Selection + +By default, Elasticsearch will use what is called adaptive replica selection. +This allows the coordinating node to send the request to the copy deemed "best" +based on a number of criteria: + +- Response time of past requests between the coordinating node and the node + containing the copy of the data +- Time past search requests took to execute on the node containing the data +- The queue size of the search threadpool on the node containing the data + +This can be turned off by changing the dynamic cluster setting +`cluster.routing.use_adaptive_replica_selection` from `true` to `false`: + +[source,js] +-------------------------------------------------- +PUT /_cluster/settings +{ + "transient": { + "cluster.routing.use_adaptive_replica_selection": false + } +} +-------------------------------------------------- +// CONSOLE + +If adaptive replica selection is turned off, searches are sent to the +index/indices shards in a round robin fashion between all copies of the data +(primaries and replicas). + [float] [[stats-groups]] == Stats Groups diff --git a/docs/reference/search/explain.asciidoc b/docs/reference/search/explain.asciidoc index 04ff66f6c37a8..7f7024f1e81b2 100644 --- a/docs/reference/search/explain.asciidoc +++ b/docs/reference/search/explain.asciidoc @@ -143,11 +143,11 @@ This will yield the same result as the previous request. `df`:: The default field to use when no field prefix is defined within - the query. Defaults to _all field. + the query. `analyzer`:: The analyzer name to be used when analyzing the query - string. Defaults to the analyzer of the _all field. + string. Defaults to the default search analyzer. `analyze_wildcard`:: Should wildcard and prefix queries be analyzed or diff --git a/docs/reference/search/profile.asciidoc b/docs/reference/search/profile.asciidoc index db72026aa1412..c864c643c8f6b 100644 --- a/docs/reference/search/profile.asciidoc +++ b/docs/reference/search/profile.asciidoc @@ -1,7 +1,7 @@ [[search-profile]] == Profile API -WARNING: The Profile API is a debugging tool and adds signficant overhead to search execution. +WARNING: The Profile API is a debugging tool and adds significant overhead to search execution. The Profile API provides detailed timing information about the execution of individual components in a search request. 
It gives the user insight into how search requests are executed at a low level so that @@ -17,11 +17,11 @@ Any `_search` request can be profiled by adding a top-level `profile` parameter: [source,js] -------------------------------------------------- -GET /_search +GET /twitter/_search { "profile": true,<1> "query" : { - "match" : { "message" : "message number" } + "match" : { "message" : "some number" } } } -------------------------------------------------- @@ -58,7 +58,7 @@ This will yield the following result: "query": [ { "type": "BooleanQuery", - "description": "message:message message:number", + "description": "message:some message:number", "time_in_nanos": "1873811", "breakdown": { "score": 51306, @@ -77,7 +77,7 @@ This will yield the following result: "children": [ { "type": "TermQuery", - "description": "message:message", + "description": "message:some", "time_in_nanos": "391943", "breakdown": { "score": 28776, @@ -230,13 +230,13 @@ The overall structure of this query tree will resemble your original Elasticsear "query": [ { "type": "BooleanQuery", - "description": "message:message message:number", + "description": "message:some message:number", "time_in_nanos": "1873811", "breakdown": {...}, <1> "children": [ { "type": "TermQuery", - "description": "message:message", + "description": "message:some", "time_in_nanos": "391943", "breakdown": {...} }, @@ -291,7 +291,7 @@ The `breakdown` component lists detailed timing statistics about low-level Lucen "advance_count": 0 } -------------------------------------------------- -// TESTRESPONSE[s/^/{\n"took": $body.took,\n"timed_out": $body.timed_out,\n"_shards": $body._shards,\n"hits": $body.hits,\n"profile": {\n"shards": [ {\n"id": "$body.$_path",\n"searches": [{\n"query": [{\n"type": "BooleanQuery",\n"description": "message:message message:number",\n"time_in_nanos": $body.$_path,/] +// TESTRESPONSE[s/^/{\n"took": $body.took,\n"timed_out": $body.timed_out,\n"_shards": $body._shards,\n"hits": $body.hits,\n"profile": {\n"shards": [ {\n"id": "$body.$_path",\n"searches": [{\n"query": [{\n"type": "BooleanQuery",\n"description": "message:some message:number",\n"time_in_nanos": $body.$_path,/] // TESTRESPONSE[s/}$/},\n"children": $body.$_path}],\n"rewrite_time": $body.$_path, "collector": $body.$_path}], "aggregations": []}]}}/] // TESTRESPONSE[s/(?<=[" ])\d+(\.\d+)?/$body.$_path/] @@ -469,58 +469,48 @@ value is cumulative and contains the total time for all queries being rewritten. 
==== A more complex example + To demonstrate a slightly more complex query and the associated results, we can profile the following query: [source,js] -------------------------------------------------- -GET /test/_search +GET /twitter/_search { "profile": true, "query": { "term": { - "message": { - "value": "search" + "user": { + "value": "test" } } }, "aggs": { - "non_global_term": { + "my_scoped_agg": { "terms": { - "field": "agg" - }, - "aggs": { - "second_term": { - "terms": { - "field": "sub_agg" - } - } + "field": "likes" } }, - "another_agg": { - "cardinality": { - "field": "aggB" - } - }, - "global_agg": { + "my_global_agg": { "global": {}, "aggs": { - "my_agg2": { + "my_level_agg": { "terms": { - "field": "globalAgg" + "field": "likes" } } } } }, "post_filter": { - "term": { - "my_field": "foo" + "match": { + "message": "some" } } } -------------------------------------------------- // CONSOLE -// TEST[s/^/PUT test\n/] +// TEST[s/_search/_search\?filter_path=profile.shards.id,profile.shards.searches,profile.shards.aggregations/] +// TEST[continued] This example has: @@ -531,10 +521,10 @@ This example has: And the response: - [source,js] -------------------------------------------------- { + ... "profile": { "shards": [ { @@ -544,38 +534,38 @@ And the response: "query": [ { "type": "TermQuery", - "description": "my_field:foo", + "description": "message:some", "time_in_nanos": "409456", "breakdown": { "score": 0, - "score_count": 1, - "next_doc": 0, - "next_doc_count": 2, - "match": 0, + "build_scorer_count": 1, "match_count": 0, "create_weight": 31584, + "next_doc": 0, + "match": 0, "create_weight_count": 1, + "next_doc_count": 2, + "score_count": 1, "build_scorer": 377872, - "build_scorer_count": 1, "advance": 0, "advance_count": 0 } }, { "type": "TermQuery", - "description": "message:search", + "description": "user:test", "time_in_nanos": "303702", "breakdown": { "score": 0, - "score_count": 1, - "next_doc": 5936, - "next_doc_count": 2, - "match": 0, + "build_scorer_count": 1, "match_count": 0, "create_weight": 185215, + "next_doc": 5936, + "match": 0, "create_weight_count": 1, + "next_doc_count": 2, + "score_count": 1, "build_scorer": 112551, - "build_scorer_count": 1, "advance": 0, "advance_count": 0 } @@ -584,80 +574,60 @@ And the response: "rewrite_time": 7208, "collector": [ { - "name": "MultiCollector", - "reason": "search_multi", - "time_in_nanos": "1378943", - "children": [ - { - "name": "FilteredCollector", - "reason": "search_post_filter", - "time_in_nanos": "403659", - "children": [ + "name": "CancellableCollector", + "reason": "search_cancelled", + "time_in_nanos": 2390, + "children": [ + { + "name": "MultiCollector", + "reason": "search_multi", + "time_in_nanos": 1820, + "children": [ + { + "name": "FilteredCollector", + "reason": "search_post_filter", + "time_in_nanos": 7735, + "children": [ { - "name": "SimpleTopScoreDocCollector", - "reason": "search_top_hits", - "time_in_nanos": "6391" + "name": "SimpleTopScoreDocCollector", + "reason": "search_top_hits", + "time_in_nanos": 1328 } - ] - }, - { - "name": "BucketCollector: [[non_global_term, another_agg]]", - "reason": "aggregation", - "time_in_nanos": "954602" - } - ] - } - ] - }, - { - "query": [ - { - "type": "MatchAllDocsQuery", - "description": "*:*", - "time_in_nanos": "48293", - "breakdown": { - "score": 0, - "score_count": 1, - "next_doc": 3672, - "next_doc_count": 2, - "match": 0, - "match_count": 0, - "create_weight": 6311, - "create_weight_count": 1, - "build_scorer": 38310, - "build_scorer_count": 1, - 
"advance": 0, - "advance_count": 0 - } - } - ], - "rewrite_time": 1067, - "collector": [ - { - "name": "GlobalAggregator: [global_agg]", - "reason": "aggregation_global", - "time_in_nanos": "122631" + ] + }, + { + "name": "BucketCollector: [[my_scoped_agg, my_global_agg]]", + "reason": "aggregation", + "time_in_nanos": 8273 + } + ] + } + ] } ] } - ] + ], + "aggregations": [...] <1> } ] } } -------------------------------------------------- +// TESTRESPONSE[s/"aggregations": \[\.\.\.\]/"aggregations": $body.$_path/] +// TESTRESPONSE[s/\.\.\.//] +// TESTRESPONSE[s/(?<=[" ])\d+(\.\d+)?/$body.$_path/] +// TESTRESPONSE[s/"id": "\[P6-vulHtQRWuD4YnubWb7A\]\[test\]\[0\]"/"id": $body.profile.shards.0.id/] +<1> The ``"aggregations"` portion has been omitted because it will be covered in the next section As you can see, the output is significantly verbose from before. All the major portions of the query are represented: -1. The first `TermQuery` (message:search) represents the main `term` query -2. The second `TermQuery` (my_field:foo) represents the `post_filter` query -3. There is a `MatchAllDocsQuery` (\*:*) query which is being executed as a second, distinct search. This was -not part of the query specified by the user, but is auto-generated by the global aggregation to provide a global query scope +1. The first `TermQuery` (user:test) represents the main `term` query +2. The second `TermQuery` (message:some) represents the `post_filter` query -The Collector tree is fairly straightforward, showing how a single MultiCollector wraps a FilteredCollector -to execute the post_filter (and in turn wraps the normal scoring SimpleCollector), a BucketCollector to run -all scoped aggregations. In the MatchAll search, there is a single GlobalAggregator to run the global aggregation. +The Collector tree is fairly straightforward, showing how a single CancellableCollector wraps a MultiCollector + which also wraps a FilteredCollector to execute the post_filter (and in turn wraps the normal scoring SimpleCollector), + a BucketCollector to run all scoped aggregations. ==== Understanding MultiTermQuery output @@ -674,7 +644,7 @@ Due to this dynamic, per-segment rewriting, the clean tree structure becomes dis "lineage" showing how one query rewrites into the next. At present time, all we can do is apologize, and suggest you collapse the details for that query's children if it is too confusing. Luckily, all the timing statistics are correct, just not the physical layout in the response, so it is sufficient to just analyze the top-level MultiTermQuery and -ignore it's children if you find the details too tricky to interpret. +ignore its children if you find the details too tricky to interpret. Hopefully this will be fixed in future iterations, but it is a tricky problem to solve and still in-progress :) @@ -682,84 +652,123 @@ Hopefully this will be fixed in future iterations, but it is a tricky problem to ==== `aggregations` Section + The `aggregations` section contains detailed timing of the aggregation tree executed by a particular shard. -The overall structure of this aggregation tree will resemble your original Elasticsearch request. Let's consider -the following example aggregations request: +The overall structure of this aggregation tree will resemble your original Elasticsearch request. 
Let's +execute the previous query again and look at the aggregation profile this time: [source,js] -------------------------------------------------- -GET /house-prices/_search +GET /twitter/_search { "profile": true, - "size": 0, + "query": { + "term": { + "user": { + "value": "test" + } + } + }, "aggs": { - "property_type": { + "my_scoped_agg": { "terms": { - "field": "propertyType" - }, + "field": "likes" + } + }, + "my_global_agg": { + "global": {}, "aggs": { - "avg_price": { - "avg": { - "field": "price" + "my_level_agg": { + "terms": { + "field": "likes" } } } } + }, + "post_filter": { + "match": { + "message": "some" + } } } -------------------------------------------------- // CONSOLE -// TEST[s/^/PUT house-prices\n/] +// TEST[s/_search/_search\?filter_path=profile.shards.aggregations/] +// TEST[continued] Which yields the following aggregation profile output [source,js] -------------------------------------------------- -"aggregations": [ - { - "type": "org.elasticsearch.search.aggregations.bucket.terms.GlobalOrdinalsStringTermsAggregator", - "description": "property_type", - "time_in_nanos": "4280456978", - "breakdown": { - "reduce": 0, - "reduce_count": 0, - "build_aggregation": 49765, - "build_aggregation_count": 300, - "initialise": 52785, - "initialize_count": 300, - "collect": 3155490036, - "collect_count": 1800 - }, - "children": [ +{ + "profile" : { + "shards" : [ { - "type": "org.elasticsearch.search.aggregations.metrics.avg.AvgAggregator", - "description": "avg_price", - "time_in_nanos": "1124864392", - "breakdown": { - "reduce": 0, - "reduce_count": 0, - "build_aggregation": 1394, - "build_aggregation_count": 150, - "initialise": 2883, - "initialize_count": 150, - "collect": 1124860115, - "collect_count": 900 - } + "aggregations" : [ + { + "type" : "LongTermsAggregator", + "description" : "my_scoped_agg", + "time_in_nanos" : 195386, + "breakdown" : { + "reduce" : 0, + "build_aggregation" : 81171, + "build_aggregation_count" : 1, + "initialize" : 22753, + "initialize_count" : 1, + "reduce_count" : 0, + "collect" : 91456, + "collect_count" : 4 + } + }, + { + "type" : "GlobalAggregator", + "description" : "my_global_agg", + "time_in_nanos" : 190430, + "breakdown" : { + "reduce" : 0, + "build_aggregation" : 59990, + "build_aggregation_count" : 1, + "initialize" : 29619, + "initialize_count" : 1, + "reduce_count" : 0, + "collect" : 100815, + "collect_count" : 4 + }, + "children" : [ + { + "type" : "LongTermsAggregator", + "description" : "my_level_agg", + "time_in_nanos" : 160329, + "breakdown" : { + "reduce" : 0, + "build_aggregation" : 55712, + "build_aggregation_count" : 1, + "initialize" : 10559, + "initialize_count" : 1, + "reduce_count" : 0, + "collect" : 94052, + "collect_count" : 4 + } + } + ] + } + ] } ] } -] +} -------------------------------------------------- +// TESTRESPONSE[s/\.\.\.//] +// TESTRESPONSE[s/(?<=[" ])\d+(\.\d+)?/$body.$_path/] +// TESTRESPONSE[s/"id": "\[P6-vulHtQRWuD4YnubWb7A\]\[test\]\[0\]"/"id": $body.profile.shards.0.id/] -From the profile structure we can see our `property_type` terms aggregation which is internally represented by the -`GlobalOrdinalsStringTermsAggregator` class and the sub aggregator `avg_price` which is internally represented by the `AvgAggregator` class. The `type` field displays the class used internally to represent the aggregation. The `description` field displays the name of the aggregation. - -The `time_in_nanos` field shows that it took ~4 seconds for the entire aggregation to execute. 
The recorded time is inclusive -of all children. +From the profile structure we can see that the `my_scoped_agg` is internally being run as a `LongTermsAggregator` (because the field it is +aggregating, `likes`, is a numeric field). At the same level, we see a `GlobalAggregator` which comes from `my_global_agg`. That +aggregation then has a child `LongTermsAggregator` which from the second terms aggregation on `likes`. -The `breakdown` field will give detailed stats about how the time was spent, we'll look at -that in a moment. Finally, the `children` array lists any sub-aggregations that may be present. Because we have an `avg_price` aggregation as a sub-aggregation to the `property_type` aggregation we see it listed as a child of the `property_type` aggregation. the two aggregation outputs have identical information (type, time, -breakdown, etc). Children are allowed to have their own children. +The `time_in_nanos` field shows the time executed by each aggregation, and is inclusive of all children. While the overall time is useful, +the `breakdown` field will give detailed stats about how the time was spent. ===== Timing Breakdown @@ -772,12 +781,14 @@ The `breakdown` component lists detailed timing statistics about low-level Lucen "reduce_count": 0, "build_aggregation": 49765, "build_aggregation_count": 300, - "initialise": 52785, + "initialize": 52785, "initialize_count": 300, + "reduce_count": 0, "collect": 3155490036, "collect_count": 1800 } -------------------------------------------------- +// NOTCONSOLE Timings are listed in wall-clock nanoseconds and are not normalized at all. All caveats about the overall `time` apply here. The intention of the breakdown is to give you a feel for A) what machinery in Elasticsearch is diff --git a/docs/reference/search/request/highlighting.asciidoc b/docs/reference/search/request/highlighting.asciidoc index dd7af4910529d..066df3e6fa053 100644 --- a/docs/reference/search/request/highlighting.asciidoc +++ b/docs/reference/search/request/highlighting.asciidoc @@ -11,9 +11,6 @@ Highlighting requires the actual content of a field. If the field is not stored (the mapping does not set `store` to `true`), the actual `_source` is loaded and the relevant field is extracted from `_source`. -NOTE: The `_all` field cannot be extracted from `_source`, so it can only -be used for highlighting if it is explicitly stored. 
- For example, to get highlights for the `content` field in each search hit using the default highlighter, include a `highlight` object in the request body that specifies the `content` field: @@ -267,7 +264,7 @@ GET /_search "number_of_fragments" : 3, "fragment_size" : 150, "fields" : { - "_all" : { "pre_tags" : [""], "post_tags" : [""] }, + "body" : { "pre_tags" : [""], "post_tags" : [""] }, "blog.title" : { "number_of_fragments" : 0 }, "blog.author" : { "number_of_fragments" : 0 }, "blog.comment" : { "number_of_fragments" : 5, "order" : "score" } @@ -392,7 +389,7 @@ GET /_search "pre_tags" : [""], "post_tags" : [""], "fields" : { - "_all" : {} + "body" : {} } } } @@ -414,7 +411,7 @@ GET /_search "pre_tags" : ["", ""], "post_tags" : ["", ""], "fields" : { - "_all" : {} + "body" : {} } } } @@ -484,7 +481,7 @@ GET /_search "highlight" : { "require_field_match": false, "fields": { - "_all" : { "pre_tags" : [""], "post_tags" : [""] } + "body" : { "pre_tags" : [""], "post_tags" : [""] } } } } @@ -719,7 +716,7 @@ GET /_search }, "highlight" : { "fields" : { - "_all" : {}, + "body" : {}, "blog.title" : {"number_of_fragments" : 0} } } @@ -912,7 +909,7 @@ Response: }, "highlight": { "message": [ - "some message with the number 1" + " with the number 1" ] } } diff --git a/docs/reference/search/request/inner-hits.asciidoc b/docs/reference/search/request/inner-hits.asciidoc index 952cdedd423c7..a9da737880912 100644 --- a/docs/reference/search/request/inner-hits.asciidoc +++ b/docs/reference/search/request/inner-hits.asciidoc @@ -158,10 +158,8 @@ An example of a response snippet that could be generated from the above search r }, "_score": 1.0, "_source": { - "comments" : { - "author": "nik9000", - "number": 2 - } + "author": "nik9000", + "number": 2 } } ] @@ -406,12 +404,8 @@ Which would look like: }, "_score": 0.6931472, "_source": { - "comments": { - "votes": { - "value": 1, - "voter": "kimchy" - } - } + "value": 1, + "voter": "kimchy" } } ] diff --git a/docs/reference/search/request/preference.asciidoc b/docs/reference/search/request/preference.asciidoc index d0f60d700a82c..dbd9055ff8c86 100644 --- a/docs/reference/search/request/preference.asciidoc +++ b/docs/reference/search/request/preference.asciidoc @@ -7,21 +7,6 @@ search. By default, the operation is randomized among the available shard copies The `preference` is a query string parameter which can be set to: [horizontal] -`_primary`:: - The operation will go and be executed only on the primary - shards. - -`_primary_first`:: - The operation will go and be executed on the primary - shard, and if not available (failover), will execute on other shards. - -`_replica`:: - The operation will go and be executed only on a replica shard. - -`_replica_first`:: - The operation will go and be executed only on a replica shard, and if - not available (failover), will execute on other shards. - `_local`:: The operation will prefer to be executed on a local allocated shard if possible. @@ -33,7 +18,7 @@ The `preference` is a query string parameter which can be set to: `_shards:2,3`:: Restricts the operation to the specified shards. (`2` and `3` in this case). 
This preference can be combined with other - preferences but it has to appear first: `_shards:2,3|_primary` + preferences but it has to appear first: `_shards:2,3|_local` `_only_nodes`:: Restricts the operation to nodes specified in <> diff --git a/docs/reference/search/request/sort.asciidoc b/docs/reference/search/request/sort.asciidoc index 89ad7ea2c479e..08c6f7fd90345 100644 --- a/docs/reference/search/request/sort.asciidoc +++ b/docs/reference/search/request/sort.asciidoc @@ -115,24 +115,35 @@ POST /_search Elasticsearch also supports sorting by fields that are inside one or more nested objects. The sorting by nested -field support has the following parameters on top of the already -existing sort options: +field support has a `nested` sort option with the following properties: -`nested_path`:: +`path`:: Defines on which nested object to sort. The actual sort field must be a direct field inside this nested object. When sorting by nested field, this field is mandatory. -`nested_filter`:: +`filter`:: A filter that the inner objects inside the nested path should match with in order for its field values to be taken into account by sorting. Common case is to repeat the query / filter inside the nested filter or query. By default no `nested_filter` is active. +`nested`:: + Same as top-level `nested` but applies to another nested path within the + current nested object. -===== Nested sorting example +[WARNING] +.Nested sort options before Elasticseach 6.1 +============================================ + +The `nested_path` and `nested_filter` options have been deprecated in +favor of the options documented above. + +============================================ + +===== Nested sorting examples In the below example `offer` is a field of type `nested`. -The `nested_path` needs to be specified; otherwise, elasticsearch doesn't know on what nested level sort values need to be captured. +The nested `path` needs to be specified; otherwise, elasticsearch doesn't know on what nested level sort values need to be captured. [source,js] -------------------------------------------------- @@ -146,9 +157,11 @@ POST /_search "offer.price" : { "mode" : "avg", "order" : "asc", - "nested_path" : "offer", - "nested_filter" : { - "term" : { "offer.color" : "blue" } + "nested": { + "path": "offer", + "filter": { + "term" : { "offer.color" : "blue" } + } } } } @@ -157,6 +170,53 @@ POST /_search -------------------------------------------------- // CONSOLE +In the below example `parent` and `child` fields are of type `nested`. +The `nested_path` needs to be specified at each level; otherwise, elasticsearch doesn't know on what nested level sort values need to be captured. + +[source,js] +-------------------------------------------------- +POST /_search +{ + "query": { + "nested": { + "path": "parent", + "query": { + "bool": { + "must": {"range": {"parent.age": {"gte": 21}}}, + "filter": { + "nested": { + "path": "parent.child", + "query": {"match": {"parent.child.name": "matt"}} + } + } + } + } + } + }, + "sort" : [ + { + "parent.child.age" : { + "mode" : "min", + "order" : "asc", + "nested": { + "path": "parent", + "filter": { + "range": {"parent.age": {"gte": 21}} + }, + "nested": { + "path": "parent.child", + "filter": { + "match": {"parent.child.name": "matt"} + } + } + } + } + } + ] +} +-------------------------------------------------- +// CONSOLE + Nested sorting is also supported when sorting by scripts and sorting by geo distance. 
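For example, the new `nested` option can also be combined with a geo distance sort. The sketch below reuses the `offer` nested field from the example above and assumes a hypothetical `offer.location` geo-point sub-field; the coordinates and filter values are illustrative only:

[source,js]
--------------------------------------------------
POST /_search
{
  "sort" : [
    {
      "_geo_distance" : {
        "offer.location" : { "lat" : 40.715, "lon" : -74.011 },
        "order" : "asc",
        "unit" : "km",
        "nested" : {
          "path" : "offer",
          "filter" : {
            "term" : { "offer.color" : "blue" }
          }
        }
      }
    }
  ]
}
--------------------------------------------------
// CONSOLE

As with the field sort above, only the `offer` objects matching the nested filter contribute distances to the sort value.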
diff --git a/docs/reference/search/suggesters/completion-suggest.asciidoc b/docs/reference/search/suggesters/completion-suggest.asciidoc index 1a42dedf47c74..5d8f5fa1cc5c2 100644 --- a/docs/reference/search/suggesters/completion-suggest.asciidoc +++ b/docs/reference/search/suggesters/completion-suggest.asciidoc @@ -156,9 +156,9 @@ POST music/_search?pretty { "suggest": { "song-suggest" : { - "prefix" : "nir", - "completion" : { - "field" : "suggest" + "prefix" : "nir", <1> + "completion" : { <2> + "field" : "suggest" <3> } } } @@ -167,6 +167,10 @@ POST music/_search?pretty // CONSOLE // TEST[continued] +<1> Prefix used to search for suggestions +<2> Type of suggestions +<3> Name of the field to search for suggestions in + returns this response: [source,js] @@ -218,14 +222,15 @@ filtering but using suggest on the `_search` endpoint does: [source,js] -------------------------------------------------- -POST music/_search?size=0 +POST music/_search { - "_source": "suggest", + "_source": "suggest", <1> "suggest": { "song-suggest" : { "prefix" : "nir", "completion" : { - "field" : "suggest" + "field" : "suggest", <2> + "size" : 5 <3> } } } @@ -234,6 +239,10 @@ POST music/_search?size=0 // CONSOLE // TEST[continued] +<1> Filter the source to return only the `suggest` field +<2> Name of the field to search for suggestions in +<3> Number of suggestions to return + Which should look like: [source,js] @@ -277,6 +286,7 @@ The basic completion suggester query supports the following parameters: `field`:: The name of the field on which to run the query (required). `size`:: The number of suggestions to return (defaults to `5`). +`skip_duplicates`:: Whether duplicate suggestions should be filtered out (defaults to `false`). NOTE: The completion suggester considers all documents in the index. See <> for an explanation of how to query a subset of @@ -291,6 +301,33 @@ index completions into a single shard index. In case of high heap usage due to shard size, it is still recommended to break index into multiple shards instead of optimizing for completion performance. +[[skip_duplicates]] +==== Skip duplicate suggestions + +Queries can return duplicate suggestions coming from different documents. +It is possible to modify this behavior by setting `skip_duplicates` to true. +When set, this option filters out documents with duplicate suggestions from the result. + +[source,js] +-------------------------------------------------- +POST music/_search?pretty +{ + "suggest": { + "song-suggest" : { + "prefix" : "nor", + "completion" : { + "field" : "suggest", + "skip_duplicates": true + } + } + } +} +-------------------------------------------------- +// CONSOLE + +WARNING: when set to true this option can slow down search because more suggestions +need to be visited to find the top N. 
+ [[fuzzy]] ==== Fuzzy queries diff --git a/docs/reference/search/suggesters/phrase-suggest.asciidoc b/docs/reference/search/suggesters/phrase-suggest.asciidoc index 92138e7ecdfe0..cba299e97cb8d 100644 --- a/docs/reference/search/suggesters/phrase-suggest.asciidoc +++ b/docs/reference/search/suggesters/phrase-suggest.asciidoc @@ -126,7 +126,7 @@ The response contains suggestions scored by the most likely spell correction fir "options" : [ { "text" : "nobel prize", "highlighted": "nobel prize", - "score" : 0.5962314 + "score" : 0.48614594 }] } ] diff --git a/docs/reference/setup/bootstrap-checks.asciidoc b/docs/reference/setup/bootstrap-checks.asciidoc index 93a6fafde6815..3fd5b6053fa2f 100644 --- a/docs/reference/setup/bootstrap-checks.asciidoc +++ b/docs/reference/setup/bootstrap-checks.asciidoc @@ -23,39 +23,43 @@ documented individually. [float] === Development vs. production mode -By default, Elasticsearch binds to `localhost` for <> -and <> communication. This is -fine for downloading and playing with Elasticsearch, and everyday -development but it's useless for production systems. To form a cluster, -Elasticsearch instances must be reachable via transport communication so -they must bind transport to an external interface. Thus, we consider an -Elasticsearch instance to be in development mode if it does not bind -transport to an external interface (the default), and is otherwise in -production mode if it does bind transport to an external interface. - -Note that HTTP can be configured independently of transport via -<> and <>; -this can be useful for configuring a single instance to be reachable via -HTTP for testing purposes without triggering production mode. - -We recognize that some users need to bind transport to an external -interface for testing their usage of the transport client. For this -situation, we provide the discovery type `single-node` (configure it by -setting `discovery.type` to `single-node`); in this situation, a node -will elect itself master and will not form a cluster with any other -node. - -If you are running a single node in production, it is possible to evade -the bootstrap checks (either by not binding transport to an external -interface, or by binding transport to an external interface and setting -the discovery type to `single-node`). For this situation, you can force -execution of the bootstrap checks by setting the system property -`es.enforce.bootstrap.checks` to `true` (set this in <>, or -by adding `-Des.enforce.bootstrap.checks=true` to the environment -variable `ES_JAVA_OPTS`). We strongly encourage you to do this if you -are in this specific situation. This system property can be used to -force execution of the bootstrap checks independent of the node -configuration. +By default, Elasticsearch binds to `localhost` for <> and +<> communication. This is fine for +downloading and playing with Elasticsearch, and everyday development but it's +useless for production systems. To join a cluster, an Elasticsearch node must be +reachable via transport communication. To join a cluster over an external +network interface, a node must bind transport to an external interface and not +be using <>. Thus, we consider an +Elasticsearch node to be in development mode if it can not form a cluster with +another machine over an external network interface, and is otherwise in +production mode if it can join a cluster over an external interface. 
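As a concrete illustration, a node whose `elasticsearch.yml` binds transport to a non-loopback address (and does not use single-node discovery) starts in production mode and runs the bootstrap checks; the address below is only a placeholder:

[source,yaml]
--------------------------------------------------
# elasticsearch.yml
# Binding transport to an external interface (directly, or via network.host,
# which covers both HTTP and transport) triggers the bootstrap checks on startup.
transport.host: 192.168.56.10
--------------------------------------------------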
+ +Note that HTTP and transport can be configured independently via +<> and <>; this +can be useful for configuring a single node to be reachable via HTTP for testing +purposes without triggering production mode. + +[[single-node-discovery]] +[float] +=== Single-node discovery +We recognize that some users need to bind transport to an external interface for +testing their usage of the transport client. For this situation, we provide the +discovery type `single-node` (configure it by setting `discovery.type` to +`single-node`); in this situation, a node will elect itself master and will not +join a cluster with any other node. + + +[float] +=== Forcing the bootstrap checks +If you are running a single node in production, it is possible to evade the +bootstrap checks (either by not binding transport to an external interface, or +by binding transport to an external interface and setting the discovery type to +`single-node`). For this situation, you can force execution of the bootstrap +checks by setting the system property `es.enforce.bootstrap.checks` to `true` +(set this in <>, or by adding `-Des.enforce.bootstrap.checks=true` +to the environment variable `ES_JAVA_OPTS`). We strongly encourage you to do +this if you are in this specific situation. This system property can be used to +force execution of the bootstrap checks independent of the node configuration. === Heap size check diff --git a/docs/reference/setup/cluster_restart.asciidoc b/docs/reference/setup/cluster_restart.asciidoc deleted file mode 100644 index 1504831955492..0000000000000 --- a/docs/reference/setup/cluster_restart.asciidoc +++ /dev/null @@ -1,147 +0,0 @@ -[[restart-upgrade]] -=== Full cluster restart upgrade - -Elasticsearch requires a full cluster restart when upgrading across major -versions. Rolling upgrades are not supported across major versions. Consult -this <> to verify that a full cluster restart is -required. - -The process to perform an upgrade with a full cluster restart is as follows: - -. *Disable shard allocation* -+ --- - -When you shut down a node, the allocation process will immediately try to -replicate the shards that were on that node to other nodes in the cluster, -causing a lot of wasted I/O. This can be avoided by disabling allocation -before shutting down a node: - -[source,js] --------------------------------------------------- -PUT _cluster/settings -{ - "persistent": { - "cluster.routing.allocation.enable": "none" - } -} --------------------------------------------------- -// CONSOLE -// TEST[skip:indexes don't assign] --- - -. *Perform a synced flush* -+ --- - -Shard recovery will be much faster if you stop indexing and issue a -<> request: - -[source,sh] --------------------------------------------------- -POST _flush/synced --------------------------------------------------- -// CONSOLE - -A synced flush request is a ``best effort'' operation. It will fail if there -are any pending indexing operations, but it is safe to reissue the request -multiple times if necessary. --- - -. *Shutdown and upgrade all nodes* -+ --- - -Stop all Elasticsearch services on all nodes in the cluster. Each node can be -upgraded following the same procedure described in <>. --- - -. *Upgrade any plugins* -+ --- - -Elasticsearch plugins must be upgraded when upgrading a node. Use the -`elasticsearch-plugin` script to install the correct version of any plugins -that you need. --- - -. 
*Start the cluster* -+ --- - -If you have dedicated master nodes -- nodes with `node.master` set to -`true`(the default) and `node.data` set to `false` -- then it is a good idea -to start them first. Wait for them to form a cluster and to elect a master -before proceeding with the data nodes. You can check progress by looking at the -logs. - -As soon as the <> -have discovered each other, they will form a cluster and elect a master. From -that point on, the <> and <> -APIs can be used to monitor nodes joining the cluster: - -[source,sh] --------------------------------------------------- -GET _cat/health - -GET _cat/nodes --------------------------------------------------- -// CONSOLE - -Use these APIs to check that all nodes have successfully joined the cluster. --- - -. *Wait for yellow* -+ --- - -As soon as each node has joined the cluster, it will start to recover any -primary shards that are stored locally. Initially, the -<> request will report a `status` of `red`, meaning -that not all primary shards have been allocated. - -Once each node has recovered its local shards, the `status` will become -`yellow`, meaning all primary shards have been recovered, but not all replica -shards are allocated. This is to be expected because allocation is still -disabled. --- - -. *Reenable allocation* -+ --- - -Delaying the allocation of replicas until all nodes have joined the cluster -allows the master to allocate replicas to nodes which already have local shard -copies. At this point, with all the nodes in the cluster, it is safe to -reenable shard allocation: - -[source,js] ------------------------------------------------------- -PUT _cluster/settings -{ - "persistent": { - "cluster.routing.allocation.enable": "all" - } -} ------------------------------------------------------- -// CONSOLE - -The cluster will now start allocating replica shards to all data nodes. At this -point it is safe to resume indexing and searching, but your cluster will -recover more quickly if you can delay indexing and searching until all shards -have recovered. - -You can monitor progress with the <> and -<> APIs: - -[source,sh] --------------------------------------------------- -GET _cat/health - -GET _cat/recovery --------------------------------------------------- -// CONSOLE - -Once the `status` column in the `_cat/health` output has reached `green`, all -primary and replica shards have been successfully allocated. --- diff --git a/docs/reference/setup/important-settings.asciidoc b/docs/reference/setup/important-settings.asciidoc index fc43f5869a916..aa86e9be2681d 100644 --- a/docs/reference/setup/important-settings.asciidoc +++ b/docs/reference/setup/important-settings.asciidoc @@ -12,6 +12,7 @@ configured before going into production. * <> * <> * <> +* <> [float] [[path-settings]] @@ -180,3 +181,23 @@ nodes should be set to `(3 / 2) + 1` or `2`: discovery.zen.minimum_master_nodes: 2 -------------------------------------------------- +[float] +[[heap-dump-path]] +=== JVM heap dump path + +The <> and <> package distributions default to configuring +the JVM to dump the heap on out of memory exceptions to +`/var/lib/elasticsearch`. If this path is not suitable for storing heap dumps, +you should modify the entry `-XX:HeapDumpPath=/var/lib/elasticsearch` in +<> to an alternate path. If you specify a filename +instead of a directory, the JVM will repeatedly use the same file; this is one +mechanism for preventing heap dumps from accumulating in the heap dump path. 
+Alternatively, you can configure a scheduled task via your OS to remove heap +dumps that are older than a configured age. + +Note that the archive distributions do not configure the heap dump path by +default. Instead, the JVM will default to dumping to the working directory for +the Elasticsearch process. If you wish to configure a heap dump path, you should +modify the entry `#-XX:HeapDumpPath=/heap/dump/path` in +<> to remove the comment marker `#` and to specify an +actual path. diff --git a/docs/reference/setup/install.asciidoc b/docs/reference/setup/install.asciidoc index 484d9dea970e5..babdccc2d95fe 100644 --- a/docs/reference/setup/install.asciidoc +++ b/docs/reference/setup/install.asciidoc @@ -37,7 +37,9 @@ Elasticsearch on Windows. MSIs may be downloaded from the Elasticsearch website. `docker`:: -An image is available for running Elasticsearch as a Docker container. It ships with {xpack-ref}/index.html[X-Pack] pre-installed and may be downloaded from the Elastic Docker Registry. +Images are available for running Elasticsearch as Docker containers. They may be +downloaded from the Elastic Docker Registry. The default image ships with +{xpack-ref}/index.html[X-Pack] pre-installed. + <> diff --git a/docs/reference/setup/install/deb.asciidoc b/docs/reference/setup/install/deb.asciidoc index 680753b99a2ec..536ee35551c70 100644 --- a/docs/reference/setup/install/deb.asciidoc +++ b/docs/reference/setup/install/deb.asciidoc @@ -160,9 +160,7 @@ include::check-running.asciidoc[] [[deb-configuring]] ==== Configuring Elasticsearch -Elasticsearch loads its configuration from the `/etc/elasticsearch/elasticsearch.yml` -file by default. The format of this config file is explained in -<>. +include::etc-elasticsearch.asciidoc[] The Debian package also has a system configuration file (`/etc/default/elasticsearch`), which allows you to set the following parameters: diff --git a/docs/reference/setup/install/docker.asciidoc b/docs/reference/setup/install/docker.asciidoc index c626262c162a4..1bcdefc5bc2b5 100644 --- a/docs/reference/setup/install/docker.asciidoc +++ b/docs/reference/setup/install/docker.asciidoc @@ -1,32 +1,54 @@ [[docker]] === Install Elasticsearch with Docker -Elasticsearch is also available as a Docker image. -The image is built with {xpack-ref}/index.html[X-Pack] and uses https://hub.docker.com/_/centos/[centos:7] as the base image. -The source code can be found on https://github.com/elastic/elasticsearch-docker/tree/{branch}[GitHub]. +Elasticsearch is also available as Docker images. +The images use https://hub.docker.com/_/centos/[centos:7] as the base image and +are available with {xpack-ref}/xpack-introduction.html[X-Pack]. -==== Security note +A list of all published Docker images and tags can be found in https://www.docker.elastic.co[www.docker.elastic.co]. The source code can be found +on https://github.com/elastic/elasticsearch-docker/tree/{branch}[GitHub]. -NOTE: {xpack-ref}/index.html[X-Pack] is preinstalled in this image. -Please take a few minutes to familiarize yourself with {xpack-ref}/security-getting-started.html[X-Pack Security] and how to change default passwords. The default password for the `elastic` user is `changeme`. +==== Image types -NOTE: X-Pack includes a trial license for 30 days. After that, you can obtain one of the https://www.elastic.co/subscriptions[available subscriptions] or {ref}/security-settings.html[disable Security]. The Basic license is free and includes the https://www.elastic.co/products/x-pack/monitoring[Monitoring] extension. 
+The images are available in three different configurations or "flavors". The +`basic` flavor, which is the default, ships with X-Pack Basic features +pre-installed and automatically activated with a free licence. The `platinum` +flavor features all X-Pack functionally under a 30-day trial licence. The `oss` +flavor does not include X-Pack, and contains only open-source Elasticsearch. + +NOTE: {xpack-ref}/xpack-security.html[X-Pack Security] is enabled in the `platinum` +image. To access your cluster, it's necessary to set an initial password for the +`elastic` user. The initial password can be set at start up time via the +`ELASTIC_PASSWORD` environment variable: + +["source","txt",subs="attributes"] +-------------------------------------------- +docker run -e ELASTIC_PASSWORD=MagicWord {docker-repo}-platinum:{version} +-------------------------------------------- + +NOTE: The `platinum` image includes a trial license for 30 days. After that, you +can obtain one of the https://www.elastic.co/subscriptions[available +subscriptions] or revert to a Basic licence. The Basic license is free and +includes a selection of X-Pack features. Obtaining Elasticsearch for Docker is as simple as issuing a +docker pull+ command against the Elastic Docker registry. ifeval::["{release-state}"=="unreleased"] -WARNING: Version {version} of Elasticsearch has not yet been released, so no Docker image is currently available for this version. +WARNING: Version {version} of Elasticsearch has not yet been released, so no +Docker image is currently available for this version. endif::[] ifeval::["{release-state}"!="unreleased"] -The Docker image can be retrieved with the following command: +Docker images can be retrieved with the following commands: ["source","sh",subs="attributes"] -------------------------------------------- -docker pull {docker-image} +docker pull {docker-repo}:{version} +docker pull {docker-repo}-platinum:{version} +docker pull {docker-repo}-oss:{version} -------------------------------------------- endif::[] @@ -49,7 +71,7 @@ Elasticsearch can be quickly started for development or testing use with the fol ["source","sh",subs="attributes"] -------------------------------------------- -docker run -p 9200:9200 -e "http.host=0.0.0.0" -e "transport.host=127.0.0.1" {docker-image} +docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" {docker-image} -------------------------------------------- endif::[] @@ -61,12 +83,12 @@ endif::[] [IMPORTANT] ========================= -The `vm_max_map_count` kernel setting needs to be set to at least `262144` for production use. +The `vm.max_map_count` kernel setting needs to be set to at least `262144` for production use. 
Depending on your platform: * Linux + -The `vm_map_max_count` setting should be set permanently in /etc/sysctl.conf: +The `vm.max_map_count` setting should be set permanently in /etc/sysctl.conf: + [source,sh] -------------------------------------------- @@ -76,9 +98,9 @@ vm.max_map_count=262144 + To apply the setting on a live system type: `sysctl -w vm.max_map_count=262144` + -* OSX with https://docs.docker.com/engine/installation/mac/#/docker-for-mac[Docker for Mac] +* macOS with https://docs.docker.com/engine/installation/mac/#/docker-for-mac[Docker for Mac] + -The `vm_max_map_count` setting must be set within the xhyve virtual machine: +The `vm.max_map_count` setting must be set within the xhyve virtual machine: + ["source","sh"] -------------------------------------------- @@ -93,11 +115,11 @@ Then configure the `sysctl` setting as you would for Linux: sysctl -w vm.max_map_count=262144 -------------------------------------------- + -* OSX with https://docs.docker.com/engine/installation/mac/#docker-toolbox[Docker Toolbox] +* Windows and macOS with https://www.docker.com/products/docker-toolbox[Docker Toolbox] + -The `vm_max_map_count` setting must be set via docker-machine: +The `vm.max_map_count` setting must be set via docker-machine: + -["source","sh"] +["source","txt"] -------------------------------------------- docker-machine ssh sudo sysctl -w vm.max_map_count=262144 @@ -109,7 +131,8 @@ To bring up the cluster, use the < bin/elasticsearch -Ecluster.name=mynewclusternam ==== Notes for production use and defaults We have collected a number of best practices for production use. +Any Docker parameters mentioned below assume the use of `docker run`. -NOTE: Any Docker parameters mentioned below assume the use of `docker run`. - -. Elasticsearch runs inside the container as user `elasticsearch` using uid:gid `1000:1000`. If you are bind-mounting a local directory or file, ensure it is readable by this user, while the <> additionally require write access. +. By default, Elasticsearch runs inside the container as user `elasticsearch` using uid:gid `1000:1000`. ++ +CAUTION: One exception is https://docs.openshift.com/container-platform/3.6/creating_images/guidelines.html#openshift-specific-guidelines[Openshift] which runs containers using an arbitrarily assigned user ID. Openshift will present persistent volumes with the gid set to `0` which will work without any adjustments. ++ +If you are bind-mounting a local directory or file, ensure it is readable by this user, while the <> additionally require write access. A good strategy is to grant group access to gid `1000` or `0` for the local directory. As an example, to prepare a local directory for storing data through a bind-mount: ++ + mkdir esdatadir + chmod g+rwx esdatadir + chgrp 1000 esdatadir ++ +As a last resort, you can also force the container to mutate the ownership of any bind-mounts used for the <> through the environment variable `TAKE_FILE_OWNERSHIP`; in this case they will be owned by uid:gid `1000:0` providing read/write access to the elasticsearch process as required. + . It is important to ensure increased ulimits for <> and <> are available for the Elasticsearch containers. 
Verify the https://github.com/moby/moby/tree/ea4d1243953e6b652082305a9c3cda8656edab26/contrib/init[init system] for the Docker daemon is already setting those to acceptable values and, if needed, adjust them in the Daemon, or override them per container, for example using `docker run`: + @@ -273,13 +300,22 @@ NOTE: One way of checking the Docker daemon defaults for the aforementioned ulim + docker run --rm centos:7 /bin/bash -c 'ulimit -Hn && ulimit -Sn && ulimit -Hu && ulimit -Su' + -. Swapping needs to be disabled for performance and node stability. This can be achieved through any of the methods mentioned in the <>. If you opt for the `bootstrap.memory_lock: true` approach, apart from defining it through any of the <>, you will additionally need the `memlock: true` ulimit, either defined in the https://docs.docker.com/engine/reference/commandline/dockerd/#default-ulimits[Docker Daemon] or specifically set for the container. This has been demonstrated earlier in the <>, or using `docker run`: +. Swapping needs to be disabled for performance and node stability. This can be +achieved through any of the methods mentioned in the +<>. If you opt for the +`bootstrap.memory_lock: true` approach, apart from defining it through any of +the <>, you will +additionally need the `memlock: true` ulimit, either defined in the +https://docs.docker.com/engine/reference/commandline/dockerd/#default-ulimits[Docker +Daemon] or specifically set for the container. This is demonstrated above in the +<>. If using `docker run`: + - -e "bootstrap_memory_lock=true" --ulimit memlock=-1:-1 + -e "bootstrap.memory_lock=true" --ulimit memlock=-1:-1 + . The image https://docs.docker.com/engine/reference/builder/#/expose[exposes] TCP ports 9200 and 9300. For clusters it is recommended to randomize the published ports with `--publish-all`, unless you are pinning one container per host. + -. Use the `ES_JAVA_OPTS` environment variable to set heap size, e.g. to use 16GB use `-e ES_JAVA_OPTS="-Xms16g -Xmx16g"` with `docker run`. It is also recommended to set a https://docs.docker.com/engine/reference/run/#user-memory-constraints[memory limit] for the container. +. Use the `ES_JAVA_OPTS` environment variable to set heap size, e.g. to use 16GB +use `-e ES_JAVA_OPTS="-Xms16g -Xmx16g"` with `docker run`. + . Pin your deployments to a specific version of the Elasticsearch Docker image, e.g. +docker.elastic.co/elasticsearch/elasticsearch:{version}+. + @@ -289,7 +325,10 @@ NOTE: One way of checking the Docker daemon defaults for the aforementioned ulim .. Elasticsearch is I/O sensitive and the Docker storage driver is not ideal for fast I/O .. It allows the use of advanced https://docs.docker.com/engine/extend/plugins/#volume-plugins[Docker volume plugins] + -. If you are using the devicemapper storage driver (default on at least RedHat (rpm) based distributions) make sure you are not using the default `loop-lvm` mode. Configure docker-engine to use https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/#configure-docker-with-devicemapper[direct-lvm] instead. +. If you are using the devicemapper storage driver, make sure you are not using +the default `loop-lvm` mode. Configure docker-engine to use +https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/#configure-docker-with-devicemapper[direct-lvm] +instead. + . Consider centralizing your logs by using a different https://docs.docker.com/engine/admin/logging/overview/[logging driver]. 
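Putting several of the recommendations above together, a single-node invocation might look like the following sketch. The heap size, ulimit values, and bind-mount path are placeholders to adapt to your environment, and the data path assumes the image default of `/usr/share/elasticsearch/data`:

["source","sh",subs="attributes"]
--------------------------------------------
docker run -d --name elasticsearch \
  -p 9200:9200 -p 9300:9300 \
  -e ES_JAVA_OPTS="-Xms16g -Xmx16g" \
  -e "bootstrap.memory_lock=true" --ulimit memlock=-1:-1 \
  --ulimit nofile=65536:65536 \
  -v /path/to/esdatadir:/usr/share/elasticsearch/data \
  {docker-image}
--------------------------------------------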
Also note that the default json-file logging driver is not ideally suited for production use. diff --git a/docs/reference/setup/install/etc-elasticsearch.asciidoc b/docs/reference/setup/install/etc-elasticsearch.asciidoc new file mode 100644 index 0000000000000..593d146bfd200 --- /dev/null +++ b/docs/reference/setup/install/etc-elasticsearch.asciidoc @@ -0,0 +1,12 @@ +Elasticsearch defaults to using `/etc/elasticsearch` for runtime configuration. +The ownership of this directory and all files in this directory is set to +`root:elasticsearch` on package installation and the directory has the `setgid` +flag set so that any files and subdirectories created under `/etc/elasticsearch` +are created with this ownership as well (e.g., if a keystore is created using +the <>). It is expected that this be maintained so +that the Elasticsearch process can read the files under this directory via the +group permissions. + +Elasticsearch loads its configuration from the +`/etc/elasticsearch/elasticsearch.yml` file by default. The format of this +config file is explained in <>. diff --git a/docs/reference/setup/install/rpm.asciidoc b/docs/reference/setup/install/rpm.asciidoc index a62e324ad8c21..1530187d13180 100644 --- a/docs/reference/setup/install/rpm.asciidoc +++ b/docs/reference/setup/install/rpm.asciidoc @@ -148,9 +148,7 @@ include::check-running.asciidoc[] [[rpm-configuring]] ==== Configuring Elasticsearch -Elasticsearch loads its configuration from the `/etc/elasticsearch/elasticsearch.yml` -file by default. The format of this config file is explained in -<>. +include::etc-elasticsearch.asciidoc[] The RPM also has a system configuration file (`/etc/sysconfig/elasticsearch`), which allows you to set the following parameters: diff --git a/docs/reference/setup/install/windows.asciidoc b/docs/reference/setup/install/windows.asciidoc index 4abf0872779f3..b765466391ccc 100644 --- a/docs/reference/setup/install/windows.asciidoc +++ b/docs/reference/setup/install/windows.asciidoc @@ -1,6 +1,8 @@ [[windows]] === Install Elasticsearch with Windows MSI Installer +beta[] + Elasticsearch can be installed on Windows using the `.msi` package. This can install Elasticsearch as a Windows service or allow it to be run manually using the included `elasticsearch.exe` executable. diff --git a/docs/reference/setup/logging-config.asciidoc b/docs/reference/setup/logging-config.asciidoc index ac07f43f0ca44..d35b3db2df7f3 100644 --- a/docs/reference/setup/logging-config.asciidoc +++ b/docs/reference/setup/logging-config.asciidoc @@ -102,6 +102,74 @@ appenders can be found on the http://logging.apache.org/log4j/2.x/manual/configuration.html[Log4j documentation]. +[float] +[[configuring-logging-levels]] +=== Configuring logging levels + +There are four ways to configure logging levels, each appropriate for use in different situations. + +1. Via the command-line: `-E =` (e.g., + `-E logger.org.elasticsearch.transport=trace`). This is most appropriate when + you are temporarily debugging a problem on a single node (for example, a + problem with startup, or during development). +2. Via `elasticsearch.yml`: `: ` (e.g., + `logger.org.elasticsearch.transport: trace`). This is most appropriate when + you are temporarily debugging a problem but are not starting Elasticsearch + via the command-line (e.g., via a service) or you want a logging level + adjusted on a more permanent basis. +3. 
Via <>: ++ +-- +[source,js] +------------------------------- +PUT /_cluster/settings +{ + "transient": { + "": "" + } +} +------------------------------- +// NOTCONSOLE + +For example: + +[source,js] +------------------------------- +PUT /_cluster/settings +{ + "transient": { + "logger.org.elasticsearch.transport": "trace" + } +} +------------------------------- +// CONSOLE + +This is most appropriate when you need to dynamically adjust a logging +level on an actively-running cluster. + +-- +4. Via the `log4j2.properties`: ++ +-- +[source,properties] +-------------------------------------------------- +logger..name = +logger..level = +-------------------------------------------------- + +For example: + +[source,properties] +-------------------------------------------------- +logger.transport.name = org.elasticsearch.transport +logger.transport.level = trace +-------------------------------------------------- + +This is most appropriate when you need fine-grained control over the logger (for +example, you want to send the logger to another file, or manage the logger +differently; this is a rare use-case). +-- + [float] [[deprecation-logging]] === Deprecation logging diff --git a/docs/reference/setup/reindex_upgrade.asciidoc b/docs/reference/setup/reindex_upgrade.asciidoc deleted file mode 100644 index f9e7a60ee5bac..0000000000000 --- a/docs/reference/setup/reindex_upgrade.asciidoc +++ /dev/null @@ -1,106 +0,0 @@ -[[reindex-upgrade]] -=== Reindex to upgrade - -Elasticsearch is able to use indices created in the previous major version -only. For instance, Elasticsearch 6.x can use indices created in -Elasticsearch 5.x, but not those created in Elasticsearch 2.x or before. - -NOTE: Elasticsearch 6.x nodes will fail to start in the presence of too old indices. - -If you are running an Elasticsearch 5.x cluster which contains indices that -were created before 5.x, you will either need to delete those old indices or -to reindex them before upgrading to 6.x. See <>. - -If you are running an Elasticsearch 2.x cluster or older, you have two options: - -* First upgrade to Elasticsearch 5.x, reindex the old indices, then upgrade - to 6.x. See <>. - -* Create a new 6.x cluster and use reindex-from-remote to import indices - directly from the 2.x cluster. See <>. - -.Time-based indices and retention periods -******************************************* - -For many use cases with time-based indices, you will not need to worry about -carrying old 2.x indices with you to 6.x. Data in time-based indices usually -becomes less interesting as time passes. Old indices can be deleted once they -fall outside of your retention period. - -Users in this position can continue to use 5.x until all old 2.x indices have -been deleted, then upgrade to 6.x directly. - -******************************************* - - -[[reindex-upgrade-inplace]] -==== Reindex in place - -If you are running a 5.x cluster which contains indices created in -Elasticsearch 2.x, you will need to reindex (or delete) those indices before -upgrading to Elasticsearch 6.x. - -The reindex process works as follows: - -* Create a new index, copying the mappings and settings from the old index. - Set the `refresh_interval` to `-1` and the `number_of_replicas` to `0` for - efficient reindexing. - -* Reindex all documents from the old index to the new index using the - <>. - -* Reset the `refresh_interval` and `number_of_replicas` to the values - used in the old index, and wait for the index to become green. 
- -* In a single <> request: - - * Delete the old index. - * Add an alias with the old index name to the new index. - * Add any aliases that existed on the old index to the new index. - -At the end of this process, you will have a new 5.x index which can be used -by an Elasticsearch 6.x cluster. - -[[reindex-upgrade-remote]] -==== Upgrading with reindex-from-remote - -If you are running a 1.x or 2.x cluster and would like to migrate directly to 6.x -without first migrating to 5.x, you can do so using -<>. - -[WARNING] -============================================= - -Elasticsearch includes backwards compatibility code that allows indices from -the previous major version to be upgraded to the current major version. By -moving directly from Elasticsearch 2.x or before to 6.x, you will have to solve any -backwards compatibility issues yourself. - -============================================= - -You will need to set up a 6.x cluster alongside your existing old cluster. -The 6.x cluster needs to have access to the REST API of the old cluster. - -For each old index that you want to transfer to the 6.x cluster, you will need -to: - -* Create a new index in 6.x with the appropriate mappings and settings. Set - the `refresh_interval` to `-1` and set `number_of_replicas` to `0` for - faster reindexing. - -* Use <> to pull documents from the - old index into the new 6.x index. - -* If you run the reindex job in the background (with `wait_for_completion` set - to `false`), the reindex request will return a `task_id` which can be used to - monitor progress of the reindex job in the <>: - `GET _tasks/TASK_ID`. - -* Once reindex has completed, set the `refresh_interval` and - `number_of_replicas` to the desired values (the defaults are `30s` and `1` - respectively). - -* Once the new index has finished replication, you can delete the old index. - -The 6.x cluster can start out small, and you can gradually move nodes from the -old cluster to the 6.x cluster as you migrate indices across. diff --git a/docs/reference/setup/rolling_upgrade.asciidoc b/docs/reference/setup/rolling_upgrade.asciidoc deleted file mode 100644 index f5b29913cdb24..0000000000000 --- a/docs/reference/setup/rolling_upgrade.asciidoc +++ /dev/null @@ -1,217 +0,0 @@ -[[rolling-upgrades]] -=== Rolling upgrades - -A rolling upgrade allows the Elasticsearch cluster to be upgraded one node at -a time, with no downtime for end users. Running multiple versions of -Elasticsearch in the same cluster for any length of time beyond that required -for an upgrade is not supported, as shards will not be replicated from the -more recent version to the older version. - -Consult this <> to verify that rolling upgrades are -supported for your version of Elasticsearch. - -To perform a rolling upgrade: - -. *Disable shard allocation* -+ --- - -When you shut down a node, the allocation process will wait for one minute -before starting to replicate the shards that were on that node to other nodes -in the cluster, causing a lot of wasted I/O. This can be avoided by disabling -allocation before shutting down a node: - -[source,js] --------------------------------------------------- -PUT _cluster/settings -{ - "transient": { - "cluster.routing.allocation.enable": "none" - } -} --------------------------------------------------- -// CONSOLE -// TEST[skip:indexes don't assign] --- - -. *Stop non-essential indexing and perform a synced flush (Optional)* -+ --- - -You may happily continue indexing during the upgrade. 
However, shard recovery -will be much faster if you temporarily stop non-essential indexing and issue a -<> request: - -[source,js] --------------------------------------------------- -POST _flush/synced --------------------------------------------------- -// CONSOLE - -A synced flush request is a ``best effort'' operation. It will fail if there -are any pending indexing operations, but it is safe to reissue the request -multiple times if necessary. --- - -. [[upgrade-node]] *Stop and upgrade a single node* -+ --- - -Shut down one of the nodes in the cluster *before* starting the upgrade. - -[TIP] -================================================ - -When using the zip or tarball packages, the `config`, `data`, `logs` and -`plugins` directories are placed within the Elasticsearch home directory by -default. - -It is a good idea to place these directories in a different location so that -there is no chance of deleting them when upgrading Elasticsearch. These custom -paths can be <> with the `ES_PATH_CONF` environment -variable, and the `path.logs`, and `path.data` settings. - -The <> and <> packages place these directories in the -appropriate place for each operating system. - -================================================ - -To upgrade using a <> or <> package: - -* Use `rpm` or `dpkg` to install the new package. All files should be - placed in their proper locations, and config files should not be - overwritten. - -To upgrade using a zip or compressed tarball: - -* Extract the zip or tarball to a new directory, to be sure that you don't - overwrite the `config` or `data` directories. - -* Either copy the files in the `config` directory from your old installation - to your new installation, or set the environment variable - <> to point to a custom config - directory. - -* Either copy the files in the `data` directory from your old installation - to your new installation, or configure the location of the data directory - in the `config/elasticsearch.yml` file, with the `path.data` setting. --- - -. *Upgrade any plugins* -+ --- - -Elasticsearch plugins must be upgraded when upgrading a node. Use the -`elasticsearch-plugin` script to install the correct version of any plugins -that you need. --- - -. *Start the upgraded node* -+ --- - -Start the now upgraded node and confirm that it joins the cluster by checking -the log file or by checking the output of this request: - -[source,sh] --------------------------------------------------- -GET _cat/nodes --------------------------------------------------- -// CONSOLE --- - -. *Reenable shard allocation* -+ --- - -Once the node has joined the cluster, reenable shard allocation to start using -the node: - -[source,js] --------------------------------------------------- -PUT _cluster/settings -{ - "transient": { - "cluster.routing.allocation.enable": "all" - } -} --------------------------------------------------- -// CONSOLE --- - -. *Wait for the node to recover* -+ --- - -You should wait for the cluster to finish shard allocation before upgrading -the next node. You can check on progress with the <> -request: - -[source,sh] --------------------------------------------------- -GET _cat/health --------------------------------------------------- -// CONSOLE - -Wait for the `status` column to move from `yellow` to `green`. Status `green` -means that all primary and replica shards have been allocated. 
- -[IMPORTANT] -==================================================== -During a rolling upgrade, primary shards assigned to a node with the higher -version will never have their replicas assigned to a node with the lower -version, because the newer version may have a different data format which is -not understood by the older version. - -If it is not possible to assign the replica shards to another node with the -higher version -- e.g. if there is only one node with the higher version in -the cluster -- then the replica shards will remain unassigned and the -cluster health will remain status `yellow`. - -In this case, check that there are no initializing or relocating shards (the -`init` and `relo` columns) before proceding. - -As soon as another node is upgraded, the replicas should be assigned and the -cluster health will reach status `green`. - -==================================================== - -Shards that have not been <> may take some time to -recover. The recovery status of individual shards can be monitored with the -<> request: - -[source,sh] --------------------------------------------------- -GET _cat/recovery --------------------------------------------------- -// CONSOLE - -If you stopped indexing, then it is safe to resume indexing as soon as -recovery has completed. --- - -. *Repeat* -+ --- - -When the cluster is stable and the node has recovered, repeat the above steps -for all remaining nodes. --- - -[IMPORTANT] -==================================================== - -During a rolling upgrade the cluster will continue to operate as normal. Any -new functionality will be disabled or work in a backward compatible manner -until all nodes of the cluster have been upgraded. Once the upgrade is -completed and all nodes are on the new version, the new functionality will -become operational. Once that has happened, it is practically impossible to -go back to operating in a backward compatible mode. To protect against such a -scenario, nodes from the previous major version (e.g. 5.x) will not be allowed -to join a cluster where all nodes are of a higher major version (e.g. 6.x). - -In the unlikely case of a network malfunction during upgrades, where all -remaining old nodes are isolated from the cluster, you will have to take all -old nodes offline and upgrade them before they can rejoin the cluster. - -==================================================== \ No newline at end of file diff --git a/docs/reference/setup/sysconfig/configuring.asciidoc b/docs/reference/setup/sysconfig/configuring.asciidoc index 0473bed3a767d..d2cb534c57756 100644 --- a/docs/reference/setup/sysconfig/configuring.asciidoc +++ b/docs/reference/setup/sysconfig/configuring.asciidoc @@ -92,9 +92,11 @@ specified via systemd. The systemd service file (`/usr/lib/systemd/system/elasticsearch.service`) contains the limits that are applied by default. -To override these, add a file called -`/etc/systemd/system/elasticsearch.service.d/elasticsearch.conf` and specify -any changes in that file, such as: +To override them, add a file called +`/etc/systemd/system/elasticsearch.service.d/override.conf` (alternatively, +you may run `sudo systemctl edit elasticsearch` which opens the file +automatically inside your default editor). 
Set any changes in this file, +such as: [source,sh] --------------------------------- @@ -102,6 +104,13 @@ any changes in that file, such as: LimitMEMLOCK=infinity --------------------------------- +Once finished, run the following command to reload units: + +[source,sh] +--------------------------------- +sudo systemctl daemon-reload +--------------------------------- + [[jvm-options]] ==== Setting JVM options diff --git a/docs/reference/setup/sysconfig/file-descriptors.asciidoc b/docs/reference/setup/sysconfig/file-descriptors.asciidoc index f4bc95749ae23..17e7884be0d33 100644 --- a/docs/reference/setup/sysconfig/file-descriptors.asciidoc +++ b/docs/reference/setup/sysconfig/file-descriptors.asciidoc @@ -16,6 +16,9 @@ For the `.zip` and `.tar.gz` packages, set <> as root before starting Elasticsearch, or set `nofile` to `65536` in <>. +On macOS, you must also pass the JVM option `-XX:-MaxFDLimit` +to Elasticsearch in order for it to make use of the higher file descriptor limit. + RPM and Debian packages already default the maximum number of file descriptors to 65536 and do not require further configuration. diff --git a/docs/reference/setup/upgrade.asciidoc b/docs/reference/setup/upgrade.asciidoc index d5c649e9992f1..172ae13e32d8d 100644 --- a/docs/reference/setup/upgrade.asciidoc +++ b/docs/reference/setup/upgrade.asciidoc @@ -5,51 +5,61 @@ =========================================== Before upgrading Elasticsearch: -* Consult the <> docs. +* Review the <> for changes that +affect your application. +* Check the <> to see if you are using +any deprecated features. +* If you use custom plugins, make sure compatible versions are available. * Test upgrades in a dev environment before upgrading your production cluster. -* Always <> before upgrading. - You **cannot roll back** to an earlier version unless you have a backup of your data. -* If you are using custom plugins, check that a compatible version is available. +* <> before upgrading. +You **cannot roll back** to an earlier version unless you have a backup of +your data. + =========================================== -Elasticsearch can usually be upgraded using a rolling upgrade process, -resulting in no interruption of service. This section details how to perform -both rolling upgrades and upgrades with full cluster restarts. +Elasticsearch can usually be upgraded using a <> +process so upgrading does not interrupt service. However, you might +need to <> indices created in older versions. +Upgrades across major versions prior to 6.0 require a <>. -To determine whether a rolling upgrade is supported for your release, please -consult this table: +The following table shows when you can perform a rolling upgrade, when you +need to reindex or delete old indices, and when a full cluster restart is +required. +[[upgrade-paths]] [cols="1> |5.x |5.y |<> (where `y > x`) -|5.x |6.x |<> -|6.0.0 pre GA |6.x |<> -|6.x |6.y |<> (where `y > x`) +|5.6 |6.x |<> footnoteref:[reindexfn, You must delete or reindex any indices created in 2.x before upgrading.] +|5.0-5.5 |6.x |<> footnoteref:[reindexfn] +|<5.x |6.x |<> +|6.x |6.y |<> (where `y > x`) footnote:[Upgrading from a 6.0.0 pre GA version requires a full cluster restart.] |======================================================================= [IMPORTANT] -.Indices created in Elasticsearch 2.x or before =============================================== -Elasticsearch is able to read indices created in the *previous major version -only*. 
For instance, Elasticsearch 6.x can use indices created in -Elasticsearch 5.x, but not those created in Elasticsearch 2.x or before. +Elasticsearch can read indices created in the *previous major version*. +Older indices must be reindexed or deleted. Elasticsearch 6.x +can use indices created in Elasticsearch 5.x, but not those created in +Elasticsearch 2.x or before. Elasticsearch 5.x can use indices created in +Elasticsearch 2.x, but not those created in 1.x or before. + +This also applies to indices backed up with <>. If an index was originally created in 2.x, it cannot be +restored to a 6.x cluster even if the snapshot was created by a 2.x cluster. -This condition also applies to indices backed up with -<>. If an index was originally -created in 2.x, it cannot be restored into a 6.x cluster even if the -snapshot was made by a 5.x cluster. +Elasticsearch nodes will fail to start if incompatible indices are present. -Elasticsearch 6.x nodes will fail to start in the presence of too old indices. +For information about how to upgrade old indices, see <>. -See <> for more information about how to upgrade old indices. =============================================== -include::rolling_upgrade.asciidoc[] +include::upgrade/rolling_upgrade.asciidoc[] -include::cluster_restart.asciidoc[] +include::upgrade/cluster_restart.asciidoc[] -include::reindex_upgrade.asciidoc[] \ No newline at end of file +include::upgrade/reindex_upgrade.asciidoc[] \ No newline at end of file diff --git a/docs/reference/setup/upgrade/cluster_restart.asciidoc b/docs/reference/setup/upgrade/cluster_restart.asciidoc new file mode 100644 index 0000000000000..edb23cf7e6df7 --- /dev/null +++ b/docs/reference/setup/upgrade/cluster_restart.asciidoc @@ -0,0 +1,120 @@ +[[restart-upgrade]] +=== Full cluster restart upgrade + +A full cluster restart upgrade requires that you shut all nodes in the cluster +down, upgrade them, and restart the cluster. A full cluster restart was required +when upgrading to major versions prior to 6.x. Elasticsearch 6.x supports +<> from *Elasticsearch 5.6*. Upgrading to +6.x from earlier versions requires a full cluster restart. See the +<> to verify the type of upgrade you need +to perform. + +To perform a full cluster restart upgrade: + +. *Disable shard allocation.* ++ +-- +include::disable-shard-alloc.asciidoc[] +-- + +. *Stop indexing and perform a synced flush.* ++ +-- +Performing a <> speeds up shard +recovery. + +include::synced-flush.asciidoc[] +-- + +. *Shutdown all nodes.* ++ +-- +include::shut-down-node.asciidoc[] +-- + +. *Upgrade all nodes.* ++ +-- +include::upgrade-node.asciidoc[] +include::set-paths-tip.asciidoc[] +-- + +. *Upgrade any plugins.* ++ +Use the `elasticsearch-plugin` script to install the upgraded version of each +installed Elasticsearch plugin. All plugins must be upgraded when you upgrade +a node. + +. *Start each upgraded node.* ++ +-- +If you have dedicated master nodes, start them first and wait for them to +form a cluster and elect a master before proceeding with your data nodes. +You can check progress by looking at the logs. + +As soon as the <> +have discovered each other, they form a cluster and elect a master. 
At +that point, you can use <> and +<> to monitor nodes joining the cluster: + +[source,sh] +-------------------------------------------------- +GET _cat/health + +GET _cat/nodes +-------------------------------------------------- +// CONSOLE + +The `status` column returned by `_cat/health` shows the health of each node +in the cluster: `red`, `yellow`, or `green`. +-- + +. *Wait for all nodes to join the cluster and report a status of yellow.* ++ +-- +When a node joins the cluster, it begins to recover any primary shards that +are stored locally. The <> API initially reports +a `status` of `red`, indicating that not all primary shards have been allocated. + +Once a node recovers its local shards, the cluster `status` switches to `yellow`, +indicating that all primary shards have been recovered, but not all replica +shards are allocated. This is to be expected because you have not yet +reenabled allocation. Delaying the allocation of replicas until all nodes +are `yellow` allows the master to allocate replicas to nodes that +already have local shard copies. +-- + +. *Reenable allocation.* ++ +-- +When all nodes have joined the cluster and recovered their primary shards, +reenable allocation. + +[source,js] +------------------------------------------------------ +PUT _cluster/settings +{ + "persistent": { + "cluster.routing.allocation.enable": "all" + } +} +------------------------------------------------------ +// CONSOLE + +Once allocation is reenabled, the cluster starts allocating replica shards to +the data nodes. At this point it is safe to resume indexing and searching, +but your cluster will recover more quickly if you can wait until all primary +and replica shards have been successfully allocated and the status of all nodes +is `green`. + +You can monitor progress with the <> and +<> APIs: + +[source,sh] +-------------------------------------------------- +GET _cat/health + +GET _cat/recovery +-------------------------------------------------- +// CONSOLE +-- diff --git a/docs/reference/setup/upgrade/disable-shard-alloc.asciidoc b/docs/reference/setup/upgrade/disable-shard-alloc.asciidoc new file mode 100644 index 0000000000000..107d20f1135ce --- /dev/null +++ b/docs/reference/setup/upgrade/disable-shard-alloc.asciidoc @@ -0,0 +1,17 @@ + +When you shut down a node, the allocation process waits for one minute +before starting to replicate the shards on that node to other nodes +in the cluster, causing a lot of wasted I/O. You can avoid racing the clock +by disabling allocation before shutting down the node: + +[source,js] +-------------------------------------------------- +PUT _cluster/settings +{ + "persistent": { + "cluster.routing.allocation.enable": "none" + } +} +-------------------------------------------------- +// CONSOLE +// TEST[skip:indexes don't assign] \ No newline at end of file diff --git a/docs/reference/setup/upgrade/reindex_upgrade.asciidoc b/docs/reference/setup/upgrade/reindex_upgrade.asciidoc new file mode 100644 index 0000000000000..04ef0f18b60f9 --- /dev/null +++ b/docs/reference/setup/upgrade/reindex_upgrade.asciidoc @@ -0,0 +1,177 @@ +[[reindex-upgrade]] +=== Reindex before upgrading + +Elasticsearch can read indices created in the *previous major version*. +Older indices must be reindexed or deleted. Elasticsearch 6.x +can use indices created in Elasticsearch 5.x, but not those created in +Elasticsearch 2.x or before. Elasticsearch 5.x can use indices created in +Elasticsearch 2.x, but not those created in 1.x or before. 
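If you are not sure which major version an index was created in, one way to check is to look at its `index.version.created` setting before deciding whether it needs to be reindexed. This is only a sketch; `my_index` is a placeholder index name, and the value returned is an internal version identifier:

[source,js]
--------------------------------------------------
GET /my_index/_settings?filter_path=*.settings.index.version.created
--------------------------------------------------
// NOTCONSOLE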
+ +Elasticsearch nodes will fail to start if incompatible indices are present. + +To upgrade an Elasticsearch 5.x cluster that contains indices created in 2.x, +you must reindex or delete them before upgrading to 6.x. +For more information, see <>. + +To upgrade an Elasticsearch cluster running 2.x, you have two options: + +* Perform a <> to 5.6, + <> the 2.x indices, then perform a + <> to 6.x. If your Elasticsearch 2.x + cluster contains indices that were created before 2.x, you must either + delete or reindex them before upgrading to 5.6. For more information about + upgrading from 2.x to 5.6, see https://www.elastic.co/guide/en/elasticsearch/reference/5.6/setup-upgrade.html[ + Upgrading Elasticsearch] in the Elasticsearch 5.6 Reference. + +* Create a new 6.x cluster and <> to import indices directly from the 2.x cluster. + +To upgrade an Elasticsearch 1.x cluster, you have two options: + +* Perform a <> to Elasticsearch + 2.4.x and <> or delete the 1.x indices. + Then, perform a full cluster restart upgrade to 5.6 and reindex or delete + the 2.x indices. Finally, perform a <> + to 6.x. For more information about upgrading from 1.x to 2.4, see https://www.elastic.co/guide/en/elasticsearch/reference/2.4/setup-upgrade.html[ + Upgrading Elasticsearch] in the Elasticsearch 2.4 Reference. + For more information about upgrading from 2.4 to 5.6, see https://www.elastic.co/guide/en/elasticsearch/reference/5.6/setup-upgrade.html[ + Upgrading Elasticsearch] in the Elasticsearch 5.6 Reference. + +* Create a new 6.x cluster and <> to import indices directly from the 1.x cluster. + +.Upgrading time-based indices +******************************************* + +If you use time-based indices, you likely won't need to carry +pre-5.x indices forward to 6.x. Data in time-based indices +generally becomes less useful as time passes, and the indices are +deleted as they age past your retention period. + +Unless you have an unusually long retention period, you can just +wait to upgrade to 6.x until all of your pre-5.x indices have +been deleted. + +******************************************* + + +[[reindex-upgrade-inplace]] +==== Reindex in place + +To manually reindex your old indices with the <>: + +. Create a new index and copy the mappings and settings from the old index. +. Set the `refresh_interval` to `-1` and the `number_of_replicas` to `0` for + efficient reindexing. +. Reindex all documents from the old index into the new index using the + <>. +. Reset the `refresh_interval` and `number_of_replicas` to the values + used in the old index. +. Wait for the index status to change to `green`. +. In a single <> request: + +.. Delete the old index. +.. Add an alias with the old index name to the new index. +.. Add any aliases that existed on the old index to the new index (a sketch of this final alias swap is shown at the end of this section). + + +// Flag this as X-Pack and conditionally include at GA. +// Need to update the CSS to override sidebar titles. +[role="xpack"] +.Migration assistance and upgrade tools +******************************************* +{xpack} 5.6 provides migration assistance and upgrade tools that simplify +reindexing and upgrading to 6.x. These tools are free with the X-Pack trial +and Basic licenses and you can use them to upgrade whether or not X-Pack is a +regular part of your Elastic Stack. For more information, see +{stack-guide}/upgrading-elastic-stack.html. +******************************************* +
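As referenced in the reindex-in-place steps above, the delete-and-alias swap can be expressed as a single `_aliases` request. This is a sketch only; `my_index` (the old index) and `my_index_v2` (the new index) are placeholder names:

[source,js]
--------------------------------------------------
POST /_aliases
{
  "actions": [
    { "remove_index": { "index": "my_index" } },
    { "add": { "index": "my_index_v2", "alias": "my_index" } }
  ]
}
--------------------------------------------------
// NOTCONSOLE

Because the actions in a single request are applied together, clients addressing the old index name are switched over to the new index without a window in which the name does not resolve.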
+[[reindex-upgrade-remote]] +==== Reindex from a remote cluster + +You can use <> to migrate indices from +your old cluster to a new 6.x cluster. This enables you to move to 6.x from a +pre-5.6 cluster without interrupting service. + +[WARNING] +============================================= + +Elasticsearch provides backwards compatibility support that enables +indices from the previous major version to be upgraded to the +current major version. Skipping a major version means that you must +resolve any backward compatibility issues yourself. + +============================================= + +To migrate your indices: + +. Set up a new 6.x cluster alongside your old cluster. Enable it to access +your old cluster by adding your old cluster to the `reindex.remote.whitelist` in `elasticsearch.yml`: ++ +-- +[source,yaml] +-------------------------------------------------- +reindex.remote.whitelist: oldhost:9200 +-------------------------------------------------- + +[NOTE] +============================================= +The new cluster doesn't have to start fully-scaled out. As you migrate +indices and shift the load to the new cluster, you can add nodes to the new +cluster and remove nodes from the old one. + +============================================= +-- + +. For each index that you need to migrate to the 6.x cluster: + +.. Create a new index in 6.x with the appropriate mappings and settings. Set the + `refresh_interval` to `-1` and set `number_of_replicas` to `0` for + faster reindexing. + +.. <> to pull documents from the + old index into the new 6.x index: ++ +-- +[source,js] +-------------------------------------------------- +POST _reindex +{ + "source": { + "remote": { + "host": "http://oldhost:9200", + "username": "user", + "password": "pass" + }, + "index": "source", + "query": { + "match": { + "test": "data" + } + } + }, + "dest": { + "index": "dest" + } +} +-------------------------------------------------- +// CONSOLE +// TEST[setup:host] +// TEST[s/^/PUT source\n/] +// TEST[s/oldhost:9200",/\${host}"/] +// TEST[s/"username": "user",//] +// TEST[s/"password": "pass"//] + +If you run the reindex job in the background by setting `wait_for_completion` +to `false`, the reindex request returns a `task_id` you can use to +monitor progress of the reindex job with the <>: +`GET _tasks/TASK_ID`. +-- + +.. When the reindex job completes, set the `refresh_interval` and + `number_of_replicas` to the desired values (the default settings are + `30s` and `1`). + +.. Once replication is complete and the status of the new index is `green`, + you can delete the old index. diff --git a/docs/reference/setup/upgrade/rolling_upgrade.asciidoc b/docs/reference/setup/upgrade/rolling_upgrade.asciidoc new file mode 100644 index 0000000000000..09427ba26ad5f --- /dev/null +++ b/docs/reference/setup/upgrade/rolling_upgrade.asciidoc @@ -0,0 +1,159 @@ +[[rolling-upgrades]] +=== Rolling upgrades + +A rolling upgrade allows an Elasticsearch cluster to be upgraded one node at +a time so upgrading does not interrupt service. Running multiple versions of +Elasticsearch in the same cluster beyond the duration of an upgrade is +not supported, as shards cannot be replicated from upgraded nodes to nodes +running the older version. + +Rolling upgrades can be performed between minor versions. Elasticsearch +6.x supports rolling upgrades from *Elasticsearch 5.6*. +Upgrading from earlier 5.x versions requires a <>. You must <> from +versions prior to 5.x. + +To perform a rolling upgrade: + +. *Disable shard allocation*. ++ +-- +include::disable-shard-alloc.asciidoc[] +-- + +. 
*Stop non-essential indexing and perform a synced flush.* (Optional) ++ +-- +While you can continue indexing during the upgrade, shard recovery +is much faster if you temporarily stop non-essential indexing and perform a +<>. + +include::synced-flush.asciidoc[] + +-- + +. [[upgrade-node]] *Shut down a single node*. ++ +-- +include::shut-down-node.asciidoc[] +-- + +. *Upgrade the node you shut down.* ++ +-- +include::upgrade-node.asciidoc[] +include::set-paths-tip.asciidoc[] +-- + +. *Upgrade any plugins.* ++ +Use the `elasticsearch-plugin` script to install the upgraded version of each +installed Elasticsearch plugin. All plugins must be upgraded when you upgrade +a node. + +. *Start the upgraded node.* ++ +-- + +Start the newly-upgraded node and confirm that it joins the cluster by checking +the log file or by submitting a `_cat/nodes` request: + +[source,sh] +-------------------------------------------------- +GET _cat/nodes +-------------------------------------------------- +// CONSOLE +-- + +. *Reenable shard allocation.* ++ +-- + +Once the node has joined the cluster, reenable shard allocation to start using +the node: + +[source,js] +-------------------------------------------------- +PUT _cluster/settings +{ + "transient": { + "cluster.routing.allocation.enable": "all" + } +} +-------------------------------------------------- +// CONSOLE +-- + +. *Wait for the node to recover.* ++ +-- + +Before upgrading the next node, wait for the cluster to finish shard allocation. +You can check progress by submitting a <> request: + +[source,sh] +-------------------------------------------------- +GET _cat/health +-------------------------------------------------- +// CONSOLE + +Wait for the `status` column to switch from `yellow` to `green`. Once the +node is `green`, all primary and replica shards have been allocated. + +[IMPORTANT] +==================================================== +During a rolling upgrade, primary shards assigned to a node running the new +version cannot have their replicas assigned to a node with the old +version. The new version might have a different data format that is +not understood by the old version. + +If it is not possible to assign the replica shards to another node +(there is only one upgraded node in the cluster), the replica +shards remain unassigned and status stays `yellow`. + +In this case, you can proceed once there are no initializing or relocating shards +(check the `init` and `relo` columns). + +As soon as another node is upgraded, the replicas can be assigned and the +status will change to `green`. +==================================================== + +Shards that were not <> might take longer to +recover. You can monitor the recovery status of individual shards by +submitting a <> request: + +[source,sh] +-------------------------------------------------- +GET _cat/recovery +-------------------------------------------------- +// CONSOLE + +If you stopped indexing, it is safe to resume indexing as soon as +recovery completes. +-- + +. *Repeat* ++ +-- + +When the node has recovered and the cluster is stable, repeat these steps +for each node that needs to be updated. + +-- + +[IMPORTANT] +==================================================== + +During a rolling upgrade, the cluster continues to operate normally. However, +any new functionality is disabled or operates in a backward compatible mode +until all nodes in the cluster are upgraded. 
New functionality +becomes operational once the upgrade is complete and all nodes are running the +new version. Once that has happened, there's no way to return to operating +in a backward compatible mode. Nodes running the previous major version will +not be allowed to join the fully-updated cluster. + +In the unlikely case of a network malfunction during the upgrade process that +isolates all remaining old nodes from the cluster, you must take the +old nodes offline and upgrade them to enable them to join the cluster. + +==================================================== \ No newline at end of file diff --git a/docs/reference/setup/upgrade/set-paths-tip.asciidoc b/docs/reference/setup/upgrade/set-paths-tip.asciidoc new file mode 100644 index 0000000000000..38a07f7ac2be3 --- /dev/null +++ b/docs/reference/setup/upgrade/set-paths-tip.asciidoc @@ -0,0 +1,18 @@ +[TIP] +================================================ + +When you extract the zip or tarball packages, the `elasticsearch-n.n.n` +directory contains the Elasticsearch `config`, `data`, `logs` and +`plugins` directories. + +We recommend moving these directories out of the Elasticsearch directory +so that there is no chance of deleting them when you upgrade Elasticsearch. +To specify the new locations, use the `ES_PATH_CONF` environment +variable and the `path.data` and `path.logs` settings. For more information, +see <>. + +The <> and <> packages place these directories in the +appropriate place for each operating system. In production, we recommend +installing using the deb or rpm package. + +================================================ \ No newline at end of file diff --git a/docs/reference/setup/upgrade/shut-down-node.asciidoc b/docs/reference/setup/upgrade/shut-down-node.asciidoc new file mode 100644 index 0000000000000..258d170906a67 --- /dev/null +++ b/docs/reference/setup/upgrade/shut-down-node.asciidoc @@ -0,0 +1,20 @@ +* If you are running Elasticsearch with `systemd`: ++ +[source,sh] +-------------------------------------------------- +sudo systemctl stop elasticsearch.service +-------------------------------------------------- + +* If you are running Elasticsearch with SysV `init`: ++ +[source,sh] +-------------------------------------------------- +sudo -i service elasticsearch stop +-------------------------------------------------- + +* If you are running Elasticsearch as a daemon: ++ +[source,sh] +-------------------------------------------------- +kill $(cat pid) +-------------------------------------------------- \ No newline at end of file diff --git a/docs/reference/setup/upgrade/synced-flush.asciidoc b/docs/reference/setup/upgrade/synced-flush.asciidoc new file mode 100644 index 0000000000000..d909688f6a434 --- /dev/null +++ b/docs/reference/setup/upgrade/synced-flush.asciidoc @@ -0,0 +1,11 @@ + +[source,sh] +-------------------------------------------------- +POST _flush/synced +-------------------------------------------------- +// CONSOLE + +When you perform a synced flush, check the response to make sure there are +no failures. Synced flush operations that fail due to pending indexing +operations are listed in the response body, although the request itself +still returns a 200 OK status. If there are failures, reissue the request. 
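For illustration only (the counts are made up and the exact fields can differ between versions), a synced flush response that reports failures for one index looks roughly like this:

[source,js]
--------------------------------------------------
{
  "_shards": { "total": 10, "successful": 8, "failed": 2 },
  "my_index": {
    "total": 10,
    "successful": 8,
    "failed": 2,
    "failures": [
      { "shard": 1, "reason": "[2] ongoing operations on primary" }
    ]
  }
}
--------------------------------------------------
// NOTCONSOLE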
diff --git a/docs/reference/setup/upgrade/upgrade-node.asciidoc b/docs/reference/setup/upgrade/upgrade-node.asciidoc new file mode 100644 index 0000000000000..db9d352e83184 --- /dev/null +++ b/docs/reference/setup/upgrade/upgrade-node.asciidoc @@ -0,0 +1,23 @@ +To upgrade using a <> or <> package: + +* Use `rpm` or `dpkg` to install the new package. All files are + installed in the appropriate location for the operating system + and Elasticsearch config files are not overwritten. + +To upgrade using a zip or compressed tarball: + +.. Extract the zip or tarball to a _new_ directory. This is critical if you + are not using external `config` and `data` directories. + +.. Set the `ES_PATH_CONF` environment variable to specify the location of + your external `config` directory and `jvm.options` file. If you are not + using an external `config` directory, copy your old configuration + over to the new installation. + +.. Set `path.data` in `config/elasticsearch.yml` to point to your external + data directory. If you are not using an external `data` directory, copy + your old data directory over to the new installation. + +.. Set `path.logs` in `config/elasticsearch.yml` to point to the location + where you want to store your logs. If you do not specify this setting, + logs are stored in the directory you extracted the archive to. \ No newline at end of file diff --git a/docs/resiliency/index.asciidoc b/docs/resiliency/index.asciidoc index 458e3b89fe49c..aac8c192372c5 100644 --- a/docs/resiliency/index.asciidoc +++ b/docs/resiliency/index.asciidoc @@ -77,7 +77,7 @@ This problem is mostly fixed by {GIT}20384[#20384] (v5.0.0), which takes committ election. This considerably reduces the chance of this rare problem occurring but does not fully mitigate it. If the second partition happens concurrently with a cluster state update and blocks the cluster state commit message from reaching a majority of nodes, it may be that the in flight update will be lost. If the now-isolated master can still acknowledge the cluster state update to the client this -will amount to the loss of an acknowledged change. Fixing that last scenario needs considerable work and is currently targeted at (v6.0.0). +will amount to the loss of an acknowledged change. Fixing that last scenario needs considerable work. We are currently working on it but have no ETA yet. 
[float] === Better request retry mechanism when nodes are disconnected (STATUS: ONGOING) diff --git a/docs/src/test/java/org/elasticsearch/smoketest/DocsClientYamlTestSuiteIT.java b/docs/src/test/java/org/elasticsearch/smoketest/DocsClientYamlTestSuiteIT.java index a4870aa0c1171..46e448fa54da9 100644 --- a/docs/src/test/java/org/elasticsearch/smoketest/DocsClientYamlTestSuiteIT.java +++ b/docs/src/test/java/org/elasticsearch/smoketest/DocsClientYamlTestSuiteIT.java @@ -19,11 +19,11 @@ package org.elasticsearch.smoketest; +import org.apache.http.HttpHost; import com.carrotsearch.randomizedtesting.annotations.Name; import com.carrotsearch.randomizedtesting.annotations.ParametersFactory; import org.elasticsearch.Version; import org.elasticsearch.client.RestClient; -import org.elasticsearch.client.http.HttpHost; import org.elasticsearch.test.rest.yaml.ClientYamlDocsTestClient; import org.elasticsearch.test.rest.yaml.ClientYamlTestCandidate; import org.elasticsearch.test.rest.yaml.ClientYamlTestClient; diff --git a/modules/aggs-matrix-stats/src/test/java/org/elasticsearch/search/aggregations/matrix/stats/InternalMatrixStatsTests.java b/modules/aggs-matrix-stats/src/test/java/org/elasticsearch/search/aggregations/matrix/stats/InternalMatrixStatsTests.java index 69e14c14a7c60..6ff132b32fa29 100644 --- a/modules/aggs-matrix-stats/src/test/java/org/elasticsearch/search/aggregations/matrix/stats/InternalMatrixStatsTests.java +++ b/modules/aggs-matrix-stats/src/test/java/org/elasticsearch/search/aggregations/matrix/stats/InternalMatrixStatsTests.java @@ -193,7 +193,7 @@ protected void assertFromXContent(InternalMatrixStats expected, ParsedAggregatio } final String unknownField = randomAlphaOfLength(3); - final String other = randomAlphaOfLength(3); + final String other = randomValueOtherThan(unknownField, () -> randomAlphaOfLength(3)); for (MatrixStats matrix : Arrays.asList(actual)) { diff --git a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/ASCIIFoldingTokenFilterFactory.java b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/ASCIIFoldingTokenFilterFactory.java index f8e0c7383a09b..4a5af46feffd2 100644 --- a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/ASCIIFoldingTokenFilterFactory.java +++ b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/ASCIIFoldingTokenFilterFactory.java @@ -42,9 +42,7 @@ public class ASCIIFoldingTokenFilterFactory extends AbstractTokenFilterFactory public ASCIIFoldingTokenFilterFactory(IndexSettings indexSettings, Environment environment, String name, Settings settings) { super(indexSettings, name, settings); - preserveOriginal = settings.getAsBooleanLenientForPreEs6Indices( - indexSettings.getIndexVersionCreated(), PRESERVE_ORIGINAL.getPreferredName(), - DEFAULT_PRESERVE_ORIGINAL, deprecationLogger); + preserveOriginal = settings.getAsBoolean(PRESERVE_ORIGINAL.getPreferredName(), DEFAULT_PRESERVE_ORIGINAL); } @Override diff --git a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/AbstractCompoundWordTokenFilterFactory.java b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/AbstractCompoundWordTokenFilterFactory.java index b59cc166f09a5..92d32c571502a 100644 --- a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/AbstractCompoundWordTokenFilterFactory.java +++ b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/AbstractCompoundWordTokenFilterFactory.java @@ -44,9 +44,8 @@ protected 
AbstractCompoundWordTokenFilterFactory(IndexSettings indexSettings, En minWordSize = settings.getAsInt("min_word_size", CompoundWordTokenFilterBase.DEFAULT_MIN_WORD_SIZE); minSubwordSize = settings.getAsInt("min_subword_size", CompoundWordTokenFilterBase.DEFAULT_MIN_SUBWORD_SIZE); maxSubwordSize = settings.getAsInt("max_subword_size", CompoundWordTokenFilterBase.DEFAULT_MAX_SUBWORD_SIZE); - onlyLongestMatch = settings - .getAsBooleanLenientForPreEs6Indices(indexSettings.getIndexVersionCreated(), "only_longest_match", false, deprecationLogger); - wordList = Analysis.getWordSet(env, indexSettings.getIndexVersionCreated(), settings, "word_list"); + onlyLongestMatch = settings.getAsBoolean("only_longest_match", false); + wordList = Analysis.getWordSet(env, settings, "word_list"); if (wordList == null) { throw new IllegalArgumentException("word_list must be provided for [" + name + "], either as a path to a file, or directly"); } diff --git a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/BengaliNormalizationFilterFactory.java b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/BengaliNormalizationFilterFactory.java new file mode 100644 index 0000000000000..fbec142bf3c1b --- /dev/null +++ b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/BengaliNormalizationFilterFactory.java @@ -0,0 +1,47 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ +package org.elasticsearch.analysis.common; + +import org.apache.lucene.analysis.TokenStream; +import org.apache.lucene.analysis.bn.BengaliNormalizationFilter; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.env.Environment; +import org.elasticsearch.index.IndexSettings; +import org.elasticsearch.index.analysis.AbstractTokenFilterFactory; +import org.elasticsearch.index.analysis.MultiTermAwareComponent; + +/** + * Factory for {@link BengaliNormalizationFilter} + */ +public class BengaliNormalizationFilterFactory extends AbstractTokenFilterFactory implements MultiTermAwareComponent { + + BengaliNormalizationFilterFactory(IndexSettings indexSettings, Environment environment, String name, Settings settings) { + super(indexSettings, name, settings); + } + + @Override + public TokenStream create(TokenStream tokenStream) { + return new BengaliNormalizationFilter(tokenStream); + } + + @Override + public Object getMultiTermComponent() { + return this; + } +} diff --git a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/CJKBigramFilterFactory.java b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/CJKBigramFilterFactory.java index 75323eac10774..be1f2495f0b23 100644 --- a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/CJKBigramFilterFactory.java +++ b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/CJKBigramFilterFactory.java @@ -29,6 +29,7 @@ import java.util.Arrays; import java.util.HashSet; +import java.util.List; import java.util.Set; /** @@ -52,12 +53,11 @@ public final class CJKBigramFilterFactory extends AbstractTokenFilterFactory { CJKBigramFilterFactory(IndexSettings indexSettings, Environment environment, String name, Settings settings) { super(indexSettings, name, settings); - outputUnigrams = settings.getAsBooleanLenientForPreEs6Indices( - indexSettings.getIndexVersionCreated(), "output_unigrams", false, deprecationLogger); - final String[] asArray = settings.getAsArray("ignored_scripts"); + outputUnigrams = settings.getAsBoolean("output_unigrams", false); + final List asArray = settings.getAsList("ignored_scripts"); Set scripts = new HashSet<>(Arrays.asList("han", "hiragana", "katakana", "hangul")); if (asArray != null) { - scripts.removeAll(Arrays.asList(asArray)); + scripts.removeAll(asArray); } int flags = 0; for (String script : scripts) { diff --git a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/CommonAnalysisPlugin.java b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/CommonAnalysisPlugin.java index b94c4f1c79095..813075fa73f06 100644 --- a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/CommonAnalysisPlugin.java +++ b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/CommonAnalysisPlugin.java @@ -25,6 +25,7 @@ import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.analysis.ar.ArabicNormalizationFilter; import org.apache.lucene.analysis.ar.ArabicStemFilter; +import org.apache.lucene.analysis.bn.BengaliNormalizationFilter; import org.apache.lucene.analysis.br.BrazilianStemFilter; import org.apache.lucene.analysis.charfilter.HTMLStripCharFilter; import org.apache.lucene.analysis.cjk.CJKBigramFilter; @@ -94,6 +95,7 @@ public Map> getTokenFilters() { filters.put("arabic_normalization", ArabicNormalizationFilterFactory::new); filters.put("arabic_stem", ArabicStemTokenFilterFactory::new); filters.put("asciifolding", 
ASCIIFoldingTokenFilterFactory::new); + filters.put("bengali_normalization", BengaliNormalizationFilterFactory::new); filters.put("brazilian_stem", BrazilianStemTokenFilterFactory::new); filters.put("cjk_bigram", CJKBigramFilterFactory::new); filters.put("cjk_width", CJKWidthFilterFactory::new); @@ -180,6 +182,7 @@ public List getPreConfiguredTokenFilters() { filters.add(PreConfiguredTokenFilter.singleton("arabic_normalization", true, ArabicNormalizationFilter::new)); filters.add(PreConfiguredTokenFilter.singleton("arabic_stem", false, ArabicStemFilter::new)); filters.add(PreConfiguredTokenFilter.singleton("asciifolding", true, ASCIIFoldingFilter::new)); + filters.add(PreConfiguredTokenFilter.singleton("bengali_normalization", true, BengaliNormalizationFilter::new)); filters.add(PreConfiguredTokenFilter.singleton("brazilian_stem", false, BrazilianStemFilter::new)); filters.add(PreConfiguredTokenFilter.singleton("cjk_bigram", false, CJKBigramFilter::new)); filters.add(PreConfiguredTokenFilter.singleton("cjk_width", true, CJKWidthFilter::new)); diff --git a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/CommonGramsTokenFilterFactory.java b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/CommonGramsTokenFilterFactory.java index a6e9baeab8d81..8de6dcacb735f 100644 --- a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/CommonGramsTokenFilterFactory.java +++ b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/CommonGramsTokenFilterFactory.java @@ -39,10 +39,8 @@ public class CommonGramsTokenFilterFactory extends AbstractTokenFilterFactory { CommonGramsTokenFilterFactory(IndexSettings indexSettings, Environment env, String name, Settings settings) { super(indexSettings, name, settings); - this.ignoreCase = settings.getAsBooleanLenientForPreEs6Indices(indexSettings.getIndexVersionCreated(), - "ignore_case", false, deprecationLogger); - this.queryMode = settings.getAsBooleanLenientForPreEs6Indices(indexSettings.getIndexVersionCreated(), - "query_mode", false, deprecationLogger); + this.ignoreCase = settings.getAsBoolean("ignore_case", false); + this.queryMode = settings.getAsBoolean("query_mode", false); this.words = Analysis.parseCommonWords(env, settings, null, ignoreCase); if (this.words == null) { diff --git a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/ElisionTokenFilterFactory.java b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/ElisionTokenFilterFactory.java index 94fc52165dd23..d3f920d9e63a2 100644 --- a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/ElisionTokenFilterFactory.java +++ b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/ElisionTokenFilterFactory.java @@ -35,7 +35,7 @@ public class ElisionTokenFilterFactory extends AbstractTokenFilterFactory implem ElisionTokenFilterFactory(IndexSettings indexSettings, Environment env, String name, Settings settings) { super(indexSettings, name, settings); - this.articles = Analysis.parseArticles(env, indexSettings.getIndexVersionCreated(), settings); + this.articles = Analysis.parseArticles(env, settings); } @Override diff --git a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/HtmlStripCharFilterFactory.java b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/HtmlStripCharFilterFactory.java index 9ee889e3af610..760c1c79ba4cd 100644 --- 
a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/HtmlStripCharFilterFactory.java +++ b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/HtmlStripCharFilterFactory.java @@ -26,6 +26,7 @@ import org.elasticsearch.index.analysis.AbstractCharFilterFactory; import java.io.Reader; +import java.util.List; import java.util.Set; import static java.util.Collections.unmodifiableSet; @@ -36,8 +37,8 @@ public class HtmlStripCharFilterFactory extends AbstractCharFilterFactory { HtmlStripCharFilterFactory(IndexSettings indexSettings, Environment env, String name, Settings settings) { super(indexSettings, name); - String[] escapedTags = settings.getAsArray("escaped_tags"); - if (escapedTags.length > 0) { + List escapedTags = settings.getAsList("escaped_tags"); + if (escapedTags.size() > 0) { this.escapedTags = unmodifiableSet(newHashSet(escapedTags)); } else { this.escapedTags = null; diff --git a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/KeepTypesFilterFactory.java b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/KeepTypesFilterFactory.java index 4da560836eb13..0f94b521e4b7d 100644 --- a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/KeepTypesFilterFactory.java +++ b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/KeepTypesFilterFactory.java @@ -27,8 +27,8 @@ import org.elasticsearch.index.analysis.AbstractTokenFilterFactory; import org.elasticsearch.index.analysis.TokenFilterFactory; -import java.util.Arrays; import java.util.HashSet; +import java.util.List; import java.util.Set; /** @@ -48,12 +48,12 @@ public class KeepTypesFilterFactory extends AbstractTokenFilterFactory { KeepTypesFilterFactory(IndexSettings indexSettings, Environment env, String name, Settings settings) { super(indexSettings, name, settings); - final String[] arrayKeepTypes = settings.getAsArray(KEEP_TYPES_KEY, null); + final List arrayKeepTypes = settings.getAsList(KEEP_TYPES_KEY, null); if ((arrayKeepTypes == null)) { throw new IllegalArgumentException("keep_types requires `" + KEEP_TYPES_KEY + "` to be configured"); } - this.keepTypes = new HashSet<>(Arrays.asList(arrayKeepTypes)); + this.keepTypes = new HashSet<>(arrayKeepTypes); } @Override diff --git a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/KeepWordFilterFactory.java b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/KeepWordFilterFactory.java index f42797e0ff644..df67f24cc7f5f 100644 --- a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/KeepWordFilterFactory.java +++ b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/KeepWordFilterFactory.java @@ -22,7 +22,6 @@ import org.apache.lucene.analysis.CharArraySet; import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.analysis.miscellaneous.KeepWordFilter; -import org.apache.lucene.util.Version; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; import org.elasticsearch.index.IndexSettings; @@ -31,6 +30,8 @@ import org.elasticsearch.index.analysis.StopTokenFilterFactory; import org.elasticsearch.index.analysis.TokenFilterFactory; +import java.util.List; + /** * A {@link TokenFilterFactory} for {@link KeepWordFilter}. 
This filter only * keep tokens that are contained in the term set configured via @@ -61,7 +62,7 @@ public class KeepWordFilterFactory extends AbstractTokenFilterFactory { KeepWordFilterFactory(IndexSettings indexSettings, Environment env, String name, Settings settings) { super(indexSettings, name, settings); - final String[] arrayKeepWords = settings.getAsArray(KEEP_WORDS_KEY, null); + final List arrayKeepWords = settings.getAsList(KEEP_WORDS_KEY, null); final String keepWordsPath = settings.get(KEEP_WORDS_PATH_KEY, null); if ((arrayKeepWords == null && keepWordsPath == null) || (arrayKeepWords != null && keepWordsPath != null)) { // we don't allow both or none @@ -71,7 +72,7 @@ public class KeepWordFilterFactory extends AbstractTokenFilterFactory { if (settings.get(ENABLE_POS_INC_KEY) != null) { throw new IllegalArgumentException(ENABLE_POS_INC_KEY + " is not supported anymore. Please fix your analysis chain"); } - this.keepWords = Analysis.getWordSet(env, indexSettings.getIndexVersionCreated(), settings, KEEP_WORDS_KEY); + this.keepWords = Analysis.getWordSet(env, settings, KEEP_WORDS_KEY); } @Override diff --git a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/KeywordMarkerTokenFilterFactory.java b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/KeywordMarkerTokenFilterFactory.java index a57e322ff0244..b32f9ad4b63c9 100644 --- a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/KeywordMarkerTokenFilterFactory.java +++ b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/KeywordMarkerTokenFilterFactory.java @@ -56,7 +56,7 @@ public class KeywordMarkerTokenFilterFactory extends AbstractTokenFilterFactory super(indexSettings, name, settings); boolean ignoreCase = - settings.getAsBooleanLenientForPreEs6Indices(indexSettings.getIndexVersionCreated(), "ignore_case", false, deprecationLogger); + settings.getAsBoolean("ignore_case", false); String patternString = settings.get("keywords_pattern"); if (patternString != null) { // a pattern for matching keywords is specified, as opposed to a @@ -68,7 +68,7 @@ public class KeywordMarkerTokenFilterFactory extends AbstractTokenFilterFactory keywordPattern = Pattern.compile(patternString); keywordLookup = null; } else { - Set rules = Analysis.getWordSet(env, indexSettings.getIndexVersionCreated(), settings, "keywords"); + Set rules = Analysis.getWordSet(env, settings, "keywords"); if (rules == null) { throw new IllegalArgumentException( "keyword filter requires either `keywords`, `keywords_path`, " + diff --git a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/LimitTokenCountFilterFactory.java b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/LimitTokenCountFilterFactory.java index 862c2e67261db..5e6b70f90dd66 100644 --- a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/LimitTokenCountFilterFactory.java +++ b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/LimitTokenCountFilterFactory.java @@ -37,8 +37,7 @@ public class LimitTokenCountFilterFactory extends AbstractTokenFilterFactory { LimitTokenCountFilterFactory(IndexSettings indexSettings, Environment env, String name, Settings settings) { super(indexSettings, name, settings); this.maxTokenCount = settings.getAsInt("max_token_count", DEFAULT_MAX_TOKEN_COUNT); - this.consumeAllTokens = settings.getAsBooleanLenientForPreEs6Indices( - indexSettings.getIndexVersionCreated(), "consume_all_tokens", 
DEFAULT_CONSUME_ALL_TOKENS, deprecationLogger); + this.consumeAllTokens = settings.getAsBoolean("consume_all_tokens", DEFAULT_CONSUME_ALL_TOKENS); } @Override diff --git a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/NGramTokenFilterFactory.java b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/NGramTokenFilterFactory.java index 2d7a8c52fd63e..22b060613163c 100644 --- a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/NGramTokenFilterFactory.java +++ b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/NGramTokenFilterFactory.java @@ -25,6 +25,8 @@ import org.elasticsearch.env.Environment; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.analysis.AbstractTokenFilterFactory; +import org.elasticsearch.Version; + public class NGramTokenFilterFactory extends AbstractTokenFilterFactory { @@ -36,8 +38,21 @@ public class NGramTokenFilterFactory extends AbstractTokenFilterFactory { NGramTokenFilterFactory(IndexSettings indexSettings, Environment environment, String name, Settings settings) { super(indexSettings, name, settings); + int maxAllowedNgramDiff = indexSettings.getMaxNgramDiff(); this.minGram = settings.getAsInt("min_gram", NGramTokenFilter.DEFAULT_MIN_NGRAM_SIZE); this.maxGram = settings.getAsInt("max_gram", NGramTokenFilter.DEFAULT_MAX_NGRAM_SIZE); + int ngramDiff = maxGram - minGram; + if (ngramDiff > maxAllowedNgramDiff) { + if (indexSettings.getIndexVersionCreated().onOrAfter(Version.V_7_0_0_alpha1)) { + throw new IllegalArgumentException( + "The difference between max_gram and min_gram in NGram Tokenizer must be less than or equal to: [" + + maxAllowedNgramDiff + "] but was [" + ngramDiff + "]. This limit can be set by changing the [" + + IndexSettings.MAX_NGRAM_DIFF_SETTING.getKey() + "] index level setting."); + } else { + deprecationLogger.deprecated("Deprecated big difference between max_gram and min_gram in NGram Tokenizer," + + "expected difference must be less than or equal to: [" + maxAllowedNgramDiff + "]"); + } + } } @Override diff --git a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/PatternCaptureGroupTokenFilterFactory.java b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/PatternCaptureGroupTokenFilterFactory.java index 7c58bc1491ade..7e69e44ffff24 100644 --- a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/PatternCaptureGroupTokenFilterFactory.java +++ b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/PatternCaptureGroupTokenFilterFactory.java @@ -27,6 +27,7 @@ import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.analysis.AbstractTokenFilterFactory; +import java.util.List; import java.util.regex.Pattern; public class PatternCaptureGroupTokenFilterFactory extends AbstractTokenFilterFactory { @@ -37,17 +38,16 @@ public class PatternCaptureGroupTokenFilterFactory extends AbstractTokenFilterFa PatternCaptureGroupTokenFilterFactory(IndexSettings indexSettings, Environment environment, String name, Settings settings) { super(indexSettings, name, settings); - String[] regexes = settings.getAsArray(PATTERNS_KEY, null, false); + List regexes = settings.getAsList(PATTERNS_KEY, null, false); if (regexes == null) { throw new IllegalArgumentException("required setting '" + PATTERNS_KEY + "' is missing for token filter [" + name + "]"); } - patterns = new Pattern[regexes.length]; - for (int i = 0; i < regexes.length; i++) { - patterns[i] = 
Pattern.compile(regexes[i]); + patterns = new Pattern[regexes.size()]; + for (int i = 0; i < regexes.size(); i++) { + patterns[i] = Pattern.compile(regexes.get(i)); } - preserveOriginal = settings.getAsBooleanLenientForPreEs6Indices( - indexSettings.getIndexVersionCreated(), PRESERVE_ORIG_KEY, true, deprecationLogger); + preserveOriginal = settings.getAsBoolean(PRESERVE_ORIG_KEY, true); } @Override diff --git a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/PatternReplaceTokenFilterFactory.java b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/PatternReplaceTokenFilterFactory.java index 8af85861875d8..a503ad61cce6b 100644 --- a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/PatternReplaceTokenFilterFactory.java +++ b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/PatternReplaceTokenFilterFactory.java @@ -44,7 +44,7 @@ public PatternReplaceTokenFilterFactory(IndexSettings indexSettings, Environment } this.pattern = Regex.compile(sPattern, settings.get("flags")); this.replacement = settings.get("replacement", ""); - this.all = settings.getAsBooleanLenientForPreEs6Indices(indexSettings.getIndexVersionCreated(), "all", true, deprecationLogger); + this.all = settings.getAsBoolean("all", true); } @Override diff --git a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/StemmerTokenFilterFactory.java b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/StemmerTokenFilterFactory.java index c94a449afd2c1..630a6a6ebeca4 100644 --- a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/StemmerTokenFilterFactory.java +++ b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/StemmerTokenFilterFactory.java @@ -22,6 +22,7 @@ import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.analysis.ar.ArabicStemFilter; import org.apache.lucene.analysis.bg.BulgarianStemFilter; +import org.apache.lucene.analysis.bn.BengaliStemFilter; import org.apache.lucene.analysis.br.BrazilianStemFilter; import org.apache.lucene.analysis.ckb.SoraniStemFilter; import org.apache.lucene.analysis.cz.CzechStemFilter; @@ -102,6 +103,8 @@ public TokenStream create(TokenStream tokenStream) { return new SnowballFilter(tokenStream, new ArmenianStemmer()); } else if ("basque".equalsIgnoreCase(language)) { return new SnowballFilter(tokenStream, new BasqueStemmer()); + } else if ("bengali".equalsIgnoreCase(language)) { + return new BengaliStemFilter(tokenStream); } else if ("brazilian".equalsIgnoreCase(language)) { return new BrazilianStemFilter(tokenStream); } else if ("bulgarian".equalsIgnoreCase(language)) { diff --git a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/UniqueTokenFilterFactory.java b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/UniqueTokenFilterFactory.java index 256e3dad5c09a..120947f5fe871 100644 --- a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/UniqueTokenFilterFactory.java +++ b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/UniqueTokenFilterFactory.java @@ -31,8 +31,7 @@ public class UniqueTokenFilterFactory extends AbstractTokenFilterFactory { UniqueTokenFilterFactory(IndexSettings indexSettings, Environment environment, String name, Settings settings) { super(indexSettings, name, settings); - this.onlyOnSamePosition = settings.getAsBooleanLenientForPreEs6Indices( - indexSettings.getIndexVersionCreated(), 
"only_on_same_position", false, deprecationLogger); + this.onlyOnSamePosition = settings.getAsBoolean("only_on_same_position", false); } @Override diff --git a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/WordDelimiterGraphTokenFilterFactory.java b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/WordDelimiterGraphTokenFilterFactory.java index 1613339853117..6173cfdc84af4 100644 --- a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/WordDelimiterGraphTokenFilterFactory.java +++ b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/WordDelimiterGraphTokenFilterFactory.java @@ -85,8 +85,7 @@ public WordDelimiterGraphTokenFilterFactory(IndexSettings indexSettings, Environ // If set, causes trailing "'s" to be removed for each subword: "O'Neil's" => "O", "Neil" flags |= getFlag(STEM_ENGLISH_POSSESSIVE, settings, "stem_english_possessive", true); // If not null is the set of tokens to protect from being delimited - Set protectedWords = Analysis.getWordSet(env, indexSettings.getIndexVersionCreated(), - settings, "protected_words"); + Set protectedWords = Analysis.getWordSet(env, settings, "protected_words"); this.protoWords = protectedWords == null ? null : CharArraySet.copy(protectedWords); this.flags = flags; } diff --git a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/WordDelimiterTokenFilterFactory.java b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/WordDelimiterTokenFilterFactory.java index 8c38beb8f8b7b..93677d0898fa2 100644 --- a/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/WordDelimiterTokenFilterFactory.java +++ b/modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/WordDelimiterTokenFilterFactory.java @@ -90,8 +90,7 @@ public WordDelimiterTokenFilterFactory(IndexSettings indexSettings, Environment // If set, causes trailing "'s" to be removed for each subword: "O'Neil's" => "O", "Neil" flags |= getFlag(STEM_ENGLISH_POSSESSIVE, settings, "stem_english_possessive", true); // If not null is the set of tokens to protect from being delimited - Set protectedWords = Analysis.getWordSet(env, indexSettings.getIndexVersionCreated(), - settings, "protected_words"); + Set protectedWords = Analysis.getWordSet(env, settings, "protected_words"); this.protoWords = protectedWords == null ? 
null : CharArraySet.copy(protectedWords); this.flags = flags; } @@ -105,8 +104,7 @@ public TokenStream create(TokenStream tokenStream) { } public int getFlag(int flag, Settings settings, String key, boolean defaultValue) { - if (settings.getAsBooleanLenientForPreEs6Indices(indexSettings.getIndexVersionCreated(), - key, defaultValue, deprecationLogger)) { + if (settings.getAsBoolean(key, defaultValue)) { return flag; } return 0; diff --git a/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/CommonAnalysisFactoryTests.java b/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/CommonAnalysisFactoryTests.java index 22d34f218e0f2..707930277e7a2 100644 --- a/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/CommonAnalysisFactoryTests.java +++ b/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/CommonAnalysisFactoryTests.java @@ -67,6 +67,7 @@ protected Map> getTokenFilters() { filters.put("uppercase", UpperCaseTokenFilterFactory.class); filters.put("ngram", NGramTokenFilterFactory.class); filters.put("edgengram", EdgeNGramTokenFilterFactory.class); + filters.put("bengalistem", StemmerTokenFilterFactory.class); filters.put("bulgarianstem", StemmerTokenFilterFactory.class); filters.put("englishminimalstem", StemmerTokenFilterFactory.class); filters.put("englishpossessive", StemmerTokenFilterFactory.class); @@ -106,6 +107,7 @@ protected Map> getTokenFilters() { filters.put("patternreplace", PatternReplaceTokenFilterFactory.class); filters.put("patterncapturegroup", PatternCaptureGroupTokenFilterFactory.class); filters.put("arabicnormalization", ArabicNormalizationFilterFactory.class); + filters.put("bengalinormalization", BengaliNormalizationFilterFactory.class); filters.put("germannormalization", GermanNormalizationFilterFactory.class); filters.put("hindinormalization", HindiNormalizationFilterFactory.class); filters.put("indicnormalization", IndicNormalizationFilterFactory.class); @@ -159,6 +161,7 @@ protected Map> getPreConfiguredTokenFilters() { filters.put("arabic_normalization", null); filters.put("arabic_stem", null); filters.put("asciifolding", null); + filters.put("bengali_normalization", null); filters.put("brazilian_stem", null); filters.put("cjk_bigram", null); filters.put("cjk_width", null); diff --git a/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/CommonGramsTokenFilterFactoryTests.java b/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/CommonGramsTokenFilterFactoryTests.java index e8578fde60d45..8efc0d5941f9e 100644 --- a/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/CommonGramsTokenFilterFactoryTests.java +++ b/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/CommonGramsTokenFilterFactoryTests.java @@ -56,7 +56,7 @@ public void testDefault() throws IOException { public void testWithoutCommonWordsMatch() throws IOException { { Settings settings = Settings.builder().put("index.analysis.filter.common_grams_default.type", "common_grams") - .putArray("index.analysis.filter.common_grams_default.common_words", "chromosome", "protein") + .putList("index.analysis.filter.common_grams_default.common_words", "chromosome", "protein") .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) .build(); @@ -75,7 +75,7 @@ public void testWithoutCommonWordsMatch() throws IOException { Settings settings = Settings.builder().put("index.analysis.filter.common_grams_default.type", "common_grams") 
.put("index.analysis.filter.common_grams_default.query_mode", false) .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) - .putArray("index.analysis.filter.common_grams_default.common_words", "chromosome", "protein") + .putList("index.analysis.filter.common_grams_default.common_words", "chromosome", "protein") .build(); ESTestCase.TestAnalysis analysis = createTestAnalysisFromSettings(settings); { @@ -94,7 +94,7 @@ public void testSettings() throws IOException { Settings settings = Settings.builder().put("index.analysis.filter.common_grams_1.type", "common_grams") .put("index.analysis.filter.common_grams_1.ignore_case", true) .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) - .putArray("index.analysis.filter.common_grams_1.common_words", "the", "Or", "Not", "a", "is", "an", "they", "are") + .putList("index.analysis.filter.common_grams_1.common_words", "the", "Or", "Not", "a", "is", "an", "they", "are") .build(); ESTestCase.TestAnalysis analysis = createTestAnalysisFromSettings(settings); TokenFilterFactory tokenFilter = analysis.tokenFilter.get("common_grams_1"); @@ -109,7 +109,7 @@ public void testSettings() throws IOException { Settings settings = Settings.builder().put("index.analysis.filter.common_grams_2.type", "common_grams") .put("index.analysis.filter.common_grams_2.ignore_case", false) .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) - .putArray("index.analysis.filter.common_grams_2.common_words", "the", "Or", "noT", "a", "is", "an", "they", "are") + .putList("index.analysis.filter.common_grams_2.common_words", "the", "Or", "noT", "a", "is", "an", "they", "are") .build(); ESTestCase.TestAnalysis analysis = createTestAnalysisFromSettings(settings); TokenFilterFactory tokenFilter = analysis.tokenFilter.get("common_grams_2"); @@ -122,7 +122,7 @@ public void testSettings() throws IOException { } { Settings settings = Settings.builder().put("index.analysis.filter.common_grams_3.type", "common_grams") - .putArray("index.analysis.filter.common_grams_3.common_words", "the", "or", "not", "a", "is", "an", "they", "are") + .putList("index.analysis.filter.common_grams_3.common_words", "the", "or", "not", "a", "is", "an", "they", "are") .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) .build(); ESTestCase.TestAnalysis analysis = createTestAnalysisFromSettings(settings); @@ -139,7 +139,7 @@ public void testSettings() throws IOException { public void testCommonGramsAnalysis() throws IOException { String json = "/org/elasticsearch/analysis/common/commongrams.json"; Settings settings = Settings.builder() - .loadFromStream(json, getClass().getResourceAsStream(json)) + .loadFromStream(json, getClass().getResourceAsStream(json), false) .put(Environment.PATH_HOME_SETTING.getKey(), createHome()) .build(); { @@ -166,7 +166,7 @@ public void testQueryModeSettings() throws IOException { { Settings settings = Settings.builder().put("index.analysis.filter.common_grams_1.type", "common_grams") .put("index.analysis.filter.common_grams_1.query_mode", true) - .putArray("index.analysis.filter.common_grams_1.common_words", "the", "Or", "Not", "a", "is", "an", "they", "are") + .putList("index.analysis.filter.common_grams_1.common_words", "the", "Or", "Not", "a", "is", "an", "they", "are") .put("index.analysis.filter.common_grams_1.ignore_case", true) .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) .build(); @@ -181,7 +181,7 @@ public void testQueryModeSettings() throws IOException { { Settings 
settings = Settings.builder().put("index.analysis.filter.common_grams_2.type", "common_grams") .put("index.analysis.filter.common_grams_2.query_mode", true) - .putArray("index.analysis.filter.common_grams_2.common_words", "the", "Or", "noT", "a", "is", "an", "they", "are") + .putList("index.analysis.filter.common_grams_2.common_words", "the", "Or", "noT", "a", "is", "an", "they", "are") .put("index.analysis.filter.common_grams_2.ignore_case", false) .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) .build(); @@ -196,7 +196,7 @@ public void testQueryModeSettings() throws IOException { { Settings settings = Settings.builder().put("index.analysis.filter.common_grams_3.type", "common_grams") .put("index.analysis.filter.common_grams_3.query_mode", true) - .putArray("index.analysis.filter.common_grams_3.common_words", "the", "Or", "noT", "a", "is", "an", "they", "are") + .putList("index.analysis.filter.common_grams_3.common_words", "the", "Or", "noT", "a", "is", "an", "they", "are") .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) .build(); ESTestCase.TestAnalysis analysis = createTestAnalysisFromSettings(settings); @@ -210,7 +210,7 @@ public void testQueryModeSettings() throws IOException { { Settings settings = Settings.builder().put("index.analysis.filter.common_grams_4.type", "common_grams") .put("index.analysis.filter.common_grams_4.query_mode", true) - .putArray("index.analysis.filter.common_grams_4.common_words", "the", "or", "not", "a", "is", "an", "they", "are") + .putList("index.analysis.filter.common_grams_4.common_words", "the", "or", "not", "a", "is", "an", "they", "are") .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) .build(); ESTestCase.TestAnalysis analysis = createTestAnalysisFromSettings(settings); @@ -226,7 +226,7 @@ public void testQueryModeSettings() throws IOException { public void testQueryModeCommonGramsAnalysis() throws IOException { String json = "/org/elasticsearch/analysis/common/commongrams_query_mode.json"; Settings settings = Settings.builder() - .loadFromStream(json, getClass().getResourceAsStream(json)) + .loadFromStream(json, getClass().getResourceAsStream(json), false) .put(Environment.PATH_HOME_SETTING.getKey(), createHome()) .build(); { diff --git a/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/CompoundAnalysisTests.java b/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/CompoundAnalysisTests.java index 13b512f86e0a5..9a7bf5eb91570 100644 --- a/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/CompoundAnalysisTests.java +++ b/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/CompoundAnalysisTests.java @@ -24,10 +24,9 @@ import org.apache.lucene.analysis.tokenattributes.CharTermAttribute; import org.elasticsearch.Version; import org.elasticsearch.cluster.metadata.IndexMetaData; -import org.elasticsearch.common.lucene.all.AllEntries; -import org.elasticsearch.common.lucene.all.AllTokenStream; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; +import org.elasticsearch.env.TestEnvironment; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.analysis.IndexAnalyzers; import org.elasticsearch.index.analysis.MyFilterTokenFilterFactory; @@ -75,10 +74,7 @@ private List analyze(Settings settings, String analyzerName, String text IndexAnalyzers indexAnalyzers = analysisModule.getAnalysisRegistry().build(idxSettings); Analyzer analyzer = 
indexAnalyzers.get(analyzerName).analyzer(); - AllEntries allEntries = new AllEntries(); - allEntries.addText("field1", text, 1.0f); - - TokenStream stream = AllTokenStream.allTokenStream("_all", text, 1.0f, analyzer); + TokenStream stream = analyzer.tokenStream("" , text); stream.reset(); CharTermAttribute termAtt = stream.addAttribute(CharTermAttribute.class); @@ -92,7 +88,7 @@ private List analyze(Settings settings, String analyzerName, String text private AnalysisModule createAnalysisModule(Settings settings) throws IOException { CommonAnalysisPlugin commonAnalysisPlugin = new CommonAnalysisPlugin(); - return new AnalysisModule(new Environment(settings), Arrays.asList(commonAnalysisPlugin, new AnalysisPlugin() { + return new AnalysisModule(TestEnvironment.newEnvironment(settings), Arrays.asList(commonAnalysisPlugin, new AnalysisPlugin() { @Override public Map> getTokenFilters() { return singletonMap("myfilter", MyFilterTokenFilterFactory::new); @@ -103,7 +99,7 @@ public Map> getTokenFilters() { private Settings getJsonSettings() throws IOException { String json = "/org/elasticsearch/analysis/common/test1.json"; return Settings.builder() - .loadFromStream(json, getClass().getResourceAsStream(json)) + .loadFromStream(json, getClass().getResourceAsStream(json), false) .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) .build(); @@ -112,7 +108,7 @@ private Settings getJsonSettings() throws IOException { private Settings getYamlSettings() throws IOException { String yaml = "/org/elasticsearch/analysis/common/test1.yml"; return Settings.builder() - .loadFromStream(yaml, getClass().getResourceAsStream(yaml)) + .loadFromStream(yaml, getClass().getResourceAsStream(yaml), false) .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) .build(); diff --git a/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/HighlighterWithAnalyzersTests.java b/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/HighlighterWithAnalyzersTests.java index bb1f2a55f7cb4..96e8043570d5e 100644 --- a/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/HighlighterWithAnalyzersTests.java +++ b/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/HighlighterWithAnalyzersTests.java @@ -21,6 +21,7 @@ import org.elasticsearch.action.search.SearchResponse; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.query.Operator; import org.elasticsearch.plugins.Plugin; import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder; @@ -66,12 +67,13 @@ public void testNgramHighlightingWithBrokenPositions() throws IOException { .endObject()) .setSettings(Settings.builder() .put(indexSettings()) + .put(IndexSettings.MAX_NGRAM_DIFF_SETTING.getKey(), 19) .put("analysis.tokenizer.autocomplete.max_gram", 20) .put("analysis.tokenizer.autocomplete.min_gram", 1) .put("analysis.tokenizer.autocomplete.token_chars", "letter,digit") .put("analysis.tokenizer.autocomplete.type", "nGram") .put("analysis.filter.wordDelimiter.type", "word_delimiter") - .putArray("analysis.filter.wordDelimiter.type_table", + .putList("analysis.filter.wordDelimiter.type_table", "& => ALPHANUM", "| => ALPHANUM", "! => ALPHANUM", "? => ALPHANUM", ". 
=> ALPHANUM", "- => ALPHANUM", "# => ALPHANUM", "% => ALPHANUM", "+ => ALPHANUM", @@ -88,10 +90,10 @@ public void testNgramHighlightingWithBrokenPositions() throws IOException { .put("analysis.filter.wordDelimiter.catenate_all", false) .put("analysis.analyzer.autocomplete.tokenizer", "autocomplete") - .putArray("analysis.analyzer.autocomplete.filter", + .putList("analysis.analyzer.autocomplete.filter", "lowercase", "wordDelimiter") .put("analysis.analyzer.search_autocomplete.tokenizer", "whitespace") - .putArray("analysis.analyzer.search_autocomplete.filter", + .putList("analysis.analyzer.search_autocomplete.filter", "lowercase", "wordDelimiter"))); client().prepareIndex("test", "test", "1") .setSource("name", "ARCOTEL Hotels Deutschland").get(); @@ -121,7 +123,7 @@ public void testMultiPhraseCutoff() throws IOException { .put("analysis.filter.wordDelimiter.catenate_numbers", true) .put("analysis.filter.wordDelimiter.catenate_all", false) .put("analysis.analyzer.custom_analyzer.tokenizer", "whitespace") - .putArray("analysis.analyzer.custom_analyzer.filter", + .putList("analysis.analyzer.custom_analyzer.filter", "lowercase", "wordDelimiter")) ); diff --git a/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/KeepFilterFactoryTests.java b/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/KeepFilterFactoryTests.java index 83373e169b418..e9248c3d21289 100644 --- a/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/KeepFilterFactoryTests.java +++ b/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/KeepFilterFactoryTests.java @@ -76,7 +76,7 @@ public void testKeepWordsPathSettings() { } settings = Settings.builder().put(settings) - .put("index.analysis.filter.non_broken_keep_filter.keep_words", new String[]{"test"}) + .putList("index.analysis.filter.non_broken_keep_filter.keep_words", "test") .build(); try { // test our none existing setup is picked up diff --git a/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/KeepTypesFilterFactoryTests.java b/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/KeepTypesFilterFactoryTests.java index 4df1fb780e932..a19882d6faa00 100644 --- a/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/KeepTypesFilterFactoryTests.java +++ b/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/KeepTypesFilterFactoryTests.java @@ -38,7 +38,7 @@ public void testKeepTypes() throws IOException { Settings settings = Settings.builder() .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) .put("index.analysis.filter.keep_numbers.type", "keep_types") - .putArray("index.analysis.filter.keep_numbers.types", new String[] {"", ""}) + .putList("index.analysis.filter.keep_numbers.types", new String[] {"", ""}) .build(); ESTestCase.TestAnalysis analysis = AnalysisTestsHelper.createTestAnalysisFromSettings(settings, new CommonAnalysisPlugin()); TokenFilterFactory tokenFilter = analysis.tokenFilter.get("keep_numbers"); diff --git a/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/MassiveWordListTests.java b/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/MassiveWordListTests.java new file mode 100644 index 0000000000000..f454e8c776c12 --- /dev/null +++ b/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/MassiveWordListTests.java @@ -0,0 +1,50 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.analysis.common; + +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.plugins.Plugin; +import org.elasticsearch.test.ESSingleNodeTestCase; + +import java.util.Collection; +import java.util.Collections; + +public class MassiveWordListTests extends ESSingleNodeTestCase { + + @Override + protected Collection> getPlugins() { + return Collections.singleton(CommonAnalysisPlugin.class); + } + + public void testCreateIndexWithMassiveWordList() { + String[] wordList = new String[100000]; + for (int i = 0; i < wordList.length; i++) { + wordList[i] = "hello world"; + } + client().admin().indices().prepareCreate("test").setSettings(Settings.builder() + .put("index.number_of_shards", 1) + .put("analysis.analyzer.test_analyzer.type", "custom") + .put("analysis.analyzer.test_analyzer.tokenizer", "standard") + .putList("analysis.analyzer.test_analyzer.filter", "dictionary_decompounder", "lowercase") + .put("analysis.filter.dictionary_decompounder.type", "dictionary_decompounder") + .putList("analysis.filter.dictionary_decompounder.word_list", wordList) + ).get(); + } +} diff --git a/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/NGramTokenizerFactoryTests.java b/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/NGramTokenizerFactoryTests.java index 24efd89b7e0c8..3c6250eacfa66 100644 --- a/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/NGramTokenizerFactoryTests.java +++ b/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/NGramTokenizerFactoryTests.java @@ -76,9 +76,10 @@ public void testParseTokenChars() { public void testNoTokenChars() throws IOException { final Index index = new Index("test", "_na_"); final String name = "ngr"; - final Settings indexSettings = newAnalysisSettingsBuilder().build(); + final Settings indexSettings = newAnalysisSettingsBuilder().put(IndexSettings.MAX_NGRAM_DIFF_SETTING.getKey(), 2).build(); + final Settings settings = newAnalysisSettingsBuilder().put("min_gram", 2).put("max_gram", 4) - .putArray("token_chars", new String[0]).build(); + .putList("token_chars", new String[0]).build(); Tokenizer tokenizer = new NGramTokenizerFactory(IndexSettingsModule.newIndexSettings(index, indexSettings), null, name, settings) .create(); tokenizer.setReader(new StringReader("1.34")); @@ -152,6 +153,31 @@ public void testBackwardsCompatibilityEdgeNgramTokenFilter() throws Exception { } + /*` + * test that throws an error when trying to get a NGramTokenizer where difference between max_gram and min_gram + * is greater than the allowed value of max_ngram_diff + */ + public void testMaxNGramDiffException() throws Exception{ + final Index index = new Index("test", "_na_"); + final String name = "ngr"; + final Settings indexSettings = 
newAnalysisSettingsBuilder().build(); + IndexSettings indexProperties = IndexSettingsModule.newIndexSettings(index, indexSettings); + + int maxAllowedNgramDiff = indexProperties.getMaxNgramDiff(); + int ngramDiff = maxAllowedNgramDiff + 1; + int min_gram = 2; + int max_gram = min_gram + ngramDiff; + + final Settings settings = newAnalysisSettingsBuilder().put("min_gram", min_gram).put("max_gram", max_gram).build(); + IllegalArgumentException ex = expectThrows(IllegalArgumentException.class, () -> + new NGramTokenizerFactory(indexProperties, null, name, settings).create()); + assertEquals( + "The difference between max_gram and min_gram in NGram Tokenizer must be less than or equal to: [" + + maxAllowedNgramDiff + "] but was [" + ngramDiff + "]. This limit can be set by changing the [" + + IndexSettings.MAX_NGRAM_DIFF_SETTING.getKey() + "] index level setting.", + ex.getMessage()); + } + private Version randomVersion(Random random) throws IllegalArgumentException, IllegalAccessException { Field[] declaredFields = Version.class.getFields(); List versionFields = new ArrayList<>(); diff --git a/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/PatternCaptureTokenFilterTests.java b/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/PatternCaptureTokenFilterTests.java index 34bc7d9e02601..57b5a5a0abb24 100644 --- a/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/PatternCaptureTokenFilterTests.java +++ b/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/PatternCaptureTokenFilterTests.java @@ -37,7 +37,7 @@ public void testPatternCaptureTokenFilter() throws Exception { String json = "/org/elasticsearch/analysis/common/pattern_capture.json"; Settings settings = Settings.builder() .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir()) - .loadFromStream(json, getClass().getResourceAsStream(json)) + .loadFromStream(json, getClass().getResourceAsStream(json), false) .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) .build(); diff --git a/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/QueryStringWithAnalyzersTests.java b/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/QueryStringWithAnalyzersTests.java index 3fe7416dade7f..fb04ee59a7cd8 100644 --- a/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/QueryStringWithAnalyzersTests.java +++ b/modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/QueryStringWithAnalyzersTests.java @@ -20,6 +20,7 @@ package org.elasticsearch.analysis.common; import org.elasticsearch.action.search.SearchResponse; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.query.Operator; import org.elasticsearch.plugins.Plugin; import org.elasticsearch.test.ESIntegTestCase; @@ -42,17 +43,18 @@ protected Collection> nodePlugins() { */ public void testCustomWordDelimiterQueryString() { assertAcked(client().admin().indices().prepareCreate("test") - .setSettings("analysis.analyzer.my_analyzer.type", "custom", - "analysis.analyzer.my_analyzer.tokenizer", "whitespace", - "analysis.analyzer.my_analyzer.filter", "custom_word_delimiter", - "analysis.filter.custom_word_delimiter.type", "word_delimiter", - "analysis.filter.custom_word_delimiter.generate_word_parts", "true", - "analysis.filter.custom_word_delimiter.generate_number_parts", "false", - "analysis.filter.custom_word_delimiter.catenate_numbers", "true", - "analysis.filter.custom_word_delimiter.catenate_words", 
"false", - "analysis.filter.custom_word_delimiter.split_on_case_change", "false", - "analysis.filter.custom_word_delimiter.split_on_numerics", "false", - "analysis.filter.custom_word_delimiter.stem_english_possessive", "false") + .setSettings(Settings.builder() + .put("analysis.analyzer.my_analyzer.type", "custom") + .put("analysis.analyzer.my_analyzer.tokenizer", "whitespace") + .put("analysis.analyzer.my_analyzer.filter", "custom_word_delimiter") + .put("analysis.filter.custom_word_delimiter.type", "word_delimiter") + .put("analysis.filter.custom_word_delimiter.generate_word_parts", "true") + .put("analysis.filter.custom_word_delimiter.generate_number_parts", "false") + .put("analysis.filter.custom_word_delimiter.catenate_numbers", "true") + .put("analysis.filter.custom_word_delimiter.catenate_words", "false") + .put("analysis.filter.custom_word_delimiter.split_on_case_change", "false") + .put("analysis.filter.custom_word_delimiter.split_on_numerics", "false") + .put("analysis.filter.custom_word_delimiter.stem_english_possessive", "false")) .addMapping("type1", "field1", "type=text,analyzer=my_analyzer", "field2", "type=text,analyzer=my_analyzer")); diff --git a/modules/analysis-common/src/test/resources/rest-api-spec/test/analysis-common/20_analyzers.yml b/modules/analysis-common/src/test/resources/rest-api-spec/test/analysis-common/20_analyzers.yml index abd9a3f5ae0d9..6ff3b8c802735 100644 --- a/modules/analysis-common/src/test/resources/rest-api-spec/test/analysis-common/20_analyzers.yml +++ b/modules/analysis-common/src/test/resources/rest-api-spec/test/analysis-common/20_analyzers.yml @@ -27,3 +27,13 @@ - match: { detail.analyzer.tokens.1.start_offset: 4 } - match: { detail.analyzer.tokens.1.end_offset: 8 } - match: { detail.analyzer.tokens.1.position: 1 } + +--- +"bengali": + - do: + indices.analyze: + body: + text: বাড়ী + analyzer: bengali + - length: { tokens: 1 } + - match: { tokens.0.token: বার } diff --git a/modules/analysis-common/src/test/resources/rest-api-spec/test/analysis-common/30_tokenizers.yml b/modules/analysis-common/src/test/resources/rest-api-spec/test/analysis-common/30_tokenizers.yml index c0945e047c503..e6b69db8a0eb9 100644 --- a/modules/analysis-common/src/test/resources/rest-api-spec/test/analysis-common/30_tokenizers.yml +++ b/modules/analysis-common/src/test/resources/rest-api-spec/test/analysis-common/30_tokenizers.yml @@ -27,6 +27,21 @@ - match: { detail.tokenizer.tokens.2.token: od } --- +"nGram_exception": + - skip: + version: " - 6.99.99" + reason: only starting from version 7.x this throws an error + - do: + catch: /The difference between max_gram and min_gram in NGram Tokenizer must be less than or equal to[:] \[1\] but was \[2\]\. 
This limit can be set by changing the \[index.max_ngram_diff\] index level setting\./
+      indices.analyze:
+        body:
+          text: good
+          explain: true
+          tokenizer:
+            type: nGram
+            min_gram: 2
+            max_gram: 4
+---
 "simple_pattern":
   - do:
       indices.analyze:
diff --git a/modules/analysis-common/src/test/resources/rest-api-spec/test/analysis-common/40_token_filters.yml b/modules/analysis-common/src/test/resources/rest-api-spec/test/analysis-common/40_token_filters.yml
index 95638025f0bca..47eb436788abf 100644
--- a/modules/analysis-common/src/test/resources/rest-api-spec/test/analysis-common/40_token_filters.yml
+++ b/modules/analysis-common/src/test/resources/rest-api-spec/test/analysis-common/40_token_filters.yml
@@ -695,6 +695,37 @@
     - length: { tokens: 1 }
     - match: { tokens.0.token: اجن }
 
+---
+"bengali_normalization":
+  - do:
+      indices.create:
+        index: test
+        body:
+          settings:
+            analysis:
+              filter:
+                my_bengali_normalization:
+                  type: bengali_normalization
+  - do:
+      indices.analyze:
+        index: test
+        body:
+          text: চাঁদ
+          tokenizer: keyword
+          filter: [my_bengali_normalization]
+  - length: { tokens: 1 }
+  - match: { tokens.0.token: চাদ }
+
+  # Test pre-configured token filter too:
+  - do:
+      indices.analyze:
+        body:
+          text: চাঁদ
+          tokenizer: keyword
+          filter: [bengali_normalization]
+  - length: { tokens: 1 }
+  - match: { tokens.0.token: চাদ }
+
 ---
 "german_normalization":
   - do:
@@ -1475,3 +1506,26 @@
       filter: [russian_stem]
   - length: { tokens: 1 }
   - match: { tokens.0.token: журнал }
+
+---
+"bengali_stem":
+  - do:
+      indices.create:
+        index: test
+        body:
+          settings:
+            analysis:
+              filter:
+                my_bengali_stem:
+                  type: stemmer
+                  language: bengali
+
+  - do:
+      indices.analyze:
+        index: test
+        body:
+          text: করেছিলাম
+          tokenizer: keyword
+          filter: [my_bengali_stem]
+  - length: { tokens: 1 }
+  - match: { tokens.0.token: কর }
diff --git a/modules/analysis-common/src/test/resources/rest-api-spec/test/indices.analyze/10_analyze.yml b/modules/analysis-common/src/test/resources/rest-api-spec/test/indices.analyze/10_analyze.yml
index 0866dc5bc4dfd..cbb8f053cfbba 100644
--- a/modules/analysis-common/src/test/resources/rest-api-spec/test/indices.analyze/10_analyze.yml
+++ b/modules/analysis-common/src/test/resources/rest-api-spec/test/indices.analyze/10_analyze.yml
@@ -6,7 +6,7 @@
       version: " - 5.99.99"
       reason: normalizer support in 6.0.0
   - do:
-      catch: request
+      catch: bad_request
       indices.analyze:
         body:
           text: ABc
diff --git a/modules/analysis-common/src/test/resources/rest-api-spec/test/search.query/30_ngram_highligthing.yml b/modules/analysis-common/src/test/resources/rest-api-spec/test/search.query/30_ngram_highligthing.yml
index b04496965eb02..c1dca047f60d1 100644
--- a/modules/analysis-common/src/test/resources/rest-api-spec/test/search.query/30_ngram_highligthing.yml
+++ b/modules/analysis-common/src/test/resources/rest-api-spec/test/search.query/30_ngram_highligthing.yml
@@ -6,6 +6,7 @@
       settings:
         number_of_shards: 1
         number_of_replicas: 0
+        index.max_ngram_diff: 19
       analysis:
         tokenizer:
           my_ngramt:
diff --git a/modules/ingest-common/src/main/java/org/elasticsearch/ingest/common/DateIndexNameProcessor.java b/modules/ingest-common/src/main/java/org/elasticsearch/ingest/common/DateIndexNameProcessor.java
index 6ed5f0b66cf9b..311f30513c119 100644
--- a/modules/ingest-common/src/main/java/org/elasticsearch/ingest/common/DateIndexNameProcessor.java
+++ b/modules/ingest-common/src/main/java/org/elasticsearch/ingest/common/DateIndexNameProcessor.java
@@ -25,6 +25,7 @@
 import java.util.List;
 import java.util.Locale;
 import
java.util.Map; +import java.util.Objects; import java.util.function.Function; import org.elasticsearch.ExceptionsHelper; @@ -61,7 +62,13 @@ public final class DateIndexNameProcessor extends AbstractProcessor { @Override public void execute(IngestDocument ingestDocument) throws Exception { - String date = ingestDocument.getFieldValue(field, String.class); + // Date can be specified as a string or long: + Object obj = ingestDocument.getFieldValue(field, Object.class); + String date = null; + if (obj != null) { + // Not use Objects.toString(...) here, because null gets changed to "null" which may confuse some date parsers + date = obj.toString(); + } DateTime dateTime = null; Exception lastException = null; diff --git a/modules/ingest-common/src/main/java/org/elasticsearch/ingest/common/DateProcessor.java b/modules/ingest-common/src/main/java/org/elasticsearch/ingest/common/DateProcessor.java index e6034138873aa..f1e7dcdcf55b0 100644 --- a/modules/ingest-common/src/main/java/org/elasticsearch/ingest/common/DateProcessor.java +++ b/modules/ingest-common/src/main/java/org/elasticsearch/ingest/common/DateProcessor.java @@ -20,6 +20,7 @@ package org.elasticsearch.ingest.common; import org.elasticsearch.ExceptionsHelper; +import org.elasticsearch.common.util.LocaleUtils; import org.elasticsearch.ingest.AbstractProcessor; import org.elasticsearch.ingest.ConfigurationUtils; import org.elasticsearch.ingest.IngestDocument; @@ -29,7 +30,6 @@ import org.joda.time.format.ISODateTimeFormat; import java.util.ArrayList; -import java.util.IllformedLocaleException; import java.util.List; import java.util.Locale; import java.util.Map; @@ -63,7 +63,12 @@ public final class DateProcessor extends AbstractProcessor { @Override public void execute(IngestDocument ingestDocument) { - String value = ingestDocument.getFieldValue(field, String.class); + Object obj = ingestDocument.getFieldValue(field, Object.class); + String value = null; + if (obj != null) { + // Not use Objects.toString(...) here, because null gets changed to "null" which may confuse some date parsers + value = obj.toString(); + } DateTime dateTime = null; Exception lastException = null; @@ -110,7 +115,6 @@ List getFormats() { public static final class Factory implements Processor.Factory { - @SuppressWarnings("unchecked") public DateProcessor create(Map registry, String processorTag, Map config) throws Exception { String field = ConfigurationUtils.readStringProperty(TYPE, processorTag, config, "field"); @@ -118,13 +122,9 @@ public DateProcessor create(Map registry, String proc String timezoneString = ConfigurationUtils.readOptionalStringProperty(TYPE, processorTag, config, "timezone"); DateTimeZone timezone = timezoneString == null ? 
DateTimeZone.UTC : DateTimeZone.forID(timezoneString); String localeString = ConfigurationUtils.readOptionalStringProperty(TYPE, processorTag, config, "locale"); - Locale locale = Locale.ENGLISH; + Locale locale = Locale.ROOT; if (localeString != null) { - try { - locale = (new Locale.Builder()).setLanguageTag(localeString).build(); - } catch (IllformedLocaleException e) { - throw new IllegalArgumentException("Invalid language tag specified: " + localeString); - } + locale = LocaleUtils.parse(localeString); } List formats = ConfigurationUtils.readList(TYPE, processorTag, config, "formats"); return new DateProcessor(processorTag, timezone, locale, field, formats, targetField); diff --git a/modules/ingest-common/src/main/java/org/elasticsearch/ingest/common/ScriptProcessor.java b/modules/ingest-common/src/main/java/org/elasticsearch/ingest/common/ScriptProcessor.java index ad574115208da..bc8aa6b04a950 100644 --- a/modules/ingest-common/src/main/java/org/elasticsearch/ingest/common/ScriptProcessor.java +++ b/modules/ingest-common/src/main/java/org/elasticsearch/ingest/common/ScriptProcessor.java @@ -19,10 +19,12 @@ package org.elasticsearch.ingest.common; -import org.apache.logging.log4j.Logger; -import org.elasticsearch.common.Strings; -import org.elasticsearch.common.logging.DeprecationLogger; -import org.elasticsearch.common.logging.ESLoggerFactory; +import com.fasterxml.jackson.core.JsonFactory; + +import org.elasticsearch.common.xcontent.NamedXContentRegistry; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.json.JsonXContent; +import org.elasticsearch.common.xcontent.json.JsonXContentParser; import org.elasticsearch.ingest.AbstractProcessor; import org.elasticsearch.ingest.IngestDocument; import org.elasticsearch.ingest.Processor; @@ -31,15 +33,10 @@ import org.elasticsearch.script.ScriptException; import org.elasticsearch.script.ScriptService; +import java.util.Arrays; import java.util.Map; -import static java.util.Collections.emptyMap; -import static org.elasticsearch.common.Strings.hasLength; import static org.elasticsearch.ingest.ConfigurationUtils.newConfigurationException; -import static org.elasticsearch.ingest.ConfigurationUtils.readOptionalMap; -import static org.elasticsearch.ingest.ConfigurationUtils.readOptionalStringProperty; -import static org.elasticsearch.script.ScriptType.INLINE; -import static org.elasticsearch.script.ScriptType.STORED; /** * Processor that evaluates a script with an ingest document in its context. 
@@ -47,6 +44,7 @@ public final class ScriptProcessor extends AbstractProcessor { public static final String TYPE = "script"; + private static final JsonFactory JSON_FACTORY = new JsonFactory(); private final Script script; private final ScriptService scriptService; @@ -87,9 +85,6 @@ Script getScript() { } public static final class Factory implements Processor.Factory { - private final Logger logger = ESLoggerFactory.getLogger(Factory.class); - private final DeprecationLogger deprecationLogger = new DeprecationLogger(logger); - private final ScriptService scriptService; public Factory(ScriptService scriptService) { @@ -97,56 +92,20 @@ public Factory(ScriptService scriptService) { } @Override - @SuppressWarnings("unchecked") public ScriptProcessor create(Map registry, String processorTag, Map config) throws Exception { - String lang = readOptionalStringProperty(TYPE, processorTag, config, "lang"); - String source = readOptionalStringProperty(TYPE, processorTag, config, "source"); - String id = readOptionalStringProperty(TYPE, processorTag, config, "id"); - Map params = readOptionalMap(TYPE, processorTag, config, "params"); - - if (source == null) { - source = readOptionalStringProperty(TYPE, processorTag, config, "inline"); - if (source != null) { - deprecationLogger.deprecated("Specifying script source with [inline] is deprecated, use [source] instead."); - } - } - - boolean containsNoScript = !hasLength(id) && !hasLength(source); - if (containsNoScript) { - throw newConfigurationException(TYPE, processorTag, null, "Need [id] or [source] parameter to refer to scripts"); - } - - boolean moreThanOneConfigured = Strings.hasLength(id) && Strings.hasLength(source); - if (moreThanOneConfigured) { - throw newConfigurationException(TYPE, processorTag, null, "Only one of [id] or [source] may be configured"); - } - - if (lang == null) { - lang = Script.DEFAULT_SCRIPT_LANG; - } + XContentBuilder builder = XContentBuilder.builder(JsonXContent.jsonXContent).map(config); + JsonXContentParser parser = new JsonXContentParser(NamedXContentRegistry.EMPTY, + JSON_FACTORY.createParser(builder.bytes().streamInput())); + Script script = Script.parse(parser); - if (params == null) { - params = emptyMap(); - } - - final Script script; - String scriptPropertyUsed; - if (Strings.hasLength(source)) { - script = new Script(INLINE, lang, source, (Map)params); - scriptPropertyUsed = "source"; - } else if (Strings.hasLength(id)) { - script = new Script(STORED, null, id, (Map)params); - scriptPropertyUsed = "id"; - } else { - throw newConfigurationException(TYPE, processorTag, null, "Could not initialize script"); - } + Arrays.asList("id", "source", "inline", "lang", "params", "options").forEach(config::remove); // verify script is able to be compiled before successfully creating processor. 
try { scriptService.compile(script, ExecutableScript.INGEST_CONTEXT); } catch (ScriptException e) { - throw newConfigurationException(TYPE, processorTag, scriptPropertyUsed, e); + throw newConfigurationException(TYPE, processorTag, null, e); } return new ScriptProcessor(processorTag, script, scriptService); diff --git a/modules/ingest-common/src/main/resources/patterns/grok-patterns b/modules/ingest-common/src/main/resources/patterns/grok-patterns index cb4c3fffc6ad6..6351a7710164e 100644 --- a/modules/ingest-common/src/main/resources/patterns/grok-patterns +++ b/modules/ingest-common/src/main/resources/patterns/grok-patterns @@ -94,7 +94,7 @@ SYSLOGBASE %{SYSLOGTIMESTAMP:timestamp} (?:%{SYSLOGFACILITY} )?%{SYSLOGHOST:logs COMMONAPACHELOG %{IPORHOST:clientip} %{HTTPDUSER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-) COMBINEDAPACHELOG %{COMMONAPACHELOG} %{QS:referrer} %{QS:agent} HTTPD20_ERRORLOG \[%{HTTPDERROR_DATE:timestamp}\] \[%{LOGLEVEL:loglevel}\] (?:\[client %{IPORHOST:clientip}\] ){0,1}%{GREEDYDATA:errormsg} -HTTPD24_ERRORLOG \[%{HTTPDERROR_DATE:timestamp}\] \[%{WORD:module}:%{LOGLEVEL:loglevel}\] \[pid %{POSINT:pid}:tid %{NUMBER:tid}\]( \(%{POSINT:proxy_errorcode}\)%{DATA:proxy_errormessage}:)?( \[client %{IPORHOST:client}:%{POSINT:clientport}\])? %{DATA:errorcode}: %{GREEDYDATA:message} +HTTPD24_ERRORLOG \[%{HTTPDERROR_DATE:timestamp}\] \[(%{WORD:module})?:%{LOGLEVEL:loglevel}\] \[pid %{POSINT:pid}(:tid %{NUMBER:tid})?\]( \(%{POSINT:proxy_errorcode}\)%{DATA:proxy_errormessage}:)?( \[client %{IPORHOST:client}:%{POSINT:clientport}\])?( %{DATA:errorcode}:)? %{GREEDYDATA:message} HTTPD_ERRORLOG %{HTTPD20_ERRORLOG}|%{HTTPD24_ERRORLOG} diff --git a/modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/DateIndexNameProcessorTests.java b/modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/DateIndexNameProcessorTests.java index 19d791dd8648c..6736594613954 100644 --- a/modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/DateIndexNameProcessorTests.java +++ b/modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/DateIndexNameProcessorTests.java @@ -62,6 +62,11 @@ public void testUnixMs()throws Exception { Collections.singletonMap("_field", "1000500")); dateProcessor.execute(document); assertThat(document.getSourceAndMetadata().get("_index"), equalTo("")); + + document = new IngestDocument("_index", "_type", "_id", null, null, + Collections.singletonMap("_field", 1000500L)); + dateProcessor.execute(document); + assertThat(document.getSourceAndMetadata().get("_index"), equalTo("")); } public void testUnix()throws Exception { diff --git a/modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/DateProcessorFactoryTests.java b/modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/DateProcessorFactoryTests.java index 2dc16ad7bd7fd..f722f658bd1ff 100644 --- a/modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/DateProcessorFactoryTests.java +++ b/modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/DateProcessorFactoryTests.java @@ -46,7 +46,7 @@ public void testBuildDefaults() throws Exception { assertThat(processor.getField(), equalTo(sourceField)); assertThat(processor.getTargetField(), equalTo(DateProcessor.DEFAULT_TARGET_FIELD)); assertThat(processor.getFormats(), equalTo(Collections.singletonList("dd/MM/yyyyy"))); - 
assertThat(processor.getLocale(), equalTo(Locale.ENGLISH)); + assertThat(processor.getLocale(), equalTo(Locale.ROOT)); assertThat(processor.getTimezone(), equalTo(DateTimeZone.UTC)); } @@ -87,7 +87,7 @@ public void testParseLocale() throws Exception { String sourceField = randomAlphaOfLengthBetween(1, 10); config.put("field", sourceField); config.put("formats", Collections.singletonList("dd/MM/yyyyy")); - Locale locale = randomLocale(random()); + Locale locale = randomFrom(Locale.GERMANY, Locale.FRENCH, Locale.ROOT); config.put("locale", locale.toLanguageTag()); DateProcessor processor = factory.create(null, null, config); @@ -95,17 +95,30 @@ public void testParseLocale() throws Exception { } public void testParseInvalidLocale() throws Exception { - DateProcessor.Factory factory = new DateProcessor.Factory(); - Map config = new HashMap<>(); - String sourceField = randomAlphaOfLengthBetween(1, 10); - config.put("field", sourceField); - config.put("formats", Collections.singletonList("dd/MM/yyyyy")); - config.put("locale", "invalid_locale"); - try { - factory.create(null, null, config); - fail("should fail with invalid locale"); - } catch (IllegalArgumentException e) { - assertThat(e.getMessage(), equalTo("Invalid language tag specified: invalid_locale")); + String[] locales = new String[] { "invalid_locale", "english", "xy", "xy-US" }; + for (String locale : locales) { + DateProcessor.Factory factory = new DateProcessor.Factory(); + Map config = new HashMap<>(); + String sourceField = randomAlphaOfLengthBetween(1, 10); + config.put("field", sourceField); + config.put("formats", Collections.singletonList("dd/MM/yyyyy")); + config.put("locale", locale); + IllegalArgumentException e = expectThrows(IllegalArgumentException.class, + () -> factory.create(null, null, config)); + assertThat(e.getMessage(), equalTo("Unknown language: " + locale.split("[_-]")[0])); + } + + locales = new String[] { "en-XY", "en-Canada" }; + for (String locale : locales) { + DateProcessor.Factory factory = new DateProcessor.Factory(); + Map config = new HashMap<>(); + String sourceField = randomAlphaOfLengthBetween(1, 10); + config.put("field", sourceField); + config.put("formats", Collections.singletonList("dd/MM/yyyyy")); + config.put("locale", locale); + IllegalArgumentException e = expectThrows(IllegalArgumentException.class, + () -> factory.create(null, null, config)); + assertThat(e.getMessage(), equalTo("Unknown country: " + locale.split("[_-]")[1])); } } diff --git a/modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/DateProcessorTests.java b/modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/DateProcessorTests.java index d92f2e84be135..cc68340ec59f4 100644 --- a/modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/DateProcessorTests.java +++ b/modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/DateProcessorTests.java @@ -134,6 +134,12 @@ public void testUnixMs() { IngestDocument ingestDocument = RandomDocumentPicks.randomIngestDocument(random(), document); dateProcessor.execute(ingestDocument); assertThat(ingestDocument.getFieldValue("date_as_date", String.class), equalTo("1970-01-01T00:16:40.500Z")); + + document = new HashMap<>(); + document.put("date_as_string", 1000500L); + ingestDocument = RandomDocumentPicks.randomIngestDocument(random(), document); + dateProcessor.execute(ingestDocument); + assertThat(ingestDocument.getFieldValue("date_as_date", String.class), equalTo("1970-01-01T00:16:40.500Z")); } public void testUnix() { diff --git 
a/modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/ScriptProcessorFactoryTests.java b/modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/ScriptProcessorFactoryTests.java index 07467f9f98825..f1a7add303f1b 100644 --- a/modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/ScriptProcessorFactoryTests.java +++ b/modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/ScriptProcessorFactoryTests.java @@ -82,17 +82,17 @@ public void testFactoryValidationForMultipleScriptingTypes() throws Exception { ElasticsearchException exception = expectThrows(ElasticsearchException.class, () -> factory.create(null, randomAlphaOfLength(10), configMap)); - assertThat(exception.getMessage(), is("Only one of [id] or [source] may be configured")); + assertThat(exception.getMessage(), is("[script] failed to parse field [source]")); } public void testFactoryValidationAtLeastOneScriptingType() throws Exception { Map configMap = new HashMap<>(); configMap.put("lang", "mockscript"); - ElasticsearchException exception = expectThrows(ElasticsearchException.class, + IllegalArgumentException exception = expectThrows(IllegalArgumentException.class, () -> factory.create(null, randomAlphaOfLength(10), configMap)); - assertThat(exception.getMessage(), is("Need [id] or [source] parameter to refer to scripts")); + assertThat(exception.getMessage(), is("must specify either [source] for an inline script or [id] for a stored script")); } public void testInlineBackcompat() throws Exception { @@ -100,7 +100,7 @@ public void testInlineBackcompat() throws Exception { configMap.put("inline", "code"); factory.create(null, randomAlphaOfLength(10), configMap); - assertWarnings("Specifying script source with [inline] is deprecated, use [source] instead."); + assertWarnings("Deprecated field [inline] used, expected [source] instead"); } public void testFactoryInvalidateWithInvalidCompiledScript() throws Exception { @@ -112,7 +112,6 @@ public void testFactoryInvalidateWithInvalidCompiledScript() throws Exception { factory = new ScriptProcessor.Factory(mockedScriptService); Map configMap = new HashMap<>(); - configMap.put("lang", "mockscript"); configMap.put(randomType, "my_script"); ElasticsearchException exception = expectThrows(ElasticsearchException.class, diff --git a/modules/ingest-common/src/test/resources/rest-api-spec/test/ingest/20_crud.yml b/modules/ingest-common/src/test/resources/rest-api-spec/test/ingest/20_crud.yml index b041e0664bb6c..0e348bbd7265d 100644 --- a/modules/ingest-common/src/test/resources/rest-api-spec/test/ingest/20_crud.yml +++ b/modules/ingest-common/src/test/resources/rest-api-spec/test/ingest/20_crud.yml @@ -142,7 +142,7 @@ teardown: --- "Test invalid processor config": - do: - catch: request + catch: bad_request ingest.put_pipeline: id: "my_pipeline" body: > diff --git a/modules/ingest-common/src/test/resources/rest-api-spec/test/ingest/50_on_failure.yml b/modules/ingest-common/src/test/resources/rest-api-spec/test/ingest/50_on_failure.yml index 53c1a9a7923b1..4b40d9f670bfe 100644 --- a/modules/ingest-common/src/test/resources/rest-api-spec/test/ingest/50_on_failure.yml +++ b/modules/ingest-common/src/test/resources/rest-api-spec/test/ingest/50_on_failure.yml @@ -122,7 +122,7 @@ teardown: --- "Test pipeline with empty on_failure in a processor": - do: - catch: request + catch: bad_request ingest.put_pipeline: id: "my_pipeline" body: > @@ -155,7 +155,7 @@ teardown: --- "Test pipeline with empty on_failure in pipeline": - do: - catch: request + 
catch: bad_request ingest.put_pipeline: id: "my_pipeline" body: > diff --git a/modules/ingest-common/src/test/resources/rest-api-spec/test/ingest/90_simulate.yml b/modules/ingest-common/src/test/resources/rest-api-spec/test/ingest/90_simulate.yml index 8b08535c12494..8b3ed313314bb 100644 --- a/modules/ingest-common/src/test/resources/rest-api-spec/test/ingest/90_simulate.yml +++ b/modules/ingest-common/src/test/resources/rest-api-spec/test/ingest/90_simulate.yml @@ -79,7 +79,7 @@ teardown: --- "Test simulate with provided invalid pipeline definition": - do: - catch: request + catch: bad_request ingest.simulate: body: > { @@ -183,7 +183,7 @@ teardown: --- "Test simulate with no provided pipeline or pipeline_id": - do: - catch: request + catch: bad_request ingest.simulate: body: > { @@ -206,7 +206,7 @@ teardown: --- "Test simulate with invalid processor config": - do: - catch: request + catch: bad_request ingest.simulate: body: > { diff --git a/modules/lang-expression/build.gradle b/modules/lang-expression/build.gradle index 23286edd28aea..2fd6e53effa34 100644 --- a/modules/lang-expression/build.gradle +++ b/modules/lang-expression/build.gradle @@ -35,6 +35,3 @@ dependencyLicenses { mapping from: /asm-.*/, to: 'asm' } -integTestCluster { - setting 'script.max_compilations_per_minute', '1000' -} diff --git a/modules/lang-expression/licenses/lucene-NOTICE.txt b/modules/lang-expression/licenses/lucene-NOTICE.txt index ecf08201a5ee6..1a1d51572432a 100644 --- a/modules/lang-expression/licenses/lucene-NOTICE.txt +++ b/modules/lang-expression/licenses/lucene-NOTICE.txt @@ -54,13 +54,14 @@ The KStem stemmer in was developed by Bob Krovetz and Sergio Guzman-Lara (CIIR-UMass Amherst) under the BSD-license. -The Arabic,Persian,Romanian,Bulgarian, and Hindi analyzers (common) come with a default +The Arabic,Persian,Romanian,Bulgarian, Hindi and Bengali analyzers (common) come with a default stopword list that is BSD-licensed created by Jacques Savoy. These files reside in: analysis/common/src/resources/org/apache/lucene/analysis/ar/stopwords.txt, analysis/common/src/resources/org/apache/lucene/analysis/fa/stopwords.txt, analysis/common/src/resources/org/apache/lucene/analysis/ro/stopwords.txt, analysis/common/src/resources/org/apache/lucene/analysis/bg/stopwords.txt, -analysis/common/src/resources/org/apache/lucene/analysis/hi/stopwords.txt +analysis/common/src/resources/org/apache/lucene/analysis/hi/stopwords.txt, +analysis/common/src/resources/org/apache/lucene/analysis/bn/stopwords.txt See http://members.unine.ch/jacques.savoy/clef/index.html. 
The German,Spanish,Finnish,French,Hungarian,Italian,Portuguese,Russian and Swedish light stemmers diff --git a/modules/lang-expression/licenses/lucene-expressions-7.0.0-snapshot-a128fcb.jar.sha1 b/modules/lang-expression/licenses/lucene-expressions-7.0.0-snapshot-a128fcb.jar.sha1 deleted file mode 100644 index d59aa9efb7861..0000000000000 --- a/modules/lang-expression/licenses/lucene-expressions-7.0.0-snapshot-a128fcb.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -e23c499af85576f35b96fcf9c1145a47d0ae610f \ No newline at end of file diff --git a/modules/lang-expression/licenses/lucene-expressions-7.1.0.jar.sha1 b/modules/lang-expression/licenses/lucene-expressions-7.1.0.jar.sha1 new file mode 100644 index 0000000000000..29689e4e74f00 --- /dev/null +++ b/modules/lang-expression/licenses/lucene-expressions-7.1.0.jar.sha1 @@ -0,0 +1 @@ +714927eb1d1db641bff9aa658e7e112c368f3e6d \ No newline at end of file diff --git a/modules/lang-expression/src/main/java/org/elasticsearch/script/expression/ExpressionScriptEngine.java b/modules/lang-expression/src/main/java/org/elasticsearch/script/expression/ExpressionScriptEngine.java index 8f3df14659a95..b50eb788c6f57 100644 --- a/modules/lang-expression/src/main/java/org/elasticsearch/script/expression/ExpressionScriptEngine.java +++ b/modules/lang-expression/src/main/java/org/elasticsearch/script/expression/ExpressionScriptEngine.java @@ -38,6 +38,7 @@ import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.script.ClassPermission; import org.elasticsearch.script.ExecutableScript; +import org.elasticsearch.script.FilterScript; import org.elasticsearch.script.ScriptContext; import org.elasticsearch.script.ScriptEngine; import org.elasticsearch.script.ScriptException; @@ -107,8 +108,11 @@ protected Class loadClass(String name, boolean resolve) throws ClassNotFoundE } else if (context.instanceClazz.equals(ExecutableScript.class)) { ExecutableScript.Factory factory = (p) -> new ExpressionExecutableScript(expr, p); return context.factoryClazz.cast(factory); + } else if (context.instanceClazz.equals(FilterScript.class)) { + FilterScript.Factory factory = (p, lookup) -> newFilterScript(expr, lookup, p); + return context.factoryClazz.cast(factory); } - throw new IllegalArgumentException("painless does not know how to handle context [" + context.name + "]"); + throw new IllegalArgumentException("expression engine does not know how to handle script context [" + context.name + "]"); } private SearchScript.LeafFactory newSearchScript(Expression expr, SearchLookup lookup, @Nullable Map vars) { @@ -236,6 +240,27 @@ private SearchScript.LeafFactory newSearchScript(Expression expr, SearchLookup l return new ExpressionSearchScript(expr, bindings, specialValue, needsScores); } + /** + * This is a hack for filter scripts, which must return booleans instead of doubles as expressions do. + * See https://github.com/elastic/elasticsearch/issues/26429.
+ */ + private FilterScript.LeafFactory newFilterScript(Expression expr, SearchLookup lookup, @Nullable Map vars) { + SearchScript.LeafFactory searchLeafFactory = newSearchScript(expr, lookup, vars); + return ctx -> { + SearchScript script = searchLeafFactory.newInstance(ctx); + return new FilterScript(vars, lookup, ctx) { + @Override + public boolean execute() { + return script.runAsDouble() != 0.0; + } + @Override + public void setDocument(int docid) { + script.setDocument(docid); + } + }; + }; + } + /** * converts a ParseException at compile-time or link-time to a ScriptException */ diff --git a/modules/lang-expression/src/test/java/org/elasticsearch/script/expression/MoreExpressionTests.java b/modules/lang-expression/src/test/java/org/elasticsearch/script/expression/MoreExpressionTests.java index d8d09ffba790a..9a91fccf4ad30 100644 --- a/modules/lang-expression/src/test/java/org/elasticsearch/script/expression/MoreExpressionTests.java +++ b/modules/lang-expression/src/test/java/org/elasticsearch/script/expression/MoreExpressionTests.java @@ -700,4 +700,19 @@ public void testBoolean() throws Exception { assertEquals(2.0D, rsp.getHits().getAt(1).field("foo").getValue(), 1.0D); assertEquals(2.0D, rsp.getHits().getAt(2).field("foo").getValue(), 1.0D); } + + public void testFilterScript() throws Exception { + createIndex("test"); + ensureGreen("test"); + indexRandom(true, + client().prepareIndex("test", "doc", "1").setSource("foo", 1.0), + client().prepareIndex("test", "doc", "2").setSource("foo", 0.0)); + SearchRequestBuilder builder = buildRequest("doc['foo'].value"); + Script script = new Script(ScriptType.INLINE, "expression", "doc['foo'].value", Collections.emptyMap()); + builder.setQuery(QueryBuilders.boolQuery().filter(QueryBuilders.scriptQuery(script))); + SearchResponse rsp = builder.get(); + assertSearchResponse(rsp); + assertEquals(1, rsp.getHits().getTotalHits()); + assertEquals(1.0D, rsp.getHits().getAt(0).field("foo").getValue(), 0.0D); + } } diff --git a/modules/lang-mustache/build.gradle b/modules/lang-mustache/build.gradle index 0802ac602cded..2a46bd9ed2efa 100644 --- a/modules/lang-mustache/build.gradle +++ b/modules/lang-mustache/build.gradle @@ -27,6 +27,3 @@ dependencies { compile "com.github.spullara.mustache.java:compiler:0.9.3" } -integTestCluster { - setting 'script.max_compilations_per_minute', '1000' -} diff --git a/modules/lang-mustache/src/main/java/org/elasticsearch/script/mustache/RestSearchTemplateAction.java b/modules/lang-mustache/src/main/java/org/elasticsearch/script/mustache/RestSearchTemplateAction.java index 6b7360f82fdaa..c3303cc30b528 100644 --- a/modules/lang-mustache/src/main/java/org/elasticsearch/script/mustache/RestSearchTemplateAction.java +++ b/modules/lang-mustache/src/main/java/org/elasticsearch/script/mustache/RestSearchTemplateAction.java @@ -19,7 +19,6 @@ package org.elasticsearch.script.mustache; -import org.elasticsearch.ElasticsearchException; import org.elasticsearch.action.search.SearchRequest; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.ParseField; @@ -94,7 +93,7 @@ public String getName() { public RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) throws IOException { // Creates the search request with all required params SearchRequest searchRequest = new SearchRequest(); - RestSearchAction.parseSearchRequest(searchRequest, request, null); + RestSearchAction.parseSearchRequest(searchRequest, request, null, size -> searchRequest.source().size(size)); // Creates the search 
template request SearchTemplateRequest searchTemplateRequest; diff --git a/modules/lang-mustache/src/main/java/org/elasticsearch/script/mustache/SearchTemplateRequest.java b/modules/lang-mustache/src/main/java/org/elasticsearch/script/mustache/SearchTemplateRequest.java index 8ff30fb0e5b4e..b0186b7b0e3cf 100644 --- a/modules/lang-mustache/src/main/java/org/elasticsearch/script/mustache/SearchTemplateRequest.java +++ b/modules/lang-mustache/src/main/java/org/elasticsearch/script/mustache/SearchTemplateRequest.java @@ -137,7 +137,7 @@ public ActionRequestValidationException validate() { @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); - request = in.readOptionalStreamable(SearchRequest::new); + request = in.readOptionalWriteable(SearchRequest::new); simulate = in.readBoolean(); explain = in.readBoolean(); profile = in.readBoolean(); diff --git a/modules/lang-mustache/src/test/java/org/elasticsearch/script/mustache/SearchTemplateIT.java b/modules/lang-mustache/src/test/java/org/elasticsearch/script/mustache/SearchTemplateIT.java index 39657bc177736..69739ff2cb8ef 100644 --- a/modules/lang-mustache/src/test/java/org/elasticsearch/script/mustache/SearchTemplateIT.java +++ b/modules/lang-mustache/src/test/java/org/elasticsearch/script/mustache/SearchTemplateIT.java @@ -22,7 +22,6 @@ import org.elasticsearch.action.admin.cluster.storedscripts.GetStoredScriptResponse; import org.elasticsearch.action.bulk.BulkRequestBuilder; import org.elasticsearch.action.search.SearchRequest; -import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.common.xcontent.json.JsonXContent; @@ -283,8 +282,10 @@ public void testIndexedTemplateOverwrite() throws Exception { for (int i = 1; i < iterations; i++) { assertAcked(client().admin().cluster().preparePutStoredScript() .setId("git01") - .setContent(new BytesArray("{\"template\":{\"query\": {\"match\": {\"searchtext\": {\"query\": \"{{P_Keyword1}}\"," + - "\"type\": \"ooophrase_prefix\"}}}}}"), XContentType.JSON)); + .setContent(new BytesArray( + "{\"template\":{\"query\": {\"match_phrase_prefix\": {\"searchtext\": {\"query\": \"{{P_Keyword1}}\"," + + "\"slop\": -1}}}}}"), + XContentType.JSON)); GetStoredScriptResponse getResponse = client().admin().cluster().prepareGetStoredScript("git01").get(); assertNotNull(getResponse.getSource()); @@ -292,24 +293,22 @@ public void testIndexedTemplateOverwrite() throws Exception { Map templateParams = new HashMap<>(); templateParams.put("P_Keyword1", "dev"); - ParsingException e = expectThrows(ParsingException.class, () -> new SearchTemplateRequestBuilder(client()) + IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> new SearchTemplateRequestBuilder(client()) .setRequest(new SearchRequest("testindex").types("test")) .setScript("git01").setScriptType(ScriptType.STORED).setScriptParams(templateParams) .get()); - assertThat(e.getMessage(), containsString("[match] query does not support type ooophrase_prefix")); - assertWarnings("Deprecated field [type] used, replaced by [match_phrase and match_phrase_prefix query]"); + assertThat(e.getMessage(), containsString("No negative slop allowed")); assertAcked(client().admin().cluster().preparePutStoredScript() .setId("git01") - .setContent(new BytesArray("{\"query\": {\"match\": {\"searchtext\": {\"query\": \"{{P_Keyword1}}\"," + - "\"type\": \"phrase_prefix\"}}}}"), XContentType.JSON)); + 
.setContent(new BytesArray("{\"query\": {\"match_phrase_prefix\": {\"searchtext\": {\"query\": \"{{P_Keyword1}}\"," + + "\"slop\": 0}}}}"), XContentType.JSON)); SearchTemplateResponse searchResponse = new SearchTemplateRequestBuilder(client()) .setRequest(new SearchRequest("testindex").types("test")) .setScript("git01").setScriptType(ScriptType.STORED).setScriptParams(templateParams) .get(); assertHitCount(searchResponse.getResponse(), 1); - assertWarnings("Deprecated field [type] used, replaced by [match_phrase and match_phrase_prefix query]"); } } diff --git a/modules/lang-painless/build.gradle b/modules/lang-painless/build.gradle index 85f609d7757bc..0bd96725c66b4 100644 --- a/modules/lang-painless/build.gradle +++ b/modules/lang-painless/build.gradle @@ -24,8 +24,12 @@ esplugin { classname 'org.elasticsearch.painless.PainlessPlugin' } +integTestCluster { + module project.project(':modules:mapper-extras') +} + dependencies { - compile 'org.antlr:antlr4-runtime:4.5.1-1' + compile 'org.antlr:antlr4-runtime:4.5.3' compile 'org.ow2.asm:asm-debug-all:5.1' } @@ -37,14 +41,11 @@ test { jvmArg '-XX:-OmitStackTraceInFastThrow' } -integTestCluster { - setting 'script.max_compilations_per_minute', '1000' -} - /* Build Javadoc for the Java classes in Painless's public API that are in the * Painless plugin */ task apiJavadoc(type: Javadoc) { source = sourceSets.main.allJava + classpath = sourceSets.main.runtimeClasspath include '**/org/elasticsearch/painless/api/' destinationDir = new File(docsDir, 'apiJavadoc') } @@ -58,7 +59,7 @@ assemble.dependsOn apiJavadocJar task generatePainlessApi(type: JavaExec) { main = 'org.elasticsearch.painless.PainlessDocGenerator' classpath = sourceSets.test.runtimeClasspath - args file('../../docs/reference/painless-api-reference') + args file('../../docs/painless/painless-api-reference') } /********************************************** @@ -70,7 +71,7 @@ configurations { } dependencies { - regenerate 'org.antlr:antlr4:4.5.1-1' + regenerate 'org.antlr:antlr4:4.5.3' } String grammarPath = 'src/main/antlr' @@ -145,7 +146,7 @@ task regen { fileset(dir: outputPath, includes: 'Painless*.java') } // fix line endings - ant.fixcrlf(srcdir: outputPath) { + ant.fixcrlf(srcdir: outputPath, eol: 'lf') { patternset(includes: 'Painless*.java') } } diff --git a/modules/lang-painless/licenses/antlr4-runtime-4.5.1-1.jar.sha1 b/modules/lang-painless/licenses/antlr4-runtime-4.5.1-1.jar.sha1 deleted file mode 100644 index 37f80b91724f6..0000000000000 --- a/modules/lang-painless/licenses/antlr4-runtime-4.5.1-1.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -66144204f9d6d7d3f3f775622c2dd7e9bd511d97 \ No newline at end of file diff --git a/modules/lang-painless/licenses/antlr4-runtime-4.5.3.jar.sha1 b/modules/lang-painless/licenses/antlr4-runtime-4.5.3.jar.sha1 new file mode 100644 index 0000000000000..535955b7d6826 --- /dev/null +++ b/modules/lang-painless/licenses/antlr4-runtime-4.5.3.jar.sha1 @@ -0,0 +1 @@ +2609e36f18f7e8d593cc1cddfb2ac776dc96b8e0 \ No newline at end of file diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/AnalyzerCaster.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/AnalyzerCaster.java index 54ec8b5ac39ba..02b7105593fb7 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/AnalyzerCaster.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/AnalyzerCaster.java @@ -20,39 +20,23 @@ package org.elasticsearch.painless; import org.elasticsearch.painless.Definition.Cast; -import 
org.elasticsearch.painless.Definition.Sort; import org.elasticsearch.painless.Definition.Type; import java.util.Objects; -import static org.elasticsearch.painless.Definition.BOOLEAN_OBJ_TYPE; -import static org.elasticsearch.painless.Definition.BOOLEAN_TYPE; -import static org.elasticsearch.painless.Definition.BYTE_OBJ_TYPE; -import static org.elasticsearch.painless.Definition.BYTE_TYPE; -import static org.elasticsearch.painless.Definition.CHAR_OBJ_TYPE; -import static org.elasticsearch.painless.Definition.CHAR_TYPE; -import static org.elasticsearch.painless.Definition.DEF_TYPE; -import static org.elasticsearch.painless.Definition.DOUBLE_OBJ_TYPE; -import static org.elasticsearch.painless.Definition.DOUBLE_TYPE; -import static org.elasticsearch.painless.Definition.FLOAT_OBJ_TYPE; -import static org.elasticsearch.painless.Definition.FLOAT_TYPE; -import static org.elasticsearch.painless.Definition.INT_OBJ_TYPE; -import static org.elasticsearch.painless.Definition.INT_TYPE; -import static org.elasticsearch.painless.Definition.LONG_OBJ_TYPE; -import static org.elasticsearch.painless.Definition.LONG_TYPE; -import static org.elasticsearch.painless.Definition.NUMBER_TYPE; -import static org.elasticsearch.painless.Definition.OBJECT_TYPE; -import static org.elasticsearch.painless.Definition.SHORT_OBJ_TYPE; -import static org.elasticsearch.painless.Definition.SHORT_TYPE; -import static org.elasticsearch.painless.Definition.STRING_TYPE; - /** * Used during the analysis phase to collect legal type casts and promotions * for type-checking and later to write necessary casts in the bytecode. */ public final class AnalyzerCaster { - public static Cast getLegalCast(Location location, Type actual, Type expected, boolean explicit, boolean internal) { + private Definition definition; + + public AnalyzerCaster(Definition definition) { + this.definition = definition; + } + + public Cast getLegalCast(Location location, Type actual, Type expected, boolean explicit, boolean internal) { Objects.requireNonNull(actual); Objects.requireNonNull(expected); @@ -60,767 +44,465 @@ public static Cast getLegalCast(Location location, Type actual, Type expected, b return null; } - switch (actual.sort) { - case BOOL: - switch (expected.sort) { - case DEF: - return new Cast(BOOLEAN_OBJ_TYPE, DEF_TYPE, explicit, null, null, BOOLEAN_TYPE, null); - case OBJECT: - if (OBJECT_TYPE.equals(expected) && internal) - return new Cast(BOOLEAN_OBJ_TYPE, OBJECT_TYPE, explicit, null, null, BOOLEAN_TYPE, null); - - break; - case BOOL_OBJ: - if (internal) - return new Cast(BOOLEAN_TYPE, BOOLEAN_TYPE, explicit, null, null, null, BOOLEAN_TYPE); - } - - break; - case BYTE: - switch (expected.sort) { - case SHORT: - case INT: - case LONG: - case FLOAT: - case DOUBLE: - return new Cast(BYTE_TYPE, expected, explicit); - case CHAR: - if (explicit) - return new Cast(BYTE_TYPE, CHAR_TYPE, true); - - break; - case DEF: - return new Cast(BYTE_OBJ_TYPE, DEF_TYPE, explicit, null, null, BYTE_TYPE, null); - case OBJECT: - if (OBJECT_TYPE.equals(expected) && internal) - return new Cast(BYTE_OBJ_TYPE, OBJECT_TYPE, explicit, null, null, BYTE_TYPE, null); - - break; - case NUMBER: - if (internal) - return new Cast(BYTE_OBJ_TYPE, NUMBER_TYPE, explicit, null, null, BYTE_TYPE, null); - - break; - case BYTE_OBJ: - if (internal) - return new Cast(BYTE_TYPE, BYTE_TYPE, explicit, null, null, null, BYTE_TYPE); - - break; - case SHORT_OBJ: - if (internal) - return new Cast(BYTE_TYPE, SHORT_TYPE, explicit, null, null, null, SHORT_TYPE); - - break; - case INT_OBJ: - if 
(internal) - return new Cast(BYTE_TYPE, INT_TYPE, explicit, null, null, null, INT_TYPE); - - break; - case LONG_OBJ: - if (internal) - return new Cast(BYTE_TYPE, LONG_TYPE, explicit, null, null, null, LONG_TYPE); - - break; - case FLOAT_OBJ: - if (internal) - return new Cast(BYTE_TYPE, FLOAT_TYPE, explicit, null, null, null, FLOAT_TYPE); - - break; - case DOUBLE_OBJ: - if (internal) - return new Cast(BYTE_TYPE, DOUBLE_TYPE, explicit, null, null, null, DOUBLE_TYPE); - - break; - case CHAR_OBJ: - if (explicit && internal) - return new Cast(BYTE_TYPE, CHAR_TYPE, true, null, null, null, CHAR_TYPE); - - break; - } - - break; - case SHORT: - switch (expected.sort) { - case INT: - case LONG: - case FLOAT: - case DOUBLE: - return new Cast(SHORT_TYPE, expected, explicit); - case BYTE: - case CHAR: - if (explicit) - return new Cast(SHORT_TYPE, expected, true); - - break; - case DEF: - return new Cast(SHORT_OBJ_TYPE, DEF_TYPE, explicit, null, null, SHORT_TYPE, null); - case OBJECT: - if (OBJECT_TYPE.equals(expected) && internal) - return new Cast(SHORT_OBJ_TYPE, OBJECT_TYPE, explicit, null, null, SHORT_TYPE, null); - - break; - case NUMBER: - if (internal) - return new Cast(SHORT_OBJ_TYPE, NUMBER_TYPE, explicit, null, null, SHORT_TYPE, null); - - break; - case SHORT_OBJ: - if (internal) - return new Cast(SHORT_TYPE, SHORT_TYPE, explicit, null, null, null, SHORT_TYPE); - - break; - case INT_OBJ: - if (internal) - return new Cast(SHORT_TYPE, INT_TYPE, explicit, null, null, null, INT_TYPE); - - break; - case LONG_OBJ: - if (internal) - return new Cast(SHORT_TYPE, LONG_TYPE, explicit, null, null, null, LONG_TYPE); - - break; - case FLOAT_OBJ: - if (internal) - return new Cast(SHORT_TYPE, FLOAT_TYPE, explicit, null, null, null, FLOAT_TYPE); - - break; - case DOUBLE_OBJ: - if (internal) - return new Cast(SHORT_TYPE, DOUBLE_TYPE, explicit, null, null, null, DOUBLE_TYPE); - - break; - case BYTE_OBJ: - if (explicit && internal) - return new Cast(SHORT_TYPE, BYTE_TYPE, true, null, null, null, BYTE_TYPE); - - break; - case CHAR_OBJ: - if (explicit && internal) - return new Cast(SHORT_TYPE, CHAR_TYPE, true, null, null, null, CHAR_TYPE); - - break; - } - - break; - case CHAR: - switch (expected.sort) { - case INT: - case LONG: - case FLOAT: - case DOUBLE: - return new Cast(CHAR_TYPE, expected, explicit); - case BYTE: - case SHORT: - if (explicit) - return new Cast(actual, expected, true); - - break; - case DEF: - return new Cast(CHAR_OBJ_TYPE, DEF_TYPE, explicit, null, null, CHAR_TYPE, null); - case OBJECT: - if (OBJECT_TYPE.equals(expected) && internal) - return new Cast(CHAR_OBJ_TYPE, OBJECT_TYPE, explicit, null, null, CHAR_TYPE, null); - - break; - case NUMBER: - if (internal) - return new Cast(CHAR_OBJ_TYPE, NUMBER_TYPE, explicit, null, null, CHAR_TYPE, null); - - break; - case CHAR_OBJ: - if (internal) - return new Cast(CHAR_TYPE, CHAR_TYPE, explicit, null, null, null, CHAR_TYPE); - - break; - case STRING: - return new Cast(CHAR_TYPE, STRING_TYPE, explicit); - case INT_OBJ: - if (internal) - return new Cast(CHAR_TYPE, INT_TYPE, explicit, null, null, null, INT_TYPE); - - break; - case LONG_OBJ: - if (internal) - return new Cast(CHAR_TYPE, LONG_TYPE, explicit, null, null, null, LONG_TYPE); - - break; - case FLOAT_OBJ: - if (internal) - return new Cast(CHAR_TYPE, FLOAT_TYPE, explicit, null, null, null, FLOAT_TYPE); - - break; - case DOUBLE_OBJ: - if (internal) - return new Cast(CHAR_TYPE, DOUBLE_TYPE, explicit, null, null, null, DOUBLE_TYPE); - - break; - case BYTE_OBJ: - if (explicit && internal) - return new 
Cast(CHAR_TYPE, BYTE_TYPE, true, null, null, null, BYTE_TYPE); - - break; - case SHORT_OBJ: - if (explicit && internal) - return new Cast(CHAR_TYPE, SHORT_TYPE, true, null, null, null, SHORT_TYPE); - - break; - } - - break; - case INT: - switch (expected.sort) { - case LONG: - case FLOAT: - case DOUBLE: - return new Cast(INT_TYPE, expected, explicit); - case BYTE: - case SHORT: - case CHAR: - if (explicit) - return new Cast(INT_TYPE, expected, true); - - break; - case DEF: - return new Cast(INT_OBJ_TYPE, DEF_TYPE, explicit, null, null, INT_TYPE, null); - case OBJECT: - if (OBJECT_TYPE.equals(expected) && internal) - return new Cast(INT_OBJ_TYPE, OBJECT_TYPE, explicit, null, null, INT_TYPE, null); - - break; - case NUMBER: - if (internal) - return new Cast(INT_OBJ_TYPE, NUMBER_TYPE, explicit, null, null, INT_TYPE, null); - - break; - case INT_OBJ: - if (internal) - return new Cast(INT_TYPE, INT_TYPE, explicit, null, null, null, INT_TYPE); - - break; - case LONG_OBJ: - if (internal) - return new Cast(INT_TYPE, LONG_TYPE, explicit, null, null, null, LONG_TYPE); - - break; - case FLOAT_OBJ: - if (internal) - return new Cast(INT_TYPE, FLOAT_TYPE, explicit, null, null, null, FLOAT_TYPE); - - break; - case DOUBLE_OBJ: - if (internal) - return new Cast(INT_TYPE, DOUBLE_TYPE, explicit, null, null, null, DOUBLE_TYPE); - - break; - case BYTE_OBJ: - if (explicit && internal) - return new Cast(INT_TYPE, BYTE_TYPE, true, null, null, null, BYTE_TYPE); - - break; - case SHORT_OBJ: - if (explicit && internal) - return new Cast(INT_TYPE, SHORT_TYPE, true, null, null, null, SHORT_TYPE); - - break; - case CHAR_OBJ: - if (explicit && internal) - return new Cast(INT_TYPE, CHAR_TYPE, true, null, null, null, CHAR_TYPE); - - break; - } - - break; - case LONG: - switch (expected.sort) { - case FLOAT: - case DOUBLE: - return new Cast(LONG_TYPE, expected, explicit); - case BYTE: - case SHORT: - case CHAR: - case INT: - if (explicit) - return new Cast(actual, expected, true); - - break; - case DEF: - return new Cast(LONG_TYPE, DEF_TYPE, explicit, null, null, LONG_TYPE, null); - case OBJECT: - if (OBJECT_TYPE.equals(expected) && internal) - return new Cast(LONG_TYPE, actual, explicit, null, null, LONG_TYPE, null); - - break; - case NUMBER: - if (internal) - return new Cast(LONG_OBJ_TYPE, NUMBER_TYPE, explicit, null, null, LONG_TYPE, null); - - break; - case LONG_OBJ: - if (internal) - return new Cast(LONG_TYPE, LONG_TYPE, explicit, null, null, null, LONG_TYPE); - - break; - case FLOAT_OBJ: - if (internal) - return new Cast(LONG_TYPE, FLOAT_TYPE, explicit, null, null, null, FLOAT_TYPE); - - break; - case DOUBLE_OBJ: - if (internal) - return new Cast(LONG_TYPE, DOUBLE_TYPE, explicit, null, null, null, DOUBLE_TYPE); - - break; - case BYTE_OBJ: - if (explicit && internal) - return new Cast(LONG_TYPE, BYTE_TYPE, true, null, null, null, BYTE_TYPE); - - break; - case SHORT_OBJ: - if (explicit && internal) - return new Cast(LONG_TYPE, SHORT_TYPE, true, null, null, null, SHORT_TYPE); - - break; - case CHAR_OBJ: - if (explicit && internal) - return new Cast(LONG_TYPE, CHAR_TYPE, true, null, null, null, CHAR_TYPE); - - break; - case INT_OBJ: - if (explicit && internal) - return new Cast(LONG_TYPE, INT_TYPE, true, null, null, null, INT_TYPE); - - break; - } - - break; - case FLOAT: - switch (expected.sort) { - case DOUBLE: - return new Cast(actual, expected, explicit); - case BYTE: - case SHORT: - case CHAR: - case INT: - case LONG: - if (explicit) - return new Cast(actual, expected, true); - - break; - case DEF: - return new 
Cast(FLOAT_OBJ_TYPE, DEF_TYPE, explicit, null, null, FLOAT_TYPE, null); - case OBJECT: - if (OBJECT_TYPE.equals(expected) && internal) - return new Cast(FLOAT_OBJ_TYPE, OBJECT_TYPE, explicit, null, null, FLOAT_TYPE, null); - - break; - case NUMBER: - if (internal) - return new Cast(FLOAT_OBJ_TYPE, NUMBER_TYPE, explicit, null, null, FLOAT_TYPE, null); - - break; - case FLOAT_OBJ: - if (internal) - return new Cast(FLOAT_TYPE, FLOAT_TYPE, explicit, null, null, null, FLOAT_TYPE); - - break; - case DOUBLE_OBJ: - if (internal) - return new Cast(FLOAT_TYPE, DOUBLE_TYPE, explicit, null, null, null, DOUBLE_TYPE); - - break; - case BYTE_OBJ: - if (explicit && internal) - return new Cast(FLOAT_TYPE, BYTE_TYPE, true, null, null, null, BYTE_TYPE); - - break; - case SHORT_OBJ: - if (explicit && internal) - return new Cast(FLOAT_TYPE, SHORT_TYPE, true, null, null, null, SHORT_TYPE); - - break; - case CHAR_OBJ: - if (explicit && internal) - return new Cast(FLOAT_TYPE, CHAR_TYPE, true, null, null, null, CHAR_TYPE); - - break; - case INT_OBJ: - if (explicit && internal) - return new Cast(FLOAT_TYPE, INT_TYPE, true, null, null, null, INT_TYPE); - - break; - case LONG_OBJ: - if (explicit && internal) - return new Cast(FLOAT_TYPE, LONG_TYPE, true, null, null, null, LONG_TYPE); - - break; - } - - break; - case DOUBLE: - switch (expected.sort) { - case BYTE: - case SHORT: - case CHAR: - case INT: - case LONG: - case FLOAT: - if (explicit) - return new Cast(DOUBLE_TYPE, expected, true); - - break; - case DEF: - return new Cast(DOUBLE_OBJ_TYPE, DEF_TYPE, explicit, null, null, DOUBLE_TYPE, null); - case OBJECT: - if (OBJECT_TYPE.equals(expected) && internal) - return new Cast(DOUBLE_OBJ_TYPE, OBJECT_TYPE, explicit, null, null, DOUBLE_TYPE, null); - - break; - case NUMBER: - if (internal) - return new Cast(DOUBLE_OBJ_TYPE, NUMBER_TYPE, explicit, null, null, DOUBLE_TYPE, null); - - break; - case DOUBLE_OBJ: - if (internal) - return new Cast(DOUBLE_TYPE, DOUBLE_TYPE, explicit, null, null, null, DOUBLE_TYPE); - - break; - case BYTE_OBJ: - if (explicit && internal) - return new Cast(DOUBLE_TYPE, BYTE_TYPE, true, null, null, null, BYTE_TYPE); - - break; - case SHORT_OBJ: - if (explicit && internal) - return new Cast(DOUBLE_TYPE, SHORT_TYPE, true, null, null, null, SHORT_TYPE); - - break; - case CHAR_OBJ: - if (explicit && internal) - return new Cast(DOUBLE_TYPE, CHAR_TYPE, true, null, null, null, CHAR_TYPE); - - break; - case INT_OBJ: - if (explicit && internal) - return new Cast(DOUBLE_TYPE, INT_TYPE, true, null, null, null, INT_TYPE); - - break; - case LONG_OBJ: - if (explicit && internal) - return new Cast(DOUBLE_TYPE, LONG_TYPE, true, null, null, null, LONG_TYPE); - - break; - case FLOAT_OBJ: - if (explicit && internal) - return new Cast(DOUBLE_TYPE, FLOAT_TYPE, true, null, null, null, FLOAT_TYPE); - - break; - } - - break; - case OBJECT: - if (OBJECT_TYPE.equals(actual)) - switch (expected.sort) { - case BYTE: - if (internal && explicit) - return new Cast(OBJECT_TYPE, BYTE_OBJ_TYPE, true, null, BYTE_TYPE, null, null); - - break; - case SHORT: - if (internal && explicit) - return new Cast(OBJECT_TYPE, SHORT_OBJ_TYPE, true, null, SHORT_TYPE, null, null); - - break; - case CHAR: - if (internal && explicit) - return new Cast(OBJECT_TYPE, CHAR_OBJ_TYPE, true, null, CHAR_TYPE, null, null); - - break; - case INT: - if (internal && explicit) - return new Cast(OBJECT_TYPE, INT_OBJ_TYPE, true, null, INT_TYPE, null, null); - - break; - case LONG: - if (internal && explicit) - return new Cast(OBJECT_TYPE, LONG_OBJ_TYPE, true, 
null, LONG_TYPE, null, null); - - break; - case FLOAT: - if (internal && explicit) - return new Cast(OBJECT_TYPE, FLOAT_OBJ_TYPE, true, null, FLOAT_TYPE, null, null); - - break; - case DOUBLE: - if (internal && explicit) - return new Cast(OBJECT_TYPE, DOUBLE_OBJ_TYPE, true, null, DOUBLE_TYPE, null, null); - - break; - } - break; - case NUMBER: - switch (expected.sort) { - case BYTE: - if (internal && explicit) - return new Cast(NUMBER_TYPE, BYTE_OBJ_TYPE, true, null, BYTE_TYPE, null, null); - - break; - case SHORT: - if (internal && explicit) - return new Cast(NUMBER_TYPE, SHORT_OBJ_TYPE, true, null, SHORT_TYPE, null, null); - - break; - case CHAR: - if (internal && explicit) - return new Cast(NUMBER_TYPE, CHAR_OBJ_TYPE, true, null, CHAR_TYPE, null, null); - - break; - case INT: - if (internal && explicit) - return new Cast(NUMBER_TYPE, INT_OBJ_TYPE, true, null, INT_TYPE, null, null); - - break; - case LONG: - if (internal && explicit) - return new Cast(NUMBER_TYPE, LONG_OBJ_TYPE, true, null, LONG_TYPE, null, null); - - break; - case FLOAT: - if (internal && explicit) - return new Cast(NUMBER_TYPE, FLOAT_OBJ_TYPE, true, null, FLOAT_TYPE, null, null); - - break; - case DOUBLE: - if (internal && explicit) - return new Cast(NUMBER_TYPE, DOUBLE_OBJ_TYPE, true, null, DOUBLE_TYPE, null, null); - - break; - } - - break; - case BOOL_OBJ: - switch (expected.sort) { - case BOOL: - if (internal) - return new Cast(BOOLEAN_TYPE, BOOLEAN_TYPE, explicit, BOOLEAN_TYPE, null, null, null); - - break; - } - - break; - case BYTE_OBJ: - switch (expected.sort) { - case BYTE: - case SHORT: - case INT: - case LONG: - case FLOAT: - case DOUBLE: - if (internal) - return new Cast(BYTE_TYPE, expected, explicit, BYTE_TYPE, null, null, null); - - break; - case CHAR: - if (internal && explicit) - return new Cast(BYTE_TYPE, expected, true, BYTE_TYPE, null, null, null); - - break; - } - - break; - case SHORT_OBJ: - switch (expected.sort) { - case SHORT: - case INT: - case LONG: - case FLOAT: - case DOUBLE: - if (internal) - return new Cast(SHORT_TYPE, expected, explicit, SHORT_TYPE, null, null, null); - - break; - case BYTE: - case CHAR: - if (internal && explicit) - return new Cast(SHORT_TYPE, expected, true, SHORT_TYPE, null, null, null); - - break; - } - - break; - case CHAR_OBJ: - switch (expected.sort) { - case CHAR: - case INT: - case LONG: - case FLOAT: - case DOUBLE: - if (internal) - return new Cast(CHAR_TYPE, expected, explicit, CHAR_TYPE, null, null, null); - - break; - case BYTE: - case SHORT: - if (internal && explicit) - return new Cast(CHAR_TYPE, expected, true, CHAR_TYPE, null, null, null); - - break; - } - - break; - case INT_OBJ: - switch (expected.sort) { - case INT: - case LONG: - case FLOAT: - case DOUBLE: - if (internal) - return new Cast(INT_TYPE, expected, explicit, INT_TYPE, null, null, null); - - break; - case BYTE: - case SHORT: - case CHAR: - if (internal && explicit) - return new Cast(INT_TYPE, expected, true, INT_TYPE, null, null, null); - - break; - } - - break; - case LONG_OBJ: - switch (expected.sort) { - case LONG: - case FLOAT: - case DOUBLE: - if (internal) - return new Cast(LONG_TYPE, expected, explicit, LONG_TYPE, null, null, null); - - break; - case BYTE: - case SHORT: - case CHAR: - case INT: - if (internal && explicit) - return new Cast(LONG_TYPE, expected, true, LONG_TYPE, null, null, null); - - break; - } - - break; - case FLOAT_OBJ: - switch (expected.sort) { - case FLOAT: - case DOUBLE: - if (internal) - return new Cast(FLOAT_TYPE, expected, explicit, FLOAT_TYPE, null, null, 
null); - - break; - case BYTE: - case SHORT: - case CHAR: - case INT: - case LONG: - if (internal && explicit) - return new Cast(FLOAT_TYPE, expected, true, FLOAT_TYPE, null, null, null); - - break; - } - - break; - case DOUBLE_OBJ: - switch (expected.sort) { - case DOUBLE: - if (internal) - return new Cast(DOUBLE_TYPE, expected, explicit, DOUBLE_TYPE, null, null, null); - - break; - case BYTE: - case SHORT: - case CHAR: - case INT: - case LONG: - case FLOAT: - if (internal && explicit) - return new Cast(DOUBLE_TYPE, expected, true, DOUBLE_TYPE, null, null, null); - - break; - } - - break; - case DEF: - switch (expected.sort) { - case BOOL: - return new Cast(DEF_TYPE, BOOLEAN_OBJ_TYPE, explicit, null, BOOLEAN_TYPE, null, null); - case BYTE: - return new Cast(DEF_TYPE, BYTE_OBJ_TYPE, explicit, null, BYTE_TYPE, null, null); - case SHORT: - return new Cast(DEF_TYPE, SHORT_OBJ_TYPE, explicit, null, SHORT_TYPE, null, null); - case CHAR: - return new Cast(DEF_TYPE, CHAR_OBJ_TYPE, explicit, null, CHAR_TYPE, null, null); - case INT: - return new Cast(DEF_TYPE, INT_OBJ_TYPE, explicit, null, INT_TYPE, null, null); - case LONG: - return new Cast(DEF_TYPE, LONG_OBJ_TYPE, explicit, null, LONG_TYPE, null, null); - case FLOAT: - return new Cast(DEF_TYPE, FLOAT_OBJ_TYPE, explicit, null, FLOAT_TYPE, null, null); - case DOUBLE: - return new Cast(DEF_TYPE, DOUBLE_OBJ_TYPE, explicit, null, DOUBLE_TYPE, null, null); - } - - break; - case STRING: - switch (expected.sort) { - case CHAR: - if (explicit) - return new Cast(STRING_TYPE, CHAR_TYPE, true); - - break; - } - - break; + if (actual.dynamic) { + if (expected.clazz == boolean.class) { + return new Cast(definition.DefType, definition.BooleanType, explicit, null, definition.booleanType, null, null); + } else if (expected.clazz == byte.class) { + return new Cast(definition.DefType, definition.ByteType, explicit, null, definition.byteType, null, null); + } else if (expected.clazz == short.class) { + return new Cast(definition.DefType, definition.ShortType, explicit, null, definition.shortType, null, null); + } else if (expected.clazz == char.class) { + return new Cast(definition.DefType, definition.CharacterType, explicit, null, definition.charType, null, null); + } else if (expected.clazz == int.class) { + return new Cast(definition.DefType, definition.IntegerType, explicit, null, definition.intType, null, null); + } else if (expected.clazz == long.class) { + return new Cast(definition.DefType, definition.LongType, explicit, null, definition.longType, null, null); + } else if (expected.clazz == float.class) { + return new Cast(definition.DefType, definition.FloatType, explicit, null, definition.floatType, null, null); + } else if (expected.clazz == double.class) { + return new Cast(definition.DefType, definition.DoubleType, explicit, null, definition.doubleType, null, null); + } + } else if (actual.clazz == Object.class) { + if (expected.clazz == byte.class && explicit && internal) { + return new Cast(definition.ObjectType, definition.ByteType, true, null, definition.byteType, null, null); + } else if (expected.clazz == short.class && explicit && internal) { + return new Cast(definition.ObjectType, definition.ShortType, true, null, definition.shortType, null, null); + } else if (expected.clazz == char.class && explicit && internal) { + return new Cast(definition.ObjectType, definition.CharacterType, true, null, definition.charType, null, null); + } else if (expected.clazz == int.class && explicit && internal) { + return new Cast(definition.ObjectType, 
definition.IntegerType, true, null, definition.intType, null, null); + } else if (expected.clazz == long.class && explicit && internal) { + return new Cast(definition.ObjectType, definition.LongType, true, null, definition.longType, null, null); + } else if (expected.clazz == float.class && explicit && internal) { + return new Cast(definition.ObjectType, definition.FloatType, true, null, definition.floatType, null, null); + } else if (expected.clazz == double.class && explicit && internal) { + return new Cast(definition.ObjectType, definition.DoubleType, true, null, definition.doubleType, null, null); + } + } else if (actual.clazz == Number.class) { + if (expected.clazz == byte.class && explicit && internal) { + return new Cast(definition.NumberType, definition.ByteType, true, null, definition.byteType, null, null); + } else if (expected.clazz == short.class && explicit && internal) { + return new Cast(definition.NumberType, definition.ShortType, true, null, definition.shortType, null, null); + } else if (expected.clazz == char.class && explicit && internal) { + return new Cast(definition.NumberType, definition.CharacterType, true, null, definition.charType, null, null); + } else if (expected.clazz == int.class && explicit && internal) { + return new Cast(definition.NumberType, definition.IntegerType, true, null, definition.intType, null, null); + } else if (expected.clazz == long.class && explicit && internal) { + return new Cast(definition.NumberType, definition.LongType, true, null, definition.longType, null, null); + } else if (expected.clazz == float.class && explicit && internal) { + return new Cast(definition.NumberType, definition.FloatType, true, null, definition.floatType, null, null); + } else if (expected.clazz == double.class && explicit && internal) { + return new Cast(definition.NumberType, definition.DoubleType, true, null, definition.doubleType, null, null); + } + } else if (actual.clazz == String.class) { + if (expected.clazz == char.class && explicit) { + return new Cast(definition.StringType, definition.charType, true); + } + } else if (actual.clazz == boolean.class) { + if (expected.dynamic) { + return new Cast(definition.BooleanType, definition.DefType, explicit, null, null, definition.booleanType, null); + } else if (expected.clazz == Object.class && internal) { + return new Cast(definition.BooleanType, definition.ObjectType, explicit, null, null, definition.booleanType, null); + } else if (expected.clazz == Boolean.class && internal) { + return new Cast(definition.booleanType, definition.booleanType, explicit, null, null, null, definition.booleanType); + } + } else if (actual.clazz == byte.class) { + if (expected.dynamic) { + return new Cast(definition.ByteType, definition.DefType, explicit, null, null, definition.byteType, null); + } else if (expected.clazz == Object.class && internal) { + return new Cast(definition.ByteType, definition.ObjectType, explicit, null, null, definition.byteType, null); + } else if (expected.clazz == Number.class && internal) { + return new Cast(definition.ByteType, definition.NumberType, explicit, null, null, definition.byteType, null); + } else if (expected.clazz == short.class) { + return new Cast(definition.byteType, definition.shortType, explicit); + } else if (expected.clazz == char.class && explicit) { + return new Cast(definition.byteType, definition.charType, true); + } else if (expected.clazz == int.class) { + return new Cast(definition.byteType, definition.intType, explicit); + } else if (expected.clazz == long.class) { + 
return new Cast(definition.byteType, definition.longType, explicit); + } else if (expected.clazz == float.class) { + return new Cast(definition.byteType, definition.floatType, explicit); + } else if (expected.clazz == double.class) { + return new Cast(definition.byteType, definition.doubleType, explicit); + } else if (expected.clazz == Byte.class && internal) { + return new Cast(definition.byteType, definition.byteType, explicit, null, null, null, definition.byteType); + } else if (expected.clazz == Short.class && internal) { + return new Cast(definition.byteType, definition.shortType, explicit, null, null, null, definition.shortType); + } else if (expected.clazz == Character.class && explicit && internal) { + return new Cast(definition.byteType, definition.charType, true, null, null, null, definition.charType); + } else if (expected.clazz == Integer.class && internal) { + return new Cast(definition.byteType, definition.intType, explicit, null, null, null, definition.intType); + } else if (expected.clazz == Long.class && internal) { + return new Cast(definition.byteType, definition.longType, explicit, null, null, null, definition.longType); + } else if (expected.clazz == Float.class && internal) { + return new Cast(definition.byteType, definition.floatType, explicit, null, null, null, definition.floatType); + } else if (expected.clazz == Double.class && internal) { + return new Cast(definition.byteType, definition.doubleType, explicit, null, null, null, definition.doubleType); + } + } else if (actual.clazz == short.class) { + if (expected.dynamic) { + return new Cast(definition.ShortType, definition.DefType, explicit, null, null, definition.shortType, null); + } else if (expected.clazz == Object.class && internal) { + return new Cast(definition.ShortType, definition.ObjectType, explicit, null, null, definition.shortType, null); + } else if (expected.clazz == Number.class && internal) { + return new Cast(definition.ShortType, definition.NumberType, explicit, null, null, definition.shortType, null); + } else if (expected.clazz == byte.class && explicit) { + return new Cast(definition.shortType, definition.byteType, true); + } else if (expected.clazz == char.class && explicit) { + return new Cast(definition.shortType, definition.charType, true); + } else if (expected.clazz == int.class) { + return new Cast(definition.shortType, definition.intType, explicit); + } else if (expected.clazz == long.class) { + return new Cast(definition.shortType, definition.longType, explicit); + } else if (expected.clazz == float.class) { + return new Cast(definition.shortType, definition.floatType, explicit); + } else if (expected.clazz == double.class) { + return new Cast(definition.shortType, definition.doubleType, explicit); + } else if (expected.clazz == Byte.class && explicit && internal) { + return new Cast(definition.shortType, definition.byteType, true, null, null, null, definition.byteType); + } else if (expected.clazz == Short.class && internal) { + return new Cast(definition.shortType, definition.shortType, explicit, null, null, null, definition.shortType); + } else if (expected.clazz == Character.class && explicit && internal) { + return new Cast(definition.shortType, definition.charType, true, null, null, null, definition.charType); + } else if (expected.clazz == Integer.class && internal) { + return new Cast(definition.shortType, definition.intType, explicit, null, null, null, definition.intType); + } else if (expected.clazz == Long.class && internal) { + return new Cast(definition.shortType, 
definition.longType, explicit, null, null, null, definition.longType); + } else if (expected.clazz == Float.class && internal) { + return new Cast(definition.shortType, definition.floatType, explicit, null, null, null, definition.floatType); + } else if (expected.clazz == Double.class && internal) { + return new Cast(definition.shortType, definition.doubleType, explicit, null, null, null, definition.doubleType); + } + } else if (actual.clazz == char.class) { + if (expected.dynamic) { + return new Cast(definition.CharacterType, definition.DefType, explicit, null, null, definition.charType, null); + } else if (expected.clazz == Object.class && internal) { + return new Cast(definition.CharacterType, definition.ObjectType, explicit, null, null, definition.charType, null); + } else if (expected.clazz == Number.class && internal) { + return new Cast(definition.CharacterType, definition.NumberType, explicit, null, null, definition.charType, null); + } else if (expected.clazz == String.class) { + return new Cast(definition.charType, definition.StringType, explicit); + } else if (expected.clazz == byte.class && explicit) { + return new Cast(definition.charType, definition.byteType, true); + } else if (expected.clazz == short.class && explicit) { + return new Cast(definition.charType, definition.shortType, true); + } else if (expected.clazz == int.class) { + return new Cast(definition.charType, definition.intType, explicit); + } else if (expected.clazz == long.class) { + return new Cast(definition.charType, definition.longType, explicit); + } else if (expected.clazz == float.class) { + return new Cast(definition.charType, definition.floatType, explicit); + } else if (expected.clazz == double.class) { + return new Cast(definition.charType, definition.doubleType, explicit); + } else if (expected.clazz == Byte.class && explicit && internal) { + return new Cast(definition.charType, definition.byteType, true, null, null, null, definition.byteType); + } else if (expected.clazz == Short.class && internal) { + return new Cast(definition.charType, definition.shortType, explicit, null, null, null, definition.shortType); + } else if (expected.clazz == Character.class && internal) { + return new Cast(definition.charType, definition.charType, true, null, null, null, definition.charType); + } else if (expected.clazz == Integer.class && internal) { + return new Cast(definition.charType, definition.intType, explicit, null, null, null, definition.intType); + } else if (expected.clazz == Long.class && internal) { + return new Cast(definition.charType, definition.longType, explicit, null, null, null, definition.longType); + } else if (expected.clazz == Float.class && internal) { + return new Cast(definition.charType, definition.floatType, explicit, null, null, null, definition.floatType); + } else if (expected.clazz == Double.class && internal) { + return new Cast(definition.charType, definition.doubleType, explicit, null, null, null, definition.doubleType); + } + } else if (actual.clazz == int.class) { + if (expected.dynamic) { + return new Cast(definition.IntegerType, definition.DefType, explicit, null, null, definition.intType, null); + } else if (expected.clazz == Object.class && internal) { + return new Cast(definition.IntegerType, definition.ObjectType, explicit, null, null, definition.intType, null); + } else if (expected.clazz == Number.class && internal) { + return new Cast(definition.IntegerType, definition.NumberType, explicit, null, null, definition.intType, null); + } else if (expected.clazz == byte.class 
&& explicit) { + return new Cast(definition.intType, definition.byteType, true); + } else if (expected.clazz == char.class && explicit) { + return new Cast(definition.intType, definition.charType, true); + } else if (expected.clazz == short.class && explicit) { + return new Cast(definition.intType, definition.shortType, true); + } else if (expected.clazz == long.class) { + return new Cast(definition.intType, definition.longType, explicit); + } else if (expected.clazz == float.class) { + return new Cast(definition.intType, definition.floatType, explicit); + } else if (expected.clazz == double.class) { + return new Cast(definition.intType, definition.doubleType, explicit); + } else if (expected.clazz == Byte.class && explicit && internal) { + return new Cast(definition.intType, definition.byteType, true, null, null, null, definition.byteType); + } else if (expected.clazz == Short.class && explicit && internal) { + return new Cast(definition.intType, definition.shortType, true, null, null, null, definition.shortType); + } else if (expected.clazz == Character.class && explicit && internal) { + return new Cast(definition.intType, definition.charType, true, null, null, null, definition.charType); + } else if (expected.clazz == Integer.class && internal) { + return new Cast(definition.intType, definition.intType, explicit, null, null, null, definition.intType); + } else if (expected.clazz == Long.class && internal) { + return new Cast(definition.intType, definition.longType, explicit, null, null, null, definition.longType); + } else if (expected.clazz == Float.class && internal) { + return new Cast(definition.intType, definition.floatType, explicit, null, null, null, definition.floatType); + } else if (expected.clazz == Double.class && internal) { + return new Cast(definition.intType, definition.doubleType, explicit, null, null, null, definition.doubleType); + } + } else if (actual.clazz == long.class) { + if (expected.dynamic) { + return new Cast(definition.LongType, definition.DefType, explicit, null, null, definition.longType, null); + } else if (expected.clazz == Object.class && internal) { + return new Cast(definition.LongType, definition.ObjectType, explicit, null, null, definition.longType, null); + } else if (expected.clazz == Number.class && internal) { + return new Cast(definition.LongType, definition.NumberType, explicit, null, null, definition.longType, null); + } else if (expected.clazz == byte.class && explicit) { + return new Cast(definition.longType, definition.byteType, true); + } else if (expected.clazz == char.class && explicit) { + return new Cast(definition.longType, definition.charType, true); + } else if (expected.clazz == short.class && explicit) { + return new Cast(definition.longType, definition.shortType, true); + } else if (expected.clazz == int.class && explicit) { + return new Cast(definition.longType, definition.intType, true); + } else if (expected.clazz == float.class) { + return new Cast(definition.longType, definition.floatType, explicit); + } else if (expected.clazz == double.class) { + return new Cast(definition.longType, definition.doubleType, explicit); + } else if (expected.clazz == Byte.class && explicit && internal) { + return new Cast(definition.longType, definition.byteType, true, null, null, null, definition.byteType); + } else if (expected.clazz == Short.class && explicit && internal) { + return new Cast(definition.longType, definition.shortType, true, null, null, null, definition.shortType); + } else if (expected.clazz == Character.class && explicit 
&& internal) { + return new Cast(definition.longType, definition.charType, true, null, null, null, definition.charType); + } else if (expected.clazz == Integer.class && explicit && internal) { + return new Cast(definition.longType, definition.intType, true, null, null, null, definition.intType); + } else if (expected.clazz == Long.class && internal) { + return new Cast(definition.longType, definition.longType, explicit, null, null, null, definition.longType); + } else if (expected.clazz == Float.class && internal) { + return new Cast(definition.longType, definition.floatType, explicit, null, null, null, definition.floatType); + } else if (expected.clazz == Double.class && internal) { + return new Cast(definition.longType, definition.doubleType, explicit, null, null, null, definition.doubleType); + } + } else if (actual.clazz == float.class) { + if (expected.dynamic) { + return new Cast(definition.FloatType, definition.DefType, explicit, null, null, definition.floatType, null); + } else if (expected.clazz == Object.class && internal) { + return new Cast(definition.FloatType, definition.ObjectType, explicit, null, null, definition.floatType, null); + } else if (expected.clazz == Number.class && internal) { + return new Cast(definition.FloatType, definition.NumberType, explicit, null, null, definition.floatType, null); + } else if (expected.clazz == byte.class && explicit) { + return new Cast(definition.floatType, definition.byteType, true); + } else if (expected.clazz == char.class && explicit) { + return new Cast(definition.floatType, definition.charType, true); + } else if (expected.clazz == short.class && explicit) { + return new Cast(definition.floatType, definition.shortType, true); + } else if (expected.clazz == int.class && explicit) { + return new Cast(definition.floatType, definition.intType, true); + } else if (expected.clazz == long.class && explicit) { + return new Cast(definition.floatType, definition.longType, true); + } else if (expected.clazz == double.class) { + return new Cast(definition.floatType, definition.doubleType, explicit); + } else if (expected.clazz == Byte.class && explicit && internal) { + return new Cast(definition.floatType, definition.byteType, true, null, null, null, definition.byteType); + } else if (expected.clazz == Short.class && explicit && internal) { + return new Cast(definition.floatType, definition.shortType, true, null, null, null, definition.shortType); + } else if (expected.clazz == Character.class && explicit && internal) { + return new Cast(definition.floatType, definition.charType, true, null, null, null, definition.charType); + } else if (expected.clazz == Integer.class && explicit && internal) { + return new Cast(definition.floatType, definition.intType, true, null, null, null, definition.intType); + } else if (expected.clazz == Long.class && explicit && internal) { + return new Cast(definition.floatType, definition.longType, true, null, null, null, definition.longType); + } else if (expected.clazz == Float.class && internal) { + return new Cast(definition.floatType, definition.floatType, explicit, null, null, null, definition.floatType); + } else if (expected.clazz == Double.class && internal) { + return new Cast(definition.floatType, definition.doubleType, explicit, null, null, null, definition.doubleType); + } + } else if (actual.clazz == double.class) { + if (expected.dynamic) { + return new Cast(definition.DoubleType, definition.DefType, explicit, null, null, definition.doubleType, null); + } else if (expected.clazz == Object.class && 
internal) { + return new Cast(definition.DoubleType, definition.ObjectType, explicit, null, null, definition.doubleType, null); + } else if (expected.clazz == Number.class && internal) { + return new Cast(definition.DoubleType, definition.NumberType, explicit, null, null, definition.doubleType, null); + } else if (expected.clazz == byte.class && explicit) { + return new Cast(definition.doubleType, definition.byteType, true); + } else if (expected.clazz == char.class && explicit) { + return new Cast(definition.doubleType, definition.charType, true); + } else if (expected.clazz == short.class && explicit) { + return new Cast(definition.doubleType, definition.shortType, true); + } else if (expected.clazz == int.class && explicit) { + return new Cast(definition.doubleType, definition.intType, true); + } else if (expected.clazz == long.class && explicit) { + return new Cast(definition.doubleType, definition.longType, true); + } else if (expected.clazz == float.class && explicit) { + return new Cast(definition.doubleType, definition.floatType, true); + } else if (expected.clazz == Byte.class && explicit && internal) { + return new Cast(definition.doubleType, definition.byteType, true, null, null, null, definition.byteType); + } else if (expected.clazz == Short.class && explicit && internal) { + return new Cast(definition.doubleType, definition.shortType, true, null, null, null, definition.shortType); + } else if (expected.clazz == Character.class && explicit && internal) { + return new Cast(definition.doubleType, definition.charType, true, null, null, null, definition.charType); + } else if (expected.clazz == Integer.class && explicit && internal) { + return new Cast(definition.doubleType, definition.intType, true, null, null, null, definition.intType); + } else if (expected.clazz == Long.class && explicit && internal) { + return new Cast(definition.doubleType, definition.longType, true, null, null, null, definition.longType); + } else if (expected.clazz == Float.class && explicit && internal) { + return new Cast(definition.doubleType, definition.floatType, true, null, null, null, definition.floatType); + } else if (expected.clazz == Double.class && internal) { + return new Cast(definition.doubleType, definition.doubleType, explicit, null, null, null, definition.doubleType); + } + } else if (actual.clazz == Boolean.class) { + if (expected.clazz == boolean.class && internal) { + return new Cast(definition.booleanType, definition.booleanType, explicit, definition.booleanType, null, null, null); + } + } else if (actual.clazz == Byte.class) { + if (expected.clazz == byte.class && internal) { + return new Cast(definition.byteType, definition.byteType, explicit, definition.byteType, null, null, null); + } else if (expected.clazz == short.class && internal) { + return new Cast(definition.byteType, definition.shortType, explicit, definition.byteType, null, null, null); + } else if (expected.clazz == char.class && explicit && internal) { + return new Cast(definition.byteType, definition.charType, true, definition.byteType, null, null, null); + } else if (expected.clazz == int.class && internal) { + return new Cast(definition.byteType, definition.intType, explicit, definition.byteType, null, null, null); + } else if (expected.clazz == long.class && internal) { + return new Cast(definition.byteType, definition.longType, explicit, definition.byteType, null, null, null); + } else if (expected.clazz == float.class && internal) { + return new Cast(definition.byteType, definition.floatType, explicit, 
definition.byteType, null, null, null); + } else if (expected.clazz == double.class && internal) { + return new Cast(definition.byteType, definition.doubleType, explicit, definition.byteType, null, null, null); + } + } else if (actual.clazz == Short.class) { + if (expected.clazz == byte.class && explicit && internal) { + return new Cast(definition.shortType, definition.byteType, true, definition.shortType, null, null, null); + } else if (expected.clazz == short.class && internal) { + return new Cast(definition.shortType, definition.shortType, explicit, definition.shortType, null, null, null); + } else if (expected.clazz == char.class && explicit && internal) { + return new Cast(definition.shortType, definition.charType, true, definition.shortType, null, null, null); + } else if (expected.clazz == int.class && internal) { + return new Cast(definition.shortType, definition.intType, explicit, definition.shortType, null, null, null); + } else if (expected.clazz == long.class && internal) { + return new Cast(definition.shortType, definition.longType, explicit, definition.shortType, null, null, null); + } else if (expected.clazz == float.class && internal) { + return new Cast(definition.shortType, definition.floatType, explicit, definition.shortType, null, null, null); + } else if (expected.clazz == double.class && internal) { + return new Cast(definition.shortType, definition.doubleType, explicit, definition.shortType, null, null, null); + } + } else if (actual.clazz == Character.class) { + if (expected.clazz == byte.class && explicit && internal) { + return new Cast(definition.charType, definition.byteType, true, definition.charType, null, null, null); + } else if (expected.clazz == short.class && explicit && internal) { + return new Cast(definition.charType, definition.shortType, true, definition.charType, null, null, null); + } else if (expected.clazz == char.class && internal) { + return new Cast(definition.charType, definition.charType, explicit, definition.charType, null, null, null); + } else if (expected.clazz == int.class && internal) { + return new Cast(definition.charType, definition.intType, explicit, definition.charType, null, null, null); + } else if (expected.clazz == long.class && internal) { + return new Cast(definition.charType, definition.longType, explicit, definition.charType, null, null, null); + } else if (expected.clazz == float.class && internal) { + return new Cast(definition.charType, definition.floatType, explicit, definition.charType, null, null, null); + } else if (expected.clazz == double.class && internal) { + return new Cast(definition.charType, definition.doubleType, explicit, definition.charType, null, null, null); + } + } else if (actual.clazz == Integer.class) { + if (expected.clazz == byte.class && explicit && internal) { + return new Cast(definition.intType, definition.byteType, true, definition.intType, null, null, null); + } else if (expected.clazz == short.class && explicit && internal) { + return new Cast(definition.intType, definition.shortType, true, definition.intType, null, null, null); + } else if (expected.clazz == char.class && explicit && internal) { + return new Cast(definition.intType, definition.charType, true, definition.intType, null, null, null); + } else if (expected.clazz == int.class && internal) { + return new Cast(definition.intType, definition.intType, explicit, definition.intType, null, null, null); + } else if (expected.clazz == long.class && internal) { + return new Cast(definition.intType, definition.longType, explicit, 
definition.intType, null, null, null); + } else if (expected.clazz == float.class && internal) { + return new Cast(definition.intType, definition.floatType, explicit, definition.intType, null, null, null); + } else if (expected.clazz == double.class && internal) { + return new Cast(definition.intType, definition.doubleType, explicit, definition.intType, null, null, null); + } + } else if (actual.clazz == Long.class) { + if (expected.clazz == byte.class && explicit && internal) { + return new Cast(definition.longType, definition.byteType, true, definition.longType, null, null, null); + } else if (expected.clazz == short.class && explicit && internal) { + return new Cast(definition.longType, definition.shortType, true, definition.longType, null, null, null); + } else if (expected.clazz == char.class && explicit && internal) { + return new Cast(definition.longType, definition.charType, true, definition.longType, null, null, null); + } else if (expected.clazz == int.class && explicit && internal) { + return new Cast(definition.longType, definition.intType, true, definition.longType, null, null, null); + } else if (expected.clazz == long.class && internal) { + return new Cast(definition.longType, definition.longType, explicit, definition.longType, null, null, null); + } else if (expected.clazz == float.class && internal) { + return new Cast(definition.longType, definition.floatType, explicit, definition.longType, null, null, null); + } else if (expected.clazz == double.class && internal) { + return new Cast(definition.longType, definition.doubleType, explicit, definition.longType, null, null, null); + } + } else if (actual.clazz == Float.class) { + if (expected.clazz == byte.class && explicit && internal) { + return new Cast(definition.floatType, definition.byteType, true, definition.floatType, null, null, null); + } else if (expected.clazz == short.class && explicit && internal) { + return new Cast(definition.floatType, definition.shortType, true, definition.floatType, null, null, null); + } else if (expected.clazz == char.class && explicit && internal) { + return new Cast(definition.floatType, definition.charType, true, definition.floatType, null, null, null); + } else if (expected.clazz == int.class && explicit && internal) { + return new Cast(definition.floatType, definition.intType, true, definition.floatType, null, null, null); + } else if (expected.clazz == long.class && explicit && internal) { + return new Cast(definition.floatType, definition.longType, true, definition.floatType, null, null, null); + } else if (expected.clazz == float.class && internal) { + return new Cast(definition.floatType, definition.floatType, explicit, definition.floatType, null, null, null); + } else if (expected.clazz == double.class && internal) { + return new Cast(definition.floatType, definition.doubleType, explicit, definition.floatType, null, null, null); + } + } else if (actual.clazz == Double.class) { + if (expected.clazz == byte.class && explicit && internal) { + return new Cast(definition.doubleType, definition.byteType, true, definition.doubleType, null, null, null); + } else if (expected.clazz == short.class && explicit && internal) { + return new Cast(definition.doubleType, definition.shortType, true, definition.doubleType, null, null, null); + } else if (expected.clazz == char.class && explicit && internal) { + return new Cast(definition.doubleType, definition.charType, true, definition.doubleType, null, null, null); + } else if (expected.clazz == int.class && explicit && internal) { + return new 
Cast(definition.doubleType, definition.intType, true, definition.doubleType, null, null, null); + } else if (expected.clazz == long.class && explicit && internal) { + return new Cast(definition.doubleType, definition.longType, true, definition.doubleType, null, null, null); + } else if (expected.clazz == float.class && explicit && internal) { + return new Cast(definition.doubleType, definition.floatType, true, definition.doubleType, null, null, null); + } else if (expected.clazz == double.class && internal) { + return new Cast(definition.doubleType, definition.doubleType, explicit, definition.doubleType, null, null, null); + } } - if ( actual.sort == Sort.DEF - || (actual.sort != Sort.VOID && expected.sort == Sort.DEF) - || expected.clazz.isAssignableFrom(actual.clazz) - || (explicit && actual.clazz.isAssignableFrom(expected.clazz))) { + if ( actual.dynamic || + (actual.clazz != void.class && expected.dynamic) || + expected.clazz.isAssignableFrom(actual.clazz) || + (actual.clazz.isAssignableFrom(expected.clazz) && explicit)) { return new Cast(actual, expected, explicit); } else { throw location.createError(new ClassCastException("Cannot cast from [" + actual.name + "] to [" + expected.name + "].")); } } - public static Object constCast(Location location, final Object constant, final Cast cast) { - final Sort fsort = cast.from.sort; - final Sort tsort = cast.to.sort; + public Object constCast(Location location, final Object constant, final Cast cast) { + Class fsort = cast.from.clazz; + Class tsort = cast.to.clazz; if (fsort == tsort) { return constant; - } else if (fsort == Sort.STRING && tsort == Sort.CHAR) { + } else if (fsort == String.class && tsort == char.class) { return Utility.StringTochar((String)constant); - } else if (fsort == Sort.CHAR && tsort == Sort.STRING) { + } else if (fsort == char.class && tsort == String.class) { return Utility.charToString((char)constant); - } else if (fsort.numeric && tsort.numeric) { + } else if (fsort.isPrimitive() && fsort != boolean.class && tsort.isPrimitive() && tsort != boolean.class) { final Number number; - if (fsort == Sort.CHAR) { + if (fsort == char.class) { number = (int)(char)constant; } else { number = (Number)constant; } - switch (tsort) { - case BYTE: return number.byteValue(); - case SHORT: return number.shortValue(); - case CHAR: return (char)number.intValue(); - case INT: return number.intValue(); - case LONG: return number.longValue(); - case FLOAT: return number.floatValue(); - case DOUBLE: return number.doubleValue(); - default: - throw location.createError(new IllegalStateException("Cannot cast from " + - "[" + cast.from.clazz.getCanonicalName() + "] to [" + cast.to.clazz.getCanonicalName() + "].")); + if (tsort == byte.class) return number.byteValue(); + else if (tsort == short.class) return number.shortValue(); + else if (tsort == char.class) return (char)number.intValue(); + else if (tsort == int.class) return number.intValue(); + else if (tsort == long.class) return number.longValue(); + else if (tsort == float.class) return number.floatValue(); + else if (tsort == double.class) return number.doubleValue(); + else { + throw location.createError(new IllegalStateException("Cannot cast from " + + "[" + cast.from.clazz.getCanonicalName() + "] to [" + cast.to.clazz.getCanonicalName() + "].")); } } else { throw location.createError(new IllegalStateException("Cannot cast from " + @@ -828,237 +510,233 @@ public static Object constCast(Location location, final Object constant, final C } } - public static Type promoteNumeric(Type 
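The cast table that closes above encodes the usual Java widening and narrowing rules: widening primitive conversions carry the caller's `explicit` flag through, while narrowing conversions are only emitted when `explicit` is true, and the boxed variants additionally require `internal` and record the primitive counterpart in the extra Cast arguments. A standalone sketch of that rule, with illustrative names that are not part of this change:

-------------------------------------------------
import java.util.Arrays;
import java.util.List;

/** Sketch of the widening-vs-narrowing rule behind the cast table (byte < short/char < int < long < float < double). */
public final class NumericCastRule {
    private static final List<Class<?>> ORDER =
            Arrays.asList(byte.class, short.class, char.class, int.class, long.class, float.class, double.class);

    /** Returns true when the primitive cast is a widening conversion and may happen implicitly. */
    static boolean implicitlyCastable(Class<?> from, Class<?> to) {
        int f = ORDER.indexOf(from);
        int t = ORDER.indexOf(to);
        if (f < 0 || t < 0) {
            return false; // boolean and reference types are handled elsewhere
        }
        if (to == char.class) {
            return from == char.class;            // nothing widens into char implicitly
        }
        if (from == char.class) {
            return t >= ORDER.indexOf(int.class); // char widens to int and above, never to byte or short
        }
        return t >= f;
    }

    public static void main(String[] args) {
        System.out.println(implicitlyCastable(int.class, long.class)); // true: the Cast keeps the caller's explicit flag
        System.out.println(implicitlyCastable(long.class, int.class)); // false: only emitted with explicit == true
    }
}
-------------------------------------------------

The rewritten `constCast` keys off `java.lang.Class` instead of the removed `Sort` enum and folds numeric constants through `Number`. A minimal standalone sketch of the same folding, numeric cases only (class and method names here are illustrative):

-------------------------------------------------
/** Minimal sketch of numeric constant folding, mirroring the rewritten constCast (names illustrative). */
public final class ConstCastSketch {
    static Object constCast(Object constant, Class<?> from, Class<?> to) {
        if (from == to) {
            return constant;
        }
        // char constants are widened to int first, exactly as in the method above
        Number number = from == char.class ? (int) (char) constant : (Number) constant;
        if (to == byte.class)   return number.byteValue();
        if (to == short.class)  return number.shortValue();
        if (to == char.class)   return (char) number.intValue();
        if (to == int.class)    return number.intValue();
        if (to == long.class)   return number.longValue();
        if (to == float.class)  return number.floatValue();
        if (to == double.class) return number.doubleValue();
        throw new IllegalStateException("cannot cast from [" + from + "] to [" + to + "]");
    }

    public static void main(String[] args) {
        System.out.println(constCast(65, int.class, char.class));     // A
        System.out.println(constCast(3.9d, double.class, int.class)); // 3 (truncating, like a Java cast)
    }
}
-------------------------------------------------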
from, boolean decimal) { - final Sort sort = from.sort; - - if (sort == Sort.DEF) { - return DEF_TYPE; - } else if ((sort == Sort.DOUBLE) && decimal) { - return DOUBLE_TYPE; - } else if ((sort == Sort.FLOAT) && decimal) { - return FLOAT_TYPE; - } else if (sort == Sort.LONG) { - return LONG_TYPE; - } else if (sort == Sort.INT || sort == Sort.CHAR || sort == Sort.SHORT || sort == Sort.BYTE) { - return INT_TYPE; + public Type promoteNumeric(Type from, boolean decimal) { + Class sort = from.clazz; + + if (from.dynamic) { + return definition.DefType; + } else if ((sort == double.class) && decimal) { + return definition.doubleType; + } else if ((sort == float.class) && decimal) { + return definition.floatType; + } else if (sort == long.class) { + return definition.longType; + } else if (sort == int.class || sort == char.class || sort == short.class || sort == byte.class) { + return definition.intType; } return null; } - public static Type promoteNumeric(Type from0, Type from1, boolean decimal) { - final Sort sort0 = from0.sort; - final Sort sort1 = from1.sort; + public Type promoteNumeric(Type from0, Type from1, boolean decimal) { + Class sort0 = from0.clazz; + Class sort1 = from1.clazz; - if (sort0 == Sort.DEF || sort1 == Sort.DEF) { - return DEF_TYPE; + if (from0.dynamic || from1.dynamic) { + return definition.DefType; } if (decimal) { - if (sort0 == Sort.DOUBLE || sort1 == Sort.DOUBLE) { - return DOUBLE_TYPE; - } else if (sort0 == Sort.FLOAT || sort1 == Sort.FLOAT) { - return FLOAT_TYPE; + if (sort0 == double.class || sort1 == double.class) { + return definition.doubleType; + } else if (sort0 == float.class || sort1 == float.class) { + return definition.floatType; } } - if (sort0 == Sort.LONG || sort1 == Sort.LONG) { - return LONG_TYPE; - } else if (sort0 == Sort.INT || sort1 == Sort.INT || - sort0 == Sort.CHAR || sort1 == Sort.CHAR || - sort0 == Sort.SHORT || sort1 == Sort.SHORT || - sort0 == Sort.BYTE || sort1 == Sort.BYTE) { - return INT_TYPE; + if (sort0 == long.class || sort1 == long.class) { + return definition.longType; + } else if (sort0 == int.class || sort1 == int.class || + sort0 == char.class || sort1 == char.class || + sort0 == short.class || sort1 == short.class || + sort0 == byte.class || sort1 == byte.class) { + return definition.intType; } return null; } - public static Type promoteAdd(final Type from0, final Type from1) { - final Sort sort0 = from0.sort; - final Sort sort1 = from1.sort; + public Type promoteAdd(Type from0, Type from1) { + Class sort0 = from0.clazz; + Class sort1 = from1.clazz; - if (sort0 == Sort.STRING || sort1 == Sort.STRING) { - return STRING_TYPE; + if (sort0 == String.class || sort1 == String.class) { + return definition.StringType; } return promoteNumeric(from0, from1, true); } - public static Type promoteXor(final Type from0, final Type from1) { - final Sort sort0 = from0.sort; - final Sort sort1 = from1.sort; + public Type promoteXor(Type from0, Type from1) { + Class sort0 = from0.clazz; + Class sort1 = from1.clazz; - if (sort0 == Sort.DEF || sort1 == Sort.DEF) { - return DEF_TYPE; + if (from0.dynamic || from1.dynamic) { + return definition.DefType; } - if (sort0.bool || sort1.bool) { - return BOOLEAN_TYPE; + if (sort0 == boolean.class || sort1 == boolean.class) { + return definition.booleanType; } return promoteNumeric(from0, from1, false); } - public static Type promoteEquality(final Type from0, final Type from1) { - final Sort sort0 = from0.sort; - final Sort sort1 = from1.sort; + public Type promoteEquality(Type from0, Type from1) { + Class sort0 
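For the binary promotions above, the order of the checks is what implements Java's binary numeric promotion: a dynamic (def) operand wins, then double and float (only when `decimal` is true), then long, and everything narrower collapses to int. A standalone sketch of that table, with def handling omitted and illustrative names:

-------------------------------------------------
/** Standalone sketch of the promotion table behind promoteNumeric(from0, from1, decimal). */
public final class PromotionSketch {
    static Class<?> promote(Class<?> a, Class<?> b, boolean decimal) {
        if (decimal && (a == double.class || b == double.class)) {
            return double.class;
        }
        if (decimal && (a == float.class || b == float.class)) {
            return float.class;
        }
        if (a == long.class || b == long.class) {
            return long.class;
        }
        if (a == int.class || b == int.class
                || a == char.class || b == char.class
                || a == short.class || b == short.class
                || a == byte.class || b == byte.class) {
            return int.class; // everything narrower than long promotes to int
        }
        return null; // not a numeric promotion
    }

    public static void main(String[] args) {
        System.out.println(promote(byte.class, char.class, false)); // int
        System.out.println(promote(long.class, float.class, true)); // float
    }
}
-------------------------------------------------

promoteAdd layers one extra rule on top of this table: if either operand is a String the result is String, presumably to support string concatenation with `+`. promoteXor similarly short-circuits to boolean when either side is boolean.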
= from0.clazz; + Class sort1 = from1.clazz; - if (sort0 == Sort.DEF || sort1 == Sort.DEF) { - return DEF_TYPE; + if (from0.dynamic || from1.dynamic) { + return definition.DefType; } - if (sort0.primitive && sort1.primitive) { - if (sort0.bool && sort1.bool) { - return BOOLEAN_TYPE; + if (sort0.isPrimitive() && sort1.isPrimitive()) { + if (sort0 == boolean.class && sort1 == boolean.class) { + return definition.booleanType; } - if (sort0.numeric && sort1.numeric) { - return promoteNumeric(from0, from1, true); - } + return promoteNumeric(from0, from1, true); } - return OBJECT_TYPE; + return definition.ObjectType; } - public static Type promoteConditional(final Type from0, final Type from1, final Object const0, final Object const1) { + public Type promoteConditional(Type from0, Type from1, Object const0, Object const1) { if (from0.equals(from1)) { return from0; } - final Sort sort0 = from0.sort; - final Sort sort1 = from1.sort; + Class sort0 = from0.clazz; + Class sort1 = from1.clazz; - if (sort0 == Sort.DEF || sort1 == Sort.DEF) { - return DEF_TYPE; + if (from0.dynamic || from1.dynamic) { + return definition.DefType; } - if (sort0.primitive && sort1.primitive) { - if (sort0.bool && sort1.bool) { - return BOOLEAN_TYPE; + if (sort0.isPrimitive() && sort1.isPrimitive()) { + if (sort0 == boolean.class && sort1 == boolean.class) { + return definition.booleanType; } - if (sort0 == Sort.DOUBLE || sort1 == Sort.DOUBLE) { - return DOUBLE_TYPE; - } else if (sort0 == Sort.FLOAT || sort1 == Sort.FLOAT) { - return FLOAT_TYPE; - } else if (sort0 == Sort.LONG || sort1 == Sort.LONG) { - return LONG_TYPE; + if (sort0 == double.class || sort1 == double.class) { + return definition.doubleType; + } else if (sort0 == float.class || sort1 == float.class) { + return definition.floatType; + } else if (sort0 == long.class || sort1 == long.class) { + return definition.longType; } else { - if (sort0 == Sort.BYTE) { - if (sort1 == Sort.BYTE) { - return BYTE_TYPE; - } else if (sort1 == Sort.SHORT) { + if (sort0 == byte.class) { + if (sort1 == byte.class) { + return definition.byteType; + } else if (sort1 == short.class) { if (const1 != null) { final short constant = (short)const1; if (constant <= Byte.MAX_VALUE && constant >= Byte.MIN_VALUE) { - return BYTE_TYPE; + return definition.byteType; } } - return SHORT_TYPE; - } else if (sort1 == Sort.CHAR) { - return INT_TYPE; - } else if (sort1 == Sort.INT) { + return definition.shortType; + } else if (sort1 == char.class) { + return definition.intType; + } else if (sort1 == int.class) { if (const1 != null) { final int constant = (int)const1; if (constant <= Byte.MAX_VALUE && constant >= Byte.MIN_VALUE) { - return BYTE_TYPE; + return definition.byteType; } } - return INT_TYPE; + return definition.intType; } - } else if (sort0 == Sort.SHORT) { - if (sort1 == Sort.BYTE) { + } else if (sort0 == short.class) { + if (sort1 == byte.class) { if (const0 != null) { final short constant = (short)const0; if (constant <= Byte.MAX_VALUE && constant >= Byte.MIN_VALUE) { - return BYTE_TYPE; + return definition.byteType; } } - return SHORT_TYPE; - } else if (sort1 == Sort.SHORT) { - return SHORT_TYPE; - } else if (sort1 == Sort.CHAR) { - return INT_TYPE; - } else if (sort1 == Sort.INT) { + return definition.shortType; + } else if (sort1 == short.class) { + return definition.shortType; + } else if (sort1 == char.class) { + return definition.intType; + } else if (sort1 == int.class) { if (const1 != null) { final int constant = (int)const1; if (constant <= Short.MAX_VALUE && constant >= 
Short.MIN_VALUE) { - return SHORT_TYPE; + return definition.shortType; } } - return INT_TYPE; + return definition.intType; } - } else if (sort0 == Sort.CHAR) { - if (sort1 == Sort.BYTE) { - return INT_TYPE; - } else if (sort1 == Sort.SHORT) { - return INT_TYPE; - } else if (sort1 == Sort.CHAR) { - return CHAR_TYPE; - } else if (sort1 == Sort.INT) { + } else if (sort0 == char.class) { + if (sort1 == byte.class) { + return definition.intType; + } else if (sort1 == short.class) { + return definition.intType; + } else if (sort1 == char.class) { + return definition.charType; + } else if (sort1 == int.class) { if (const1 != null) { final int constant = (int)const1; if (constant <= Character.MAX_VALUE && constant >= Character.MIN_VALUE) { - return BYTE_TYPE; + return definition.byteType; } } - return INT_TYPE; + return definition.intType; } - } else if (sort0 == Sort.INT) { - if (sort1 == Sort.BYTE) { + } else if (sort0 == int.class) { + if (sort1 == byte.class) { if (const0 != null) { final int constant = (int)const0; if (constant <= Byte.MAX_VALUE && constant >= Byte.MIN_VALUE) { - return BYTE_TYPE; + return definition.byteType; } } - return INT_TYPE; - } else if (sort1 == Sort.SHORT) { + return definition.intType; + } else if (sort1 == short.class) { if (const0 != null) { final int constant = (int)const0; if (constant <= Short.MAX_VALUE && constant >= Short.MIN_VALUE) { - return BYTE_TYPE; + return definition.byteType; } } - return INT_TYPE; - } else if (sort1 == Sort.CHAR) { + return definition.intType; + } else if (sort1 == char.class) { if (const0 != null) { final int constant = (int)const0; if (constant <= Character.MAX_VALUE && constant >= Character.MIN_VALUE) { - return BYTE_TYPE; + return definition.byteType; } } - return INT_TYPE; - } else if (sort1 == Sort.INT) { - return INT_TYPE; + return definition.intType; + } else if (sort1 == int.class) { + return definition.intType; } } } } // TODO: In the rare case we still haven't reached a correct promotion we need - // to calculate the highest upper bound for the two types and return that. - // However, for now we just return objectType that may require an extra cast. + // TODO: to calculate the highest upper bound for the two types and return that. + // TODO: However, for now we just return objectType that may require an extra cast. 
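The conditional promotion above is the one place where constant values, not just types, influence the result: a constant on one side can keep the promoted type narrow when it fits the other side's range. A standalone sketch of the byte-versus-int branch with a couple of worked values (names illustrative):

-------------------------------------------------
/** Sketch of one promoteConditional branch: byte on one side, an int constant on the other. */
public final class ConditionalPromotionSketch {
    static Class<?> promoteByteAndInt(Integer intConstant) {
        if (intConstant != null && intConstant >= Byte.MIN_VALUE && intConstant <= Byte.MAX_VALUE) {
            return byte.class; // the int constant fits, so the conditional stays byte
        }
        return int.class;      // unknown value or out of range: fall back to int
    }

    public static void main(String[] args) {
        System.out.println(promoteByteAndInt(100));  // byte
        System.out.println(promoteByteAndInt(1000)); // int
        System.out.println(promoteByteAndInt(null)); // int (the operand is not a constant)
    }
}
-------------------------------------------------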
- return OBJECT_TYPE; + return definition.ObjectType; } - - private AnalyzerCaster() {} } diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/Definition.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/Definition.java index f0897e70935f6..e5bfb82c73148 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/Definition.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/Definition.java @@ -19,29 +19,20 @@ package org.elasticsearch.painless; -import org.apache.lucene.util.Constants; import org.apache.lucene.util.SetOnce; -import org.elasticsearch.painless.api.Augmentation; -import java.io.InputStream; -import java.io.InputStreamReader; -import java.io.LineNumberReader; import java.lang.invoke.MethodHandle; import java.lang.invoke.MethodHandles; import java.lang.invoke.MethodType; import java.lang.reflect.Modifier; -import java.nio.charset.StandardCharsets; -import java.time.LocalDate; import java.util.ArrayList; -import java.util.Arrays; import java.util.Collection; import java.util.Collections; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.Objects; -import java.util.PrimitiveIterator; -import java.util.Spliterator; +import java.util.Stack; /** * The entire API for Painless. Also used as a whitelist for checking for legal @@ -49,122 +40,74 @@ */ public final class Definition { - private static final List DEFINITION_FILES = Collections.unmodifiableList( - Arrays.asList("org.elasticsearch.txt", - "java.lang.txt", - "java.math.txt", - "java.text.txt", - "java.time.txt", - "java.time.chrono.txt", - "java.time.format.txt", - "java.time.temporal.txt", - "java.time.zone.txt", - "java.util.txt", - "java.util.function.txt", - "java.util.regex.txt", - "java.util.stream.txt", - "joda.time.txt")); + private static final String[] DEFINITION_FILES = new String[] { + "org.elasticsearch.txt", + "java.lang.txt", + "java.math.txt", + "java.text.txt", + "java.time.txt", + "java.time.chrono.txt", + "java.time.format.txt", + "java.time.temporal.txt", + "java.time.zone.txt", + "java.util.txt", + "java.util.function.txt", + "java.util.regex.txt", + "java.util.stream.txt", + "joda.time.txt" + }; /** * Whitelist that is "built in" to Painless and required by all scripts. 
*/ - public static final Definition BUILTINS = new Definition(); + public static final Definition DEFINITION = new Definition( + Collections.singletonList(WhitelistLoader.loadFromResourceFiles(Definition.class, DEFINITION_FILES))); /** Some native types as constants: */ - public static final Type VOID_TYPE = BUILTINS.getType("void"); - public static final Type BOOLEAN_TYPE = BUILTINS.getType("boolean"); - public static final Type BOOLEAN_OBJ_TYPE = BUILTINS.getType("Boolean"); - public static final Type BYTE_TYPE = BUILTINS.getType("byte"); - public static final Type BYTE_OBJ_TYPE = BUILTINS.getType("Byte"); - public static final Type SHORT_TYPE = BUILTINS.getType("short"); - public static final Type SHORT_OBJ_TYPE = BUILTINS.getType("Short"); - public static final Type INT_TYPE = BUILTINS.getType("int"); - public static final Type INT_OBJ_TYPE = BUILTINS.getType("Integer"); - public static final Type LONG_TYPE = BUILTINS.getType("long"); - public static final Type LONG_OBJ_TYPE = BUILTINS.getType("Long"); - public static final Type FLOAT_TYPE = BUILTINS.getType("float"); - public static final Type FLOAT_OBJ_TYPE = BUILTINS.getType("Float"); - public static final Type DOUBLE_TYPE = BUILTINS.getType("double"); - public static final Type DOUBLE_OBJ_TYPE = BUILTINS.getType("Double"); - public static final Type CHAR_TYPE = BUILTINS.getType("char"); - public static final Type CHAR_OBJ_TYPE = BUILTINS.getType("Character"); - public static final Type OBJECT_TYPE = BUILTINS.getType("Object"); - public static final Type DEF_TYPE = BUILTINS.getType("def"); - public static final Type NUMBER_TYPE = BUILTINS.getType("Number"); - public static final Type STRING_TYPE = BUILTINS.getType("String"); - public static final Type EXCEPTION_TYPE = BUILTINS.getType("Exception"); - public static final Type PATTERN_TYPE = BUILTINS.getType("Pattern"); - public static final Type MATCHER_TYPE = BUILTINS.getType("Matcher"); - public static final Type ITERATOR_TYPE = BUILTINS.getType("Iterator"); - public static final Type ARRAY_LIST_TYPE = BUILTINS.getType("ArrayList"); - public static final Type HASH_MAP_TYPE = BUILTINS.getType("HashMap"); - - public enum Sort { - VOID( void.class , Void.class , null , 0 , true , false , false , false ), - BOOL( boolean.class , Boolean.class , null , 1 , true , true , false , true ), - BYTE( byte.class , Byte.class , null , 1 , true , false , true , true ), - SHORT( short.class , Short.class , null , 1 , true , false , true , true ), - CHAR( char.class , Character.class , null , 1 , true , false , true , true ), - INT( int.class , Integer.class , null , 1 , true , false , true , true ), - LONG( long.class , Long.class , null , 2 , true , false , true , true ), - FLOAT( float.class , Float.class , null , 1 , true , false , true , true ), - DOUBLE( double.class , Double.class , null , 2 , true , false , true , true ), - - VOID_OBJ( Void.class , null , void.class , 1 , true , false , false , false ), - BOOL_OBJ( Boolean.class , null , boolean.class , 1 , false , true , false , false ), - BYTE_OBJ( Byte.class , null , byte.class , 1 , false , false , true , false ), - SHORT_OBJ( Short.class , null , short.class , 1 , false , false , true , false ), - CHAR_OBJ( Character.class , null , char.class , 1 , false , false , true , false ), - INT_OBJ( Integer.class , null , int.class , 1 , false , false , true , false ), - LONG_OBJ( Long.class , null , long.class , 1 , false , false , true , false ), - FLOAT_OBJ( Float.class , null , float.class , 1 , false , false , true , false ), - DOUBLE_OBJ( 
Double.class , null , double.class , 1 , false , false , true , false ), - - NUMBER( Number.class , null , null , 1 , false , false , false , false ), - STRING( String.class , null , null , 1 , false , false , false , true ), - - OBJECT( null , null , null , 1 , false , false , false , false ), - DEF( null , null , null , 1 , false , false , false , false ), - ARRAY( null , null , null , 1 , false , false , false , false ); - - public final Class clazz; - public final Class boxed; - public final Class unboxed; - public final int size; - public final boolean primitive; - public final boolean bool; - public final boolean numeric; - public final boolean constant; - - Sort(final Class clazz, final Class boxed, final Class unboxed, final int size, - final boolean primitive, final boolean bool, final boolean numeric, final boolean constant) { - this.clazz = clazz; - this.boxed = boxed; - this.unboxed = unboxed; - this.size = size; - this.bool = bool; - this.primitive = primitive; - this.numeric = numeric; - this.constant = constant; - } - } + public final Type voidType; + public final Type booleanType; + public final Type BooleanType; + public final Type byteType; + public final Type ByteType; + public final Type shortType; + public final Type ShortType; + public final Type intType; + public final Type IntegerType; + public final Type longType; + public final Type LongType; + public final Type floatType; + public final Type FloatType; + public final Type doubleType; + public final Type DoubleType; + public final Type charType; + public final Type CharacterType; + public final Type ObjectType; + public final Type DefType; + public final Type NumberType; + public final Type StringType; + public final Type ExceptionType; + public final Type PatternType; + public final Type MatcherType; + public final Type IteratorType; + public final Type ArrayListType; + public final Type HashMapType; public static final class Type { public final String name; public final int dimensions; + public final boolean dynamic; public final Struct struct; public final Class clazz; public final org.objectweb.asm.Type type; - public final Sort sort; - private Type(final String name, final int dimensions, final Struct struct, - final Class clazz, final org.objectweb.asm.Type type, final Sort sort) { + private Type(final String name, final int dimensions, final boolean dynamic, + final Struct struct, final Class clazz, final org.objectweb.asm.Type type) { this.name = name; this.dimensions = dimensions; + this.dynamic = dynamic; this.struct = struct; this.clazz = clazz; this.type = type; - this.sort = sort; } @Override @@ -490,26 +433,82 @@ public Struct getStruct() { /** Returns whether or not a non-array type exists. */ public boolean isSimpleType(final String name) { - return BUILTINS.structsMap.containsKey(name); + return structsMap.containsKey(name); } /** Gets the type given by its name */ public Type getType(final String name) { - return BUILTINS.getTypeInternal(name); + return getTypeInternal(name); } /** Creates an array type from the given Struct. 
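With the Sort enum gone, callers get everything from a Definition instance: the former static TYPE constants become the instance fields declared above, and the old Sort.DEF checks become the new Type.dynamic flag. A short usage sketch against the API introduced in this change (it only compiles with this module on the classpath):

-------------------------------------------------
import org.elasticsearch.painless.Definition;

/** Usage sketch of the instance-based Definition introduced above. */
public final class DefinitionUsageSketch {
    public static void main(String[] args) {
        Definition definition = Definition.DEFINITION;            // built once from the whitelist resource files
        Definition.Type intType = definition.intType;              // instance field, replacing the old static INT_TYPE
        Definition.Type defType = definition.getType("def");       // name-based lookup is unchanged
        System.out.println(intType.name + " " + defType.dynamic);  // the dynamic flag replaces checks against Sort.DEF
    }
}
-------------------------------------------------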
*/ public Type getType(final Struct struct, final int dimensions) { - return BUILTINS.getTypeInternal(struct, dimensions); + return getTypeInternal(struct, dimensions); + } + + public Type getBoxedType(Type unboxed) { + if (unboxed.clazz == boolean.class) { + return BooleanType; + } else if (unboxed.clazz == byte.class) { + return ByteType; + } else if (unboxed.clazz == short.class) { + return ShortType; + } else if (unboxed.clazz == char.class) { + return CharacterType; + } else if (unboxed.clazz == int.class) { + return IntegerType; + } else if (unboxed.clazz == long.class) { + return LongType; + } else if (unboxed.clazz == float.class) { + return FloatType; + } else if (unboxed.clazz == double.class) { + return DoubleType; + } + + return unboxed; + } + + public Type getUnboxedType(Type boxed) { + if (boxed.clazz == Boolean.class) { + return booleanType; + } else if (boxed.clazz == Byte.class) { + return byteType; + } else if (boxed.clazz == Short.class) { + return shortType; + } else if (boxed.clazz == Character.class) { + return charType; + } else if (boxed.clazz == Integer.class) { + return intType; + } else if (boxed.clazz == Long.class) { + return longType; + } else if (boxed.clazz == Float.class) { + return floatType; + } else if (boxed.clazz == Double.class) { + return doubleType; + } + + return boxed; + } + + public static boolean isConstantType(Type constant) { + return constant.clazz == boolean.class || + constant.clazz == byte.class || + constant.clazz == short.class || + constant.clazz == char.class || + constant.clazz == int.class || + constant.clazz == long.class || + constant.clazz == float.class || + constant.clazz == double.class || + constant.clazz == String.class; } public RuntimeClass getRuntimeClass(Class clazz) { - return BUILTINS.runtimeMap.get(clazz); + return runtimeMap.get(clazz); } /** Collection of all simple types. Used by {@code PainlessDocGenerator} to generate an API reference. */ - static Collection allSimpleTypes() { - return BUILTINS.simpleTypesMap.values(); + Collection allSimpleTypes() { + return simpleTypesMap.values(); } // INTERNAL IMPLEMENTATION: @@ -518,424 +517,465 @@ static Collection allSimpleTypes() { private final Map structsMap; private final Map simpleTypesMap; - private Definition() { + public AnalyzerCaster caster; + + private Definition(List whitelists) { structsMap = new HashMap<>(); simpleTypesMap = new HashMap<>(); runtimeMap = new HashMap<>(); - // parse the classes and return hierarchy (map of class name -> superclasses/interfaces) - Map> hierarchy = addStructs(); - // add every method for each class - addElements(); - // apply hierarchy: this means e.g. 
copying Object's methods into String (thats how subclasses work) - for (Map.Entry> clazz : hierarchy.entrySet()) { - copyStruct(clazz.getKey(), clazz.getValue()); - } - // if someone declares an interface type, its still an Object - for (Map.Entry clazz : structsMap.entrySet()) { - String name = clazz.getKey(); - Class javaPeer = clazz.getValue().clazz; - if (javaPeer.isInterface()) { - copyStruct(name, Collections.singletonList("Object")); - } else if (name.equals("def") == false && name.equals("Object") == false && javaPeer.isPrimitive() == false) { - // but otherwise, unless its a primitive type, it really should - assert hierarchy.get(name) != null : "class '" + name + "' does not extend Object!"; - assert hierarchy.get(name).contains("Object") : "class '" + name + "' does not extend Object!"; + Map, Struct> javaClassesToPainlessStructs = new HashMap<>(); + String origin = null; + + // add the universal def type + structsMap.put("def", new Struct("def", Object.class, org.objectweb.asm.Type.getType(Object.class))); + + try { + // first iteration collects all the Painless type names that + // are used for validation during the second iteration + for (Whitelist whitelist : whitelists) { + for (Whitelist.Struct whitelistStruct : whitelist.whitelistStructs) { + Struct painlessStruct = structsMap.get(whitelistStruct.painlessTypeName); + + if (painlessStruct != null && painlessStruct.clazz.getName().equals(whitelistStruct.javaClassName) == false) { + throw new IllegalArgumentException("struct [" + painlessStruct.name + "] cannot represent multiple classes " + + "[" + painlessStruct.clazz.getName() + "] and [" + whitelistStruct.javaClassName + "]"); + } + + origin = whitelistStruct.origin; + addStruct(whitelist.javaClassLoader, whitelistStruct); + + painlessStruct = structsMap.get(whitelistStruct.painlessTypeName); + javaClassesToPainlessStructs.put(painlessStruct.clazz, painlessStruct); + } } - } - // mark functional interfaces (or set null, to mark class is not) - for (Struct clazz : structsMap.values()) { - clazz.functionalMethod.set(computeFunctionalInterfaceMethod(clazz)); - } - // precompute runtime classes - for (Struct struct : structsMap.values()) { - addRuntimeClass(struct); - } - // copy all structs to make them unmodifiable for outside users: - for (final Map.Entry entry : structsMap.entrySet()) { - entry.setValue(entry.getValue().freeze()); - } - } + // second iteration adds all the constructors, methods, and fields that will + // be available in Painless along with validating they exist and all their types have + // been white-listed during the first iteration + for (Whitelist whitelist : whitelists) { + for (Whitelist.Struct whitelistStruct : whitelist.whitelistStructs) { + for (Whitelist.Constructor whitelistConstructor : whitelistStruct.whitelistConstructors) { + origin = whitelistConstructor.origin; + addConstructor(whitelistStruct.painlessTypeName, whitelistConstructor); + } - /** adds classes from definition. 
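getBoxedType and getUnboxedType above are plain primitive/wrapper lookups that fall through to the input when there is nothing to box or unbox. A standalone illustration of the same mapping, using an explicit map rather than the if/else chain (names illustrative):

-------------------------------------------------
import java.util.HashMap;
import java.util.Map;

/** Standalone illustration of the primitive/wrapper mapping behind getBoxedType and getUnboxedType. */
public final class BoxingSketch {
    private static final Map<Class<?>, Class<?>> PRIMITIVE_TO_BOXED = new HashMap<>();
    static {
        PRIMITIVE_TO_BOXED.put(boolean.class, Boolean.class);
        PRIMITIVE_TO_BOXED.put(byte.class, Byte.class);
        PRIMITIVE_TO_BOXED.put(short.class, Short.class);
        PRIMITIVE_TO_BOXED.put(char.class, Character.class);
        PRIMITIVE_TO_BOXED.put(int.class, Integer.class);
        PRIMITIVE_TO_BOXED.put(long.class, Long.class);
        PRIMITIVE_TO_BOXED.put(float.class, Float.class);
        PRIMITIVE_TO_BOXED.put(double.class, Double.class);
    }

    /** Mirrors getBoxedType: anything that is not a primitive comes back unchanged. */
    static Class<?> boxed(Class<?> clazz) {
        return PRIMITIVE_TO_BOXED.getOrDefault(clazz, clazz);
    }

    public static void main(String[] args) {
        System.out.println(boxed(int.class));    // class java.lang.Integer
        System.out.println(boxed(String.class)); // class java.lang.String (unchanged)
    }
}
-------------------------------------------------

getUnboxedType is simply the inverse mapping, and isConstantType accepts the eight primitive types plus String, presumably the set of types a constant expression can take.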
returns hierarchy */ - private Map> addStructs() { - final Map> hierarchy = new HashMap<>(); - for (String file : DEFINITION_FILES) { - int currentLine = -1; - try { - try (InputStream stream = Definition.class.getResourceAsStream(file); - LineNumberReader reader = new LineNumberReader(new InputStreamReader(stream, StandardCharsets.UTF_8))) { - String line = null; - while ((line = reader.readLine()) != null) { - currentLine = reader.getLineNumber(); - line = line.trim(); - if (line.length() == 0 || line.charAt(0) == '#') { - continue; - } - if (line.startsWith("class ")) { - String elements[] = line.split("\u0020"); - assert elements[2].equals("->") : "Invalid struct definition [" + String.join(" ", elements) +"]"; - if (elements.length == 7) { - hierarchy.put(elements[1], Arrays.asList(elements[5].split(","))); - } else { - assert elements.length == 5 : "Invalid struct definition [" + String.join(" ", elements) + "]"; - } - String className = elements[1]; - String javaPeer = elements[3]; - final Class javaClazz; - switch (javaPeer) { - case "void": - javaClazz = void.class; - break; - case "boolean": - javaClazz = boolean.class; - break; - case "byte": - javaClazz = byte.class; - break; - case "short": - javaClazz = short.class; - break; - case "char": - javaClazz = char.class; - break; - case "int": - javaClazz = int.class; - break; - case "long": - javaClazz = long.class; - break; - case "float": - javaClazz = float.class; - break; - case "double": - javaClazz = double.class; - break; - default: - javaClazz = Class.forName(javaPeer); - break; - } - addStruct(className, javaClazz); - } + for (Whitelist.Method whitelistMethod : whitelistStruct.whitelistMethods) { + origin = whitelistMethod.origin; + addMethod(whitelist.javaClassLoader, whitelistStruct.painlessTypeName, whitelistMethod); + } + + for (Whitelist.Field whitelistField : whitelistStruct.whitelistFields) { + origin = whitelistField.origin; + addField(whitelistStruct.painlessTypeName, whitelistField); } } - } catch (Exception e) { - throw new RuntimeException("error in " + file + ", line: " + currentLine, e); } + } catch (Exception exception) { + throw new IllegalArgumentException("error loading whitelist(s) " + origin, exception); } - return hierarchy; - } - /** adds class methods/fields/ctors */ - private void addElements() { - for (String file : DEFINITION_FILES) { - int currentLine = -1; - try { - try (InputStream stream = Definition.class.getResourceAsStream(file); - LineNumberReader reader = new LineNumberReader(new InputStreamReader(stream, StandardCharsets.UTF_8))) { - String line = null; - String currentClass = null; - while ((line = reader.readLine()) != null) { - currentLine = reader.getLineNumber(); - line = line.trim(); - if (line.length() == 0 || line.charAt(0) == '#') { - continue; - } else if (line.startsWith("class ")) { - assert currentClass == null; - currentClass = line.split("\u0020")[1]; - } else if (line.equals("}")) { - assert currentClass != null; - currentClass = null; - } else { - assert currentClass != null; - addSignature(currentClass, line); - } + // goes through each Painless struct and determines the inheritance list, + // and then adds all inherited types to the Painless struct's whitelist + for (Struct painlessStruct : structsMap.values()) { + List painlessSuperStructs = new ArrayList<>(); + Class javaSuperClass = painlessStruct.clazz.getSuperclass(); + + Stack> javaInteraceLookups = new Stack<>(); + javaInteraceLookups.push(painlessStruct.clazz); + + // adds super classes to the inheritance list 
+ if (javaSuperClass != null && javaSuperClass.isInterface() == false) { + while (javaSuperClass != null) { + Struct painlessSuperStruct = javaClassesToPainlessStructs.get(javaSuperClass); + + if (painlessSuperStruct != null) { + painlessSuperStructs.add(painlessSuperStruct.name); } + + javaInteraceLookups.push(javaSuperClass); + javaSuperClass = javaSuperClass.getSuperclass(); } - } catch (Exception e) { - throw new RuntimeException("syntax error in " + file + ", line: " + currentLine, e); } - } - } - private void addStruct(final String name, final Class clazz) { - if (!name.matches("^[_a-zA-Z][\\.,_a-zA-Z0-9]*$")) { - throw new IllegalArgumentException("Invalid struct name [" + name + "]."); - } + // adds all super interfaces to the inheritance list + while (javaInteraceLookups.isEmpty() == false) { + Class javaInterfaceLookup = javaInteraceLookups.pop(); - if (structsMap.containsKey(name)) { - throw new IllegalArgumentException("Duplicate struct name [" + name + "]."); - } + for (Class javaSuperInterface : javaInterfaceLookup.getInterfaces()) { + Struct painlessInterfaceStruct = javaClassesToPainlessStructs.get(javaSuperInterface); - final Struct struct = new Struct(name, clazz, org.objectweb.asm.Type.getType(clazz)); + if (painlessInterfaceStruct != null) { + String painlessInterfaceStructName = painlessInterfaceStruct.name; - structsMap.put(name, struct); - simpleTypesMap.put(name, getTypeInternal(name)); - } + if (painlessSuperStructs.contains(painlessInterfaceStructName) == false) { + painlessSuperStructs.add(painlessInterfaceStructName); + } - private void addConstructorInternal(final String struct, final String name, final Type[] args) { - final Struct owner = structsMap.get(struct); + for (Class javaPushInterface : javaInterfaceLookup.getInterfaces()) { + javaInteraceLookups.push(javaPushInterface); + } + } + } + } - if (owner == null) { - throw new IllegalArgumentException( - "Owner struct [" + struct + "] not defined for constructor [" + name + "]."); + // copies methods and fields from super structs to the parent struct + copyStruct(painlessStruct.name, painlessSuperStructs); + + // copies methods and fields from Object into interface types + if (painlessStruct.clazz.isInterface() || ("def").equals(painlessStruct.name)) { + Struct painlessObjectStruct = javaClassesToPainlessStructs.get(Object.class); + + if (painlessObjectStruct != null) { + copyStruct(painlessStruct.name, Collections.singletonList(painlessObjectStruct.name)); + } + } } - if (!name.matches("")) { - throw new IllegalArgumentException( - "Invalid constructor name [" + name + "] with the struct [" + owner.name + "]."); + // mark functional interfaces (or set null, to mark class is not) + for (Struct clazz : structsMap.values()) { + clazz.functionalMethod.set(computeFunctionalInterfaceMethod(clazz)); + } + + // precompute runtime classes + for (Struct struct : structsMap.values()) { + addRuntimeClass(struct); + } + // copy all structs to make them unmodifiable for outside users: + for (final Map.Entry entry : structsMap.entrySet()) { + entry.setValue(entry.getValue().freeze()); } - MethodKey methodKey = new MethodKey(name, args.length); + voidType = getType("void"); + booleanType = getType("boolean"); + BooleanType = getType("Boolean"); + byteType = getType("byte"); + ByteType = getType("Byte"); + shortType = getType("short"); + ShortType = getType("Short"); + intType = getType("int"); + IntegerType = getType("Integer"); + longType = getType("long"); + LongType = getType("Long"); + floatType = getType("float"); + 
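The traversal above first walks the superclass chain and then drains a stack of interface lookups, so a struct inherits members from every whitelisted supertype, not just its direct parent. A standalone sketch of the same walk over plain java.lang.Class objects (ArrayDeque stands in for the Stack used above; names are illustrative):

-------------------------------------------------
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.Set;

/** Standalone sketch of the supertype walk used to build a struct's inheritance list. */
public final class SupertypeWalk {
    static Set<Class<?>> supertypes(Class<?> clazz) {
        Set<Class<?>> found = new LinkedHashSet<>();
        Deque<Class<?>> interfaceLookups = new ArrayDeque<>();
        interfaceLookups.push(clazz);
        // walk the concrete superclass chain first, queueing each class for an interface scan
        for (Class<?> superClass = clazz.getSuperclass(); superClass != null; superClass = superClass.getSuperclass()) {
            found.add(superClass);
            interfaceLookups.push(superClass);
        }
        // then collect every interface reachable from the class or any of its superclasses
        while (interfaceLookups.isEmpty() == false) {
            Class<?> lookup = interfaceLookups.pop();
            for (Class<?> iface : lookup.getInterfaces()) {
                if (found.add(iface)) {
                    interfaceLookups.push(iface);
                }
            }
        }
        return found;
    }

    public static void main(String[] args) {
        System.out.println(supertypes(java.util.ArrayList.class));
    }
}
-------------------------------------------------

For interface structs and def, the Object members are copied in explicitly afterwards, since an interface has no superclass to pick them up from.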
FloatType = getType("Float"); + doubleType = getType("double"); + DoubleType = getType("Double"); + charType = getType("char"); + CharacterType = getType("Character"); + ObjectType = getType("Object"); + DefType = getType("def"); + NumberType = getType("Number"); + StringType = getType("String"); + ExceptionType = getType("Exception"); + PatternType = getType("Pattern"); + MatcherType = getType("Matcher"); + IteratorType = getType("Iterator"); + ArrayListType = getType("ArrayList"); + HashMapType = getType("HashMap"); + + caster = new AnalyzerCaster(this); + } - if (owner.constructors.containsKey(methodKey)) { - throw new IllegalArgumentException( - "Duplicate constructor [" + methodKey + "] found within the struct [" + owner.name + "]."); + private void addStruct(ClassLoader whitelistClassLoader, Whitelist.Struct whitelistStruct) { + if (!whitelistStruct.painlessTypeName.matches("^[_a-zA-Z][._a-zA-Z0-9]*")) { + throw new IllegalArgumentException("invalid struct type name [" + whitelistStruct.painlessTypeName + "]"); } - if (owner.staticMethods.containsKey(methodKey)) { - throw new IllegalArgumentException("Constructors and static methods may not have the same signature" + - " [" + methodKey + "] within the same struct [" + owner.name + "]."); + Class javaClass; + + if ("void".equals(whitelistStruct.javaClassName)) javaClass = void.class; + else if ("boolean".equals(whitelistStruct.javaClassName)) javaClass = boolean.class; + else if ("byte".equals(whitelistStruct.javaClassName)) javaClass = byte.class; + else if ("short".equals(whitelistStruct.javaClassName)) javaClass = short.class; + else if ("char".equals(whitelistStruct.javaClassName)) javaClass = char.class; + else if ("int".equals(whitelistStruct.javaClassName)) javaClass = int.class; + else if ("long".equals(whitelistStruct.javaClassName)) javaClass = long.class; + else if ("float".equals(whitelistStruct.javaClassName)) javaClass = float.class; + else if ("double".equals(whitelistStruct.javaClassName)) javaClass = double.class; + else { + try { + javaClass = Class.forName(whitelistStruct.javaClassName, true, whitelistClassLoader); + } catch (ClassNotFoundException cnfe) { + throw new IllegalArgumentException("invalid java class name [" + whitelistStruct.javaClassName + "]" + + " for struct [" + whitelistStruct.painlessTypeName + "]"); + } } - if (owner.methods.containsKey(methodKey)) { - throw new IllegalArgumentException("Constructors and methods may not have the same signature" + - " [" + methodKey + "] within the same struct [" + owner.name + "]."); + Struct existingStruct = structsMap.get(whitelistStruct.painlessTypeName); + + if (existingStruct == null) { + Struct struct = new Struct(whitelistStruct.painlessTypeName, javaClass, org.objectweb.asm.Type.getType(javaClass)); + + structsMap.put(whitelistStruct.painlessTypeName, struct); + simpleTypesMap.put(whitelistStruct.painlessTypeName, getTypeInternal(whitelistStruct.painlessTypeName)); + } else if (existingStruct.clazz.equals(javaClass) == false) { + throw new IllegalArgumentException("struct [" + whitelistStruct.painlessTypeName + "] is used to " + + "illegally represent multiple java classes [" + whitelistStruct.javaClassName + "] and " + + "[" + existingStruct.clazz.getName() + "]"); } + } - final Class[] classes = new Class[args.length]; + private void addConstructor(String ownerStructName, Whitelist.Constructor whitelistConstructor) { + Struct ownerStruct = structsMap.get(ownerStructName); - for (int count = 0; count < classes.length; ++count) { - classes[count] = 
args[count].clazz; + if (ownerStruct == null) { + throw new IllegalArgumentException("owner struct [" + ownerStructName + "] not defined for constructor with " + + "parameters " + whitelistConstructor.painlessParameterTypeNames); } - final java.lang.reflect.Constructor reflect; + List painlessParametersTypes = new ArrayList<>(whitelistConstructor.painlessParameterTypeNames.size()); + Class[] javaClassParameters = new Class[whitelistConstructor.painlessParameterTypeNames.size()]; - try { - reflect = owner.clazz.getConstructor(classes); - } catch (final NoSuchMethodException exception) { - throw new IllegalArgumentException("Constructor [" + name + "] not found for class" + - " [" + owner.clazz.getName() + "] with arguments " + Arrays.toString(classes) + "."); + for (int parameterCount = 0; parameterCount < whitelistConstructor.painlessParameterTypeNames.size(); ++parameterCount) { + String painlessParameterTypeName = whitelistConstructor.painlessParameterTypeNames.get(parameterCount); + + try { + Type painlessParameterType = getTypeInternal(painlessParameterTypeName); + + painlessParametersTypes.add(painlessParameterType); + javaClassParameters[parameterCount] = painlessParameterType.clazz; + } catch (IllegalArgumentException iae) { + throw new IllegalArgumentException("struct not defined for constructor parameter [" + painlessParameterTypeName + "] " + + "with owner struct [" + ownerStructName + "] and constructor parameters " + + whitelistConstructor.painlessParameterTypeNames, iae); + } } - final org.objectweb.asm.commons.Method asm = org.objectweb.asm.commons.Method.getMethod(reflect); - final Type returnType = getTypeInternal("void"); - final MethodHandle handle; + java.lang.reflect.Constructor javaConstructor; try { - handle = MethodHandles.publicLookup().in(owner.clazz).unreflectConstructor(reflect); - } catch (final IllegalAccessException exception) { - throw new IllegalArgumentException("Constructor " + - " not found for class [" + owner.clazz.getName() + "]" + - " with arguments " + Arrays.toString(classes) + "."); + javaConstructor = ownerStruct.clazz.getConstructor(javaClassParameters); + } catch (NoSuchMethodException exception) { + throw new IllegalArgumentException("constructor not defined for owner struct [" + ownerStructName + "] " + + " with constructor parameters " + whitelistConstructor.painlessParameterTypeNames, exception); } - final Method constructor = new Method(name, owner, null, returnType, Arrays.asList(args), asm, reflect.getModifiers(), handle); + MethodKey painlessMethodKey = new MethodKey("", whitelistConstructor.painlessParameterTypeNames.size()); + Method painlessConstructor = ownerStruct.constructors.get(painlessMethodKey); - owner.constructors.put(methodKey, constructor); - } + if (painlessConstructor == null) { + org.objectweb.asm.commons.Method asmConstructor = org.objectweb.asm.commons.Method.getMethod(javaConstructor); + MethodHandle javaHandle; - /** - * Adds a new signature to the definition. - *
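addConstructor above resolves the whitelisted signature reflectively and keeps both the reflective Constructor and a MethodHandle for it. A standalone sketch of that resolution against an ordinary JDK class (StringBuilder is only an example, not part of the whitelist change):

-------------------------------------------------
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.reflect.Constructor;

/** Standalone sketch of how a whitelisted constructor is resolved and bound, as in addConstructor. */
public final class ConstructorBindingSketch {
    public static void main(String[] args) throws Throwable {
        Class<?> owner = StringBuilder.class;
        Class<?>[] parameterTypes = { String.class };

        // mirrors ownerStruct.clazz.getConstructor(javaClassParameters)
        Constructor<?> constructor = owner.getConstructor(parameterTypes);

        // mirrors MethodHandles.publicLookup().in(ownerStruct.clazz).unreflectConstructor(javaConstructor)
        MethodHandle handle = MethodHandles.publicLookup().in(owner).unreflectConstructor(constructor);

        StringBuilder built = (StringBuilder) handle.invoke("painless");
        System.out.println(built.reverse()); // sselniap
    }
}
-------------------------------------------------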

-     * Signatures have the following forms:
-     * <ul>
-     *   <li>{@code void method(String,int)}
-     *   <li>{@code boolean field}
-     *   <li>{@code Class <init>(String)}
-     * </ul>
    - * no spaces allowed. - */ - private void addSignature(String className, String signature) { - String elements[] = signature.split("\u0020"); - if (elements.length != 2) { - throw new IllegalArgumentException("Malformed signature: " + signature); - } - // method or field type (e.g. return type) - Type rtn = getTypeInternal(elements[0]); - int parenIndex = elements[1].indexOf('('); - if (parenIndex != -1) { - // method or ctor - int parenEnd = elements[1].indexOf(')'); - final Type args[]; - if (parenEnd > parenIndex + 1) { - String arguments[] = elements[1].substring(parenIndex + 1, parenEnd).split(","); - args = new Type[arguments.length]; - for (int i = 0; i < arguments.length; i++) { - args[i] = getTypeInternal(arguments[i]); - } - } else { - args = new Type[0]; + try { + javaHandle = MethodHandles.publicLookup().in(ownerStruct.clazz).unreflectConstructor(javaConstructor); + } catch (IllegalAccessException exception) { + throw new IllegalArgumentException("constructor not defined for owner struct [" + ownerStructName + "] " + + " with constructor parameters " + whitelistConstructor.painlessParameterTypeNames); } - String methodName = elements[1].substring(0, parenIndex); - if (methodName.equals("")) { - if (!elements[0].equals(className)) { - throw new IllegalArgumentException("Constructors must return their own type"); - } - addConstructorInternal(className, "", args); - } else { - int index = methodName.lastIndexOf("."); - if (index >= 0) { - String augmentation = methodName.substring(0, index); - methodName = methodName.substring(index + 1); - addMethodInternal(className, methodName, augmentation, rtn, args); - } else { - addMethodInternal(className, methodName, null, rtn, args); - } - } - } else { - // field - addFieldInternal(className, elements[1], rtn); + painlessConstructor = new Method("", ownerStruct, null, getTypeInternal("void"), painlessParametersTypes, + asmConstructor, javaConstructor.getModifiers(), javaHandle); + ownerStruct.constructors.put(painlessMethodKey, painlessConstructor); + } else if (painlessConstructor.equals(painlessParametersTypes) == false){ + throw new IllegalArgumentException( + "illegal duplicate constructors [" + painlessMethodKey + "] found within the struct [" + ownerStruct.name + "] " + + "with parameters " + painlessParametersTypes + " and " + painlessConstructor.arguments); } } - private void addMethodInternal(String struct, String name, String augmentation, Type rtn, Type[] args) { - final Struct owner = structsMap.get(struct); + private void addMethod(ClassLoader whitelistClassLoader, String ownerStructName, Whitelist.Method whitelistMethod) { + Struct ownerStruct = structsMap.get(ownerStructName); - if (owner == null) { - throw new IllegalArgumentException("Owner struct [" + struct + "] not defined" + - " for method [" + name + "]."); + if (ownerStruct == null) { + throw new IllegalArgumentException("owner struct [" + ownerStructName + "] not defined for method with " + + "name [" + whitelistMethod.javaMethodName + "] and parameters " + whitelistMethod.painlessParameterTypeNames); } - if (!name.matches("^[_a-zA-Z][_a-zA-Z0-9]*$")) { - throw new IllegalArgumentException("Invalid method name" + - " [" + name + "] with the struct [" + owner.name + "]."); + if (!whitelistMethod.javaMethodName.matches("^[_a-zA-Z][_a-zA-Z0-9]*$")) { + throw new IllegalArgumentException("invalid method name" + + " [" + whitelistMethod.javaMethodName + "] for owner struct [" + ownerStructName + "]."); } - MethodKey methodKey = new MethodKey(name, args.length); + 
Class javaAugmentedClass = null; - if (owner.constructors.containsKey(methodKey)) { - throw new IllegalArgumentException("Constructors and methods" + - " may not have the same signature [" + methodKey + "] within the same struct" + - " [" + owner.name + "]."); + if (whitelistMethod.javaAugmentedClassName != null) { + try { + javaAugmentedClass = Class.forName(whitelistMethod.javaAugmentedClassName, true, whitelistClassLoader); + } catch (ClassNotFoundException cnfe) { + throw new IllegalArgumentException("augmented class [" + whitelistMethod.javaAugmentedClassName + "] " + + "not found for method with name [" + whitelistMethod.javaMethodName + "] " + + "and parameters " + whitelistMethod.painlessParameterTypeNames, cnfe); + } } - if (owner.staticMethods.containsKey(methodKey) || owner.methods.containsKey(methodKey)) { - throw new IllegalArgumentException( - "Duplicate method signature [" + methodKey + "] found within the struct [" + owner.name + "]."); + int augmentedOffset = javaAugmentedClass == null ? 0 : 1; + + List painlessParametersTypes = new ArrayList<>(whitelistMethod.painlessParameterTypeNames.size()); + Class[] javaClassParameters = new Class[whitelistMethod.painlessParameterTypeNames.size() + augmentedOffset]; + + if (javaAugmentedClass != null) { + javaClassParameters[0] = ownerStruct.clazz; } - final Class implClass; - final Class[] params; + for (int parameterCount = 0; parameterCount < whitelistMethod.painlessParameterTypeNames.size(); ++parameterCount) { + String painlessParameterTypeName = whitelistMethod.painlessParameterTypeNames.get(parameterCount); - if (augmentation == null) { - implClass = owner.clazz; - params = new Class[args.length]; - for (int count = 0; count < args.length; ++count) { - params[count] = args[count].clazz; - } - } else { try { - implClass = Class.forName(augmentation); - } catch (ClassNotFoundException cnfe) { - throw new IllegalArgumentException("Augmentation class [" + augmentation + "]" + - " not found for struct [" + struct + "] using method name [" + name + "].", cnfe); + Type painlessParameterType = getTypeInternal(painlessParameterTypeName); + + painlessParametersTypes.add(painlessParameterType); + javaClassParameters[parameterCount + augmentedOffset] = painlessParameterType.clazz; + } catch (IllegalArgumentException iae) { + throw new IllegalArgumentException("struct not defined for method parameter [" + painlessParameterTypeName + "] " + + "with owner struct [" + ownerStructName + "] and method with name [" + whitelistMethod.javaMethodName + "] " + + "and parameters " + whitelistMethod.painlessParameterTypeNames, iae); } + } - params = new Class[args.length + 1]; - params[0] = owner.clazz; - for (int count = 0; count < args.length; ++count) { - params[count+1] = args[count].clazz; - } + Class javaImplClass = javaAugmentedClass == null ? 
ownerStruct.clazz : javaAugmentedClass; + java.lang.reflect.Method javaMethod; + + try { + javaMethod = javaImplClass.getMethod(whitelistMethod.javaMethodName, javaClassParameters); + } catch (NoSuchMethodException nsme) { + throw new IllegalArgumentException("method with name [" + whitelistMethod.javaMethodName + "] " + + "and parameters " + whitelistMethod.painlessParameterTypeNames + " not found for class [" + + javaImplClass.getName() + "]", nsme); } - final java.lang.reflect.Method reflect; + Type painlessReturnType; try { - reflect = implClass.getMethod(name, params); - } catch (NoSuchMethodException exception) { - throw new IllegalArgumentException("Method [" + name + - "] not found for class [" + implClass.getName() + "]" + - " with arguments " + Arrays.toString(params) + "."); + painlessReturnType = getTypeInternal(whitelistMethod.painlessReturnTypeName); + } catch (IllegalArgumentException iae) { + throw new IllegalArgumentException("struct not defined for return type [" + whitelistMethod.painlessReturnTypeName + "] " + + "with owner struct [" + ownerStructName + "] and method with name [" + whitelistMethod.javaMethodName + "] " + + "and parameters " + whitelistMethod.painlessParameterTypeNames, iae); } - if (!reflect.getReturnType().equals(rtn.clazz)) { - throw new IllegalArgumentException("Specified return type class [" + rtn.clazz + "]" + - " does not match the found return type class [" + reflect.getReturnType() + "] for the" + - " method [" + name + "]" + - " within the struct [" + owner.name + "]."); + if (javaMethod.getReturnType().equals(painlessReturnType.clazz) == false) { + throw new IllegalArgumentException("specified return type class [" + painlessReturnType.clazz + "] " + + "does not match the return type class [" + javaMethod.getReturnType() + "] for the " + + "method with name [" + whitelistMethod.javaMethodName + "] " + + "and parameters " + whitelistMethod.painlessParameterTypeNames); } - final org.objectweb.asm.commons.Method asm = org.objectweb.asm.commons.Method.getMethod(reflect); + MethodKey painlessMethodKey = new MethodKey(whitelistMethod.javaMethodName, whitelistMethod.painlessParameterTypeNames.size()); - MethodHandle handle; + if (javaAugmentedClass == null && Modifier.isStatic(javaMethod.getModifiers())) { + Method painlessMethod = ownerStruct.staticMethods.get(painlessMethodKey); - try { - handle = MethodHandles.publicLookup().in(implClass).unreflect(reflect); - } catch (final IllegalAccessException exception) { - throw new IllegalArgumentException("Method [" + name + "]" + - " not found for class [" + implClass.getName() + "]" + - " with arguments " + Arrays.toString(params) + "."); - } + if (painlessMethod == null) { + org.objectweb.asm.commons.Method asmMethod = org.objectweb.asm.commons.Method.getMethod(javaMethod); + MethodHandle javaMethodHandle; - final int modifiers = reflect.getModifiers(); - final Method method = - new Method(name, owner, augmentation == null ? 
null : implClass, rtn, Arrays.asList(args), asm, modifiers, handle); + try { + javaMethodHandle = MethodHandles.publicLookup().in(javaImplClass).unreflect(javaMethod); + } catch (IllegalAccessException exception) { + throw new IllegalArgumentException("method handle not found for method with name " + + "[" + whitelistMethod.javaMethodName + "] and parameters " + whitelistMethod.painlessParameterTypeNames); + } - if (augmentation == null && java.lang.reflect.Modifier.isStatic(modifiers)) { - owner.staticMethods.put(methodKey, method); + painlessMethod = new Method(whitelistMethod.javaMethodName, ownerStruct, null, painlessReturnType, + painlessParametersTypes, asmMethod, javaMethod.getModifiers(), javaMethodHandle); + ownerStruct.staticMethods.put(painlessMethodKey, painlessMethod); + } else if ((painlessMethod.name.equals(whitelistMethod.javaMethodName) && painlessMethod.rtn.equals(painlessReturnType) && + painlessMethod.arguments.equals(painlessParametersTypes)) == false) { + throw new IllegalArgumentException("illegal duplicate static methods [" + painlessMethodKey + "] " + + "found within the struct [" + ownerStruct.name + "] with name [" + whitelistMethod.javaMethodName + "], " + + "return types [" + painlessReturnType + "] and [" + painlessMethod.rtn.name + "], " + + "and parameters " + painlessParametersTypes + " and " + painlessMethod.arguments); + } } else { - owner.methods.put(methodKey, method); + Method painlessMethod = ownerStruct.methods.get(painlessMethodKey); + + if (painlessMethod == null) { + org.objectweb.asm.commons.Method asmMethod = org.objectweb.asm.commons.Method.getMethod(javaMethod); + MethodHandle javaMethodHandle; + + try { + javaMethodHandle = MethodHandles.publicLookup().in(javaImplClass).unreflect(javaMethod); + } catch (IllegalAccessException exception) { + throw new IllegalArgumentException("method handle not found for method with name " + + "[" + whitelistMethod.javaMethodName + "] and parameters " + whitelistMethod.painlessParameterTypeNames); + } + + painlessMethod = new Method(whitelistMethod.javaMethodName, ownerStruct, javaAugmentedClass, painlessReturnType, + painlessParametersTypes, asmMethod, javaMethod.getModifiers(), javaMethodHandle); + ownerStruct.methods.put(painlessMethodKey, painlessMethod); + } else if ((painlessMethod.name.equals(whitelistMethod.javaMethodName) && painlessMethod.rtn.equals(painlessReturnType) && + painlessMethod.arguments.equals(painlessParametersTypes)) == false) { + throw new IllegalArgumentException("illegal duplicate member methods [" + painlessMethodKey + "] " + + "found within the struct [" + ownerStruct.name + "] with name [" + whitelistMethod.javaMethodName + "], " + + "return types [" + painlessReturnType + "] and [" + painlessMethod.rtn.name + "], " + + "and parameters " + painlessParametersTypes + " and " + painlessMethod.arguments); + } } } - private void addFieldInternal(String struct, String name, Type type) { - final Struct owner = structsMap.get(struct); + private void addField(String ownerStructName, Whitelist.Field whitelistField) { + Struct ownerStruct = structsMap.get(ownerStructName); - if (owner == null) { - throw new IllegalArgumentException("Owner struct [" + struct + "] not defined for " + - " field [" + name + "]."); + if (ownerStruct == null) { + throw new IllegalArgumentException("owner struct [" + ownerStructName + "] not defined for method with " + + "name [" + whitelistField.javaFieldName + "] and type " + whitelistField.painlessFieldTypeName); } - if 
(!name.matches("^[_a-zA-Z][_a-zA-Z0-9]*$")) { - throw new IllegalArgumentException("Invalid field " + - " name [" + name + "] with the struct [" + owner.name + "]."); + if (!whitelistField.javaFieldName.matches("^[_a-zA-Z][_a-zA-Z0-9]*$")) { + throw new IllegalArgumentException("invalid field name " + + "[" + whitelistField.painlessFieldTypeName + "] for owner struct [" + ownerStructName + "]."); } - if (owner.staticMembers.containsKey(name) || owner.members.containsKey(name)) { - throw new IllegalArgumentException("Duplicate field name [" + name + "]" + - " found within the struct [" + owner.name + "]."); + java.lang.reflect.Field javaField; + + try { + javaField = ownerStruct.clazz.getField(whitelistField.javaFieldName); + } catch (NoSuchFieldException exception) { + throw new IllegalArgumentException("field [" + whitelistField.javaFieldName + "] " + + "not found for class [" + ownerStruct.clazz.getName() + "]."); } - java.lang.reflect.Field reflect; + Type painlessFieldType; try { - reflect = owner.clazz.getField(name); - } catch (final NoSuchFieldException exception) { - throw new IllegalArgumentException("Field [" + name + "]" + - " not found for class [" + owner.clazz.getName() + "]."); + painlessFieldType = getTypeInternal(whitelistField.painlessFieldTypeName); + } catch (IllegalArgumentException iae) { + throw new IllegalArgumentException("struct not defined for return type [" + whitelistField.painlessFieldTypeName + "] " + + "with owner struct [" + ownerStructName + "] and field with name [" + whitelistField.javaFieldName + "]", iae); } - final int modifiers = reflect.getModifiers(); - boolean isStatic = java.lang.reflect.Modifier.isStatic(modifiers); + if (Modifier.isStatic(javaField.getModifiers())) { + if (Modifier.isFinal(javaField.getModifiers()) == false) { + throw new IllegalArgumentException("static [" + whitelistField.javaFieldName + "] " + + "with owner struct [" + ownerStruct.name + "] is not final"); + } - MethodHandle getter = null; - MethodHandle setter = null; + Field painlessField = ownerStruct.staticMembers.get(whitelistField.javaFieldName); - try { - if (!isStatic) { - getter = MethodHandles.publicLookup().unreflectGetter(reflect); - setter = MethodHandles.publicLookup().unreflectSetter(reflect); + if (painlessField == null) { + painlessField = new Field(whitelistField.javaFieldName, javaField.getName(), + ownerStruct, painlessFieldType, javaField.getModifiers(), null, null); + ownerStruct.staticMembers.put(whitelistField.javaFieldName, painlessField); + } else if (painlessField.type.equals(painlessFieldType) == false) { + throw new IllegalArgumentException("illegal duplicate static fields [" + whitelistField.javaFieldName + "] " + + "found within the struct [" + ownerStruct.name + "] with type [" + whitelistField.painlessFieldTypeName + "]"); } - } catch (final IllegalAccessException exception) { - throw new IllegalArgumentException("Getter/Setter [" + name + "]" + - " not found for class [" + owner.clazz.getName() + "]."); - } - - final Field field = new Field(name, reflect.getName(), owner, type, modifiers, getter, setter); + } else { + MethodHandle javaMethodHandleGetter = null; + MethodHandle javaMethodHandleSetter = null; - if (isStatic) { - // require that all static fields are static final - if (!java.lang.reflect.Modifier.isFinal(modifiers)) { - throw new IllegalArgumentException("Static [" + name + "]" + - " within the struct [" + owner.name + "] is not final."); + try { + if (Modifier.isStatic(javaField.getModifiers()) == false) { + 
javaMethodHandleGetter = MethodHandles.publicLookup().unreflectGetter(javaField); + javaMethodHandleSetter = MethodHandles.publicLookup().unreflectSetter(javaField); + } + } catch (IllegalAccessException exception) { + throw new IllegalArgumentException("getter/setter [" + whitelistField.javaFieldName + "]" + + " not found for class [" + ownerStruct.clazz.getName() + "]."); } - owner.staticMembers.put(name, field); - } else { - owner.members.put(name, field); + Field painlessField = ownerStruct.staticMembers.get(whitelistField.javaFieldName); + + if (painlessField == null) { + painlessField = new Field(whitelistField.javaFieldName, javaField.getName(), + ownerStruct, painlessFieldType, javaField.getModifiers(), javaMethodHandleGetter, javaMethodHandleSetter); + ownerStruct.staticMembers.put(whitelistField.javaFieldName, painlessField); + } else if (painlessField.type.equals(painlessFieldType) == false) { + throw new IllegalArgumentException("illegal duplicate member fields [" + whitelistField.javaFieldName + "] " + + "found within the struct [" + ownerStruct.name + "] with type [" + whitelistField.painlessFieldTypeName + "]"); + } } } @@ -963,8 +1003,12 @@ private void copyStruct(String struct, List children) { MethodKey methodKey = kvPair.getKey(); Method method = kvPair.getValue(); if (owner.methods.get(methodKey) == null) { + // TODO: some of these are no longer valid or outright don't work + // TODO: since classes may not come from the Painless classloader + // TODO: and it was dependent on the order of the extends which + // TODO: which no longer exists since this is generated automatically // sanity check, look for missing covariant/generic override - if (owner.clazz.isInterface() && child.clazz == Object.class) { + /*if (owner.clazz.isInterface() && child.clazz == Object.class) { // ok } else if (child.clazz == Spliterator.OfPrimitive.class || child.clazz == PrimitiveIterator.class) { // ok, we rely on generics erasure for these (its guaranteed in the javadocs though!!!!) 
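// --- Illustrative sketch, not part of this patch ---
// The rewritten addConstructor/addMethod/addField above resolve each white-listed member
// reflectively and then convert it to a MethodHandle via MethodHandles.publicLookup().
// The snippet below shows that lookup pattern in isolation; String#substring is only an
// example member, not a Painless whitelist entry.
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;

public class MethodHandleLookupSketch {
    public static void main(String[] args) throws Throwable {
        // Resolve the Java method by exact name and parameter types, as Definition does
        // with the names parsed from the whitelist.
        java.lang.reflect.Method substring = String.class.getMethod("substring", int.class, int.class);

        // Unreflect it into a MethodHandle through the public lookup; this handle is what
        // gets stored on the painless Method and later invoked by compiled scripts.
        MethodHandle handle = MethodHandles.publicLookup().in(String.class).unreflect(substring);

        // Invoking the handle behaves like a direct virtual call: prints "ain".
        System.out.println((String) handle.invokeExact("painless", 1, 4));
    }
}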
@@ -1004,7 +1048,7 @@ private void copyStruct(String struct, List children) { } catch (ReflectiveOperationException e) { throw new AssertionError(e); } - } + }*/ owner.methods.put(methodKey, method); } } @@ -1099,7 +1143,7 @@ private Method computeFunctionalInterfaceMethod(Struct clazz) { if (methods.size() != 1) { if (hasAnnotation) { throw new IllegalArgumentException("Class: " + clazz.name + - " is marked with FunctionalInterface but doesn't fit the bill: " + methods); + " is marked with FunctionalInterface but doesn't fit the bill: " + methods); } return null; } @@ -1108,7 +1152,7 @@ private Method computeFunctionalInterfaceMethod(Struct clazz) { Method painless = clazz.methods.get(new Definition.MethodKey(oneMethod.getName(), oneMethod.getParameterCount())); if (painless == null || painless.method.equals(org.objectweb.asm.commons.Method.getMethod(oneMethod)) == false) { throw new IllegalArgumentException("Class: " + clazz.name + " is functional but the functional " + - "method is not whitelisted!"); + "method is not whitelisted!"); } return painless; } @@ -1136,7 +1180,6 @@ private Type getTypeInternal(Struct struct, int dimensions) { String name = struct.name; org.objectweb.asm.Type type = struct.type; Class clazz = struct.clazz; - Sort sort; if (dimensions > 0) { StringBuilder builder = new StringBuilder(name); @@ -1158,27 +1201,9 @@ private Type getTypeInternal(Struct struct, int dimensions) { throw new IllegalArgumentException("The class [" + type.getInternalName() + "]" + " could not be found to create type [" + name + "]."); } - - sort = Sort.ARRAY; - } else if ("def".equals(struct.name)) { - sort = Sort.DEF; - } else { - sort = Sort.OBJECT; - - for (Sort value : Sort.values()) { - if (value.clazz == null) { - continue; - } - - if (value.clazz.equals(struct.clazz)) { - sort = value; - - break; - } - } } - return new Type(name, dimensions, struct, clazz, type, sort); + return new Type(name, dimensions, "def".equals(name), struct, clazz, type); } private int getDimensions(String name) { diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/Locals.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/Locals.java index b83a16df3aebb..9f769d484135b 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/Locals.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/Locals.java @@ -72,7 +72,7 @@ public static Locals newLambdaScope(Locals programScope, Type returnType, List

    0) { - locals.defineVariable(null, Definition.INT_TYPE, LOOP, true); + locals.defineVariable(null, locals.getDefinition().intType, LOOP, true); } return locals; } @@ -85,7 +85,7 @@ public static Locals newFunctionScope(Locals programScope, Type returnType, List } // Loop counter to catch infinite loops. Internal use only. if (maxLoopCounter > 0) { - locals.defineVariable(null, Definition.INT_TYPE, LOOP, true); + locals.defineVariable(null, locals.getDefinition().intType, LOOP, true); } return locals; } @@ -104,7 +104,7 @@ public static Locals newMainMethodScope(ScriptClassInfo scriptClassInfo, Locals // Loop counter to catch infinite loops. Internal use only. if (maxLoopCounter > 0) { - locals.defineVariable(null, Definition.INT_TYPE, LOOP, true); + locals.defineVariable(null, locals.getDefinition().intType, LOOP, true); } return locals; } diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/MethodWriter.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/MethodWriter.java index ac902ee134e13..b0c15abbfb0d5 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/MethodWriter.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/MethodWriter.java @@ -20,7 +20,6 @@ package org.elasticsearch.painless; import org.elasticsearch.painless.Definition.Cast; -import org.elasticsearch.painless.Definition.Sort; import org.elasticsearch.painless.Definition.Type; import org.objectweb.asm.ClassVisitor; import org.objectweb.asm.Label; @@ -131,35 +130,39 @@ public void writeLoopCounter(int slot, int count, Location location) { public void writeCast(final Cast cast) { if (cast != null) { - if (cast.from.sort == Sort.CHAR && cast.to.sort == Sort.STRING) { + if (cast.from.clazz == char.class && cast.to.clazz == String.class) { invokeStatic(UTILITY_TYPE, CHAR_TO_STRING); - } else if (cast.from.sort == Sort.STRING && cast.to.sort == Sort.CHAR) { + } else if (cast.from.clazz == String.class && cast.to.clazz == char.class) { invokeStatic(UTILITY_TYPE, STRING_TO_CHAR); } else if (cast.unboxFrom != null) { unbox(cast.unboxFrom.type); writeCast(cast.from, cast.to); } else if (cast.unboxTo != null) { - if (cast.from.sort == Sort.DEF) { + if (cast.from.dynamic) { if (cast.explicit) { - if (cast.to.sort == Sort.BOOL_OBJ) invokeStatic(DEF_UTIL_TYPE, DEF_TO_BOOLEAN); - else if (cast.to.sort == Sort.BYTE_OBJ) invokeStatic(DEF_UTIL_TYPE, DEF_TO_BYTE_EXPLICIT); - else if (cast.to.sort == Sort.SHORT_OBJ) invokeStatic(DEF_UTIL_TYPE, DEF_TO_SHORT_EXPLICIT); - else if (cast.to.sort == Sort.CHAR_OBJ) invokeStatic(DEF_UTIL_TYPE, DEF_TO_CHAR_EXPLICIT); - else if (cast.to.sort == Sort.INT_OBJ) invokeStatic(DEF_UTIL_TYPE, DEF_TO_INT_EXPLICIT); - else if (cast.to.sort == Sort.LONG_OBJ) invokeStatic(DEF_UTIL_TYPE, DEF_TO_LONG_EXPLICIT); - else if (cast.to.sort == Sort.FLOAT_OBJ) invokeStatic(DEF_UTIL_TYPE, DEF_TO_FLOAT_EXPLICIT); - else if (cast.to.sort == Sort.DOUBLE_OBJ) invokeStatic(DEF_UTIL_TYPE, DEF_TO_DOUBLE_EXPLICIT); - else throw new IllegalStateException("Illegal tree structure."); + if (cast.to.clazz == Boolean.class) invokeStatic(DEF_UTIL_TYPE, DEF_TO_BOOLEAN); + else if (cast.to.clazz == Byte.class) invokeStatic(DEF_UTIL_TYPE, DEF_TO_BYTE_EXPLICIT); + else if (cast.to.clazz == Short.class) invokeStatic(DEF_UTIL_TYPE, DEF_TO_SHORT_EXPLICIT); + else if (cast.to.clazz == Character.class) invokeStatic(DEF_UTIL_TYPE, DEF_TO_CHAR_EXPLICIT); + else if (cast.to.clazz == Integer.class) invokeStatic(DEF_UTIL_TYPE, DEF_TO_INT_EXPLICIT); + else if 
(cast.to.clazz == Long.class) invokeStatic(DEF_UTIL_TYPE, DEF_TO_LONG_EXPLICIT); + else if (cast.to.clazz == Float.class) invokeStatic(DEF_UTIL_TYPE, DEF_TO_FLOAT_EXPLICIT); + else if (cast.to.clazz == Double.class) invokeStatic(DEF_UTIL_TYPE, DEF_TO_DOUBLE_EXPLICIT); + else { + throw new IllegalStateException("Illegal tree structure."); + } } else { - if (cast.to.sort == Sort.BOOL_OBJ) invokeStatic(DEF_UTIL_TYPE, DEF_TO_BOOLEAN); - else if (cast.to.sort == Sort.BYTE_OBJ) invokeStatic(DEF_UTIL_TYPE, DEF_TO_BYTE_IMPLICIT); - else if (cast.to.sort == Sort.SHORT_OBJ) invokeStatic(DEF_UTIL_TYPE, DEF_TO_SHORT_IMPLICIT); - else if (cast.to.sort == Sort.CHAR_OBJ) invokeStatic(DEF_UTIL_TYPE, DEF_TO_CHAR_IMPLICIT); - else if (cast.to.sort == Sort.INT_OBJ) invokeStatic(DEF_UTIL_TYPE, DEF_TO_INT_IMPLICIT); - else if (cast.to.sort == Sort.LONG_OBJ) invokeStatic(DEF_UTIL_TYPE, DEF_TO_LONG_IMPLICIT); - else if (cast.to.sort == Sort.FLOAT_OBJ) invokeStatic(DEF_UTIL_TYPE, DEF_TO_FLOAT_IMPLICIT); - else if (cast.to.sort == Sort.DOUBLE_OBJ) invokeStatic(DEF_UTIL_TYPE, DEF_TO_DOUBLE_IMPLICIT); - else throw new IllegalStateException("Illegal tree structure."); + if (cast.to.clazz == Boolean.class) invokeStatic(DEF_UTIL_TYPE, DEF_TO_BOOLEAN); + else if (cast.to.clazz == Byte.class) invokeStatic(DEF_UTIL_TYPE, DEF_TO_BYTE_IMPLICIT); + else if (cast.to.clazz == Short.class) invokeStatic(DEF_UTIL_TYPE, DEF_TO_SHORT_IMPLICIT); + else if (cast.to.clazz == Character.class) invokeStatic(DEF_UTIL_TYPE, DEF_TO_CHAR_IMPLICIT); + else if (cast.to.clazz == Integer.class) invokeStatic(DEF_UTIL_TYPE, DEF_TO_INT_IMPLICIT); + else if (cast.to.clazz == Long.class) invokeStatic(DEF_UTIL_TYPE, DEF_TO_LONG_IMPLICIT); + else if (cast.to.clazz == Float.class) invokeStatic(DEF_UTIL_TYPE, DEF_TO_FLOAT_IMPLICIT); + else if (cast.to.clazz == Double.class) invokeStatic(DEF_UTIL_TYPE, DEF_TO_DOUBLE_IMPLICIT); + else { + throw new IllegalStateException("Illegal tree structure."); + } } } else { writeCast(cast.from, cast.to); @@ -182,7 +185,7 @@ private void writeCast(final Type from, final Type to) { return; } - if (from.sort.numeric && from.sort.primitive && to.sort.numeric && to.sort.primitive) { + if (from.clazz != boolean.class && from.clazz.isPrimitive() && to.clazz != boolean.class && to.clazz.isPrimitive()) { cast(from.type, to.type); } else { if (!to.clazz.isAssignableFrom(from.clazz)) { @@ -238,18 +241,16 @@ public void writeAppendStrings(final Type type) { } } else { // Java 8: push a StringBuilder append - switch (type.sort) { - case BOOL: invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_BOOLEAN); break; - case CHAR: invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_CHAR); break; - case BYTE: - case SHORT: - case INT: invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_INT); break; - case LONG: invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_LONG); break; - case FLOAT: invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_FLOAT); break; - case DOUBLE: invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_DOUBLE); break; - case STRING: invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_STRING); break; - default: invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_OBJECT); - } + if (type.clazz == boolean.class) invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_BOOLEAN); + else if (type.clazz == char.class) invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_CHAR); + else if (type.clazz == byte.class || + type.clazz == short.class || + type.clazz == int.class) invokeVirtual(STRINGBUILDER_TYPE, 
STRINGBUILDER_APPEND_INT); + else if (type.clazz == long.class) invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_LONG); + else if (type.clazz == float.class) invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_FLOAT); + else if (type.clazz == double.class) invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_DOUBLE); + else if (type.clazz == String.class) invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_STRING); + else invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_OBJECT); } } @@ -318,9 +319,7 @@ public void writeDynamicBinaryInstruction(Location location, Type returnType, Ty /** Writes a static binary instruction */ public void writeBinaryInstruction(Location location, Type type, Operation operation) { - final Sort sort = type.sort; - - if ((sort == Sort.FLOAT || sort == Sort.DOUBLE) && + if ((type.clazz == float.class || type.clazz == double.class) && (operation == Operation.LSH || operation == Operation.USH || operation == Operation.RSH || operation == Operation.BWAND || operation == Operation.XOR || operation == Operation.BWOR)) { diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/PainlessPlugin.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/PainlessPlugin.java index faf8521616fd5..efdd36172d47e 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/PainlessPlugin.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/PainlessPlugin.java @@ -38,7 +38,7 @@ public final class PainlessPlugin extends Plugin implements ScriptPlugin { // force to parse our definition at startup (not on the user's first script) static { - Definition.VOID_TYPE.hashCode(); + Definition.DEFINITION.hashCode(); } @Override diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/PainlessScriptEngine.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/PainlessScriptEngine.java index 39f5c48b65e38..5299adb1dc8dd 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/PainlessScriptEngine.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/PainlessScriptEngine.java @@ -101,9 +101,9 @@ public PainlessScriptEngine(Settings settings, Collection> cont for (ScriptContext context : contexts) { if (context.instanceClazz.equals(SearchScript.class) || context.instanceClazz.equals(ExecutableScript.class)) { - contextsToCompilers.put(context, new Compiler(GenericElasticsearchScript.class, Definition.BUILTINS)); + contextsToCompilers.put(context, new Compiler(GenericElasticsearchScript.class, Definition.DEFINITION)); } else { - contextsToCompilers.put(context, new Compiler(context.instanceClazz, Definition.BUILTINS)); + contextsToCompilers.put(context, new Compiler(context.instanceClazz, Definition.DEFINITION)); } } diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/ScriptClassInfo.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/ScriptClassInfo.java index abf834c8acbbc..e52aaf2596965 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/ScriptClassInfo.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/ScriptClassInfo.java @@ -191,7 +191,7 @@ private static Definition.Type definitionTypeForClass(Definition definition, Cla } Definition.Struct struct; if (componentType.equals(Object.class)) { - struct = Definition.DEF_TYPE.struct; + struct = definition.DefType.struct; } else { Definition.RuntimeClass runtimeClass = 
definition.getRuntimeClass(componentType); if (runtimeClass == null) { diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/Whitelist.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/Whitelist.java new file mode 100644 index 0000000000000..7fd3493d51701 --- /dev/null +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/Whitelist.java @@ -0,0 +1,198 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.painless; + +import java.util.Collections; +import java.util.List; +import java.util.Objects; + +/** + * Whitelist contains data structures designed to be used to generate a white-list of Java classes, + * constructors, methods, and fields that can be used within a Painless script at both compile-time + * and run-time. + * + * A white-list consists of several pieces with {@link Struct}s as the top level. Each {@link Struct} + * will contain zero-to-many {@link Constructor}s, {@link Method}s, and {@link Field}s which are what + * will be available with a Painless script. See each individual white-list object for more detail. + */ +public final class Whitelist { + + /** + * Struct represents the equivalent of a Java class in Painless complete with super classes, + * constructors, methods, and fields. In Painless a class is known as a struct primarily to avoid + * naming conflicts internally. There must be a one-to-one mapping of struct names to Java classes. + * Though, since multiple white-lists may be combined into a single white-list for a specific + * {@link org.elasticsearch.script.ScriptContext}, as long as multiple structs representing the same + * Java class have the same Painless type name and have legal constructor/method overloading they + * can be merged together. + * + * Structs in Painless allow for arity overloading for constructors and methods. Arity overloading + * means that multiple constructors are allowed for a single struct as long as they have a different + * number of parameter types, and multiples methods with the same name are allowed for a single struct + * as long as they have the same return type and a different number of parameter types. + * + * Structs will automatically extend other white-listed structs if the Java class they represent is a + * subclass of other structs including Java interfaces. + */ + public static final class Struct { + + /** Information about where this struct was white-listed from. Can be used for error messages. */ + public final String origin; + + /** The Painless name of this struct which will also be the name of a type in a Painless script. */ + public final String painlessTypeName; + + /** The Java class name this struct represents. 
*/ + public final String javaClassName; + + /** The {@link List} of white-listed ({@link Constructor}s) available to this struct. */ + public final List whitelistConstructors; + + /** The {@link List} of white-listed ({@link Method}s) available to this struct. */ + public final List whitelistMethods; + + /** The {@link List} of white-listed ({@link Field}s) available to this struct. */ + public final List whitelistFields; + + /** Standard constructor. All values must be not {@code null}. */ + public Struct(String origin, String painlessTypeName, String javaClassName, + List whitelistConstructors, List whitelistMethods, List whitelistFields) { + this.origin = Objects.requireNonNull(origin); + this.painlessTypeName = Objects.requireNonNull(painlessTypeName); + this.javaClassName = Objects.requireNonNull(javaClassName); + + this.whitelistConstructors = Collections.unmodifiableList(Objects.requireNonNull(whitelistConstructors)); + this.whitelistMethods = Collections.unmodifiableList(Objects.requireNonNull(whitelistMethods)); + this.whitelistFields = Collections.unmodifiableList(Objects.requireNonNull(whitelistFields)); + } + } + + /** + * Constructor represents the equivalent of a Java constructor available as a white-listed struct + * constructor within Painless. Constructors for Painless structs may be accessed exactly as + * constructors for Java classes are using the 'new' keyword. Painless structs may have multiple + * constructors as long as they comply with arity overloading described for {@link Struct}. + */ + public static final class Constructor { + + /** Information about where this constructor was white-listed from. Can be used for error messages. */ + public final String origin; + + /** + * A {@link List} of {@link String}s that are the Painless type names for the parameters of the + * constructor which can be used to look up the Java constructor through reflection. + */ + public final List painlessParameterTypeNames; + + /** Standard constructor. All values must be not {@code null}. */ + public Constructor(String origin, List painlessParameterTypeNames) { + this.origin = Objects.requireNonNull(origin); + this.painlessParameterTypeNames = Collections.unmodifiableList(Objects.requireNonNull(painlessParameterTypeNames)); + } + } + + /** + * Method represents the equivalent of a Java method available as a white-listed struct method + * within Painless. Methods for Painless structs may be accessed exactly as methods for Java classes + * are using the '.' operator on an existing struct variable/field. Painless structs may have multiple + * methods with the same name as long as they comply with arity overloading described for {@link Method}. + * + * Structs may also have additional methods that are not part of the Java class the struct represents - + * these are known as augmented methods. An augmented method can be added to a struct as a part of any + * Java class as long as the method is static and the first parameter of the method is the Java class + * represented by the struct. Note that the augmented method's parent Java class does not need to be + * white-listed. + */ + public static class Method { + + /** Information about where this method was white-listed from. Can be used for error messages. */ + public final String origin; + + /** + * The Java class name for the owner of an augmented method. If the method is not augmented + * this should be {@code null}. 
+ */ + public final String javaAugmentedClassName; + + /** The Java method name used to look up the Java method through reflection. */ + public final String javaMethodName; + + /** + * The Painless type name for the return type of the method which can be used to look up the Java + * method through reflection. + */ + public final String painlessReturnTypeName; + + /** + * A {@link List} of {@link String}s that are the Painless type names for the parameters of the + * method which can be used to look up the Java method through reflection. + */ + public final List painlessParameterTypeNames; + + /** + * Standard constructor. All values must be not {@code null} with the exception of jAugmentedClass; + * jAugmentedClass will be {@code null} unless the method is augmented as described in the class documentation. + */ + public Method(String origin, String javaAugmentedClassName, String javaMethodName, + String painlessReturnTypeName, List painlessParameterTypeNames) { + this.origin = Objects.requireNonNull(origin); + this.javaAugmentedClassName = javaAugmentedClassName; + this.javaMethodName = javaMethodName; + this.painlessReturnTypeName = Objects.requireNonNull(painlessReturnTypeName); + this.painlessParameterTypeNames = Collections.unmodifiableList(Objects.requireNonNull(painlessParameterTypeNames)); + } + } + + /** + * Field represents the equivalent of a Java field available as a white-listed struct field + * within Painless. Fields for Painless structs may be accessed exactly as fields for Java classes + * are using the '.' operator on an existing struct variable/field. + */ + public static class Field { + + /** Information about where this method was white-listed from. Can be used for error messages. */ + public final String origin; + + /** The Java field name used to look up the Java field through reflection. */ + public final String javaFieldName; + + /** The Painless type name for the field which can be used to look up the Java field through reflection. */ + public final String painlessFieldTypeName; + + /** Standard constructor. All values must be not {@code null}. */ + public Field(String origin, String javaFieldName, String painlessFieldTypeName) { + this.origin = Objects.requireNonNull(origin); + this.javaFieldName = Objects.requireNonNull(javaFieldName); + this.painlessFieldTypeName = Objects.requireNonNull(painlessFieldTypeName); + } + } + + /** The {@link ClassLoader} used to look up the white-listed Java classes, constructors, methods, and fields. */ + public final ClassLoader javaClassLoader; + + /** The {@link List} of all the white-listed Painless structs. */ + public final List whitelistStructs; + + /** Standard constructor. All values must be not {@code null}. */ + public Whitelist(ClassLoader javaClassLoader, List whitelistStructs) { + this.javaClassLoader = Objects.requireNonNull(javaClassLoader); + this.whitelistStructs = Collections.unmodifiableList(Objects.requireNonNull(whitelistStructs)); + } +} diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/WhitelistLoader.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/WhitelistLoader.java new file mode 100644 index 0000000000000..ad33d9c7ba5b7 --- /dev/null +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/WhitelistLoader.java @@ -0,0 +1,290 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. 
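// --- Illustrative sketch, not part of this patch ---
// The Whitelist, Struct, Constructor, Method, and Field classes above are plain data holders,
// so a whitelist can also be assembled in code. The class "my.package.Example" and its members
// below are hypothetical; in practice these entries come from WhitelistLoader (next file).
import java.util.Arrays;
import java.util.Collections;
import org.elasticsearch.painless.Whitelist;

public class WhitelistSketch {
    public static Whitelist exampleWhitelist() {
        // One no-argument constructor, one two-argument method, and one int field,
        // all described by Painless type names as the loader would have parsed them.
        Whitelist.Constructor noArgCtor =
            new Whitelist.Constructor("example.txt:3", Collections.emptyList());
        Whitelist.Method add =
            new Whitelist.Method("example.txt:4", null, "add", "int", Arrays.asList("int", "int"));
        Whitelist.Field value =
            new Whitelist.Field("example.txt:5", "value", "int");

        Whitelist.Struct example = new Whitelist.Struct("example.txt:2", "Example", "my.package.Example",
            Collections.singletonList(noArgCtor), Collections.singletonList(add), Collections.singletonList(value));

        return new Whitelist(WhitelistSketch.class.getClassLoader(), Collections.singletonList(example));
    }
}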
Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.painless; + +import java.io.InputStreamReader; +import java.io.LineNumberReader; +import java.lang.reflect.Constructor; +import java.lang.reflect.Field; +import java.lang.reflect.Method; +import java.nio.charset.StandardCharsets; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.List; + +/** Loads and creates a {@link Whitelist} from one to many text files. */ +public final class WhitelistLoader { + + /** + * Loads and creates a {@link Whitelist} from one to many text files. The file paths are passed in as an array of + * {@link String}s with a single {@link Class} to be be used to load the resources where each {@link String} + * is the path of a single text file. The {@link Class}'s {@link ClassLoader} will be used to lookup the Java + * reflection objects for each individual {@link Class}, {@link Constructor}, {@link Method}, and {@link Field} + * specified as part of the white-list in the text file. + * + * A single pass is made through each file to collect all the information about each struct, constructor, method, + * and field. Most validation will be done at a later point after all white-lists have been gathered and their + * merging takes place. + * + * The following can be parsed from each white-list text file: + *

+ * <ul>
+ *   <li> Blank lines will be ignored by the parser. </li>
+ *   <li> Comments may be created starting with a pound '#' symbol and end with a newline. These will
+ *        be ignored by the parser. </li>
+ *   <li> Primitive types may be specified starting with 'class' and followed by the Painless type
+ *        name (often the same as the Java type name), an arrow symbol, the Java type name,
+ *        an opening bracket, a newline, a closing bracket, and a final newline. </li>
+ *   <li> Complex types may be specified starting with 'class' and followed by the Painless type name,
+ *        an arrow symbol, the Java class name, an opening bracket, a newline, constructor/method/field
+ *        specifications, a closing bracket, and a final newline. Within a complex type the following
+ *        may be parsed:
+ *     <ul>
+ *       <li> A constructor may be specified starting with an opening parenthesis, followed by a
+ *            comma-delimited list of Painless type names corresponding to the type/class names for
+ *            the equivalent Java parameter types (these must be white-listed as well), a closing
+ *            parenthesis, and a newline. </li>
+ *       <li> A method may be specified starting with a Painless type name for the return type,
+ *            followed by the Java name of the method (which will also be the Painless name for the
+ *            method), an opening parenthesis, a comma-delimited list of Painless type names
+ *            corresponding to the type/class names for the equivalent Java parameter types
+ *            (these must be white-listed as well), a closing parenthesis, and a newline. </li>
+ *       <li> An augmented method may be specified starting with a Painless type name for the return
+ *            type, followed by the fully qualified Java name of the class the augmented method is
+ *            part of (this class does not need to be white-listed), the Java name of the method
+ *            (which will also be the Painless name for the method), an opening parenthesis, a
+ *            comma-delimited list of Painless type names corresponding to the type/class names
+ *            for the equivalent Java parameter types (these must be white-listed as well), a closing
+ *            parenthesis, and a newline. </li>
+ *       <li> A field may be specified starting with a Painless type name for the equivalent Java type
+ *            of the field, followed by the Java name of the field (which will also be the Painless
+ *            name for the field), and a newline. </li>
+ *     </ul>
+ *   </li>
+ * </ul>
    + * + * Note there must be a one-to-one correspondence of Painless type names to Java type/class names. + * If the same Painless type is defined across multiple files and the Java class is the same, all + * specified constructors, methods, and fields will be merged into a single Painless type. The + * Painless dynamic type, 'def', used as part of constructor, method, and field definitions will + * be appropriately parsed and handled. + * + * The following example is used to create a single white-list text file: + * + * {@code + * # primitive types + * + * class int -> int { + * } + * + * # complex types + * + * class Example -> my.package.Example { + * # constructors + * () + * (int) + * (def, def) + * (Example, def) + * + * # method + * Example add(int, def) + * int add(Example, Example) + * void example() + * + * # augmented + * Example some.other.Class sub(Example, int, def) + * + * # fields + * int value0 + * int value1 + * def value2 + * } + * } + */ + public static Whitelist loadFromResourceFiles(Class resource, String... filepaths) { + List whitelistStructs = new ArrayList<>(); + + // Execute a single pass through the white-list text files. This will gather all the + // constructors, methods, augmented methods, and fields for each white-listed struct. + for (String filepath : filepaths) { + String line; + int number = -1; + + try (LineNumberReader reader = new LineNumberReader( + new InputStreamReader(resource.getResourceAsStream(filepath), StandardCharsets.UTF_8))) { + + String whitelistStructOrigin = null; + String painlessTypeName = null; + String javaClassName = null; + List whitelistConstructors = null; + List whitelistMethods = null; + List whitelistFields = null; + + while ((line = reader.readLine()) != null) { + number = reader.getLineNumber(); + line = line.trim(); + + // Skip any lines that are either blank or comments. + if (line.length() == 0 || line.charAt(0) == '#') { + continue; + } + + // Handle a new struct by resetting all the variables necessary to construct a new Whitelist.Struct for the white-list. + // Expects the following format: 'class' ID -> ID '{' '\n' + if (line.startsWith("class ")) { + // Ensure the final token of the line is '{'. + if (line.endsWith("{") == false) { + throw new IllegalArgumentException( + "invalid struct definition: failed to parse class opening bracket [" + line + "]"); + } + + // Parse the Painless type name and Java class name. + String[] tokens = line.substring(5, line.length() - 1).replaceAll("\\s+", "").split("->"); + + // Ensure the correct number of tokens. + if (tokens.length != 2) { + throw new IllegalArgumentException("invalid struct definition: failed to parse class name [" + line + "]"); + } + + whitelistStructOrigin = "[" + filepath + "]:[" + number + "]"; + painlessTypeName = tokens[0]; + javaClassName = tokens[1]; + + // Reset all the constructors, methods, and fields to support a new struct. + whitelistConstructors = new ArrayList<>(); + whitelistMethods = new ArrayList<>(); + whitelistFields = new ArrayList<>(); + + // Handle the end of a struct, by creating a new Whitelist.Struct with all the previously gathered + // constructors, methods, augmented methods, and fields, and adding it to the list of white-listed structs. 
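// --- Illustrative sketch, not part of this patch ---
// Loading a whitelist text file with the loader defined in this class. The resource name
// "org.elasticsearch.painless.example.txt" and the file content sketched in the comment are
// hypothetical, but they follow the format documented above, e.g.:
//
//   class Example -> my.package.Example {
//     ()
//     int add(int, int)
//     int value
//   }
//
import org.elasticsearch.painless.Whitelist;
import org.elasticsearch.painless.WhitelistLoader;

public class WhitelistLoaderSketch {
    public static Whitelist load() {
        // The file is resolved with getResourceAsStream relative to the given class, and that
        // class's ClassLoader is kept on the Whitelist for the later reflection lookups.
        return WhitelistLoader.loadFromResourceFiles(
            WhitelistLoaderSketch.class, "org.elasticsearch.painless.example.txt");
    }
}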
+ // Expects the following format: '}' '\n' + } else if (line.equals("}")) { + if (painlessTypeName == null) { + throw new IllegalArgumentException("invalid struct definition: extraneous closing bracket"); + } + + whitelistStructs.add(new Whitelist.Struct(whitelistStructOrigin, painlessTypeName, javaClassName, + whitelistConstructors, whitelistMethods, whitelistFields)); + + // Set all the variables to null to ensure a new struct definition is found before other parsable values. + whitelistStructOrigin = null; + painlessTypeName = null; + javaClassName = null; + whitelistConstructors = null; + whitelistMethods = null; + whitelistFields = null; + + // Handle all other valid cases. + } else { + // Mark the origin of this parsable object. + String origin = "[" + filepath + "]:[" + number + "]"; + + // Ensure we have a defined struct before adding any constructors, methods, augmented methods, or fields. + if (painlessTypeName == null) { + throw new IllegalArgumentException("invalid object definition: expected a class name [" + line + "]"); + } + + // Handle the case for a constructor definition. + // Expects the following format: '(' ( ID ( ',' ID )* )? ')' '\n' + if (line.startsWith("(")) { + // Ensure the final token of the line is ')'. + if (line.endsWith(")") == false) { + throw new IllegalArgumentException( + "invalid constructor definition: expected a closing parenthesis [" + line + "]"); + } + + // Parse the constructor parameters. + String[] tokens = line.substring(1, line.length() - 1).replaceAll("\\s+", "").split(","); + + // Handle the case for a constructor with no parameters. + if ("".equals(tokens[0])) { + tokens = new String[0]; + } + + whitelistConstructors.add(new Whitelist.Constructor(origin, Arrays.asList(tokens))); + + // Handle the case for a method or augmented method definition. + // Expects the following format: ID ID? ID '(' ( ID ( ',' ID )* )? ')' '\n' + } else if (line.contains("(")) { + // Ensure the final token of the line is ')'. + if (line.endsWith(")") == false) { + throw new IllegalArgumentException( + "invalid method definition: expected a closing parenthesis [" + line + "]"); + } + + // Parse the tokens prior to the method parameters. + int parameterIndex = line.indexOf('('); + String[] tokens = line.substring(0, parameterIndex).split("\\s+"); + + String javaMethodName; + String javaAugmentedClassName; + + // Based on the number of tokens, look up the Java method name and if provided the Java augmented class. + if (tokens.length == 2) { + javaMethodName = tokens[1]; + javaAugmentedClassName = null; + } else if (tokens.length == 3) { + javaMethodName = tokens[2]; + javaAugmentedClassName = tokens[1]; + } else { + throw new IllegalArgumentException("invalid method definition: unexpected format [" + line + "]"); + } + + String painlessReturnTypeName = tokens[0]; + + // Parse the method parameters. + tokens = line.substring(parameterIndex + 1, line.length() - 1).replaceAll("\\s+", "").split(","); + + // Handle the case for a method with no parameters. + if ("".equals(tokens[0])) { + tokens = new String[0]; + } + + whitelistMethods.add(new Whitelist.Method(origin, javaAugmentedClassName, javaMethodName, + painlessReturnTypeName, Arrays.asList(tokens))); + + // Handle the case for a field definition. + // Expects the following format: ID ID '\n' + } else { + // Parse the field tokens. + String[] tokens = line.split("\\s+"); + + // Ensure the correct number of tokens. 
+ if (tokens.length != 2) { + throw new IllegalArgumentException("invalid field definition: unexpected format [" + line + "]"); + } + + whitelistFields.add(new Whitelist.Field(origin, tokens[1], tokens[0])); + } + } + } + + // Ensure all structs end with a '}' token before the end of the file. + if (painlessTypeName != null) { + throw new IllegalArgumentException("invalid struct definition: expected closing bracket"); + } + } catch (Exception exception) { + throw new RuntimeException("error in [" + filepath + "] at line [" + number + "]", exception); + } + } + + return new Whitelist(resource.getClassLoader(), whitelistStructs); + } + + private WhitelistLoader() {} +} diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/WriterConstants.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/WriterConstants.java index 60eff7754e8e9..9150e2609b700 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/WriterConstants.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/WriterConstants.java @@ -137,7 +137,7 @@ public final class WriterConstants { public static final Method DEF_TO_LONG_EXPLICIT = getAsmMethod(long.class , "DefTolongExplicit" , Object.class); public static final Method DEF_TO_FLOAT_EXPLICIT = getAsmMethod(float.class , "DefTofloatExplicit" , Object.class); public static final Method DEF_TO_DOUBLE_EXPLICIT = getAsmMethod(double.class , "DefTodoubleExplicit", Object.class); - public static final Type DEF_ARRAY_LENGTH_METHOD_TYPE = Type.getMethodType(Type.INT_TYPE, Definition.DEF_TYPE.type); + public static final Type DEF_ARRAY_LENGTH_METHOD_TYPE = Type.getMethodType(Type.INT_TYPE, Type.getType(Object.class)); /** invokedynamic bootstrap for lambda expression/method references */ public static final MethodType LAMBDA_BOOTSTRAP_TYPE = diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/antlr/PainlessLexer.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/antlr/PainlessLexer.java index fd32c59b4ff03..734089c384a5f 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/antlr/PainlessLexer.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/antlr/PainlessLexer.java @@ -11,7 +11,7 @@ @SuppressWarnings({"all", "warnings", "unchecked", "unused", "cast"}) abstract class PainlessLexer extends Lexer { - static { RuntimeMetaData.checkVersion("4.5.1", RuntimeMetaData.VERSION); } + static { RuntimeMetaData.checkVersion("4.5.3", RuntimeMetaData.VERSION); } protected static final DFA[] _decisionToDFA; protected static final PredictionContextCache _sharedContextCache = diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/antlr/PainlessParser.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/antlr/PainlessParser.java index 619c582d04a2f..528a8a3d851c6 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/antlr/PainlessParser.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/antlr/PainlessParser.java @@ -11,7 +11,7 @@ @SuppressWarnings({"all", "warnings", "unchecked", "unused", "cast"}) class PainlessParser extends Parser { - static { RuntimeMetaData.checkVersion("4.5.1", RuntimeMetaData.VERSION); } + static { RuntimeMetaData.checkVersion("4.5.3", RuntimeMetaData.VERSION); } protected static final DFA[] _decisionToDFA; protected static final PredictionContextCache _sharedContextCache = @@ -579,6 +579,7 @@ public final StatementContext statement() 
throws RecognitionException { try { int _alt; setState(185); + _errHandler.sync(this); switch ( getInterpreter().adaptivePredict(_input,11,_ctx) ) { case 1: _localctx = new IfContext(_localctx); @@ -595,6 +596,7 @@ public final StatementContext statement() throws RecognitionException { setState(103); trailer(); setState(107); + _errHandler.sync(this); switch ( getInterpreter().adaptivePredict(_input,4,_ctx) ) { case 1: { @@ -1127,6 +1129,7 @@ public final InitializerContext initializer() throws RecognitionException { enterRule(_localctx, 14, RULE_initializer); try { setState(204); + _errHandler.sync(this); switch ( getInterpreter().adaptivePredict(_input,14,_ctx) ) { case 1: enterOuterAlt(_localctx, 1); @@ -1653,6 +1656,7 @@ private ExpressionContext expression(int _p) throws RecognitionException { _prevctx = _localctx; { setState(290); + _errHandler.sync(this); switch ( getInterpreter().adaptivePredict(_input,18,_ctx) ) { case 1: { @@ -1979,6 +1983,7 @@ public final UnaryContext unary() throws RecognitionException { int _la; try { setState(308); + _errHandler.sync(this); switch ( getInterpreter().adaptivePredict(_input,20,_ctx) ) { case 1: _localctx = new PreContext(_localctx); @@ -2126,6 +2131,7 @@ public final ChainContext chain() throws RecognitionException { try { int _alt; setState(326); + _errHandler.sync(this); switch ( getInterpreter().adaptivePredict(_input,23,_ctx) ) { case 1: _localctx = new DynamicContext(_localctx); @@ -2342,6 +2348,7 @@ public final PrimaryContext primary() throws RecognitionException { int _la; try { setState(346); + _errHandler.sync(this); switch ( getInterpreter().adaptivePredict(_input,24,_ctx) ) { case 1: _localctx = new PrecedenceContext(_localctx); @@ -2493,6 +2500,7 @@ public final PostfixContext postfix() throws RecognitionException { enterRule(_localctx, 36, RULE_postfix); try { setState(351); + _errHandler.sync(this); switch ( getInterpreter().adaptivePredict(_input,25,_ctx) ) { case 1: enterOuterAlt(_localctx, 1); @@ -2551,6 +2559,7 @@ public final PostdotContext postdot() throws RecognitionException { enterRule(_localctx, 38, RULE_postdot); try { setState(355); + _errHandler.sync(this); switch ( getInterpreter().adaptivePredict(_input,26,_ctx) ) { case 1: enterOuterAlt(_localctx, 1); @@ -2803,6 +2812,7 @@ public final ArrayinitializerContext arrayinitializer() throws RecognitionExcept try { int _alt; setState(412); + _errHandler.sync(this); switch ( getInterpreter().adaptivePredict(_input,34,_ctx) ) { case 1: _localctx = new NewstandardarrayContext(_localctx); @@ -2837,6 +2847,7 @@ public final ArrayinitializerContext arrayinitializer() throws RecognitionExcept _alt = getInterpreter().adaptivePredict(_input,27,_ctx); } while ( _alt!=2 && _alt!=org.antlr.v4.runtime.atn.ATN.INVALID_ALT_NUMBER ); setState(385); + _errHandler.sync(this); switch ( getInterpreter().adaptivePredict(_input,29,_ctx) ) { case 1: { @@ -2974,6 +2985,7 @@ public final ListinitializerContext listinitializer() throws RecognitionExceptio int _la; try { setState(427); + _errHandler.sync(this); switch ( getInterpreter().adaptivePredict(_input,36,_ctx) ) { case 1: enterOuterAlt(_localctx, 1); @@ -3055,6 +3067,7 @@ public final MapinitializerContext mapinitializer() throws RecognitionException int _la; try { setState(443); + _errHandler.sync(this); switch ( getInterpreter().adaptivePredict(_input,38,_ctx) ) { case 1: enterOuterAlt(_localctx, 1); @@ -3252,6 +3265,7 @@ public final ArgumentContext argument() throws RecognitionException { enterRule(_localctx, 56, RULE_argument); try { 
setState(465); + _errHandler.sync(this); switch ( getInterpreter().adaptivePredict(_input,41,_ctx) ) { case 1: enterOuterAlt(_localctx, 1); @@ -3533,6 +3547,7 @@ public final FuncrefContext funcref() throws RecognitionException { enterRule(_localctx, 62, RULE_funcref); try { setState(505); + _errHandler.sync(this); switch ( getInterpreter().adaptivePredict(_input,47,_ctx) ) { case 1: _localctx = new ClassfuncrefContext(_localctx); diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/AExpression.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/AExpression.java index 739e3de6d213b..2ca0b265430f9 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/AExpression.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/AExpression.java @@ -20,6 +20,7 @@ package org.elasticsearch.painless.node; import org.elasticsearch.painless.AnalyzerCaster; +import org.elasticsearch.painless.Definition; import org.elasticsearch.painless.Definition.Cast; import org.elasticsearch.painless.Definition.Type; import org.elasticsearch.painless.Locals; @@ -118,7 +119,7 @@ public abstract class AExpression extends ANode { * @return The new child node for the parent node calling this method. */ AExpression cast(Locals locals) { - Cast cast = AnalyzerCaster.getLegalCast(location, actual, expected, explicit, internal); + Cast cast = locals.getDefinition().caster.getLegalCast(location, actual, expected, explicit, internal); if (cast == null) { if (constant == null || this instanceof EConstant) { @@ -157,7 +158,7 @@ AExpression cast(Locals locals) { return ecast; } else { - if (expected.sort.constant) { + if (Definition.isConstantType(expected)) { // For the case where a cast is required, a constant is set, // and the constant can be immediately cast to the expected type. // An EConstant replaces this node with the constant cast appropriately @@ -166,7 +167,7 @@ AExpression cast(Locals locals) { // from this node because the output data for the EConstant // will already be the same. 
- constant = AnalyzerCaster.constCast(location, constant, cast); + constant = locals.getDefinition().caster.constCast(location, constant, cast); EConstant econstant = new EConstant(location, constant); econstant.analyze(locals); diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EAssignment.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EAssignment.java index de2e9fc0535bd..84c6145ac0c80 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EAssignment.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EAssignment.java @@ -19,11 +19,9 @@ package org.elasticsearch.painless.node; -import org.elasticsearch.painless.AnalyzerCaster; import org.elasticsearch.painless.DefBootstrap; import org.elasticsearch.painless.Definition; import org.elasticsearch.painless.Definition.Cast; -import org.elasticsearch.painless.Definition.Sort; import org.elasticsearch.painless.Definition.Type; import org.elasticsearch.painless.Globals; import org.elasticsearch.painless.Locals; @@ -52,6 +50,7 @@ public final class EAssignment extends AExpression { private Type shiftDistance; // for shifts, the RHS is promoted independently private Cast there = null; private Cast back = null; + private Type DefType = null; public EAssignment(Location location, AExpression lhs, AExpression rhs, boolean pre, boolean post, Operation operation) { super(location); @@ -81,6 +80,8 @@ void analyze(Locals locals) { } else { throw new IllegalStateException("Illegal tree structure."); } + + DefType = locals.getDefinition().DefType; } private void analyzeLHS(Locals locals) { @@ -103,14 +104,12 @@ private void analyzeIncrDecr() { throw createError(new IllegalStateException("Illegal tree structure.")); } - Sort sort = lhs.actual.sort; - if (operation == Operation.INCR) { - if (sort == Sort.DOUBLE) { + if (lhs.actual.clazz == double.class) { rhs = new EConstant(location, 1D); - } else if (sort == Sort.FLOAT) { + } else if (lhs.actual.clazz == float.class) { rhs = new EConstant(location, 1F); - } else if (sort == Sort.LONG) { + } else if (lhs.actual.clazz == long.class) { rhs = new EConstant(location, 1L); } else { rhs = new EConstant(location, 1); @@ -118,11 +117,11 @@ private void analyzeIncrDecr() { operation = Operation.ADD; } else if (operation == Operation.DECR) { - if (sort == Sort.DOUBLE) { + if (lhs.actual.clazz == double.class) { rhs = new EConstant(location, 1D); - } else if (sort == Sort.FLOAT) { + } else if (lhs.actual.clazz == float.class) { rhs = new EConstant(location, 1F); - } else if (sort == Sort.LONG) { + } else if (lhs.actual.clazz == long.class) { rhs = new EConstant(location, 1L); } else { rhs = new EConstant(location, 1); @@ -141,33 +140,33 @@ private void analyzeCompound(Locals locals) { boolean shift = false; if (operation == Operation.MUL) { - promote = AnalyzerCaster.promoteNumeric(lhs.actual, rhs.actual, true); + promote = locals.getDefinition().caster.promoteNumeric(lhs.actual, rhs.actual, true); } else if (operation == Operation.DIV) { - promote = AnalyzerCaster.promoteNumeric(lhs.actual, rhs.actual, true); + promote = locals.getDefinition().caster.promoteNumeric(lhs.actual, rhs.actual, true); } else if (operation == Operation.REM) { - promote = AnalyzerCaster.promoteNumeric(lhs.actual, rhs.actual, true); + promote = locals.getDefinition().caster.promoteNumeric(lhs.actual, rhs.actual, true); } else if (operation == Operation.ADD) { - promote = AnalyzerCaster.promoteAdd(lhs.actual, rhs.actual); + promote = 
locals.getDefinition().caster.promoteAdd(lhs.actual, rhs.actual); } else if (operation == Operation.SUB) { - promote = AnalyzerCaster.promoteNumeric(lhs.actual, rhs.actual, true); + promote = locals.getDefinition().caster.promoteNumeric(lhs.actual, rhs.actual, true); } else if (operation == Operation.LSH) { - promote = AnalyzerCaster.promoteNumeric(lhs.actual, false); - shiftDistance = AnalyzerCaster.promoteNumeric(rhs.actual, false); + promote = locals.getDefinition().caster.promoteNumeric(lhs.actual, false); + shiftDistance = locals.getDefinition().caster.promoteNumeric(rhs.actual, false); shift = true; } else if (operation == Operation.RSH) { - promote = AnalyzerCaster.promoteNumeric(lhs.actual, false); - shiftDistance = AnalyzerCaster.promoteNumeric(rhs.actual, false); + promote = locals.getDefinition().caster.promoteNumeric(lhs.actual, false); + shiftDistance = locals.getDefinition().caster.promoteNumeric(rhs.actual, false); shift = true; } else if (operation == Operation.USH) { - promote = AnalyzerCaster.promoteNumeric(lhs.actual, false); - shiftDistance = AnalyzerCaster.promoteNumeric(rhs.actual, false); + promote = locals.getDefinition().caster.promoteNumeric(lhs.actual, false); + shiftDistance = locals.getDefinition().caster.promoteNumeric(rhs.actual, false); shift = true; } else if (operation == Operation.BWAND) { - promote = AnalyzerCaster.promoteXor(lhs.actual, rhs.actual); + promote = locals.getDefinition().caster.promoteXor(lhs.actual, rhs.actual); } else if (operation == Operation.XOR) { - promote = AnalyzerCaster.promoteXor(lhs.actual, rhs.actual); + promote = locals.getDefinition().caster.promoteXor(lhs.actual, rhs.actual); } else if (operation == Operation.BWOR) { - promote = AnalyzerCaster.promoteXor(lhs.actual, rhs.actual); + promote = locals.getDefinition().caster.promoteXor(lhs.actual, rhs.actual); } else { throw createError(new IllegalStateException("Illegal tree structure.")); } @@ -177,20 +176,20 @@ private void analyzeCompound(Locals locals) { "[" + operation.symbol + "=] to types [" + lhs.actual + "] and [" + rhs.actual + "].")); } - cat = operation == Operation.ADD && promote.sort == Sort.STRING; + cat = operation == Operation.ADD && promote.clazz == String.class; if (cat) { - if (rhs instanceof EBinary && ((EBinary)rhs).operation == Operation.ADD && rhs.actual.sort == Sort.STRING) { + if (rhs instanceof EBinary && ((EBinary)rhs).operation == Operation.ADD && rhs.actual.clazz == String.class) { ((EBinary)rhs).cat = true; } rhs.expected = rhs.actual; } else if (shift) { - if (promote.sort == Sort.DEF) { + if (promote.dynamic) { // shifts are promoted independently, but for the def type, we need object. rhs.expected = promote; - } else if (shiftDistance.sort == Sort.LONG) { - rhs.expected = Definition.INT_TYPE; + } else if (shiftDistance.clazz == long.class) { + rhs.expected = locals.getDefinition().intType; rhs.explicit = true; } else { rhs.expected = shiftDistance; @@ -201,11 +200,11 @@ private void analyzeCompound(Locals locals) { rhs = rhs.cast(locals); - there = AnalyzerCaster.getLegalCast(location, lhs.actual, promote, false, false); - back = AnalyzerCaster.getLegalCast(location, promote, lhs.actual, true, false); + there = locals.getDefinition().caster.getLegalCast(location, lhs.actual, promote, false, false); + back = locals.getDefinition().caster.getLegalCast(location, promote, lhs.actual, true, false); this.statement = true; - this.actual = read ? lhs.actual : Definition.VOID_TYPE; + this.actual = read ? 
lhs.actual : locals.getDefinition().voidType; } private void analyzeSimple(Locals locals) { @@ -225,7 +224,7 @@ private void analyzeSimple(Locals locals) { rhs = rhs.cast(locals); this.statement = true; - this.actual = read ? lhs.actual : Definition.VOID_TYPE; + this.actual = read ? lhs.actual : locals.getDefinition().voidType; } /** @@ -272,7 +271,7 @@ void write(MethodWriter writer, Globals globals) { writer.writeCast(back); // if necessary, cast the String to the lhs actual type if (lhs.read) { - writer.writeDup(lhs.actual.sort.size, lhs.accessElementCount()); // if this lhs is also read + writer.writeDup(lhs.actual.type.getSize(), lhs.accessElementCount()); // if this lhs is also read // from dup the value onto the stack } @@ -286,7 +285,7 @@ void write(MethodWriter writer, Globals globals) { lhs.load(writer, globals); // load the current lhs's value if (lhs.read && post) { - writer.writeDup(lhs.actual.sort.size, lhs.accessElementCount()); // dup the value if the lhs is also + writer.writeDup(lhs.actual.type.getSize(), lhs.accessElementCount()); // dup the value if the lhs is also // read from and is a post increment } @@ -297,9 +296,9 @@ void write(MethodWriter writer, Globals globals) { // XXX: fix these types, but first we need def compound assignment tests. // its tricky here as there are possibly explicit casts, too. // write the operation instruction for compound assignment - if (promote.sort == Sort.DEF) { - writer.writeDynamicBinaryInstruction(location, promote, - Definition.DEF_TYPE, Definition.DEF_TYPE, operation, DefBootstrap.OPERATOR_COMPOUND_ASSIGNMENT); + if (promote.dynamic) { + writer.writeDynamicBinaryInstruction( + location, promote, DefType, DefType, operation, DefBootstrap.OPERATOR_COMPOUND_ASSIGNMENT); } else { writer.writeBinaryInstruction(location, promote, operation); } @@ -307,7 +306,7 @@ void write(MethodWriter writer, Globals globals) { writer.writeCast(back); // if necessary cast the promotion type value back to the lhs's type if (lhs.read && !post) { - writer.writeDup(lhs.actual.sort.size, lhs.accessElementCount()); // dup the value if the lhs is also + writer.writeDup(lhs.actual.type.getSize(), lhs.accessElementCount()); // dup the value if the lhs is also // read from and is not a post increment } @@ -318,7 +317,7 @@ void write(MethodWriter writer, Globals globals) { rhs.write(writer, globals); // write the bytecode for the rhs rhs if (lhs.read) { - writer.writeDup(lhs.actual.sort.size, lhs.accessElementCount()); // dup the value if the lhs is also read from + writer.writeDup(lhs.actual.type.getSize(), lhs.accessElementCount()); // dup the value if the lhs is also read from } lhs.store(writer, globals); // store the lhs's value from the stack in its respective variable/field/array diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EBinary.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EBinary.java index 72748d7069e28..df92d72a3c0c5 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EBinary.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EBinary.java @@ -19,10 +19,8 @@ package org.elasticsearch.painless.node; -import org.elasticsearch.painless.AnalyzerCaster; import org.elasticsearch.painless.DefBootstrap; import org.elasticsearch.painless.Definition; -import org.elasticsearch.painless.Definition.Sort; import org.elasticsearch.painless.Definition.Type; import org.elasticsearch.painless.Globals; import org.elasticsearch.painless.Locals; @@ 
-33,6 +31,8 @@ import java.util.Objects; import java.util.Set; +import java.util.regex.Matcher; +import java.util.regex.Pattern; /** * Represents a binary math expression. @@ -101,7 +101,7 @@ private void analyzeMul(Locals variables) { left.analyze(variables); right.analyze(variables); - promote = AnalyzerCaster.promoteNumeric(left.actual, right.actual, true); + promote = variables.getDefinition().caster.promoteNumeric(left.actual, right.actual, true); if (promote == null) { throw createError(new ClassCastException("Cannot apply multiply [*] to types " + @@ -110,7 +110,7 @@ private void analyzeMul(Locals variables) { actual = promote; - if (promote.sort == Sort.DEF) { + if (promote.dynamic) { left.expected = left.actual; right.expected = right.actual; if (expected != null) { @@ -125,15 +125,15 @@ private void analyzeMul(Locals variables) { right = right.cast(variables); if (left.constant != null && right.constant != null) { - Sort sort = promote.sort; + Class sort = promote.clazz; - if (sort == Sort.INT) { + if (sort == int.class) { constant = (int)left.constant * (int)right.constant; - } else if (sort == Sort.LONG) { + } else if (sort == long.class) { constant = (long)left.constant * (long)right.constant; - } else if (sort == Sort.FLOAT) { + } else if (sort == float.class) { constant = (float)left.constant * (float)right.constant; - } else if (sort == Sort.DOUBLE) { + } else if (sort == double.class) { constant = (double)left.constant * (double)right.constant; } else { throw createError(new IllegalStateException("Illegal tree structure.")); @@ -145,7 +145,7 @@ private void analyzeDiv(Locals variables) { left.analyze(variables); right.analyze(variables); - promote = AnalyzerCaster.promoteNumeric(left.actual, right.actual, true); + promote = variables.getDefinition().caster.promoteNumeric(left.actual, right.actual, true); if (promote == null) { throw createError(new ClassCastException("Cannot apply divide [/] to types " + @@ -154,7 +154,7 @@ private void analyzeDiv(Locals variables) { actual = promote; - if (promote.sort == Sort.DEF) { + if (promote.dynamic) { left.expected = left.actual; right.expected = right.actual; @@ -170,16 +170,16 @@ private void analyzeDiv(Locals variables) { right = right.cast(variables); if (left.constant != null && right.constant != null) { - Sort sort = promote.sort; + Class sort = promote.clazz; try { - if (sort == Sort.INT) { + if (sort == int.class) { constant = (int)left.constant / (int)right.constant; - } else if (sort == Sort.LONG) { + } else if (sort == long.class) { constant = (long)left.constant / (long)right.constant; - } else if (sort == Sort.FLOAT) { + } else if (sort == float.class) { constant = (float)left.constant / (float)right.constant; - } else if (sort == Sort.DOUBLE) { + } else if (sort == double.class) { constant = (double)left.constant / (double)right.constant; } else { throw createError(new IllegalStateException("Illegal tree structure.")); @@ -194,7 +194,7 @@ private void analyzeRem(Locals variables) { left.analyze(variables); right.analyze(variables); - promote = AnalyzerCaster.promoteNumeric(left.actual, right.actual, true); + promote = variables.getDefinition().caster.promoteNumeric(left.actual, right.actual, true); if (promote == null) { throw createError(new ClassCastException("Cannot apply remainder [%] to types " + @@ -203,7 +203,7 @@ private void analyzeRem(Locals variables) { actual = promote; - if (promote.sort == Sort.DEF) { + if (promote.dynamic) { left.expected = left.actual; right.expected = right.actual; @@ -219,16 +219,16 
@@ private void analyzeRem(Locals variables) { right = right.cast(variables); if (left.constant != null && right.constant != null) { - Sort sort = promote.sort; + Class sort = promote.clazz; try { - if (sort == Sort.INT) { + if (sort == int.class) { constant = (int)left.constant % (int)right.constant; - } else if (sort == Sort.LONG) { + } else if (sort == long.class) { constant = (long)left.constant % (long)right.constant; - } else if (sort == Sort.FLOAT) { + } else if (sort == float.class) { constant = (float)left.constant % (float)right.constant; - } else if (sort == Sort.DOUBLE) { + } else if (sort == double.class) { constant = (double)left.constant % (double)right.constant; } else { throw createError(new IllegalStateException("Illegal tree structure.")); @@ -243,30 +243,30 @@ private void analyzeAdd(Locals variables) { left.analyze(variables); right.analyze(variables); - promote = AnalyzerCaster.promoteAdd(left.actual, right.actual); + promote = variables.getDefinition().caster.promoteAdd(left.actual, right.actual); if (promote == null) { throw createError(new ClassCastException("Cannot apply add [+] to types " + "[" + left.actual.name + "] and [" + right.actual.name + "].")); } - Sort sort = promote.sort; + Class sort = promote.clazz; actual = promote; - if (sort == Sort.STRING) { + if (sort == String.class) { left.expected = left.actual; - if (left instanceof EBinary && ((EBinary)left).operation == Operation.ADD && left.actual.sort == Sort.STRING) { + if (left instanceof EBinary && ((EBinary)left).operation == Operation.ADD && left.actual.clazz == String.class) { ((EBinary)left).cat = true; } right.expected = right.actual; - if (right instanceof EBinary && ((EBinary)right).operation == Operation.ADD && right.actual.sort == Sort.STRING) { + if (right instanceof EBinary && ((EBinary)right).operation == Operation.ADD && right.actual.clazz == String.class) { ((EBinary)right).cat = true; } - } else if (sort == Sort.DEF) { + } else if (promote.dynamic) { left.expected = left.actual; right.expected = right.actual; @@ -282,15 +282,15 @@ private void analyzeAdd(Locals variables) { right = right.cast(variables); if (left.constant != null && right.constant != null) { - if (sort == Sort.INT) { + if (sort == int.class) { constant = (int)left.constant + (int)right.constant; - } else if (sort == Sort.LONG) { + } else if (sort == long.class) { constant = (long)left.constant + (long)right.constant; - } else if (sort == Sort.FLOAT) { + } else if (sort == float.class) { constant = (float)left.constant + (float)right.constant; - } else if (sort == Sort.DOUBLE) { + } else if (sort == double.class) { constant = (double)left.constant + (double)right.constant; - } else if (sort == Sort.STRING) { + } else if (sort == String.class) { constant = left.constant.toString() + right.constant.toString(); } else { throw createError(new IllegalStateException("Illegal tree structure.")); @@ -303,7 +303,7 @@ private void analyzeSub(Locals variables) { left.analyze(variables); right.analyze(variables); - promote = AnalyzerCaster.promoteNumeric(left.actual, right.actual, true); + promote = variables.getDefinition().caster.promoteNumeric(left.actual, right.actual, true); if (promote == null) { throw createError(new ClassCastException("Cannot apply subtract [-] to types " + @@ -312,7 +312,7 @@ private void analyzeSub(Locals variables) { actual = promote; - if (promote.sort == Sort.DEF) { + if (promote.dynamic) { left.expected = left.actual; right.expected = right.actual; @@ -328,15 +328,15 @@ private void analyzeSub(Locals 
variables) { right = right.cast(variables); if (left.constant != null && right.constant != null) { - Sort sort = promote.sort; + Class sort = promote.clazz; - if (sort == Sort.INT) { + if (sort == int.class) { constant = (int)left.constant - (int)right.constant; - } else if (sort == Sort.LONG) { + } else if (sort == long.class) { constant = (long)left.constant - (long)right.constant; - } else if (sort == Sort.FLOAT) { + } else if (sort == float.class) { constant = (float)left.constant - (float)right.constant; - } else if (sort == Sort.DOUBLE) { + } else if (sort == double.class) { constant = (double)left.constant - (double)right.constant; } else { throw createError(new IllegalStateException("Illegal tree structure.")); @@ -348,22 +348,22 @@ private void analyzeRegexOp(Locals variables) { left.analyze(variables); right.analyze(variables); - left.expected = Definition.STRING_TYPE; - right.expected = Definition.PATTERN_TYPE; + left.expected = variables.getDefinition().StringType; + right.expected = variables.getDefinition().PatternType; left = left.cast(variables); right = right.cast(variables); - promote = Definition.BOOLEAN_TYPE; - actual = Definition.BOOLEAN_TYPE; + promote = variables.getDefinition().booleanType; + actual = variables.getDefinition().booleanType; } private void analyzeLSH(Locals variables) { left.analyze(variables); right.analyze(variables); - Type lhspromote = AnalyzerCaster.promoteNumeric(left.actual, false); - Type rhspromote = AnalyzerCaster.promoteNumeric(right.actual, false); + Type lhspromote = variables.getDefinition().caster.promoteNumeric(left.actual, false); + Type rhspromote = variables.getDefinition().caster.promoteNumeric(right.actual, false); if (lhspromote == null || rhspromote == null) { throw createError(new ClassCastException("Cannot apply left shift [<<] to types " + @@ -373,7 +373,7 @@ private void analyzeLSH(Locals variables) { actual = promote = lhspromote; shiftDistance = rhspromote; - if (lhspromote.sort == Sort.DEF || rhspromote.sort == Sort.DEF) { + if (lhspromote.dynamic || rhspromote.dynamic) { left.expected = left.actual; right.expected = right.actual; @@ -383,8 +383,8 @@ private void analyzeLSH(Locals variables) { } else { left.expected = lhspromote; - if (rhspromote.sort == Sort.LONG) { - right.expected = Definition.INT_TYPE; + if (rhspromote.clazz == long.class) { + right.expected = variables.getDefinition().intType; right.explicit = true; } else { right.expected = rhspromote; @@ -395,11 +395,11 @@ private void analyzeLSH(Locals variables) { right = right.cast(variables); if (left.constant != null && right.constant != null) { - Sort sort = lhspromote.sort; + Class sort = lhspromote.clazz; - if (sort == Sort.INT) { + if (sort == int.class) { constant = (int)left.constant << (int)right.constant; - } else if (sort == Sort.LONG) { + } else if (sort == long.class) { constant = (long)left.constant << (int)right.constant; } else { throw createError(new IllegalStateException("Illegal tree structure.")); @@ -411,8 +411,8 @@ private void analyzeRSH(Locals variables) { left.analyze(variables); right.analyze(variables); - Type lhspromote = AnalyzerCaster.promoteNumeric(left.actual, false); - Type rhspromote = AnalyzerCaster.promoteNumeric(right.actual, false); + Type lhspromote = variables.getDefinition().caster.promoteNumeric(left.actual, false); + Type rhspromote = variables.getDefinition().caster.promoteNumeric(right.actual, false); if (lhspromote == null || rhspromote == null) { throw createError(new ClassCastException("Cannot apply right shift 
[>>] to types " + @@ -422,7 +422,7 @@ private void analyzeRSH(Locals variables) { actual = promote = lhspromote; shiftDistance = rhspromote; - if (lhspromote.sort == Sort.DEF || rhspromote.sort == Sort.DEF) { + if (lhspromote.dynamic || rhspromote.dynamic) { left.expected = left.actual; right.expected = right.actual; @@ -432,8 +432,8 @@ private void analyzeRSH(Locals variables) { } else { left.expected = lhspromote; - if (rhspromote.sort == Sort.LONG) { - right.expected = Definition.INT_TYPE; + if (rhspromote.clazz == long.class) { + right.expected = variables.getDefinition().intType; right.explicit = true; } else { right.expected = rhspromote; @@ -444,11 +444,11 @@ private void analyzeRSH(Locals variables) { right = right.cast(variables); if (left.constant != null && right.constant != null) { - Sort sort = lhspromote.sort; + Class sort = lhspromote.clazz; - if (sort == Sort.INT) { + if (sort == int.class) { constant = (int)left.constant >> (int)right.constant; - } else if (sort == Sort.LONG) { + } else if (sort == long.class) { constant = (long)left.constant >> (int)right.constant; } else { throw createError(new IllegalStateException("Illegal tree structure.")); @@ -460,8 +460,8 @@ private void analyzeUSH(Locals variables) { left.analyze(variables); right.analyze(variables); - Type lhspromote = AnalyzerCaster.promoteNumeric(left.actual, false); - Type rhspromote = AnalyzerCaster.promoteNumeric(right.actual, false); + Type lhspromote = variables.getDefinition().caster.promoteNumeric(left.actual, false); + Type rhspromote = variables.getDefinition().caster.promoteNumeric(right.actual, false); actual = promote = lhspromote; shiftDistance = rhspromote; @@ -471,7 +471,7 @@ private void analyzeUSH(Locals variables) { "[" + left.actual.name + "] and [" + right.actual.name + "].")); } - if (lhspromote.sort == Sort.DEF || rhspromote.sort == Sort.DEF) { + if (lhspromote.dynamic || rhspromote.dynamic) { left.expected = left.actual; right.expected = right.actual; @@ -481,8 +481,8 @@ private void analyzeUSH(Locals variables) { } else { left.expected = lhspromote; - if (rhspromote.sort == Sort.LONG) { - right.expected = Definition.INT_TYPE; + if (rhspromote.clazz == long.class) { + right.expected = variables.getDefinition().intType; right.explicit = true; } else { right.expected = rhspromote; @@ -493,11 +493,11 @@ private void analyzeUSH(Locals variables) { right = right.cast(variables); if (left.constant != null && right.constant != null) { - Sort sort = lhspromote.sort; + Class sort = lhspromote.clazz; - if (sort == Sort.INT) { + if (sort == int.class) { constant = (int)left.constant >>> (int)right.constant; - } else if (sort == Sort.LONG) { + } else if (sort == long.class) { constant = (long)left.constant >>> (int)right.constant; } else { throw createError(new IllegalStateException("Illegal tree structure.")); @@ -509,7 +509,7 @@ private void analyzeBWAnd(Locals variables) { left.analyze(variables); right.analyze(variables); - promote = AnalyzerCaster.promoteNumeric(left.actual, right.actual, false); + promote = variables.getDefinition().caster.promoteNumeric(left.actual, right.actual, false); if (promote == null) { throw createError(new ClassCastException("Cannot apply and [&] to types " + @@ -518,7 +518,7 @@ private void analyzeBWAnd(Locals variables) { actual = promote; - if (promote.sort == Sort.DEF) { + if (promote.dynamic) { left.expected = left.actual; right.expected = right.actual; @@ -534,11 +534,11 @@ private void analyzeBWAnd(Locals variables) { right = right.cast(variables); if 
(left.constant != null && right.constant != null) { - Sort sort = promote.sort; + Class sort = promote.clazz; - if (sort == Sort.INT) { + if (sort == int.class) { constant = (int)left.constant & (int)right.constant; - } else if (sort == Sort.LONG) { + } else if (sort == long.class) { constant = (long)left.constant & (long)right.constant; } else { throw createError(new IllegalStateException("Illegal tree structure.")); @@ -550,7 +550,7 @@ private void analyzeXor(Locals variables) { left.analyze(variables); right.analyze(variables); - promote = AnalyzerCaster.promoteXor(left.actual, right.actual); + promote = variables.getDefinition().caster.promoteXor(left.actual, right.actual); if (promote == null) { throw createError(new ClassCastException("Cannot apply xor [^] to types " + @@ -559,7 +559,7 @@ private void analyzeXor(Locals variables) { actual = promote; - if (promote.sort == Sort.DEF) { + if (promote.dynamic) { left.expected = left.actual; right.expected = right.actual; if (expected != null) { @@ -574,13 +574,13 @@ private void analyzeXor(Locals variables) { right = right.cast(variables); if (left.constant != null && right.constant != null) { - Sort sort = promote.sort; + Class sort = promote.clazz; - if (sort == Sort.BOOL) { + if (sort == boolean.class) { constant = (boolean)left.constant ^ (boolean)right.constant; - } else if (sort == Sort.INT) { + } else if (sort == int.class) { constant = (int)left.constant ^ (int)right.constant; - } else if (sort == Sort.LONG) { + } else if (sort == long.class) { constant = (long)left.constant ^ (long)right.constant; } else { throw createError(new IllegalStateException("Illegal tree structure.")); @@ -592,7 +592,7 @@ private void analyzeBWOr(Locals variables) { left.analyze(variables); right.analyze(variables); - promote = AnalyzerCaster.promoteNumeric(left.actual, right.actual, false); + promote = variables.getDefinition().caster.promoteNumeric(left.actual, right.actual, false); if (promote == null) { throw createError(new ClassCastException("Cannot apply or [|] to types " + @@ -601,7 +601,7 @@ private void analyzeBWOr(Locals variables) { actual = promote; - if (promote.sort == Sort.DEF) { + if (promote.dynamic) { left.expected = left.actual; right.expected = right.actual; if (expected != null) { @@ -616,11 +616,11 @@ private void analyzeBWOr(Locals variables) { right = right.cast(variables); if (left.constant != null && right.constant != null) { - Sort sort = promote.sort; + Class sort = promote.clazz; - if (sort == Sort.INT) { + if (sort == int.class) { constant = (int)left.constant | (int)right.constant; - } else if (sort == Sort.LONG) { + } else if (sort == long.class) { constant = (long)left.constant | (long)right.constant; } else { throw createError(new IllegalStateException("Illegal tree structure.")); @@ -632,7 +632,7 @@ private void analyzeBWOr(Locals variables) { void write(MethodWriter writer, Globals globals) { writer.writeDebugInfo(location); - if (promote.sort == Sort.STRING && operation == Operation.ADD) { + if (promote.clazz == String.class && operation == Operation.ADD) { if (!cat) { writer.writeNewStrings(); } @@ -655,12 +655,12 @@ void write(MethodWriter writer, Globals globals) { } else if (operation == Operation.FIND || operation == Operation.MATCH) { right.write(writer, globals); left.write(writer, globals); - writer.invokeVirtual(Definition.PATTERN_TYPE.type, WriterConstants.PATTERN_MATCHER); + writer.invokeVirtual(org.objectweb.asm.Type.getType(Pattern.class), WriterConstants.PATTERN_MATCHER); if (operation == 
Operation.FIND) { - writer.invokeVirtual(Definition.MATCHER_TYPE.type, WriterConstants.MATCHER_FIND); + writer.invokeVirtual(org.objectweb.asm.Type.getType(Matcher.class), WriterConstants.MATCHER_FIND); } else if (operation == Operation.MATCH) { - writer.invokeVirtual(Definition.MATCHER_TYPE.type, WriterConstants.MATCHER_MATCHES); + writer.invokeVirtual(org.objectweb.asm.Type.getType(Matcher.class), WriterConstants.MATCHER_MATCHES); } else { throw new IllegalStateException("Illegal tree structure."); } @@ -668,7 +668,7 @@ void write(MethodWriter writer, Globals globals) { left.write(writer, globals); right.write(writer, globals); - if (promote.sort == Sort.DEF || (shiftDistance != null && shiftDistance.sort == Sort.DEF)) { + if (promote.dynamic || (shiftDistance != null && shiftDistance.dynamic)) { // def calls adopt the wanted return value. if there was a narrowing cast, // we need to flag that so that its done at runtime. int flags = 0; diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EBool.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EBool.java index e1ca0b122922d..dfc12423994dc 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EBool.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EBool.java @@ -57,11 +57,11 @@ void extractVariables(Set variables) { @Override void analyze(Locals locals) { - left.expected = Definition.BOOLEAN_TYPE; + left.expected = locals.getDefinition().booleanType; left.analyze(locals); left = left.cast(locals); - right.expected = Definition.BOOLEAN_TYPE; + right.expected = locals.getDefinition().booleanType; right.analyze(locals); right = right.cast(locals); @@ -75,7 +75,7 @@ void analyze(Locals locals) { } } - actual = Definition.BOOLEAN_TYPE; + actual = locals.getDefinition().booleanType; } @Override diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EBoolean.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EBoolean.java index c9e9ba921d7b6..c180afa4df1a4 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EBoolean.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EBoolean.java @@ -49,7 +49,7 @@ void analyze(Locals locals) { throw createError(new IllegalArgumentException("Must read from constant [" + constant + "].")); } - actual = Definition.BOOLEAN_TYPE; + actual = locals.getDefinition().booleanType; } @Override diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/ECapturingFunctionRef.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/ECapturingFunctionRef.java index f40883186dd7a..564fcef8eef9f 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/ECapturingFunctionRef.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/ECapturingFunctionRef.java @@ -34,7 +34,6 @@ import java.util.Objects; import java.util.Set; -import static org.elasticsearch.painless.Definition.VOID_TYPE; import static org.elasticsearch.painless.WriterConstants.LAMBDA_BOOTSTRAP_HANDLE; /** @@ -64,7 +63,7 @@ void extractVariables(Set variables) { void analyze(Locals locals) { captured = locals.getVariable(location, variable); if (expected == null) { - if (captured.type.sort == Definition.Sort.DEF) { + if (captured.type.dynamic) { // dynamic implementation defPointer = "D" + variable + "." 
+ call + ",1"; } else { @@ -75,7 +74,7 @@ void analyze(Locals locals) { } else { defPointer = null; // static case - if (captured.type.sort != Definition.Sort.DEF) { + if (captured.type.dynamic == false) { try { ref = new FunctionRef(locals.getDefinition(), expected, captured.type.name, call, 1); @@ -83,11 +82,11 @@ void analyze(Locals locals) { for (int i = 0; i < ref.interfaceMethod.arguments.size(); ++i) { Definition.Type from = ref.interfaceMethod.arguments.get(i); Definition.Type to = ref.delegateMethod.arguments.get(i); - AnalyzerCaster.getLegalCast(location, from, to, false, true); + locals.getDefinition().caster.getLegalCast(location, from, to, false, true); } - if (ref.interfaceMethod.rtn != VOID_TYPE) { - AnalyzerCaster.getLegalCast(location, ref.delegateMethod.rtn, ref.interfaceMethod.rtn, false, true); + if (ref.interfaceMethod.rtn.equals(locals.getDefinition().voidType) == false) { + locals.getDefinition().caster.getLegalCast(location, ref.delegateMethod.rtn, ref.interfaceMethod.rtn, false, true); } } catch (IllegalArgumentException e) { throw createError(e); diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EComp.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EComp.java index d66f73ec685bd..020ea48cd4c1b 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EComp.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EComp.java @@ -21,7 +21,6 @@ import org.elasticsearch.painless.Definition; import org.elasticsearch.painless.Globals; -import org.elasticsearch.painless.Definition.Sort; import org.elasticsearch.painless.Definition.Type; import org.elasticsearch.painless.Location; import org.elasticsearch.painless.AnalyzerCaster; @@ -90,14 +89,14 @@ private void analyzeEq(Locals variables) { left.analyze(variables); right.analyze(variables); - promotedType = AnalyzerCaster.promoteEquality(left.actual, right.actual); + promotedType = variables.getDefinition().caster.promoteEquality(left.actual, right.actual); if (promotedType == null) { throw createError(new ClassCastException("Cannot apply equals [==] to types " + "[" + left.actual.name + "] and [" + right.actual.name + "].")); } - if (promotedType.sort == Sort.DEF) { + if (promotedType.dynamic) { left.expected = left.actual; right.expected = right.actual; } else { @@ -113,17 +112,17 @@ private void analyzeEq(Locals variables) { } if ((left.constant != null || left.isNull) && (right.constant != null || right.isNull)) { - Sort sort = promotedType.sort; + Class sort = promotedType.clazz; - if (sort == Sort.BOOL) { + if (sort == boolean.class) { constant = (boolean)left.constant == (boolean)right.constant; - } else if (sort == Sort.INT) { + } else if (sort == int.class) { constant = (int)left.constant == (int)right.constant; - } else if (sort == Sort.LONG) { + } else if (sort == long.class) { constant = (long)left.constant == (long)right.constant; - } else if (sort == Sort.FLOAT) { + } else if (sort == float.class) { constant = (float)left.constant == (float)right.constant; - } else if (sort == Sort.DOUBLE) { + } else if (sort == double.class) { constant = (double)left.constant == (double)right.constant; } else if (!left.isNull) { constant = left.constant.equals(right.constant); @@ -134,14 +133,14 @@ private void analyzeEq(Locals variables) { } } - actual = Definition.BOOLEAN_TYPE; + actual = variables.getDefinition().booleanType; } private void analyzeEqR(Locals variables) { left.analyze(variables); 
right.analyze(variables); - promotedType = AnalyzerCaster.promoteEquality(left.actual, right.actual); + promotedType = variables.getDefinition().caster.promoteEquality(left.actual, right.actual); if (promotedType == null) { throw createError(new ClassCastException("Cannot apply reference equals [===] to types " + @@ -159,38 +158,38 @@ private void analyzeEqR(Locals variables) { } if ((left.constant != null || left.isNull) && (right.constant != null || right.isNull)) { - Sort sort = promotedType.sort; + Class sort = promotedType.clazz; - if (sort == Sort.BOOL) { + if (sort == boolean.class) { constant = (boolean)left.constant == (boolean)right.constant; - } else if (sort == Sort.INT) { + } else if (sort == int.class) { constant = (int)left.constant == (int)right.constant; - } else if (sort == Sort.LONG) { + } else if (sort == long.class) { constant = (long)left.constant == (long)right.constant; - } else if (sort == Sort.FLOAT) { + } else if (sort == float.class) { constant = (float)left.constant == (float)right.constant; - } else if (sort == Sort.DOUBLE) { + } else if (sort == double.class) { constant = (double)left.constant == (double)right.constant; } else { constant = left.constant == right.constant; } } - actual = Definition.BOOLEAN_TYPE; + actual = variables.getDefinition().booleanType; } private void analyzeNE(Locals variables) { left.analyze(variables); right.analyze(variables); - promotedType = AnalyzerCaster.promoteEquality(left.actual, right.actual); + promotedType = variables.getDefinition().caster.promoteEquality(left.actual, right.actual); if (promotedType == null) { throw createError(new ClassCastException("Cannot apply not equals [!=] to types " + "[" + left.actual.name + "] and [" + right.actual.name + "].")); } - if (promotedType.sort == Sort.DEF) { + if (promotedType.dynamic) { left.expected = left.actual; right.expected = right.actual; } else { @@ -206,17 +205,17 @@ private void analyzeNE(Locals variables) { } if ((left.constant != null || left.isNull) && (right.constant != null || right.isNull)) { - Sort sort = promotedType.sort; + Class sort = promotedType.clazz; - if (sort == Sort.BOOL) { + if (sort == boolean.class) { constant = (boolean)left.constant != (boolean)right.constant; - } else if (sort == Sort.INT) { + } else if (sort == int.class) { constant = (int)left.constant != (int)right.constant; - } else if (sort == Sort.LONG) { + } else if (sort == long.class) { constant = (long)left.constant != (long)right.constant; - } else if (sort == Sort.FLOAT) { + } else if (sort == float.class) { constant = (float)left.constant != (float)right.constant; - } else if (sort == Sort.DOUBLE) { + } else if (sort == double.class) { constant = (double)left.constant != (double)right.constant; } else if (!left.isNull) { constant = !left.constant.equals(right.constant); @@ -227,14 +226,14 @@ private void analyzeNE(Locals variables) { } } - actual = Definition.BOOLEAN_TYPE; + actual = variables.getDefinition().booleanType; } private void analyzeNER(Locals variables) { left.analyze(variables); right.analyze(variables); - promotedType = AnalyzerCaster.promoteEquality(left.actual, right.actual); + promotedType = variables.getDefinition().caster.promoteEquality(left.actual, right.actual); if (promotedType == null) { throw createError(new ClassCastException("Cannot apply reference not equals [!==] to types " + @@ -252,38 +251,38 @@ private void analyzeNER(Locals variables) { } if ((left.constant != null || left.isNull) && (right.constant != null || right.isNull)) { - Sort sort = 
promotedType.sort; + Class sort = promotedType.clazz; - if (sort == Sort.BOOL) { + if (sort == boolean.class) { constant = (boolean)left.constant != (boolean)right.constant; - } else if (sort == Sort.INT) { + } else if (sort == int.class) { constant = (int)left.constant != (int)right.constant; - } else if (sort == Sort.LONG) { + } else if (sort == long.class) { constant = (long)left.constant != (long)right.constant; - } else if (sort == Sort.FLOAT) { + } else if (sort == float.class) { constant = (float)left.constant != (float)right.constant; - } else if (sort == Sort.DOUBLE) { + } else if (sort == double.class) { constant = (double)left.constant != (double)right.constant; } else { constant = left.constant != right.constant; } } - actual = Definition.BOOLEAN_TYPE; + actual = variables.getDefinition().booleanType; } private void analyzeGTE(Locals variables) { left.analyze(variables); right.analyze(variables); - promotedType = AnalyzerCaster.promoteNumeric(left.actual, right.actual, true); + promotedType = variables.getDefinition().caster.promoteNumeric(left.actual, right.actual, true); if (promotedType == null) { throw createError(new ClassCastException("Cannot apply greater than or equals [>=] to types " + "[" + left.actual.name + "] and [" + right.actual.name + "].")); } - if (promotedType.sort == Sort.DEF) { + if (promotedType.dynamic) { left.expected = left.actual; right.expected = right.actual; } else { @@ -295,36 +294,36 @@ private void analyzeGTE(Locals variables) { right = right.cast(variables); if (left.constant != null && right.constant != null) { - Sort sort = promotedType.sort; + Class sort = promotedType.clazz; - if (sort == Sort.INT) { + if (sort == int.class) { constant = (int)left.constant >= (int)right.constant; - } else if (sort == Sort.LONG) { + } else if (sort == long.class) { constant = (long)left.constant >= (long)right.constant; - } else if (sort == Sort.FLOAT) { + } else if (sort == float.class) { constant = (float)left.constant >= (float)right.constant; - } else if (sort == Sort.DOUBLE) { + } else if (sort == double.class) { constant = (double)left.constant >= (double)right.constant; } else { throw createError(new IllegalStateException("Illegal tree structure.")); } } - actual = Definition.BOOLEAN_TYPE; + actual = variables.getDefinition().booleanType; } private void analyzeGT(Locals variables) { left.analyze(variables); right.analyze(variables); - promotedType = AnalyzerCaster.promoteNumeric(left.actual, right.actual, true); + promotedType = variables.getDefinition().caster.promoteNumeric(left.actual, right.actual, true); if (promotedType == null) { throw createError(new ClassCastException("Cannot apply greater than [>] to types " + "[" + left.actual.name + "] and [" + right.actual.name + "].")); } - if (promotedType.sort == Sort.DEF) { + if (promotedType.dynamic) { left.expected = left.actual; right.expected = right.actual; } else { @@ -336,36 +335,36 @@ private void analyzeGT(Locals variables) { right = right.cast(variables); if (left.constant != null && right.constant != null) { - Sort sort = promotedType.sort; + Class sort = promotedType.clazz; - if (sort == Sort.INT) { + if (sort == int.class) { constant = (int)left.constant > (int)right.constant; - } else if (sort == Sort.LONG) { + } else if (sort == long.class) { constant = (long)left.constant > (long)right.constant; - } else if (sort == Sort.FLOAT) { + } else if (sort == float.class) { constant = (float)left.constant > (float)right.constant; - } else if (sort == Sort.DOUBLE) { + } else if (sort == 
double.class) { constant = (double)left.constant > (double)right.constant; } else { throw createError(new IllegalStateException("Illegal tree structure.")); } } - actual = Definition.BOOLEAN_TYPE; + actual = variables.getDefinition().booleanType; } private void analyzeLTE(Locals variables) { left.analyze(variables); right.analyze(variables); - promotedType = AnalyzerCaster.promoteNumeric(left.actual, right.actual, true); + promotedType = variables.getDefinition().caster.promoteNumeric(left.actual, right.actual, true); if (promotedType == null) { throw createError(new ClassCastException("Cannot apply less than or equals [<=] to types " + "[" + left.actual.name + "] and [" + right.actual.name + "].")); } - if (promotedType.sort == Sort.DEF) { + if (promotedType.dynamic) { left.expected = left.actual; right.expected = right.actual; } else { @@ -377,36 +376,36 @@ private void analyzeLTE(Locals variables) { right = right.cast(variables); if (left.constant != null && right.constant != null) { - Sort sort = promotedType.sort; + Class sort = promotedType.clazz; - if (sort == Sort.INT) { + if (sort == int.class) { constant = (int)left.constant <= (int)right.constant; - } else if (sort == Sort.LONG) { + } else if (sort == long.class) { constant = (long)left.constant <= (long)right.constant; - } else if (sort == Sort.FLOAT) { + } else if (sort == float.class) { constant = (float)left.constant <= (float)right.constant; - } else if (sort == Sort.DOUBLE) { + } else if (sort == double.class) { constant = (double)left.constant <= (double)right.constant; } else { throw createError(new IllegalStateException("Illegal tree structure.")); } } - actual = Definition.BOOLEAN_TYPE; + actual = variables.getDefinition().booleanType; } private void analyzeLT(Locals variables) { left.analyze(variables); right.analyze(variables); - promotedType = AnalyzerCaster.promoteNumeric(left.actual, right.actual, true); + promotedType = variables.getDefinition().caster.promoteNumeric(left.actual, right.actual, true); if (promotedType == null) { throw createError(new ClassCastException("Cannot apply less than [>=] to types " + "[" + left.actual.name + "] and [" + right.actual.name + "].")); } - if (promotedType.sort == Sort.DEF) { + if (promotedType.dynamic) { left.expected = left.actual; right.expected = right.actual; } else { @@ -418,22 +417,22 @@ private void analyzeLT(Locals variables) { right = right.cast(variables); if (left.constant != null && right.constant != null) { - Sort sort = promotedType.sort; + Class sort = promotedType.clazz; - if (sort == Sort.INT) { + if (sort == int.class) { constant = (int)left.constant < (int)right.constant; - } else if (sort == Sort.LONG) { + } else if (sort == long.class) { constant = (long)left.constant < (long)right.constant; - } else if (sort == Sort.FLOAT) { + } else if (sort == float.class) { constant = (float)left.constant < (float)right.constant; - } else if (sort == Sort.DOUBLE) { + } else if (sort == double.class) { constant = (double)left.constant < (double)right.constant; } else { throw createError(new IllegalStateException("Illegal tree structure.")); } } - actual = Definition.BOOLEAN_TYPE; + actual = variables.getDefinition().booleanType; } @Override @@ -458,96 +457,86 @@ void write(MethodWriter writer, Globals globals) { boolean writejump = true; - switch (promotedType.sort) { - case VOID: - case BYTE: - case SHORT: - case CHAR: + Class sort = promotedType.clazz; + + if (sort == void.class || sort == byte.class || sort == short.class || sort == char.class) { + throw 
createError(new IllegalStateException("Illegal tree structure.")); + } else if (sort == boolean.class) { + if (eq) writer.ifCmp(promotedType.type, MethodWriter.EQ, jump); + else if (ne) writer.ifCmp(promotedType.type, MethodWriter.NE, jump); + else { throw createError(new IllegalStateException("Illegal tree structure.")); - case BOOL: - if (eq) writer.ifCmp(promotedType.type, MethodWriter.EQ, jump); - else if (ne) writer.ifCmp(promotedType.type, MethodWriter.NE, jump); - else { - throw createError(new IllegalStateException("Illegal tree structure.")); - } + } + } else if (sort == int.class || sort == long.class || sort == float.class || sort == double.class) { + if (eq) writer.ifCmp(promotedType.type, MethodWriter.EQ, jump); + else if (ne) writer.ifCmp(promotedType.type, MethodWriter.NE, jump); + else if (lt) writer.ifCmp(promotedType.type, MethodWriter.LT, jump); + else if (lte) writer.ifCmp(promotedType.type, MethodWriter.LE, jump); + else if (gt) writer.ifCmp(promotedType.type, MethodWriter.GT, jump); + else if (gte) writer.ifCmp(promotedType.type, MethodWriter.GE, jump); + else { + throw createError(new IllegalStateException("Illegal tree structure.")); + } - break; - case INT: - case LONG: - case FLOAT: - case DOUBLE: - if (eq) writer.ifCmp(promotedType.type, MethodWriter.EQ, jump); - else if (ne) writer.ifCmp(promotedType.type, MethodWriter.NE, jump); - else if (lt) writer.ifCmp(promotedType.type, MethodWriter.LT, jump); - else if (lte) writer.ifCmp(promotedType.type, MethodWriter.LE, jump); - else if (gt) writer.ifCmp(promotedType.type, MethodWriter.GT, jump); - else if (gte) writer.ifCmp(promotedType.type, MethodWriter.GE, jump); - else { - throw createError(new IllegalStateException("Illegal tree structure.")); - } + } else if (promotedType.dynamic) { + org.objectweb.asm.Type booleanType = org.objectweb.asm.Type.getType(boolean.class); + org.objectweb.asm.Type descriptor = org.objectweb.asm.Type.getMethodType(booleanType, left.actual.type, right.actual.type); - break; - case DEF: - org.objectweb.asm.Type booleanType = org.objectweb.asm.Type.getType(boolean.class); - org.objectweb.asm.Type descriptor = org.objectweb.asm.Type.getMethodType(booleanType, left.actual.type, right.actual.type); - - if (eq) { - if (right.isNull) { - writer.ifNull(jump); - } else if (!left.isNull && operation == Operation.EQ) { - writer.invokeDefCall("eq", descriptor, DefBootstrap.BINARY_OPERATOR, DefBootstrap.OPERATOR_ALLOWS_NULL); - writejump = false; - } else { - writer.ifCmp(promotedType.type, MethodWriter.EQ, jump); - } - } else if (ne) { - if (right.isNull) { - writer.ifNonNull(jump); - } else if (!left.isNull && operation == Operation.NE) { - writer.invokeDefCall("eq", descriptor, DefBootstrap.BINARY_OPERATOR, DefBootstrap.OPERATOR_ALLOWS_NULL); - writer.ifZCmp(MethodWriter.EQ, jump); - } else { - writer.ifCmp(promotedType.type, MethodWriter.NE, jump); - } - } else if (lt) { - writer.invokeDefCall("lt", descriptor, DefBootstrap.BINARY_OPERATOR, 0); - writejump = false; - } else if (lte) { - writer.invokeDefCall("lte", descriptor, DefBootstrap.BINARY_OPERATOR, 0); - writejump = false; - } else if (gt) { - writer.invokeDefCall("gt", descriptor, DefBootstrap.BINARY_OPERATOR, 0); + if (eq) { + if (right.isNull) { + writer.ifNull(jump); + } else if (!left.isNull && operation == Operation.EQ) { + writer.invokeDefCall("eq", descriptor, DefBootstrap.BINARY_OPERATOR, DefBootstrap.OPERATOR_ALLOWS_NULL); writejump = false; - } else if (gte) { - writer.invokeDefCall("gte", descriptor, 
DefBootstrap.BINARY_OPERATOR, 0); + } else { + writer.ifCmp(promotedType.type, MethodWriter.EQ, jump); + } + } else if (ne) { + if (right.isNull) { + writer.ifNonNull(jump); + } else if (!left.isNull && operation == Operation.NE) { + writer.invokeDefCall("eq", descriptor, DefBootstrap.BINARY_OPERATOR, DefBootstrap.OPERATOR_ALLOWS_NULL); + writer.ifZCmp(MethodWriter.EQ, jump); + } else { + writer.ifCmp(promotedType.type, MethodWriter.NE, jump); + } + } else if (lt) { + writer.invokeDefCall("lt", descriptor, DefBootstrap.BINARY_OPERATOR, 0); + writejump = false; + } else if (lte) { + writer.invokeDefCall("lte", descriptor, DefBootstrap.BINARY_OPERATOR, 0); + writejump = false; + } else if (gt) { + writer.invokeDefCall("gt", descriptor, DefBootstrap.BINARY_OPERATOR, 0); + writejump = false; + } else if (gte) { + writer.invokeDefCall("gte", descriptor, DefBootstrap.BINARY_OPERATOR, 0); + writejump = false; + } else { + throw createError(new IllegalStateException("Illegal tree structure.")); + } + } else { + if (eq) { + if (right.isNull) { + writer.ifNull(jump); + } else if (operation == Operation.EQ) { + writer.invokeStatic(OBJECTS_TYPE, EQUALS); writejump = false; } else { - throw createError(new IllegalStateException("Illegal tree structure.")); + writer.ifCmp(promotedType.type, MethodWriter.EQ, jump); } - - break; - default: - if (eq) { - if (right.isNull) { - writer.ifNull(jump); - } else if (operation == Operation.EQ) { - writer.invokeStatic(OBJECTS_TYPE, EQUALS); - writejump = false; - } else { - writer.ifCmp(promotedType.type, MethodWriter.EQ, jump); - } - } else if (ne) { - if (right.isNull) { - writer.ifNonNull(jump); - } else if (operation == Operation.NE) { - writer.invokeStatic(OBJECTS_TYPE, EQUALS); - writer.ifZCmp(MethodWriter.EQ, jump); - } else { - writer.ifCmp(promotedType.type, MethodWriter.NE, jump); - } + } else if (ne) { + if (right.isNull) { + writer.ifNonNull(jump); + } else if (operation == Operation.NE) { + writer.invokeStatic(OBJECTS_TYPE, EQUALS); + writer.ifZCmp(MethodWriter.EQ, jump); } else { - throw createError(new IllegalStateException("Illegal tree structure.")); + writer.ifCmp(promotedType.type, MethodWriter.NE, jump); } + } else { + throw createError(new IllegalStateException("Illegal tree structure.")); + } } if (writejump) { diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EConditional.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EConditional.java index 40f4b27ed156f..571e57cad24db 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EConditional.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EConditional.java @@ -59,7 +59,7 @@ void extractVariables(Set variables) { @Override void analyze(Locals locals) { - condition.expected = Definition.BOOLEAN_TYPE; + condition.expected = locals.getDefinition().booleanType; condition.analyze(locals); condition = condition.cast(locals); @@ -79,7 +79,7 @@ void analyze(Locals locals) { right.analyze(locals); if (expected == null) { - final Type promote = AnalyzerCaster.promoteConditional(left.actual, right.actual, left.constant, right.constant); + final Type promote = locals.getDefinition().caster.promoteConditional(left.actual, right.actual, left.constant, right.constant); left.expected = promote; right.expected = promote; diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EConstant.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EConstant.java 
index 4242496c51ac5..03038be3f7ecf 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EConstant.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EConstant.java @@ -21,7 +21,6 @@ import org.elasticsearch.painless.Definition; import org.elasticsearch.painless.Globals; -import org.elasticsearch.painless.Definition.Sort; import java.util.Set; @@ -49,23 +48,23 @@ void extractVariables(Set variables) { @Override void analyze(Locals locals) { if (constant instanceof String) { - actual = Definition.STRING_TYPE; + actual = locals.getDefinition().StringType; } else if (constant instanceof Double) { - actual = Definition.DOUBLE_TYPE; + actual = locals.getDefinition().doubleType; } else if (constant instanceof Float) { - actual = Definition.FLOAT_TYPE; + actual = locals.getDefinition().floatType; } else if (constant instanceof Long) { - actual = Definition.LONG_TYPE; + actual = locals.getDefinition().longType; } else if (constant instanceof Integer) { - actual = Definition.INT_TYPE; + actual = locals.getDefinition().intType; } else if (constant instanceof Character) { - actual = Definition.CHAR_TYPE; + actual = locals.getDefinition().charType; } else if (constant instanceof Short) { - actual = Definition.SHORT_TYPE; + actual = locals.getDefinition().shortType; } else if (constant instanceof Byte) { - actual = Definition.BYTE_TYPE; + actual = locals.getDefinition().byteType; } else if (constant instanceof Boolean) { - actual = Definition.BOOLEAN_TYPE; + actual = locals.getDefinition().booleanType; } else { throw createError(new IllegalStateException("Illegal tree structure.")); } @@ -73,20 +72,17 @@ void analyze(Locals locals) { @Override void write(MethodWriter writer, Globals globals) { - Sort sort = actual.sort; - - switch (sort) { - case STRING: writer.push((String)constant); break; - case DOUBLE: writer.push((double)constant); break; - case FLOAT: writer.push((float)constant); break; - case LONG: writer.push((long)constant); break; - case INT: writer.push((int)constant); break; - case CHAR: writer.push((char)constant); break; - case SHORT: writer.push((short)constant); break; - case BYTE: writer.push((byte)constant); break; - case BOOL: writer.push((boolean)constant); break; - default: - throw createError(new IllegalStateException("Illegal tree structure.")); + if (actual.clazz == String.class) writer.push((String)constant); + else if (actual.clazz == double.class) writer.push((double)constant); + else if (actual.clazz == float.class) writer.push((float)constant); + else if (actual.clazz == long.class) writer.push((long)constant); + else if (actual.clazz == int.class) writer.push((int)constant); + else if (actual.clazz == char.class) writer.push((char)constant); + else if (actual.clazz == short.class) writer.push((short)constant); + else if (actual.clazz == byte.class) writer.push((byte)constant); + else if (actual.clazz == boolean.class) writer.push((boolean)constant); + else { + throw createError(new IllegalStateException("Illegal tree structure.")); } } diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EDecimal.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EDecimal.java index 85675a02892c9..d6adbfb7ee0eb 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EDecimal.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EDecimal.java @@ -53,7 +53,7 @@ void analyze(Locals locals) { if (value.endsWith("f") || 
value.endsWith("F")) { try { constant = Float.parseFloat(value.substring(0, value.length() - 1)); - actual = Definition.FLOAT_TYPE; + actual = locals.getDefinition().floatType; } catch (NumberFormatException exception) { throw createError(new IllegalArgumentException("Invalid float constant [" + value + "].")); } @@ -64,7 +64,7 @@ void analyze(Locals locals) { } try { constant = Double.parseDouble(toParse); - actual = Definition.DOUBLE_TYPE; + actual = locals.getDefinition().doubleType; } catch (NumberFormatException exception) { throw createError(new IllegalArgumentException("Invalid double constant [" + value + "].")); } diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EElvis.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EElvis.java index a711f17c5b0b4..e9816c524bf3b 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EElvis.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EElvis.java @@ -54,8 +54,8 @@ void extractVariables(Set variables) { @Override void analyze(Locals locals) { - if (expected != null && expected.sort.primitive) { - throw createError(new IllegalArgumentException("Evlis operator cannot return primitives")); + if (expected != null && expected.clazz.isPrimitive()) { + throw createError(new IllegalArgumentException("Elvis operator cannot return primitives")); } lhs.expected = expected; lhs.explicit = explicit; @@ -73,7 +73,7 @@ void analyze(Locals locals) { if (lhs.constant != null) { throw createError(new IllegalArgumentException("Extraneous elvis operator. LHS is a constant.")); } - if (lhs.actual.sort.primitive) { + if (lhs.actual.clazz.isPrimitive()) { throw createError(new IllegalArgumentException("Extraneous elvis operator. 
LHS is a primitive.")); } if (rhs.isNull) { @@ -81,7 +81,7 @@ void analyze(Locals locals) { } if (expected == null) { - final Type promote = AnalyzerCaster.promoteConditional(lhs.actual, rhs.actual, lhs.constant, rhs.constant); + final Type promote = locals.getDefinition().caster.promoteConditional(lhs.actual, rhs.actual, lhs.constant, rhs.constant); lhs.expected = promote; rhs.expected = promote; diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EFunctionRef.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EFunctionRef.java index c9b2a5d91d232..ffbb344f29cb9 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EFunctionRef.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EFunctionRef.java @@ -33,7 +33,6 @@ import java.util.Objects; import java.util.Set; -import static org.elasticsearch.painless.Definition.VOID_TYPE; import static org.elasticsearch.painless.WriterConstants.LAMBDA_BOOTSTRAP_HANDLE; /** @@ -83,11 +82,11 @@ void analyze(Locals locals) { for (int i = 0; i < interfaceMethod.arguments.size(); ++i) { Definition.Type from = interfaceMethod.arguments.get(i); Definition.Type to = delegateMethod.arguments.get(i); - AnalyzerCaster.getLegalCast(location, from, to, false, true); + locals.getDefinition().caster.getLegalCast(location, from, to, false, true); } - if (interfaceMethod.rtn != VOID_TYPE) { - AnalyzerCaster.getLegalCast(location, delegateMethod.rtn, interfaceMethod.rtn, false, true); + if (interfaceMethod.rtn.equals(locals.getDefinition().voidType) == false) { + locals.getDefinition().caster.getLegalCast(location, delegateMethod.rtn, interfaceMethod.rtn, false, true); } } else { // whitelist lookup diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EInstanceof.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EInstanceof.java index c9b2c95bb4c2d..a8c1217466b7d 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EInstanceof.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EInstanceof.java @@ -65,7 +65,7 @@ void analyze(Locals locals) { } // map to wrapped type for primitive types - resolvedType = type.sort.primitive ? type.sort.boxed : type.clazz; + resolvedType = type.clazz.isPrimitive() ? locals.getDefinition().getBoxedType(type).clazz : type.clazz; // analyze and cast the expression expression.analyze(locals); @@ -73,11 +73,11 @@ void analyze(Locals locals) { expression = expression.cast(locals); // record if the expression returns a primitive - primitiveExpression = expression.actual.sort.primitive; + primitiveExpression = expression.actual.clazz.isPrimitive(); // map to wrapped type for primitive types - expressionType = expression.actual.sort.primitive ? expression.actual.sort.boxed : type.clazz; + expressionType = expression.actual.clazz.isPrimitive() ? 
locals.getDefinition().getBoxedType(expression.actual).clazz : type.clazz; - actual = Definition.BOOLEAN_TYPE; + actual = locals.getDefinition().booleanType; } @Override diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/ELambda.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/ELambda.java index 748ff67ddd4b8..07de9138e7ca4 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/ELambda.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/ELambda.java @@ -39,10 +39,7 @@ import java.util.Objects; import java.util.Set; -import static org.elasticsearch.painless.Definition.VOID_TYPE; -import static org.elasticsearch.painless.WriterConstants.CLASS_NAME; import static org.elasticsearch.painless.WriterConstants.LAMBDA_BOOTSTRAP_HANDLE; -import static org.objectweb.asm.Opcodes.H_INVOKESTATIC; /** * Lambda expression node. @@ -111,7 +108,7 @@ void analyze(Locals locals) { if (expected == null) { interfaceMethod = null; // we don't know anything: treat as def - returnType = Definition.DEF_TYPE; + returnType = locals.getDefinition().DefType; // don't infer any types, replace any null types with def actualParamTypeStrs = new ArrayList<>(paramTypeStrs.size()); for (String type : paramTypeStrs) { @@ -133,8 +130,8 @@ void analyze(Locals locals) { throw new IllegalArgumentException("Incorrect number of parameters for [" + interfaceMethod.name + "] in [" + expected.clazz + "]"); // for method invocation, its allowed to ignore the return value - if (interfaceMethod.rtn == Definition.VOID_TYPE) { - returnType = Definition.DEF_TYPE; + if (interfaceMethod.rtn.equals(locals.getDefinition().voidType)) { + returnType = locals.getDefinition().DefType; } else { returnType = interfaceMethod.rtn; } @@ -195,11 +192,11 @@ void analyze(Locals locals) { for (int i = 0; i < interfaceMethod.arguments.size(); ++i) { Type from = interfaceMethod.arguments.get(i); Type to = desugared.parameters.get(i + captures.size()).type; - AnalyzerCaster.getLegalCast(location, from, to, false, true); + locals.getDefinition().caster.getLegalCast(location, from, to, false, true); } - if (interfaceMethod.rtn != VOID_TYPE) { - AnalyzerCaster.getLegalCast(location, desugared.rtnType, interfaceMethod.rtn, false, true); + if (interfaceMethod.rtn.equals(locals.getDefinition().voidType) == false) { + locals.getDefinition().caster.getLegalCast(location, desugared.rtnType, interfaceMethod.rtn, false, true); } actual = expected; diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EListInit.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EListInit.java index 999d35551cef2..44b4a67b3952a 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EListInit.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EListInit.java @@ -58,7 +58,7 @@ void analyze(Locals locals) { throw createError(new IllegalArgumentException("Must read from list initializer.")); } - actual = Definition.ARRAY_LIST_TYPE; + actual = locals.getDefinition().ArrayListType; constructor = actual.struct.constructors.get(new MethodKey("", 0)); @@ -75,7 +75,7 @@ void analyze(Locals locals) { for (int index = 0; index < values.size(); ++index) { AExpression expression = values.get(index); - expression.expected = Definition.DEF_TYPE; + expression.expected = locals.getDefinition().DefType; expression.internal = true; expression.analyze(locals); values.set(index, 
expression.cast(locals)); diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EMapInit.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EMapInit.java index 0647b5716e047..42fb37a24075d 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EMapInit.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EMapInit.java @@ -64,7 +64,7 @@ void analyze(Locals locals) { throw createError(new IllegalArgumentException("Must read from map initializer.")); } - actual = Definition.HASH_MAP_TYPE; + actual = locals.getDefinition().HashMapType; constructor = actual.struct.constructors.get(new MethodKey("", 0)); @@ -85,7 +85,7 @@ void analyze(Locals locals) { for (int index = 0; index < keys.size(); ++index) { AExpression expression = keys.get(index); - expression.expected = Definition.DEF_TYPE; + expression.expected = locals.getDefinition().DefType; expression.internal = true; expression.analyze(locals); keys.set(index, expression.cast(locals)); @@ -94,7 +94,7 @@ void analyze(Locals locals) { for (int index = 0; index < values.size(); ++index) { AExpression expression = values.get(index); - expression.expected = Definition.DEF_TYPE; + expression.expected = locals.getDefinition().DefType; expression.internal = true; expression.analyze(locals); values.set(index, expression.cast(locals)); diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/ENewArray.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/ENewArray.java index d32a153b79730..51198e03bdefc 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/ENewArray.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/ENewArray.java @@ -71,8 +71,7 @@ void analyze(Locals locals) { for (int argument = 0; argument < arguments.size(); ++argument) { AExpression expression = arguments.get(argument); - expression.expected = initialize ? locals.getDefinition().getType(type.struct, 0) - : Definition.INT_TYPE; + expression.expected = initialize ? 
locals.getDefinition().getType(type.struct, 0) : locals.getDefinition().intType; expression.internal = true; expression.analyze(locals); arguments.set(argument, expression.cast(locals)); diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/ENull.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/ENull.java index 0ab574b2e0712..73d34e7eae928 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/ENull.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/ENull.java @@ -52,13 +52,13 @@ void analyze(Locals locals) { isNull = true; if (expected != null) { - if (expected.sort.primitive) { + if (expected.clazz.isPrimitive()) { throw createError(new IllegalArgumentException("Cannot cast null to a primitive type [" + expected.name + "].")); } actual = expected; } else { - actual = Definition.OBJECT_TYPE; + actual = locals.getDefinition().ObjectType; } } diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/ENumeric.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/ENumeric.java index 8e14f7f1ea9df..02df9c10a13c9 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/ENumeric.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/ENumeric.java @@ -22,7 +22,6 @@ import org.elasticsearch.painless.Definition; import org.elasticsearch.painless.Globals; import org.elasticsearch.painless.Location; -import org.elasticsearch.painless.Definition.Sort; import java.util.Objects; import java.util.Set; @@ -63,7 +62,7 @@ void analyze(Locals locals) { try { constant = Double.parseDouble(value.substring(0, value.length() - 1)); - actual = Definition.DOUBLE_TYPE; + actual = locals.getDefinition().doubleType; } catch (NumberFormatException exception) { throw createError(new IllegalArgumentException("Invalid double constant [" + value + "].")); } @@ -74,34 +73,34 @@ void analyze(Locals locals) { try { constant = Float.parseFloat(value.substring(0, value.length() - 1)); - actual = Definition.FLOAT_TYPE; + actual = locals.getDefinition().floatType; } catch (NumberFormatException exception) { throw createError(new IllegalArgumentException("Invalid float constant [" + value + "].")); } } else if (value.endsWith("l") || value.endsWith("L")) { try { constant = Long.parseLong(value.substring(0, value.length() - 1), radix); - actual = Definition.LONG_TYPE; + actual = locals.getDefinition().longType; } catch (NumberFormatException exception) { throw createError(new IllegalArgumentException("Invalid long constant [" + value + "].")); } } else { try { - Sort sort = expected == null ? Sort.INT : expected.sort; + Class sort = expected == null ? 
int.class : expected.clazz; int integer = Integer.parseInt(value, radix); - if (sort == Sort.BYTE && integer >= Byte.MIN_VALUE && integer <= Byte.MAX_VALUE) { + if (sort == byte.class && integer >= Byte.MIN_VALUE && integer <= Byte.MAX_VALUE) { constant = (byte)integer; - actual = Definition.BYTE_TYPE; - } else if (sort == Sort.CHAR && integer >= Character.MIN_VALUE && integer <= Character.MAX_VALUE) { + actual = locals.getDefinition().byteType; + } else if (sort == char.class && integer >= Character.MIN_VALUE && integer <= Character.MAX_VALUE) { constant = (char)integer; - actual = Definition.CHAR_TYPE; - } else if (sort == Sort.SHORT && integer >= Short.MIN_VALUE && integer <= Short.MAX_VALUE) { + actual = locals.getDefinition().charType; + } else if (sort == short.class && integer >= Short.MIN_VALUE && integer <= Short.MAX_VALUE) { constant = (short)integer; - actual = Definition.SHORT_TYPE; + actual = locals.getDefinition().shortType; } else { constant = integer; - actual = Definition.INT_TYPE; + actual = locals.getDefinition().intType; } } catch (NumberFormatException exception) { try { diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/ERegex.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/ERegex.java index 4b38868b1b1fc..4fff59f9d59c7 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/ERegex.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/ERegex.java @@ -73,22 +73,23 @@ void analyze(Locals locals) { new IllegalArgumentException("Error compiling regex: " + e.getDescription())); } - constant = new Constant(location, Definition.PATTERN_TYPE.type, "regexAt$" + location.getOffset(), this::initializeConstant); - actual = Definition.PATTERN_TYPE; + constant = new Constant( + location, locals.getDefinition().PatternType.type, "regexAt$" + location.getOffset(), this::initializeConstant); + actual = locals.getDefinition().PatternType; } @Override void write(MethodWriter writer, Globals globals) { writer.writeDebugInfo(location); - writer.getStatic(WriterConstants.CLASS_TYPE, constant.name, Definition.PATTERN_TYPE.type); + writer.getStatic(WriterConstants.CLASS_TYPE, constant.name, org.objectweb.asm.Type.getType(Pattern.class)); globals.addConstantInitializer(constant); } private void initializeConstant(MethodWriter writer) { writer.push(pattern); writer.push(flags); - writer.invokeStatic(Definition.PATTERN_TYPE.type, WriterConstants.PATTERN_COMPILE); + writer.invokeStatic(org.objectweb.asm.Type.getType(Pattern.class), WriterConstants.PATTERN_COMPILE); } private int flagForChar(char c) { diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EString.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EString.java index 0c2cec13e2431..001e93096289e 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EString.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EString.java @@ -50,7 +50,7 @@ void analyze(Locals locals) { throw createError(new IllegalArgumentException("Must read from constant [" + constant + "].")); } - actual = Definition.STRING_TYPE; + actual = locals.getDefinition().StringType; } @Override diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EUnary.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EUnary.java index cd483d7d71a24..e9971b538f5af 100644 --- 
a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EUnary.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/EUnary.java @@ -22,7 +22,6 @@ import org.elasticsearch.painless.Definition; import org.elasticsearch.painless.Globals; import org.elasticsearch.painless.Location; -import org.elasticsearch.painless.Definition.Sort; import org.elasticsearch.painless.Definition.Type; import org.elasticsearch.painless.AnalyzerCaster; import org.elasticsearch.painless.DefBootstrap; @@ -77,7 +76,7 @@ void analyze(Locals locals) { } void analyzeNot(Locals variables) { - child.expected = Definition.BOOLEAN_TYPE; + child.expected = variables.getDefinition().booleanType; child.analyze(variables); child = child.cast(variables); @@ -85,13 +84,13 @@ void analyzeNot(Locals variables) { constant = !(boolean)child.constant; } - actual = Definition.BOOLEAN_TYPE; + actual = variables.getDefinition().booleanType; } void analyzeBWNot(Locals variables) { child.analyze(variables); - promote = AnalyzerCaster.promoteNumeric(child.actual, false); + promote = variables.getDefinition().caster.promoteNumeric(child.actual, false); if (promote == null) { throw createError(new ClassCastException("Cannot apply not [~] to type [" + child.actual.name + "].")); @@ -101,18 +100,18 @@ void analyzeBWNot(Locals variables) { child = child.cast(variables); if (child.constant != null) { - Sort sort = promote.sort; + Class sort = promote.clazz; - if (sort == Sort.INT) { + if (sort == int.class) { constant = ~(int)child.constant; - } else if (sort == Sort.LONG) { + } else if (sort == long.class) { constant = ~(long)child.constant; } else { throw createError(new IllegalStateException("Illegal tree structure.")); } } - if (promote.sort == Sort.DEF && expected != null) { + if (promote.dynamic && expected != null) { actual = expected; } else { actual = promote; @@ -122,7 +121,7 @@ void analyzeBWNot(Locals variables) { void analyzerAdd(Locals variables) { child.analyze(variables); - promote = AnalyzerCaster.promoteNumeric(child.actual, true); + promote = variables.getDefinition().caster.promoteNumeric(child.actual, true); if (promote == null) { throw createError(new ClassCastException("Cannot apply positive [+] to type [" + child.actual.name + "].")); @@ -132,22 +131,22 @@ void analyzerAdd(Locals variables) { child = child.cast(variables); if (child.constant != null) { - Sort sort = promote.sort; + Class sort = promote.clazz; - if (sort == Sort.INT) { + if (sort == int.class) { constant = +(int)child.constant; - } else if (sort == Sort.LONG) { + } else if (sort == long.class) { constant = +(long)child.constant; - } else if (sort == Sort.FLOAT) { + } else if (sort == float.class) { constant = +(float)child.constant; - } else if (sort == Sort.DOUBLE) { + } else if (sort == double.class) { constant = +(double)child.constant; } else { throw createError(new IllegalStateException("Illegal tree structure.")); } } - if (promote.sort == Sort.DEF && expected != null) { + if (promote.dynamic && expected != null) { actual = expected; } else { actual = promote; @@ -157,7 +156,7 @@ void analyzerAdd(Locals variables) { void analyzerSub(Locals variables) { child.analyze(variables); - promote = AnalyzerCaster.promoteNumeric(child.actual, true); + promote = variables.getDefinition().caster.promoteNumeric(child.actual, true); if (promote == null) { throw createError(new ClassCastException("Cannot apply negative [-] to type [" + child.actual.name + "].")); @@ -167,22 +166,22 @@ void analyzerSub(Locals variables) { child 
= child.cast(variables); if (child.constant != null) { - Sort sort = promote.sort; + Class sort = promote.clazz; - if (sort == Sort.INT) { + if (sort == int.class) { constant = -(int)child.constant; - } else if (sort == Sort.LONG) { + } else if (sort == long.class) { constant = -(long)child.constant; - } else if (sort == Sort.FLOAT) { + } else if (sort == float.class) { constant = -(float)child.constant; - } else if (sort == Sort.DOUBLE) { + } else if (sort == double.class) { constant = -(double)child.constant; } else { throw createError(new IllegalStateException("Illegal tree structure.")); } } - if (promote.sort == Sort.DEF && expected != null) { + if (promote.dynamic && expected != null) { actual = expected; } else { actual = promote; @@ -206,7 +205,7 @@ void write(MethodWriter writer, Globals globals) { writer.push(true); writer.mark(end); } else { - Sort sort = promote.sort; + Class sort = promote.clazz; child.write(writer, globals); // Def calls adopt the wanted return value. If there was a narrowing cast, @@ -218,13 +217,13 @@ void write(MethodWriter writer, Globals globals) { } if (operation == Operation.BWNOT) { - if (sort == Sort.DEF) { + if (promote.dynamic) { org.objectweb.asm.Type descriptor = org.objectweb.asm.Type.getMethodType(actual.type, child.actual.type); writer.invokeDefCall("not", descriptor, DefBootstrap.UNARY_OPERATOR, defFlags); } else { - if (sort == Sort.INT) { + if (sort == int.class) { writer.push(-1); - } else if (sort == Sort.LONG) { + } else if (sort == long.class) { writer.push(-1L); } else { throw createError(new IllegalStateException("Illegal tree structure.")); @@ -233,14 +232,14 @@ void write(MethodWriter writer, Globals globals) { writer.math(MethodWriter.XOR, actual.type); } } else if (operation == Operation.SUB) { - if (sort == Sort.DEF) { + if (promote.dynamic) { org.objectweb.asm.Type descriptor = org.objectweb.asm.Type.getMethodType(actual.type, child.actual.type); writer.invokeDefCall("neg", descriptor, DefBootstrap.UNARY_OPERATOR, defFlags); } else { writer.math(MethodWriter.NEG, actual.type); } } else if (operation == Operation.ADD) { - if (sort == Sort.DEF) { + if (promote.dynamic) { org.objectweb.asm.Type descriptor = org.objectweb.asm.Type.getMethodType(actual.type, child.actual.type); writer.invokeDefCall("plus", descriptor, DefBootstrap.UNARY_OPERATOR, defFlags); } diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PBrace.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PBrace.java index 4f04db092d58a..d94954ef35944 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PBrace.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PBrace.java @@ -19,7 +19,6 @@ package org.elasticsearch.painless.node; -import org.elasticsearch.painless.Definition.Sort; import org.elasticsearch.painless.Definition.Type; import org.elasticsearch.painless.Globals; import org.elasticsearch.painless.Locals; @@ -58,11 +57,9 @@ void analyze(Locals locals) { prefix.expected = prefix.actual; prefix = prefix.cast(locals); - Sort sort = prefix.actual.sort; - - if (sort == Sort.ARRAY) { + if (prefix.actual.dimensions > 0) { sub = new PSubBrace(location, prefix.actual, index); - } else if (sort == Sort.DEF) { + } else if (prefix.actual.dynamic) { sub = new PSubDefArray(location, index); } else if (Map.class.isAssignableFrom(prefix.actual.clazz)) { sub = new PSubMapShortcut(location, prefix.actual.struct, index); diff --git 
a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PCallInvoke.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PCallInvoke.java index 80bdc3d597c8a..d9e74a70d944c 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PCallInvoke.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PCallInvoke.java @@ -19,9 +19,9 @@ package org.elasticsearch.painless.node; +import org.elasticsearch.painless.Definition; import org.elasticsearch.painless.Definition.Method; import org.elasticsearch.painless.Definition.MethodKey; -import org.elasticsearch.painless.Definition.Sort; import org.elasticsearch.painless.Definition.Struct; import org.elasticsearch.painless.Globals; import org.elasticsearch.painless.Locals; @@ -66,14 +66,14 @@ void analyze(Locals locals) { prefix.expected = prefix.actual; prefix = prefix.cast(locals); - if (prefix.actual.sort == Sort.ARRAY) { + if (prefix.actual.dimensions > 0) { throw createError(new IllegalArgumentException("Illegal call [" + name + "] on array type.")); } Struct struct = prefix.actual.struct; - if (prefix.actual.sort.primitive) { - struct = locals.getDefinition().getType(prefix.actual.sort.boxed.getSimpleName()).struct; + if (prefix.actual.clazz.isPrimitive()) { + struct = locals.getDefinition().getBoxedType(prefix.actual).struct; } MethodKey methodKey = new MethodKey(name, arguments.size()); @@ -81,7 +81,7 @@ void analyze(Locals locals) { if (method != null) { sub = new PSubCallInvoke(location, method, prefix.actual, arguments); - } else if (prefix.actual.sort == Sort.DEF) { + } else if (prefix.actual.dynamic) { sub = new PSubDefCall(location, name, arguments); } else { throw createError(new IllegalArgumentException( diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PField.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PField.java index f22a5d49203eb..68343090b7d68 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PField.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PField.java @@ -22,7 +22,6 @@ import org.elasticsearch.painless.Definition; import org.elasticsearch.painless.Definition.Field; import org.elasticsearch.painless.Definition.Method; -import org.elasticsearch.painless.Definition.Sort; import org.elasticsearch.painless.Definition.Struct; import org.elasticsearch.painless.Definition.Type; import org.elasticsearch.painless.Globals; @@ -63,11 +62,9 @@ void analyze(Locals locals) { prefix.expected = prefix.actual; prefix = prefix.cast(locals); - Sort sort = prefix.actual.sort; - - if (sort == Sort.ARRAY) { + if (prefix.actual.dimensions > 0) { sub = new PSubArrayLength(location, prefix.actual.name, value); - } else if (sort == Sort.DEF) { + } else if (prefix.actual.dynamic) { sub = new PSubDefField(location, value); } else { Struct struct = prefix.actual.struct; diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubArrayLength.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubArrayLength.java index 806f971850f01..a308bc2c6c92e 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubArrayLength.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubArrayLength.java @@ -56,7 +56,7 @@ void analyze(Locals locals) { throw createError(new IllegalArgumentException("Cannot write to read-only field [length] for an array.")); } - actual 
= Definition.INT_TYPE; + actual = locals.getDefinition().intType; } else { throw createError(new IllegalArgumentException("Field [" + value + "] does not exist for type [" + type + "].")); } diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubBrace.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubBrace.java index 1f3d4109bca8a..56a29f7930b45 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubBrace.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubBrace.java @@ -51,7 +51,7 @@ void extractVariables(Set variables) { @Override void analyze(Locals locals) { - index.expected = Definition.INT_TYPE; + index.expected = locals.getDefinition().intType; index.analyze(locals); index = index.cast(locals); diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubCallInvoke.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubCallInvoke.java index 5665f7163af47..fea6647997db9 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubCallInvoke.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubCallInvoke.java @@ -71,7 +71,7 @@ void analyze(Locals locals) { void write(MethodWriter writer, Globals globals) { writer.writeDebugInfo(location); - if (box.sort.primitive) { + if (box.clazz.isPrimitive()) { writer.box(box.type); } diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubDefArray.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubDefArray.java index c13c1bcb99238..cac80f80516ff 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubDefArray.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubDefArray.java @@ -53,7 +53,7 @@ void analyze(Locals locals) { index.expected = index.actual; index = index.cast(locals); - actual = expected == null || explicit ? Definition.DEF_TYPE : expected; + actual = expected == null || explicit ? 
locals.getDefinition().DefType : expected; } @Override @@ -83,7 +83,7 @@ void setup(MethodWriter writer, Globals globals) { writer.dup(); // def, def index.write(writer, globals); // def, def, unnormalized_index org.objectweb.asm.Type methodType = org.objectweb.asm.Type.getMethodType( - index.actual.type, Definition.DEF_TYPE.type, index.actual.type); + index.actual.type, org.objectweb.asm.Type.getType(Object.class), index.actual.type); writer.invokeDefCall("normalizeIndex", methodType, DefBootstrap.INDEX_NORMALIZE); // def, normalized_index } @@ -92,7 +92,7 @@ void load(MethodWriter writer, Globals globals) { writer.writeDebugInfo(location); org.objectweb.asm.Type methodType = - org.objectweb.asm.Type.getMethodType(actual.type, Definition.DEF_TYPE.type, index.actual.type); + org.objectweb.asm.Type.getMethodType(actual.type, org.objectweb.asm.Type.getType(Object.class), index.actual.type); writer.invokeDefCall("arrayLoad", methodType, DefBootstrap.ARRAY_LOAD); } @@ -101,7 +101,8 @@ void store(MethodWriter writer, Globals globals) { writer.writeDebugInfo(location); org.objectweb.asm.Type methodType = - org.objectweb.asm.Type.getMethodType(Definition.VOID_TYPE.type, Definition.DEF_TYPE.type, index.actual.type, actual.type); + org.objectweb.asm.Type.getMethodType( + org.objectweb.asm.Type.getType(void.class), org.objectweb.asm.Type.getType(Object.class), index.actual.type, actual.type); writer.invokeDefCall("arrayStore", methodType, DefBootstrap.ARRAY_STORE); } diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubDefCall.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubDefCall.java index 2d0734915af8b..89fc169704f86 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubDefCall.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubDefCall.java @@ -80,7 +80,7 @@ void analyze(Locals locals) { arguments.set(argument, expression.cast(locals)); } - actual = expected == null || explicit ? Definition.DEF_TYPE : expected; + actual = expected == null || explicit ? locals.getDefinition().DefType : expected; } @Override @@ -90,7 +90,7 @@ void write(MethodWriter writer, Globals globals) { List parameterTypes = new ArrayList<>(); // first parameter is the receiver, we never know its type: always Object - parameterTypes.add(Definition.DEF_TYPE.type); + parameterTypes.add(org.objectweb.asm.Type.getType(Object.class)); // append each argument for (AExpression argument : arguments) { diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubDefField.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubDefField.java index 24c6c91422d08..c1f0c468e421b 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubDefField.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubDefField.java @@ -50,7 +50,7 @@ void extractVariables(Set variables) { @Override void analyze(Locals locals) { - actual = expected == null || explicit ? Definition.DEF_TYPE : expected; + actual = expected == null || explicit ? 
locals.getDefinition().DefType : expected; } @Override @@ -58,7 +58,7 @@ void write(MethodWriter writer, Globals globals) { writer.writeDebugInfo(location); org.objectweb.asm.Type methodType = - org.objectweb.asm.Type.getMethodType(actual.type, Definition.DEF_TYPE.type); + org.objectweb.asm.Type.getMethodType(actual.type, org.objectweb.asm.Type.getType(Object.class)); writer.invokeDefCall(value, methodType, DefBootstrap.LOAD); } @@ -87,7 +87,7 @@ void load(MethodWriter writer, Globals globals) { writer.writeDebugInfo(location); org.objectweb.asm.Type methodType = - org.objectweb.asm.Type.getMethodType(actual.type, Definition.DEF_TYPE.type); + org.objectweb.asm.Type.getMethodType(actual.type, org.objectweb.asm.Type.getType(Object.class)); writer.invokeDefCall(value, methodType, DefBootstrap.LOAD); } @@ -95,8 +95,8 @@ void load(MethodWriter writer, Globals globals) { void store(MethodWriter writer, Globals globals) { writer.writeDebugInfo(location); - org.objectweb.asm.Type methodType = - org.objectweb.asm.Type.getMethodType(Definition.VOID_TYPE.type, Definition.DEF_TYPE.type, actual.type); + org.objectweb.asm.Type methodType = org.objectweb.asm.Type.getMethodType( + org.objectweb.asm.Type.getType(void.class), org.objectweb.asm.Type.getType(Object.class), actual.type); writer.invokeDefCall(value, methodType, DefBootstrap.STORE); } diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubListShortcut.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubListShortcut.java index fc5cecc1b5e4a..557c1aed6931b 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubListShortcut.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubListShortcut.java @@ -21,7 +21,6 @@ import org.elasticsearch.painless.Definition; import org.elasticsearch.painless.Definition.Method; -import org.elasticsearch.painless.Definition.Sort; import org.elasticsearch.painless.Definition.Struct; import org.elasticsearch.painless.Definition.Type; import org.elasticsearch.painless.Globals; @@ -61,12 +60,12 @@ void analyze(Locals locals) { getter = struct.methods.get(new Definition.MethodKey("get", 1)); setter = struct.methods.get(new Definition.MethodKey("set", 2)); - if (getter != null && (getter.rtn.sort == Sort.VOID || getter.arguments.size() != 1 || - getter.arguments.get(0).sort != Sort.INT)) { + if (getter != null && (getter.rtn.clazz == void.class || getter.arguments.size() != 1 || + getter.arguments.get(0).clazz != int.class)) { throw createError(new IllegalArgumentException("Illegal list get shortcut for type [" + struct.name + "].")); } - if (setter != null && (setter.arguments.size() != 2 || setter.arguments.get(0).sort != Sort.INT)) { + if (setter != null && (setter.arguments.size() != 2 || setter.arguments.get(0).clazz != int.class)) { throw createError(new IllegalArgumentException("Illegal list set shortcut for type [" + struct.name + "].")); } @@ -76,7 +75,7 @@ void analyze(Locals locals) { } if ((read || write) && (!read || getter != null) && (!write || setter != null)) { - index.expected = Definition.INT_TYPE; + index.expected = locals.getDefinition().intType; index.analyze(locals); index = index.cast(locals); @@ -132,7 +131,7 @@ void store(MethodWriter writer, Globals globals) { setter.write(writer); - writer.writePop(setter.rtn.sort.size); + writer.writePop(setter.rtn.type.getSize()); } @Override diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubMapShortcut.java 
b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubMapShortcut.java index c98aba5e1f09b..72118bd77df3a 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubMapShortcut.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubMapShortcut.java @@ -25,7 +25,6 @@ import org.elasticsearch.painless.Globals; import org.elasticsearch.painless.Location; import org.elasticsearch.painless.Definition.Method; -import org.elasticsearch.painless.Definition.Sort; import java.util.Objects; import java.util.Set; @@ -61,7 +60,7 @@ void analyze(Locals locals) { getter = struct.methods.get(new Definition.MethodKey("get", 1)); setter = struct.methods.get(new Definition.MethodKey("put", 2)); - if (getter != null && (getter.rtn.sort == Sort.VOID || getter.arguments.size() != 1)) { + if (getter != null && (getter.rtn.clazz == void.class || getter.arguments.size() != 1)) { throw createError(new IllegalArgumentException("Illegal map get shortcut for type [" + struct.name + "].")); } @@ -135,7 +134,7 @@ void store(MethodWriter writer, Globals globals) { setter.write(writer); - writer.writePop(setter.rtn.sort.size); + writer.writePop(setter.rtn.type.getSize()); } @Override diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubNullSafeCallInvoke.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubNullSafeCallInvoke.java index 6291e94252d43..feae5fac07440 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubNullSafeCallInvoke.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubNullSafeCallInvoke.java @@ -52,7 +52,7 @@ void extractVariables(Set variables) { void analyze(Locals locals) { guarded.analyze(locals); actual = guarded.actual; - if (actual.sort.primitive) { + if (actual.clazz.isPrimitive()) { throw new IllegalArgumentException("Result of null safe operator must be nullable"); } } diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubNullSafeField.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubNullSafeField.java index de01ea710d90e..e043ca96ebcb4 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubNullSafeField.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubNullSafeField.java @@ -52,7 +52,7 @@ void analyze(Locals locals) { guarded.read = read; guarded.analyze(locals); actual = guarded.actual; - if (actual.sort.primitive) { + if (actual.clazz.isPrimitive()) { throw new IllegalArgumentException("Result of null safe operator must be nullable"); } } diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubShortcut.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubShortcut.java index 5cd62f9fb1ed3..49d2c0fe2477b 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubShortcut.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/PSubShortcut.java @@ -20,7 +20,6 @@ package org.elasticsearch.painless.node; import org.elasticsearch.painless.Definition.Method; -import org.elasticsearch.painless.Definition.Sort; import org.elasticsearch.painless.Definition.Type; import org.elasticsearch.painless.Globals; import org.elasticsearch.painless.Locals; @@ -55,12 +54,12 @@ void extractVariables(Set variables) { @Override void analyze(Locals locals) { - if (getter != null && (getter.rtn.sort == 
Sort.VOID || !getter.arguments.isEmpty())) { + if (getter != null && (getter.rtn.clazz == void.class || !getter.arguments.isEmpty())) { throw createError(new IllegalArgumentException( "Illegal get shortcut on field [" + value + "] for type [" + type + "].")); } - if (setter != null && (setter.rtn.sort != Sort.VOID || setter.arguments.size() != 1)) { + if (setter != null && (setter.rtn.clazz != void.class || setter.arguments.size() != 1)) { throw createError(new IllegalArgumentException( "Illegal set shortcut on field [" + value + "] for type [" + type + "].")); } @@ -124,7 +123,7 @@ void store(MethodWriter writer, Globals globals) { setter.write(writer); - writer.writePop(setter.rtn.sort.size); + writer.writePop(setter.rtn.type.getSize()); } @Override diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SDeclaration.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SDeclaration.java index ab9e58db23ebb..e25017ba916ae 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SDeclaration.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SDeclaration.java @@ -82,17 +82,19 @@ void write(MethodWriter writer, Globals globals) { writer.writeStatementOffset(location); if (expression == null) { - switch (variable.type.sort) { - case VOID: throw createError(new IllegalStateException("Illegal tree structure.")); - case BOOL: - case BYTE: - case SHORT: - case CHAR: - case INT: writer.push(0); break; - case LONG: writer.push(0L); break; - case FLOAT: writer.push(0.0F); break; - case DOUBLE: writer.push(0.0); break; - default: writer.visitInsn(Opcodes.ACONST_NULL); + Class sort = variable.type.clazz; + + if (sort == void.class || sort == boolean.class || sort == byte.class || + sort == short.class || sort == char.class || sort == int.class) { + writer.push(0); + } else if (sort == long.class) { + writer.push(0L); + } else if (sort == float.class) { + writer.push(0F); + } else if (sort == double.class) { + writer.push(0D); + } else { + writer.visitInsn(Opcodes.ACONST_NULL); } } else { expression.write(writer, globals); diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SDo.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SDo.java index 979bd4eec9fab..858dc43b6c864 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SDo.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SDo.java @@ -73,7 +73,7 @@ void analyze(Locals locals) { throw createError(new IllegalArgumentException("Extraneous do while loop.")); } - condition.expected = Definition.BOOLEAN_TYPE; + condition.expected = locals.getDefinition().booleanType; condition.analyze(locals); condition = condition.cast(locals); diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SEach.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SEach.java index 003f303b5e023..77fb9b357076e 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SEach.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SEach.java @@ -19,7 +19,6 @@ package org.elasticsearch.painless.node; -import org.elasticsearch.painless.Definition.Sort; import org.elasticsearch.painless.Definition.Type; import org.elasticsearch.painless.Globals; import org.elasticsearch.painless.Locals; @@ -79,9 +78,9 @@ void analyze(Locals locals) { locals = Locals.newLocalScope(locals); Variable 
variable = locals.addVariable(location, type, name, true); - if (expression.actual.sort == Sort.ARRAY) { + if (expression.actual.dimensions > 0) { sub = new SSubEachArray(location, variable, expression, block); - } else if (expression.actual.sort == Sort.DEF || Iterable.class.isAssignableFrom(expression.actual.clazz)) { + } else if (expression.actual.dynamic || Iterable.class.isAssignableFrom(expression.actual.clazz)) { sub = new SSubEachIterable(location, variable, expression, block); } else { throw createError(new IllegalArgumentException("Illegal for each type [" + expression.actual.name + "].")); diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SExpression.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SExpression.java index ab7a645c39621..88c670650d751 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SExpression.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SExpression.java @@ -19,7 +19,6 @@ package org.elasticsearch.painless.node; -import org.elasticsearch.painless.Definition.Sort; import org.elasticsearch.painless.Definition.Type; import org.elasticsearch.painless.Globals; import org.elasticsearch.painless.Locals; @@ -50,7 +49,7 @@ void extractVariables(Set variables) { @Override void analyze(Locals locals) { Type rtnType = locals.getReturnType(); - boolean isVoid = rtnType.sort == Sort.VOID; + boolean isVoid = rtnType.clazz == void.class; expression.read = lastSource && !isVoid; expression.analyze(locals); @@ -59,7 +58,7 @@ void analyze(Locals locals) { throw createError(new IllegalArgumentException("Not a statement.")); } - boolean rtn = lastSource && !isVoid && expression.actual.sort != Sort.VOID; + boolean rtn = lastSource && !isVoid && expression.actual.clazz != void.class; expression.expected = rtn ? 
rtnType : expression.actual; expression.internal = rtn; @@ -79,7 +78,7 @@ void write(MethodWriter writer, Globals globals) { if (methodEscape) { writer.returnValue(); } else { - writer.writePop(expression.expected.sort.size); + writer.writePop(expression.expected.type.getSize()); } } diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SFor.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SFor.java index 18e7e11a7d1d0..378ecb6b4f897 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SFor.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SFor.java @@ -94,7 +94,7 @@ void analyze(Locals locals) { } if (condition != null) { - condition.expected = Definition.BOOLEAN_TYPE; + condition.expected = locals.getDefinition().booleanType; condition.analyze(locals); condition = condition.cast(locals); @@ -161,7 +161,7 @@ void write(MethodWriter writer, Globals globals) { AExpression initializer = (AExpression)this.initializer; initializer.write(writer, globals); - writer.writePop(initializer.expected.sort.size); + writer.writePop(initializer.expected.type.getSize()); } writer.mark(start); diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SFunction.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SFunction.java index 59b7a333cf4b2..6c479265cfe1e 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SFunction.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SFunction.java @@ -24,7 +24,6 @@ import org.elasticsearch.painless.Def; import org.elasticsearch.painless.Definition; import org.elasticsearch.painless.Definition.Method; -import org.elasticsearch.painless.Definition.Sort; import org.elasticsearch.painless.Definition.Type; import org.elasticsearch.painless.Globals; import org.elasticsearch.painless.Locals; @@ -163,7 +162,7 @@ void analyze(Locals locals) { allEscape = statement.allEscape; } - if (!methodEscape && rtnType.sort != Sort.VOID) { + if (!methodEscape && rtnType.clazz != void.class) { throw createError(new IllegalArgumentException("Not all paths provide a return value for method [" + name + "].")); } @@ -198,7 +197,7 @@ void write(MethodWriter function, Globals globals) { } if (!methodEscape) { - if (rtnType.sort == Sort.VOID) { + if (rtnType.clazz == void.class) { function.returnValue(); } else { throw createError(new IllegalStateException("Illegal tree structure.")); diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SIf.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SIf.java index 08495cd8d7f1a..e5d2233fa982e 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SIf.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SIf.java @@ -56,7 +56,7 @@ void extractVariables(Set variables) { @Override void analyze(Locals locals) { - condition.expected = Definition.BOOLEAN_TYPE; + condition.expected = locals.getDefinition().booleanType; condition.analyze(locals); condition = condition.cast(locals); diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SIfElse.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SIfElse.java index 4feb6b3fe62b1..d5dd83464850e 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SIfElse.java +++ 
b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SIfElse.java @@ -65,7 +65,7 @@ void extractVariables(Set variables) { @Override void analyze(Locals locals) { - condition.expected = Definition.BOOLEAN_TYPE; + condition.expected = locals.getDefinition().booleanType; condition.analyze(locals); condition = condition.cast(locals); diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SSubEachArray.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SSubEachArray.java index 2083d3ddbe5e5..09c73c525bec0 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SSubEachArray.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SSubEachArray.java @@ -66,11 +66,11 @@ void analyze(Locals locals) { // also add the location offset to make the names unique in case of nested for each loops. array = locals.addVariable(location, expression.actual, "#array" + location.getOffset(), true); - index = locals.addVariable(location, Definition.INT_TYPE, "#index" + location.getOffset(), + index = locals.addVariable(location, locals.getDefinition().intType, "#index" + location.getOffset(), true); indexed = locals.getDefinition().getType(expression.actual.struct, expression.actual.dimensions - 1); - cast = AnalyzerCaster.getLegalCast(location, indexed, variable.type, true, true); + cast = locals.getDefinition().caster.getLegalCast(location, indexed, variable.type, true, true); } @Override diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SSubEachIterable.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SSubEachIterable.java index b014e952e32a2..a51a459f0f3f8 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SSubEachIterable.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SSubEachIterable.java @@ -25,7 +25,6 @@ import org.elasticsearch.painless.Definition.Cast; import org.elasticsearch.painless.Definition.Method; import org.elasticsearch.painless.Definition.MethodKey; -import org.elasticsearch.painless.Definition.Sort; import org.elasticsearch.painless.Globals; import org.elasticsearch.painless.Locals; import org.elasticsearch.painless.Locals.Variable; @@ -34,6 +33,7 @@ import org.objectweb.asm.Label; import org.objectweb.asm.Opcodes; +import java.util.Iterator; import java.util.Objects; import java.util.Set; @@ -74,7 +74,7 @@ void analyze(Locals locals) { iterator = locals.addVariable(location, locals.getDefinition().getType("Iterator"), "#itr" + location.getOffset(), true); - if (expression.actual.sort == Sort.DEF) { + if (expression.actual.dynamic) { method = null; } else { method = expression.actual.struct.methods.get(new MethodKey("iterator", 0)); @@ -85,7 +85,7 @@ void analyze(Locals locals) { } } - cast = AnalyzerCaster.getLegalCast(location, Definition.DEF_TYPE, variable.type, true, true); + cast = locals.getDefinition().caster.getLegalCast(location, locals.getDefinition().DefType, variable.type, true, true); } @Override @@ -96,7 +96,7 @@ void write(MethodWriter writer, Globals globals) { if (method == null) { org.objectweb.asm.Type methodType = org.objectweb.asm.Type - .getMethodType(Definition.ITERATOR_TYPE.type, Definition.DEF_TYPE.type); + .getMethodType(org.objectweb.asm.Type.getType(Iterator.class), org.objectweb.asm.Type.getType(Object.class)); writer.invokeDefCall("iterator", methodType, DefBootstrap.ITERATOR); } else { method.write(writer); diff --git 
a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SThrow.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SThrow.java index e87d862b2acc7..1e96586b6f751 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SThrow.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SThrow.java @@ -48,7 +48,7 @@ void extractVariables(Set variables) { @Override void analyze(Locals locals) { - expression.expected = Definition.EXCEPTION_TYPE; + expression.expected = locals.getDefinition().ExceptionType; expression.analyze(locals); expression = expression.cast(locals); diff --git a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SWhile.java b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SWhile.java index 7663c853a0c94..f6c750a94a1ac 100644 --- a/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SWhile.java +++ b/modules/lang-painless/src/main/java/org/elasticsearch/painless/node/SWhile.java @@ -59,7 +59,7 @@ void extractVariables(Set variables) { void analyze(Locals locals) { locals = Locals.newLocalScope(locals); - condition.expected = Definition.BOOLEAN_TYPE; + condition.expected = locals.getDefinition().booleanType; condition.analyze(locals); condition = condition.cast(locals); diff --git a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.lang.txt b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.lang.txt index 0f8667998209c..a5a414008d753 100644 --- a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.lang.txt +++ b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.lang.txt @@ -36,8 +36,8 @@ class CharSequence -> java.lang.CharSequence { IntStream chars() IntStream codePoints() int length() - String org.elasticsearch.painless.api.Augmentation.replaceAll(Pattern,Function) - String org.elasticsearch.painless.api.Augmentation.replaceFirst(Pattern,Function) + String org.elasticsearch.painless.api.Augmentation replaceAll(Pattern,Function) + String org.elasticsearch.painless.api.Augmentation replaceFirst(Pattern,Function) CharSequence subSequence(int,int) String toString() } @@ -52,18 +52,18 @@ class Iterable -> java.lang.Iterable { void forEach(Consumer) Iterator iterator() Spliterator spliterator() - # some adaptations of groovy methods - boolean org.elasticsearch.painless.api.Augmentation.any(Predicate) - Collection org.elasticsearch.painless.api.Augmentation.asCollection() - List org.elasticsearch.painless.api.Augmentation.asList() - def org.elasticsearch.painless.api.Augmentation.each(Consumer) - def org.elasticsearch.painless.api.Augmentation.eachWithIndex(ObjIntConsumer) - boolean org.elasticsearch.painless.api.Augmentation.every(Predicate) - List org.elasticsearch.painless.api.Augmentation.findResults(Function) - Map org.elasticsearch.painless.api.Augmentation.groupBy(Function) - String org.elasticsearch.painless.api.Augmentation.join(String) - double org.elasticsearch.painless.api.Augmentation.sum() - double org.elasticsearch.painless.api.Augmentation.sum(ToDoubleFunction) + # some adaptations of groovy methods + boolean org.elasticsearch.painless.api.Augmentation any(Predicate) + Collection org.elasticsearch.painless.api.Augmentation asCollection() + List org.elasticsearch.painless.api.Augmentation asList() + def org.elasticsearch.painless.api.Augmentation each(Consumer) + def org.elasticsearch.painless.api.Augmentation eachWithIndex(ObjIntConsumer) +
boolean org.elasticsearch.painless.api.Augmentation every(Predicate) + List org.elasticsearch.painless.api.Augmentation findResults(Function) + Map org.elasticsearch.painless.api.Augmentation groupBy(Function) + String org.elasticsearch.painless.api.Augmentation join(String) + double org.elasticsearch.painless.api.Augmentation sum() + double org.elasticsearch.painless.api.Augmentation sum(ToDoubleFunction) } # Readable: i/o @@ -72,7 +72,7 @@ class Iterable -> java.lang.Iterable { #### Classes -class Boolean -> java.lang.Boolean extends Comparable,Object { +class Boolean -> java.lang.Boolean { Boolean TRUE Boolean FALSE boolean booleanValue() @@ -87,7 +87,7 @@ class Boolean -> java.lang.Boolean extends Comparable,Object { Boolean valueOf(boolean) } -class Byte -> java.lang.Byte extends Number,Comparable,Object { +class Byte -> java.lang.Byte { int BYTES byte MAX_VALUE byte MIN_VALUE @@ -105,7 +105,7 @@ class Byte -> java.lang.Byte extends Number,Comparable,Object { Byte valueOf(String,int) } -class Character -> java.lang.Character extends Comparable,Object { +class Character -> java.lang.Character { int BYTES byte COMBINING_SPACING_MARK byte CONNECTOR_PUNCTUATION @@ -226,10 +226,10 @@ class Character -> java.lang.Character extends Comparable,Object { Character valueOf(char) } -class Character.Subset -> java.lang.Character$Subset extends Object { +class Character.Subset -> java.lang.Character$Subset { } -class Character.UnicodeBlock -> java.lang.Character$UnicodeBlock extends Character.Subset,Object { +class Character.UnicodeBlock -> java.lang.Character$UnicodeBlock { Character.UnicodeBlock AEGEAN_NUMBERS Character.UnicodeBlock ALCHEMICAL_SYMBOLS Character.UnicodeBlock ALPHABETIC_PRESENTATION_FORMS @@ -459,7 +459,7 @@ class Character.UnicodeBlock -> java.lang.Character$UnicodeBlock extends Charact # ClassValue: ... # Compiler: ... 
-class Double -> java.lang.Double extends Number,Comparable,Object { +class Double -> java.lang.Double { int BYTES int MAX_EXPONENT double MAX_VALUE @@ -490,13 +490,13 @@ class Double -> java.lang.Double extends Number,Comparable,Object { Double valueOf(double) } -class Enum -> java.lang.Enum extends Comparable,Object { +class Enum -> java.lang.Enum { int compareTo(Enum) - String name(); - int ordinal(); + String name() + int ordinal() } -class Float -> java.lang.Float extends Number,Comparable,Object { +class Float -> java.lang.Float { int BYTES int MAX_EXPONENT float MAX_VALUE @@ -529,7 +529,7 @@ class Float -> java.lang.Float extends Number,Comparable,Object { # InheritableThreadLocal: threads -class Integer -> java.lang.Integer extends Number,Comparable,Object { +class Integer -> java.lang.Integer { int BYTES int MAX_VALUE int MIN_VALUE @@ -569,7 +569,7 @@ class Integer -> java.lang.Integer extends Number,Comparable,Object { Integer valueOf(String,int) } -class Long -> java.lang.Long extends Number,Comparable,Object { +class Long -> java.lang.Long { int BYTES long MAX_VALUE long MIN_VALUE @@ -609,7 +609,7 @@ class Long -> java.lang.Long extends Number,Comparable,Object { Long valueOf(String,int) } -class Math -> java.lang.Math extends Object { +class Math -> java.lang.Math { double E double PI double abs(double) @@ -651,7 +651,7 @@ class Math -> java.lang.Math extends Object { double ulp(double) } -class Number -> java.lang.Number extends Object { +class Number -> java.lang.Number { byte byteValue() short shortValue() int intValue() @@ -674,7 +674,7 @@ class Object -> java.lang.Object { # RuntimePermission: skipped # SecurityManger: skipped -class Short -> java.lang.Short extends Number,Comparable,Object { +class Short -> java.lang.Short { int BYTES short MAX_VALUE short MIN_VALUE @@ -693,8 +693,8 @@ class Short -> java.lang.Short extends Number,Comparable,Object { Short valueOf(String,int) } -class StackTraceElement -> java.lang.StackTraceElement extends Object { - StackTraceElement (String,String,String,int) +class StackTraceElement -> java.lang.StackTraceElement { + (String,String,String,int) String getClassName() String getFileName() int getLineNumber() @@ -702,7 +702,7 @@ class StackTraceElement -> java.lang.StackTraceElement extends Object { boolean isNativeMethod() } -class StrictMath -> java.lang.StrictMath extends Object { +class StrictMath -> java.lang.StrictMath { double E double PI double abs(double) @@ -744,8 +744,8 @@ class StrictMath -> java.lang.StrictMath extends Object { double ulp(double) } -class String -> java.lang.String extends CharSequence,Comparable,Object { - String () +class String -> java.lang.String { + () int codePointAt(int) int codePointBefore(int) int codePointCount(int,int) @@ -756,8 +756,8 @@ class String -> java.lang.String extends CharSequence,Comparable,Object { boolean contentEquals(CharSequence) String copyValueOf(char[]) String copyValueOf(char[],int,int) - String org.elasticsearch.painless.api.Augmentation.decodeBase64() - String org.elasticsearch.painless.api.Augmentation.encodeBase64() + String org.elasticsearch.painless.api.Augmentation decodeBase64() + String org.elasticsearch.painless.api.Augmentation encodeBase64() boolean endsWith(String) boolean equalsIgnoreCase(String) String format(Locale,String,def[]) @@ -786,9 +786,9 @@ class String -> java.lang.String extends CharSequence,Comparable,Object { String valueOf(def) } -class StringBuffer -> java.lang.StringBuffer extends CharSequence,Appendable,Object { - StringBuffer () - StringBuffer 
(CharSequence) +class StringBuffer -> java.lang.StringBuffer { + () + (CharSequence) StringBuffer append(def) StringBuffer append(CharSequence,int,int) StringBuffer appendCodePoint(int) @@ -813,9 +813,9 @@ class StringBuffer -> java.lang.StringBuffer extends CharSequence,Appendable,Obj String substring(int,int) } -class StringBuilder -> java.lang.StringBuilder extends CharSequence,Appendable,Object { - StringBuilder () - StringBuilder (CharSequence) +class StringBuilder -> java.lang.StringBuilder { + () + (CharSequence) StringBuilder append(def) StringBuilder append(CharSequence,int,int) StringBuilder appendCodePoint(int) @@ -840,7 +840,7 @@ class StringBuilder -> java.lang.StringBuilder extends CharSequence,Appendable,O String substring(int,int) } -class System -> java.lang.System extends Object { +class System -> java.lang.System { void arraycopy(Object,int,Object,int,int) long currentTimeMillis() long nanoTime() @@ -851,12 +851,12 @@ class System -> java.lang.System extends Object { # ThreadLocal: skipped # Throwable: skipped (reserved for painless, users can only catch Exceptions) -class Void -> java.lang.Void extends Object { +class Void -> java.lang.Void { } #### Enums -class Character.UnicodeScript -> java.lang.Character$UnicodeScript extends Enum,Object { +class Character.UnicodeScript -> java.lang.Character$UnicodeScript { Character.UnicodeScript ARABIC Character.UnicodeScript ARMENIAN Character.UnicodeScript AVESTAN @@ -968,138 +968,138 @@ class Character.UnicodeScript -> java.lang.Character$UnicodeScript extends Enum, #### Exceptions -class ArithmeticException -> java.lang.ArithmeticException extends RuntimeException,Exception,Object { - ArithmeticException () - ArithmeticException (String) +class ArithmeticException -> java.lang.ArithmeticException { + () + (String) } -class ArrayIndexOutOfBoundsException -> java.lang.ArrayIndexOutOfBoundsException extends IndexOutOfBoundsException,RuntimeException,Exception,Object { - ArrayIndexOutOfBoundsException () - ArrayIndexOutOfBoundsException (String) +class ArrayIndexOutOfBoundsException -> java.lang.ArrayIndexOutOfBoundsException { + () + (String) } -class ArrayStoreException -> java.lang.ArrayStoreException extends RuntimeException,Exception,Object { - ArrayStoreException () - ArrayStoreException (String) +class ArrayStoreException -> java.lang.ArrayStoreException { + () + (String) } -class ClassCastException -> java.lang.ClassCastException extends RuntimeException,Exception,Object { - ClassCastException () - ClassCastException (String) +class ClassCastException -> java.lang.ClassCastException { + () + (String) } -class ClassNotFoundException -> java.lang.ClassNotFoundException extends ReflectiveOperationException,Exception,Object { - ClassNotFoundException () - ClassNotFoundException (String) +class ClassNotFoundException -> java.lang.ClassNotFoundException { + () + (String) } -class CloneNotSupportedException -> java.lang.CloneNotSupportedException extends Exception,Object { - CloneNotSupportedException () - CloneNotSupportedException (String) +class CloneNotSupportedException -> java.lang.CloneNotSupportedException { + () + (String) } -class EnumConstantNotPresentException -> java.lang.EnumConstantNotPresentException extends RuntimeException,Exception,Object { +class EnumConstantNotPresentException -> java.lang.EnumConstantNotPresentException { String constantName() } -class Exception -> java.lang.Exception extends Object { - Exception () - Exception (String) +class Exception -> java.lang.Exception { + () + (String) String 
getLocalizedMessage() String getMessage() StackTraceElement[] getStackTrace() } -class IllegalAccessException -> java.lang.IllegalAccessException extends ReflectiveOperationException,Exception,Object { - IllegalAccessException () - IllegalAccessException (String) +class IllegalAccessException -> java.lang.IllegalAccessException { + () + (String) } -class IllegalArgumentException -> java.lang.IllegalArgumentException extends RuntimeException,Exception,Object { - IllegalArgumentException () - IllegalArgumentException (String) +class IllegalArgumentException -> java.lang.IllegalArgumentException { + () + (String) } -class IllegalMonitorStateException -> java.lang.IllegalMonitorStateException extends RuntimeException,Exception,Object { - IllegalMonitorStateException () - IllegalMonitorStateException (String) +class IllegalMonitorStateException -> java.lang.IllegalMonitorStateException { + () + (String) } -class IllegalStateException -> java.lang.IllegalStateException extends RuntimeException,Exception,Object { - IllegalStateException () - IllegalStateException (String) +class IllegalStateException -> java.lang.IllegalStateException { + () + (String) } -class IllegalThreadStateException -> java.lang.IllegalThreadStateException extends IllegalArgumentException,RuntimeException,Exception,Object { - IllegalThreadStateException () - IllegalThreadStateException (String) +class IllegalThreadStateException -> java.lang.IllegalThreadStateException { + () + (String) } -class IndexOutOfBoundsException -> java.lang.IndexOutOfBoundsException extends RuntimeException,Exception,Object { - IndexOutOfBoundsException () - IndexOutOfBoundsException (String) +class IndexOutOfBoundsException -> java.lang.IndexOutOfBoundsException { + () + (String) } -class InstantiationException -> java.lang.InstantiationException extends ReflectiveOperationException,Exception,Object { - InstantiationException () - InstantiationException (String) +class InstantiationException -> java.lang.InstantiationException { + () + (String) } -class InterruptedException -> java.lang.InterruptedException extends Exception,Object { - InterruptedException () - InterruptedException (String) +class InterruptedException -> java.lang.InterruptedException { + () + (String) } -class NegativeArraySizeException -> java.lang.NegativeArraySizeException extends RuntimeException,Exception,Object { - NegativeArraySizeException () - NegativeArraySizeException (String) +class NegativeArraySizeException -> java.lang.NegativeArraySizeException { + () + (String) } -class NoSuchFieldException -> java.lang.NoSuchFieldException extends ReflectiveOperationException,Exception,Object { - NoSuchFieldException () - NoSuchFieldException (String) +class NoSuchFieldException -> java.lang.NoSuchFieldException { + () + (String) } -class NoSuchMethodException -> java.lang.NoSuchMethodException extends ReflectiveOperationException,Exception,Object { - NoSuchMethodException () - NoSuchMethodException (String) +class NoSuchMethodException -> java.lang.NoSuchMethodException { + () + (String) } -class NullPointerException -> java.lang.NullPointerException extends RuntimeException,Exception,Object { - NullPointerException () - NullPointerException (String) +class NullPointerException -> java.lang.NullPointerException { + () + (String) } -class NumberFormatException -> java.lang.NumberFormatException extends RuntimeException,Exception,Object { - NumberFormatException () - NumberFormatException (String) +class NumberFormatException -> java.lang.NumberFormatException { + () + (String) 
} -class ReflectiveOperationException -> java.lang.ReflectiveOperationException extends Exception,Object { - ReflectiveOperationException () - ReflectiveOperationException (String) +class ReflectiveOperationException -> java.lang.ReflectiveOperationException { + () + (String) } -class RuntimeException -> java.lang.RuntimeException extends Exception,Object { - RuntimeException () - RuntimeException (String) +class RuntimeException -> java.lang.RuntimeException { + () + (String) } -class SecurityException -> java.lang.SecurityException extends RuntimeException,Exception,Object { - SecurityException () - SecurityException (String) +class SecurityException -> java.lang.SecurityException { + () + (String) } -class StringIndexOutOfBoundsException -> java.lang.StringIndexOutOfBoundsException extends IndexOutOfBoundsException,RuntimeException,Exception,Object { - StringIndexOutOfBoundsException () - StringIndexOutOfBoundsException (String) +class StringIndexOutOfBoundsException -> java.lang.StringIndexOutOfBoundsException { + () + (String) } -class TypeNotPresentException -> java.lang.TypeNotPresentException extends RuntimeException,Exception,Object { +class TypeNotPresentException -> java.lang.TypeNotPresentException { String typeName() } -class UnsupportedOperationException -> java.lang.UnsupportedOperationException extends RuntimeException,Exception,Object { - UnsupportedOperationException () - UnsupportedOperationException (String) +class UnsupportedOperationException -> java.lang.UnsupportedOperationException { + () + (String) } diff --git a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.math.txt b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.math.txt index e3d13a0959c60..e7457628203a2 100644 --- a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.math.txt +++ b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.math.txt @@ -24,12 +24,12 @@ #### Classes -class BigDecimal -> java.math.BigDecimal extends Number,Comparable,Object { +class BigDecimal -> java.math.BigDecimal { BigDecimal ONE BigDecimal TEN BigDecimal ZERO - BigDecimal (String) - BigDecimal (String,MathContext) + (String) + (String,MathContext) BigDecimal abs() BigDecimal abs(MathContext) BigDecimal add(BigDecimal) @@ -77,12 +77,12 @@ class BigDecimal -> java.math.BigDecimal extends Number,Comparable,Object { BigDecimal valueOf(double) } -class BigInteger -> java.math.BigInteger extends Number,Comparable,Object { +class BigInteger -> java.math.BigInteger { BigInteger ONE BigInteger TEN BigInteger ZERO - BigInteger (String) - BigInteger (String,int) + (String) + (String,int) BigInteger abs() BigInteger add(BigInteger) BigInteger and(BigInteger) @@ -123,20 +123,20 @@ class BigInteger -> java.math.BigInteger extends Number,Comparable,Object { BigInteger xor(BigInteger) } -class MathContext -> java.math.MathContext extends Object { +class MathContext -> java.math.MathContext { MathContext DECIMAL128 MathContext DECIMAL32 MathContext DECIMAL64 MathContext UNLIMITED - MathContext (int) - MathContext (int,RoundingMode) + (int) + (int,RoundingMode) int getPrecision() RoundingMode getRoundingMode() } #### Enums -class RoundingMode -> java.math.RoundingMode extends Enum,Object { +class RoundingMode -> java.math.RoundingMode { RoundingMode CEILING RoundingMode DOWN RoundingMode FLOOR diff --git a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.text.txt 
b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.text.txt index 2e93b14ab3dc6..fa9170cb5d254 100644 --- a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.text.txt +++ b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.text.txt @@ -24,7 +24,7 @@ #### Interfaces -class AttributedCharacterIterator -> java.text.AttributedCharacterIterator extends CharacterIterator { +class AttributedCharacterIterator -> java.text.AttributedCharacterIterator { Set getAllAttributeKeys() def getAttribute(AttributedCharacterIterator.Attribute) Map getAttributes() @@ -50,20 +50,20 @@ class CharacterIterator -> java.text.CharacterIterator { #### Classes -class Annotation -> java.text.Annotation extends Object { - Annotation (Object) +class Annotation -> java.text.Annotation { + (Object) def getValue() } -class AttributedCharacterIterator.Attribute -> java.text.AttributedCharacterIterator$Attribute extends Object { +class AttributedCharacterIterator.Attribute -> java.text.AttributedCharacterIterator$Attribute { AttributedCharacterIterator.Attribute INPUT_METHOD_SEGMENT AttributedCharacterIterator.Attribute LANGUAGE AttributedCharacterIterator.Attribute READING } -class AttributedString -> java.text.AttributedString extends Object { - AttributedString (String) - AttributedString (String,Map) +class AttributedString -> java.text.AttributedString { + (String) + (String,Map) void addAttribute(AttributedCharacterIterator.Attribute,Object) void addAttribute(AttributedCharacterIterator.Attribute,Object,int,int) void addAttributes(Map,int,int) @@ -72,14 +72,14 @@ class AttributedString -> java.text.AttributedString extends Object { AttributedCharacterIterator getIterator(AttributedCharacterIterator.Attribute[],int,int) } -class Bidi -> java.text.Bidi extends Object { +class Bidi -> java.text.Bidi { int DIRECTION_DEFAULT_LEFT_TO_RIGHT int DIRECTION_DEFAULT_RIGHT_TO_LEFT int DIRECTION_LEFT_TO_RIGHT int DIRECTION_RIGHT_TO_LEFT - Bidi (AttributedCharacterIterator) - Bidi (char[],int,byte[],int,int,int) - Bidi (String,int) + (AttributedCharacterIterator) + (char[],int,byte[],int,int,int) + (String,int) boolean baseIsLeftToRight() Bidi createLineBidi(int,int) int getBaseLevel() @@ -96,7 +96,7 @@ class Bidi -> java.text.Bidi extends Object { boolean requiresBidi(char[],int,int) } -class BreakIterator -> java.text.BreakIterator extend Object { +class BreakIterator -> java.text.BreakIterator { int DONE def clone() int current() @@ -121,9 +121,9 @@ class BreakIterator -> java.text.BreakIterator extend Object { void setText(String) } -class ChoiceFormat -> java.text.ChoiceFormat extends NumberFormat,Format,Object { - ChoiceFormat (double[],String[]) - ChoiceFormat (String) +class ChoiceFormat -> java.text.ChoiceFormat { + (double[],String[]) + (String) void applyPattern(String) def[] getFormats() double[] getLimits() @@ -134,7 +134,7 @@ class ChoiceFormat -> java.text.ChoiceFormat extends NumberFormat,Format,Object String toPattern() } -class CollationElementIterator -> java.text.CollationElementIterator extends Object { +class CollationElementIterator -> java.text.CollationElementIterator { int NULLORDER int getMaxExpansion(int) int getOffset() @@ -148,13 +148,13 @@ class CollationElementIterator -> java.text.CollationElementIterator extends Obj short tertiaryOrder(int) } -class CollationKey -> java.text.CollationKey extends Comparable,Object { +class CollationKey -> java.text.CollationKey { int compareTo(CollationKey) String getSourceString() byte[] 
toByteArray() } -class Collator -> java.text.Collator extends Comparator,Object { +class Collator -> java.text.Collator { int CANONICAL_DECOMPOSITION int FULL_DECOMPOSITION int IDENTICAL @@ -174,7 +174,7 @@ class Collator -> java.text.Collator extends Comparator,Object { void setStrength(int) } -class DateFormat -> java.text.DateFormat extends Format,Object { +class DateFormat -> java.text.DateFormat { int AM_PM_FIELD int DATE_FIELD int DAY_OF_WEEK_FIELD @@ -221,7 +221,7 @@ class DateFormat -> java.text.DateFormat extends Format,Object { void setTimeZone(TimeZone) } -class DateFormat.Field -> java.text.DateFormat$Field extends Format.Field,AttributedCharacterIterator.Attribute,Object { +class DateFormat.Field -> java.text.DateFormat$Field { DateFormat.Field AM_PM DateFormat.Field DAY_OF_MONTH DateFormat.Field DAY_OF_WEEK @@ -244,9 +244,9 @@ class DateFormat.Field -> java.text.DateFormat$Field extends Format.Field,Attrib DateFormat.Field ofCalendarField(int) } -class DateFormatSymbols -> java.text.DateFormatSymbols extends Object { - DateFormatSymbols () - DateFormatSymbols (Locale) +class DateFormatSymbols -> java.text.DateFormatSymbols { + () + (Locale) def clone() String[] getAmPmStrings() Locale[] getAvailableLocales() @@ -270,10 +270,10 @@ class DateFormatSymbols -> java.text.DateFormatSymbols extends Object { void setZoneStrings(String[][]) } -class DecimalFormat -> java.text.DecimalFormat extends NumberFormat,Format,Object { - DecimalFormat () - DecimalFormat (String) - DecimalFormat (String,DecimalFormatSymbols) +class DecimalFormat -> java.text.DecimalFormat { + () + (String) + (String,DecimalFormatSymbols) void applyLocalizedPattern(String) void applyPattern(String) DecimalFormatSymbols getDecimalFormatSymbols() @@ -298,9 +298,9 @@ class DecimalFormat -> java.text.DecimalFormat extends NumberFormat,Format,Objec String toPattern() } -class DecimalFormatSymbols -> java.text.DecimalFormatSymbols extends Object { - DecimalFormatSymbols () - DecimalFormatSymbols (Locale) +class DecimalFormatSymbols -> java.text.DecimalFormatSymbols { + () + (Locale) def clone() Locale[] getAvailableLocales() Currency getCurrency() @@ -337,9 +337,9 @@ class DecimalFormatSymbols -> java.text.DecimalFormatSymbols extends Object { void setZeroDigit(char) } -class FieldPosition -> java.text.FieldPosition extends Object { - FieldPosition (int) - FieldPosition (Format.Field,int) +class FieldPosition -> java.text.FieldPosition { + (int) + (Format.Field,int) int getBeginIndex() int getEndIndex() int getField() @@ -348,7 +348,7 @@ class FieldPosition -> java.text.FieldPosition extends Object { void setEndIndex(int) } -class Format -> java.text.Format extends Object { +class Format -> java.text.Format { def clone() String format(Object) StringBuffer format(Object,StringBuffer,FieldPosition) @@ -357,10 +357,10 @@ class Format -> java.text.Format extends Object { Object parseObject(String,ParsePosition) } -class Format.Field -> java.text.Format$Field extends AttributedCharacterIterator.Attribute,Object { +class Format.Field -> java.text.Format$Field { } -class MessageFormat -> java.text.MessageFormat extends Format,Object { +class MessageFormat -> java.text.MessageFormat { void applyPattern(String) String format(String,Object[]) Format[] getFormats() @@ -376,16 +376,16 @@ class MessageFormat -> java.text.MessageFormat extends Format,Object { String toPattern() } -class MessageFormat.Field -> java.text.MessageFormat$Field extends Format.Field,AttributedCharacterIterator.Attribute,Object { +class MessageFormat.Field 
-> java.text.MessageFormat$Field { MessageFormat.Field ARGUMENT } -class Normalizer -> java.text.Normalizer extends Object { +class Normalizer -> java.text.Normalizer { boolean isNormalized(CharSequence,Normalizer.Form) String normalize(CharSequence,Normalizer.Form) } -class NumberFormat -> java.text.NumberFormat extends Format,Object { +class NumberFormat -> java.text.NumberFormat { int FRACTION_FIELD int INTEGER_FIELD Locale[] getAvailableLocales() @@ -419,7 +419,7 @@ class NumberFormat -> java.text.NumberFormat extends Format,Object { void setRoundingMode(RoundingMode) } -class NumberFormat.Field -> java.text.NumberFormat$Field extends Format.Field,AttributedCharacterIterator.Attribute,Object { +class NumberFormat.Field -> java.text.NumberFormat$Field { NumberFormat.Field CURRENCY NumberFormat.Field DECIMAL_SEPARATOR NumberFormat.Field EXPONENT @@ -433,24 +433,24 @@ class NumberFormat.Field -> java.text.NumberFormat$Field extends Format.Field,At NumberFormat.Field SIGN } -class ParsePosition -> java.text.ParsePosition extends Object { - ParsePosition (int) +class ParsePosition -> java.text.ParsePosition { + (int) int getErrorIndex() int getIndex() void setErrorIndex(int) void setIndex(int) } -class RuleBasedCollator -> java.text.RuleBasedCollator extends Collator,Comparator,Object { - RuleBasedCollator (String) +class RuleBasedCollator -> java.text.RuleBasedCollator { + (String) CollationElementIterator getCollationElementIterator(String) String getRules() } -class SimpleDateFormat -> java.text.SimpleDateFormat extends DateFormat,Format,Object { - SimpleDateFormat () - SimpleDateFormat (String) - SimpleDateFormat (String,Locale) +class SimpleDateFormat -> java.text.SimpleDateFormat { + () + (String) + (String,Locale) void applyLocalizedPattern(String) void applyPattern(String) Date get2DigitYearStart() @@ -461,16 +461,16 @@ class SimpleDateFormat -> java.text.SimpleDateFormat extends DateFormat,Format,O String toPattern() } -class StringCharacterIterator -> java.text.StringCharacterIterator extends CharacterIterator,Object { - StringCharacterIterator (String) - StringCharacterIterator (String,int) - StringCharacterIterator (String,int,int,int) +class StringCharacterIterator -> java.text.StringCharacterIterator { + (String) + (String,int) + (String,int,int,int) void setText(String) } #### Enums -class Normalizer.Form -> java.text.Normalizer$Form extends Enum,Object { +class Normalizer.Form -> java.text.Normalizer$Form { Normalizer.Form NFC Normalizer.Form NFD Normalizer.Form NFKC @@ -481,7 +481,7 @@ class Normalizer.Form -> java.text.Normalizer$Form extends Enum,Object { #### Exceptions -class ParseException -> java.text.ParseException extends Exception,Object { - ParseException (String,int) +class ParseException -> java.text.ParseException { + (String,int) int getErrorOffset() } diff --git a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.time.chrono.txt b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.time.chrono.txt index 286141e29215b..2d932a3ed1a57 100644 --- a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.time.chrono.txt +++ b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.time.chrono.txt @@ -24,7 +24,7 @@ #### Interfaces -class ChronoLocalDate -> java.time.chrono.ChronoLocalDate extends Temporal,TemporalAccessor,TemporalAdjuster,Comparable { +class ChronoLocalDate -> java.time.chrono.ChronoLocalDate { ChronoLocalDateTime atTime(LocalTime) int compareTo(ChronoLocalDate) boolean 
equals(Object) @@ -51,7 +51,7 @@ class ChronoLocalDate -> java.time.chrono.ChronoLocalDate extends Temporal,Tempo ChronoLocalDate with(TemporalField,long) } -class ChronoLocalDateTime -> java.time.chrono.ChronoLocalDateTime extends Temporal,TemporalAccessor,TemporalAdjuster,Comparable { +class ChronoLocalDateTime -> java.time.chrono.ChronoLocalDateTime { ChronoZonedDateTime atZone(ZoneId) int compareTo(ChronoLocalDateTime) boolean equals(Object) @@ -76,7 +76,7 @@ class ChronoLocalDateTime -> java.time.chrono.ChronoLocalDateTime extends Tempor ChronoLocalDateTime with(TemporalField,long) } -class Chronology -> java.time.chrono.Chronology extends Comparable { +class Chronology -> java.time.chrono.Chronology { int compareTo(Chronology) ChronoLocalDate date(TemporalAccessor) ChronoLocalDate date(Era,int,int,int) @@ -106,7 +106,7 @@ class Chronology -> java.time.chrono.Chronology extends Comparable { ChronoZonedDateTime zonedDateTime(Instant,ZoneId) } -class ChronoPeriod -> java.time.chrono.ChronoPeriod extends TemporalAmount { +class ChronoPeriod -> java.time.chrono.ChronoPeriod { ChronoPeriod between(ChronoLocalDate,ChronoLocalDate) boolean equals(Object) Chronology getChronology() @@ -122,7 +122,7 @@ class ChronoPeriod -> java.time.chrono.ChronoPeriod extends TemporalAmount { String toString() } -class ChronoZonedDateTime -> java.time.chrono.ChronoZonedDateTime extends Temporal,TemporalAccessor,Comparable { +class ChronoZonedDateTime -> java.time.chrono.ChronoZonedDateTime { int compareTo(ChronoZonedDateTime) boolean equals(Object) String format(DateTimeFormatter) @@ -153,17 +153,17 @@ class ChronoZonedDateTime -> java.time.chrono.ChronoZonedDateTime extends Tempor ChronoZonedDateTime withZoneSameInstant(ZoneId) } -class Era -> java.time.chrono.Era extends TemporalAccessor,TemporalAdjuster { +class Era -> java.time.chrono.Era { String getDisplayName(TextStyle,Locale) int getValue() } #### Classes -class AbstractChronology -> java.time.chrono.Chronology extends Chronology,Comparable,Object { +class AbstractChronology -> java.time.chrono.AbstractChronology { } -class HijrahChronology -> java.time.chrono.HijrahChronology extends AbstractChronology,Chronology,Comparable,Object { +class HijrahChronology -> java.time.chrono.HijrahChronology { HijrahChronology INSTANCE HijrahDate date(TemporalAccessor) HijrahDate date(int,int,int) @@ -175,7 +175,7 @@ class HijrahChronology -> java.time.chrono.HijrahChronology extends AbstractChro HijrahDate resolveDate(Map,ResolverStyle) } -class HijrahDate -> java.time.chrono.HijrahDate extends ChronoLocalDate,Temporal,TemporalAccessor,TemporalAdjuster,Comparable,Object { +class HijrahDate -> java.time.chrono.HijrahDate { HijrahDate from(TemporalAccessor) HijrahChronology getChronology() HijrahEra getEra() @@ -189,7 +189,7 @@ class HijrahDate -> java.time.chrono.HijrahDate extends ChronoLocalDate,Temporal HijrahDate withVariant(HijrahChronology) } -class IsoChronology -> java.time.chrono.IsoChronology extends AbstractChronology,Chronology,Comparable,Object { +class IsoChronology -> java.time.chrono.IsoChronology { IsoChronology INSTANCE LocalDate date(TemporalAccessor) LocalDate date(int,int,int) @@ -205,7 +205,7 @@ class IsoChronology -> java.time.chrono.IsoChronology extends AbstractChronology ZonedDateTime zonedDateTime(Instant,ZoneId) } -class JapaneseChronology -> java.time.chrono.JapaneseChronology extends AbstractChronology,Chronology,Comparable,Object { +class JapaneseChronology -> java.time.chrono.JapaneseChronology { JapaneseChronology INSTANCE 
JapaneseDate date(TemporalAccessor) JapaneseDate date(int,int,int) @@ -217,7 +217,7 @@ class JapaneseChronology -> java.time.chrono.JapaneseChronology extends Abstract JapaneseDate resolveDate(Map,ResolverStyle) } -class JapaneseDate -> java.time.chrono.JapaneseDate extends ChronoLocalDate,Temporal,TemporalAccessor,TemporalAdjuster,Comparable,Object { +class JapaneseDate -> java.time.chrono.JapaneseDate { JapaneseDate of(int,int,int) JapaneseDate from(TemporalAccessor) JapaneseChronology getChronology() @@ -230,7 +230,7 @@ class JapaneseDate -> java.time.chrono.JapaneseDate extends ChronoLocalDate,Temp JapaneseDate minus(long,TemporalUnit) } -class JapaneseEra -> java.time.chrono.JapaneseEra extends Era,TemporalAccessor,TemporalAdjuster,Object { +class JapaneseEra -> java.time.chrono.JapaneseEra { JapaneseEra HEISEI JapaneseEra MEIJI JapaneseEra SHOWA @@ -241,7 +241,7 @@ class JapaneseEra -> java.time.chrono.JapaneseEra extends Era,TemporalAccessor,T JapaneseEra[] values() } -class MinguoChronology -> java.time.chrono.MinguoChronology extends AbstractChronology,Chronology,Comparable,Object { +class MinguoChronology -> java.time.chrono.MinguoChronology { MinguoChronology INSTANCE MinguoDate date(TemporalAccessor) MinguoDate date(int,int,int) @@ -253,7 +253,7 @@ class MinguoChronology -> java.time.chrono.MinguoChronology extends AbstractChro MinguoDate resolveDate(Map,ResolverStyle) } -class MinguoDate -> java.time.chrono.MinguoDate extends ChronoLocalDate,Temporal,TemporalAccessor,TemporalAdjuster,Comparable,Object { +class MinguoDate -> java.time.chrono.MinguoDate { MinguoDate of(int,int,int) MinguoDate from(TemporalAccessor) MinguoChronology getChronology() @@ -266,7 +266,7 @@ class MinguoDate -> java.time.chrono.MinguoDate extends ChronoLocalDate,Temporal MinguoDate minus(long,TemporalUnit) } -class ThaiBuddhistChronology -> java.time.chrono.ThaiBuddhistChronology extends AbstractChronology,Chronology,Comparable,Object { +class ThaiBuddhistChronology -> java.time.chrono.ThaiBuddhistChronology { ThaiBuddhistChronology INSTANCE ThaiBuddhistDate date(TemporalAccessor) ThaiBuddhistDate date(int,int,int) @@ -278,7 +278,7 @@ class ThaiBuddhistChronology -> java.time.chrono.ThaiBuddhistChronology extends ThaiBuddhistDate resolveDate(Map,ResolverStyle) } -class ThaiBuddhistDate -> java.time.chrono.ThaiBuddhistDate extends ChronoLocalDate,Temporal,TemporalAccessor,TemporalAdjuster,Comparable,Object { +class ThaiBuddhistDate -> java.time.chrono.ThaiBuddhistDate { ThaiBuddhistDate of(int,int,int) ThaiBuddhistDate from(TemporalAccessor) ThaiBuddhistChronology getChronology() @@ -293,7 +293,7 @@ class ThaiBuddhistDate -> java.time.chrono.ThaiBuddhistDate extends ChronoLocalD #### Enums -class HijrahEra -> java.time.chrono.HijrahEra extends Enum,Comparable,Era,TemporalAccessor,TemporalAdjuster,Object { +class HijrahEra -> java.time.chrono.HijrahEra { HijrahEra AH int getValue() HijrahEra of(int) @@ -301,7 +301,7 @@ class HijrahEra -> java.time.chrono.HijrahEra extends Enum,Comparable,Era,Tempor HijrahEra[] values() } -class IsoEra -> java.time.chrono.IsoEra extends Enum,Comparable,Era,TemporalAccessor,TemporalAdjuster,Object { +class IsoEra -> java.time.chrono.IsoEra { IsoEra BCE IsoEra CE int getValue() @@ -310,7 +310,7 @@ class IsoEra -> java.time.chrono.IsoEra extends Enum,Comparable,Era,TemporalAcce IsoEra[] values() } -class MinguoEra -> java.time.chrono.MinguoEra extends Enum,Comparable,Era,TemporalAccessor,TemporalAdjuster,Object { +class MinguoEra -> java.time.chrono.MinguoEra { MinguoEra 
BEFORE_ROC MinguoEra ROC int getValue() @@ -319,7 +319,7 @@ class MinguoEra -> java.time.chrono.MinguoEra extends Enum,Comparable,Era,Tempor MinguoEra[] values() } -class ThaiBuddhistEra -> java.time.chrono.ThaiBuddhistEra extends Enum,Comparable,Era,TemporalAccessor,TemporalAdjuster,Object { +class ThaiBuddhistEra -> java.time.chrono.ThaiBuddhistEra { ThaiBuddhistEra BE ThaiBuddhistEra BEFORE_BE int getValue() diff --git a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.time.format.txt b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.time.format.txt index 20831c4b6b4a7..d5b5c9cc35541 100644 --- a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.time.format.txt +++ b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.time.format.txt @@ -24,7 +24,7 @@ #### Classes -class DateTimeFormatter -> java.time.format.DateTimeFormatter extends Object { +class DateTimeFormatter -> java.time.format.DateTimeFormatter { DateTimeFormatter BASIC_ISO_DATE DateTimeFormatter ISO_DATE DateTimeFormatter ISO_DATE_TIME @@ -70,8 +70,8 @@ class DateTimeFormatter -> java.time.format.DateTimeFormatter extends Object { DateTimeFormatter withZone(ZoneId) } -class DateTimeFormatterBuilder -> java.time.format.DateTimeFormatterBuilder extends Object { - DateTimeFormatterBuilder () +class DateTimeFormatterBuilder -> java.time.format.DateTimeFormatterBuilder { + () DateTimeFormatterBuilder append(DateTimeFormatter) DateTimeFormatterBuilder appendChronologyId() DateTimeFormatterBuilder appendChronologyText(TextStyle) @@ -110,7 +110,7 @@ class DateTimeFormatterBuilder -> java.time.format.DateTimeFormatterBuilder exte DateTimeFormatter toFormatter(Locale) } -class DecimalStyle -> java.time.format.DecimalStyle extends Object { +class DecimalStyle -> java.time.format.DecimalStyle { DecimalStyle STANDARD Set getAvailableLocales() char getDecimalSeparator() @@ -127,7 +127,7 @@ class DecimalStyle -> java.time.format.DecimalStyle extends Object { #### Enums -class FormatStyle -> java.time.format.FormatStyle extends Enum,Comparable,Object { +class FormatStyle -> java.time.format.FormatStyle { FormatStyle FULL FormatStyle LONG FormatStyle MEDIUM @@ -136,7 +136,7 @@ class FormatStyle -> java.time.format.FormatStyle extends Enum,Comparable,Object FormatStyle[] values() } -class ResolverStyle -> java.time.format.ResolverStyle extends Enum,Comparable,Object { +class ResolverStyle -> java.time.format.ResolverStyle { ResolverStyle LENIENT ResolverStyle SMART ResolverStyle STRICT @@ -144,7 +144,7 @@ class ResolverStyle -> java.time.format.ResolverStyle extends Enum,Comparable,Ob ResolverStyle[] values() } -class SignStyle -> java.time.format.SignStyle extends Enum,Comparable,Object { +class SignStyle -> java.time.format.SignStyle { SignStyle ALWAYS SignStyle EXCEEDS_PAD SignStyle NEVER @@ -154,7 +154,7 @@ class SignStyle -> java.time.format.SignStyle extends Enum,Comparable,Object { SignStyle[] values() } -class TextStyle -> java.time.format.TextStyle extends Enum,Comparable,Object { +class TextStyle -> java.time.format.TextStyle { TextStyle FULL TextStyle FULL_STANDALONE TextStyle NARROW @@ -170,8 +170,8 @@ class TextStyle -> java.time.format.TextStyle extends Enum,Comparable,Object { #### Exceptions -class DateTimeParseException -> java.time.format.DateTimeParseException extends DateTimeException,RuntimeException,Exception,Object { - DateTimeParseException (String,CharSequence,int) +class DateTimeParseException -> 
java.time.format.DateTimeParseException { + (String,CharSequence,int) int getErrorIndex() String getParsedString() } diff --git a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.time.temporal.txt b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.time.temporal.txt index 9094dab6ba18a..e3c09bc625521 100644 --- a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.time.temporal.txt +++ b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.time.temporal.txt @@ -24,7 +24,7 @@ #### Interfaces -class Temporal -> java.time.temporal.Temporal extends TemporalAccessor { +class Temporal -> java.time.temporal.Temporal { Temporal minus(long,TemporalUnit) Temporal minus(TemporalAmount) Temporal plus(long,TemporalUnit) @@ -85,7 +85,7 @@ class TemporalUnit -> java.time.temporal.TemporalUnit { #### Classes -class IsoFields -> java.time.temporal.IsoFields extends Object { +class IsoFields -> java.time.temporal.IsoFields { TemporalField DAY_OF_QUARTER TemporalField QUARTER_OF_YEAR TemporalUnit QUARTER_YEARS @@ -94,13 +94,13 @@ class IsoFields -> java.time.temporal.IsoFields extends Object { TemporalField WEEK_OF_WEEK_BASED_YEAR } -class JulianFields -> java.time.temporal.JulianFields extends Object { +class JulianFields -> java.time.temporal.JulianFields { TemporalField JULIAN_DAY TemporalField MODIFIED_JULIAN_DAY TemporalField RATA_DIE } -class TemporalAdjusters -> java.time.temporal.TemporalAdjusters extends Object { +class TemporalAdjusters -> java.time.temporal.TemporalAdjusters { TemporalAdjuster dayOfWeekInMonth(int,DayOfWeek) TemporalAdjuster firstDayOfMonth() TemporalAdjuster firstDayOfNextMonth() @@ -117,7 +117,7 @@ class TemporalAdjusters -> java.time.temporal.TemporalAdjusters extends Object { TemporalAdjuster previousOrSame(DayOfWeek) } -class TemporalQueries -> java.time.temporal.TemporalQueries extends Object { +class TemporalQueries -> java.time.temporal.TemporalQueries { TemporalQuery chronology() TemporalQuery localDate() TemporalQuery localTime() @@ -127,7 +127,7 @@ class TemporalQueries -> java.time.temporal.TemporalQueries extends Object { TemporalQuery zoneId() } -class ValueRange -> java.time.temporal.ValueRange extends Object { +class ValueRange -> java.time.temporal.ValueRange { int checkValidIntValue(long,TemporalField) long checkValidValue(long,TemporalField) long getLargestMinimum() @@ -143,7 +143,7 @@ class ValueRange -> java.time.temporal.ValueRange extends Object { ValueRange of(long,long,long,long) } -class WeekFields -> java.time.temporal.WeekFields extends Object { +class WeekFields -> java.time.temporal.WeekFields { WeekFields ISO WeekFields SUNDAY_START TemporalUnit WEEK_BASED_YEARS @@ -160,7 +160,7 @@ class WeekFields -> java.time.temporal.WeekFields extends Object { #### Enums -class ChronoField -> java.time.temporal.ChronoField extends Enum,Comparable,TemporalField,Object { +class ChronoField -> java.time.temporal.ChronoField { ChronoField ALIGNED_DAY_OF_WEEK_IN_MONTH ChronoField ALIGNED_DAY_OF_WEEK_IN_YEAR ChronoField ALIGNED_WEEK_OF_MONTH @@ -197,7 +197,7 @@ class ChronoField -> java.time.temporal.ChronoField extends Enum,Comparable,Temp ChronoField[] values() } -class ChronoUnit -> java.time.temporal.ChronoUnit extends Enum,Comparable,TemporalUnit,Object { +class ChronoUnit -> java.time.temporal.ChronoUnit { ChronoUnit CENTURIES ChronoUnit DAYS ChronoUnit DECADES @@ -220,6 +220,6 @@ class ChronoUnit -> java.time.temporal.ChronoUnit extends Enum,Comparable,Tempor #### Exceptions 
-class UnsupportedTemporalTypeException -> java.time.temporal.UnsupportedTemporalTypeException extends DateTimeException,RuntimeException,Exception,Object { - UnsupportedTemporalTypeException (String) +class UnsupportedTemporalTypeException -> java.time.temporal.UnsupportedTemporalTypeException { + (String) } diff --git a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.time.txt b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.time.txt index 35f19b0abddea..1c012042b02c7 100644 --- a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.time.txt +++ b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.time.txt @@ -24,7 +24,7 @@ #### Classes -class Clock -> java.time.Clock extends Object { +class Clock -> java.time.Clock { Clock fixed(Instant,ZoneId) ZoneId getZone() Instant instant() @@ -33,7 +33,7 @@ class Clock -> java.time.Clock extends Object { Clock tick(Clock,Duration) } -class Duration -> java.time.Duration extends Comparable,TemporalAmount,Object { +class Duration -> java.time.Duration { Duration ZERO Duration abs() Duration between(Temporal,Temporal) @@ -57,7 +57,7 @@ class Duration -> java.time.Duration extends Comparable,TemporalAmount,Object { Duration of(long,TemporalUnit) Duration ofDays(long) Duration ofHours(long) - Duration ofMillis(long) + Duration ofMillis(long) Duration ofMinutes(long) Duration ofNanos(long) Duration ofSeconds(long) @@ -80,7 +80,7 @@ class Duration -> java.time.Duration extends Comparable,TemporalAmount,Object { Duration withNanos(int) } -class Instant -> java.time.Instant extends Comparable,Temporal,TemporalAccessor,TemporalAdjuster,Object { +class Instant -> java.time.Instant { Instant EPOCH Instant MAX Instant MIN @@ -112,7 +112,7 @@ class Instant -> java.time.Instant extends Comparable,Temporal,TemporalAccessor, Instant with(TemporalField,long) } -class LocalDate -> java.time.LocalDate extends ChronoLocalDate,Temporal,TemporalAccessor,TemporalAdjuster,Comparable,Object { +class LocalDate -> java.time.LocalDate { LocalDate MAX LocalDate MIN LocalDateTime atStartOfDay() @@ -155,7 +155,7 @@ class LocalDate -> java.time.LocalDate extends ChronoLocalDate,Temporal,Temporal LocalDate withYear(int) } -class LocalDateTime -> java.time.LocalDateTime extends ChronoLocalDateTime,Temporal,TemporalAccessor,TemporalAdjuster,Comparable,Object { +class LocalDateTime -> java.time.LocalDateTime { LocalDateTime MIN LocalDateTime MAX OffsetDateTime atOffset(ZoneOffset) @@ -212,7 +212,7 @@ class LocalDateTime -> java.time.LocalDateTime extends ChronoLocalDateTime,Tempo LocalDateTime withYear(int) } -class LocalTime -> java.time.LocalTime extends Temporal,TemporalAccessor,TemporalAdjuster,Comparable,Object { +class LocalTime -> java.time.LocalTime { LocalTime MAX LocalTime MIDNIGHT LocalTime MIN @@ -258,7 +258,7 @@ class LocalTime -> java.time.LocalTime extends Temporal,TemporalAccessor,Tempora LocalTime withSecond(int) } -class MonthDay -> java.time.MonthDay extends TemporalAccessor,TemporalAdjuster,Comparable,Object { +class MonthDay -> java.time.MonthDay { LocalDate atYear(int) int compareTo(MonthDay) String format(DateTimeFormatter) @@ -270,14 +270,14 @@ class MonthDay -> java.time.MonthDay extends TemporalAccessor,TemporalAdjuster,C boolean isBefore(MonthDay) boolean isValidYear(int) MonthDay of(int,int) - MonthDay parse(CharSequence) + MonthDay parse(CharSequence) MonthDay parse(CharSequence,DateTimeFormatter) MonthDay with(Month) MonthDay withDayOfMonth(int) MonthDay 
withMonth(int) } -class OffsetDateTime -> java.time.OffsetDateTime extends Temporal,TemporalAccessor,TemporalAdjuster,Comparable,Object { +class OffsetDateTime -> java.time.OffsetDateTime { OffsetDateTime MAX OffsetDateTime MIN ZonedDateTime atZoneSameInstant(ZoneId) @@ -348,7 +348,7 @@ class OffsetDateTime -> java.time.OffsetDateTime extends Temporal,TemporalAccess OffsetDateTime withOffsetSameInstant(ZoneOffset) } -class OffsetTime -> java.time.OffsetTime extends Temporal,TemporalAccessor,TemporalAdjuster,Comparable,Object { +class OffsetTime -> java.time.OffsetTime { OffsetTime MAX OffsetTime MIN int compareTo(OffsetTime) @@ -391,7 +391,7 @@ class OffsetTime -> java.time.OffsetTime extends Temporal,TemporalAccessor,Tempo OffsetTime withSecond(int) } -class Period -> java.time.Period extends ChronoPeriod,TemporalAmount,Object { +class Period -> java.time.Period { Period ZERO Period between(LocalDate,LocalDate) Period from(TemporalAmount) @@ -422,7 +422,7 @@ class Period -> java.time.Period extends ChronoPeriod,TemporalAmount,Object { Period withYears(int) } -class Year -> java.time.Year extends Temporal,TemporalAccessor,TemporalAdjuster,Comparable,Object { +class Year -> java.time.Year { int MAX_VALUE int MIN_VALUE LocalDate atDay(int) @@ -450,7 +450,7 @@ class Year -> java.time.Year extends Temporal,TemporalAccessor,TemporalAdjuster, Year with(TemporalField,long) } -class YearMonth -> java.time.YearMonth extends Temporal,TemporalAccessor,TemporalAdjuster,Comparable,Object { +class YearMonth -> java.time.YearMonth { LocalDate atDay(int) LocalDate atEndOfMonth() int compareTo(YearMonth) @@ -482,7 +482,7 @@ class YearMonth -> java.time.YearMonth extends Temporal,TemporalAccessor,Tempora YearMonth withMonth(int) } -class ZonedDateTime -> java.time.ZonedDateTime extends ChronoZonedDateTime,Temporal,TemporalAccessor,Comparable,Object { +class ZonedDateTime -> java.time.ZonedDateTime { int getDayOfMonth() DayOfWeek getDayOfWeek() int getDayOfYear() @@ -544,7 +544,7 @@ class ZonedDateTime -> java.time.ZonedDateTime extends ChronoZonedDateTime,Tempo ZonedDateTime withZoneSameInstant(ZoneId) } -class ZoneId -> java.time.ZoneId extends Object { +class ZoneId -> java.time.ZoneId { Map SHORT_IDS Set getAvailableZoneIds() ZoneId of(String) @@ -558,7 +558,7 @@ class ZoneId -> java.time.ZoneId extends Object { ZoneRules getRules() } -class ZoneOffset -> java.time.ZoneOffset extends ZoneId,Object { +class ZoneOffset -> java.time.ZoneOffset { ZoneOffset MAX ZoneOffset MIN ZoneOffset UTC @@ -573,7 +573,7 @@ class ZoneOffset -> java.time.ZoneOffset extends ZoneId,Object { #### Enums -class DayOfWeek -> java.time.DayOfWeek extends Enum,TemporalAccessor,TemporalAdjuster,Comparable,Object { +class DayOfWeek -> java.time.DayOfWeek { DayOfWeek FRIDAY DayOfWeek MONDAY DayOfWeek SATURDAY @@ -591,7 +591,7 @@ class DayOfWeek -> java.time.DayOfWeek extends Enum,TemporalAccessor,TemporalAdj DayOfWeek[] values() } -class Month -> java.time.Month extends Enum,TemporalAccessor,TemporalAdjuster,Comparable,Object { +class Month -> java.time.Month { Month APRIL Month AUGUST Month DECEMBER @@ -621,7 +621,7 @@ class Month -> java.time.Month extends Enum,TemporalAccessor,TemporalAdjuster,Co #### Exceptions -class DateTimeException -> java.time.DateTimeException extends RuntimeException,Exception,Object { - DateTimeException (String) +class DateTimeException -> java.time.DateTimeException { + (String) } diff --git a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.time.zone.txt 
b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.time.zone.txt index d9d1cce5c104b..dfb6fc7a8076f 100644 --- a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.time.zone.txt +++ b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.time.zone.txt @@ -24,7 +24,7 @@ #### Classes -class ZoneOffsetTransition -> java.time.zone.ZoneOffsetTransition extends Comparable,Object { +class ZoneOffsetTransition -> java.time.zone.ZoneOffsetTransition { int compareTo(ZoneOffsetTransition) LocalDateTime getDateTimeAfter() LocalDateTime getDateTimeBefore() @@ -39,7 +39,7 @@ class ZoneOffsetTransition -> java.time.zone.ZoneOffsetTransition extends Compar long toEpochSecond() } -class ZoneOffsetTransitionRule -> java.time.zone.ZoneOffsetTransitionRule extends Object { +class ZoneOffsetTransitionRule -> java.time.zone.ZoneOffsetTransitionRule { ZoneOffsetTransition createTransition(int) int getDayOfMonthIndicator() DayOfWeek getDayOfWeek() @@ -53,7 +53,7 @@ class ZoneOffsetTransitionRule -> java.time.zone.ZoneOffsetTransitionRule extend ZoneOffsetTransitionRule of(Month,int,DayOfWeek,LocalTime,boolean,ZoneOffsetTransitionRule.TimeDefinition,ZoneOffset,ZoneOffset,ZoneOffset) } -class ZoneRules -> java.time.zone.ZoneRules extends Object { +class ZoneRules -> java.time.zone.ZoneRules { Duration getDaylightSavings(Instant) ZoneOffset getOffset(Instant) ZoneOffset getStandardOffset(Instant) @@ -70,7 +70,7 @@ class ZoneRules -> java.time.zone.ZoneRules extends Object { ZoneOffsetTransition previousTransition(Instant) } -class ZoneRulesProvider -> java.time.zone.ZoneRulesProvider extends Object { +class ZoneRulesProvider -> java.time.zone.ZoneRulesProvider { Set getAvailableZoneIds() ZoneRules getRules(String,boolean) NavigableMap getVersions(String) @@ -78,7 +78,7 @@ class ZoneRulesProvider -> java.time.zone.ZoneRulesProvider extends Object { #### Enums -class ZoneOffsetTransitionRule.TimeDefinition -> java.time.zone.ZoneOffsetTransitionRule$TimeDefinition extends Enum,Comparable,Object { +class ZoneOffsetTransitionRule.TimeDefinition -> java.time.zone.ZoneOffsetTransitionRule$TimeDefinition { ZoneOffsetTransitionRule.TimeDefinition STANDARD ZoneOffsetTransitionRule.TimeDefinition UTC ZoneOffsetTransitionRule.TimeDefinition WALL @@ -89,6 +89,6 @@ class ZoneOffsetTransitionRule.TimeDefinition -> java.time.zone.ZoneOffsetTransi #### Exceptions -class ZoneRulesException -> java.time.zone.ZoneRulesException extends DateTimeException,RuntimeException,Exception,Object { - ZoneRulesException (String) +class ZoneRulesException -> java.time.zone.ZoneRulesException { + (String) } diff --git a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.util.function.txt b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.util.function.txt index 969a8d6fb46c4..baab868ec0e8a 100644 --- a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.util.function.txt +++ b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.util.function.txt @@ -21,7 +21,6 @@ # Painless definition file. This defines the hierarchy of classes, # what methods and fields they have, etc. 
# - #### Interfaces class BiConsumer -> java.util.function.BiConsumer { @@ -34,7 +33,7 @@ class BiFunction -> java.util.function.BiFunction { def apply(def,def) } -class BinaryOperator -> java.util.function.BinaryOperator extends BiFunction { +class BinaryOperator -> java.util.function.BinaryOperator { BinaryOperator maxBy(Comparator) BinaryOperator minBy(Comparator) } @@ -227,6 +226,6 @@ class ToLongFunction -> java.util.function.ToLongFunction { long applyAsLong(def) } -class UnaryOperator -> java.util.function.UnaryOperator extends Function { +class UnaryOperator -> java.util.function.UnaryOperator { UnaryOperator identity() } diff --git a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.util.regex.txt b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.util.regex.txt index 4bf1993528bdd..9ea87dd4197ab 100644 --- a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.util.regex.txt +++ b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.util.regex.txt @@ -22,7 +22,7 @@ # what methods and fields they have, etc. # -class Pattern -> java.util.regex.Pattern extends Object { +class Pattern -> java.util.regex.Pattern { # Pattern compile(String) Intentionally not included. We don't want dynamic patterns because they allow regexes to be generated per time # the script is run which is super slow. LRegex generates code that calls this method but it skips these checks. Predicate asPredicate() @@ -35,14 +35,14 @@ class Pattern -> java.util.regex.Pattern extends Object { Stream splitAsStream(CharSequence) } -class Matcher -> java.util.regex.Matcher extends Object { +class Matcher -> java.util.regex.Matcher { int end() int end(int) boolean find() boolean find(int) String group() String group(int) - String org.elasticsearch.painless.api.Augmentation.namedGroup(String) + String org.elasticsearch.painless.api.Augmentation namedGroup(String) int groupCount() boolean hasAnchoringBounds() boolean hasTransparentBounds() diff --git a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.util.stream.txt b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.util.stream.txt index d24cf8c04246e..d531fbb558f3c 100644 --- a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.util.stream.txt +++ b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.util.stream.txt @@ -43,7 +43,7 @@ class Collector -> java.util.stream.Collector { Supplier supplier() } -class DoubleStream -> java.util.stream.DoubleStream extends BaseStream { +class DoubleStream -> java.util.stream.DoubleStream { boolean allMatch(DoublePredicate) boolean anyMatch(DoublePredicate) OptionalDouble average() @@ -82,12 +82,12 @@ class DoubleStream -> java.util.stream.DoubleStream extends BaseStream { double[] toArray() } -class DoubleStream.Builder -> java.util.stream.DoubleStream$Builder extends DoubleConsumer { +class DoubleStream.Builder -> java.util.stream.DoubleStream$Builder { DoubleStream.Builder add(double) DoubleStream build() } -class IntStream -> java.util.stream.IntStream extends BaseStream { +class IntStream -> java.util.stream.IntStream { boolean allMatch(IntPredicate) boolean anyMatch(IntPredicate) DoubleStream asDoubleStream() @@ -130,12 +130,12 @@ class IntStream -> java.util.stream.IntStream extends BaseStream { int[] toArray() } -class IntStream.Builder -> java.util.stream.IntStream$Builder extends IntConsumer { +class IntStream.Builder -> 
java.util.stream.IntStream$Builder { IntStream.Builder add(int) IntStream build() } -class LongStream -> java.util.stream.LongStream extends BaseStream { +class LongStream -> java.util.stream.LongStream { boolean allMatch(LongPredicate) boolean anyMatch(LongPredicate) DoubleStream asDoubleStream() @@ -177,12 +177,12 @@ class LongStream -> java.util.stream.LongStream extends BaseStream { long[] toArray() } -class LongStream.Builder -> java.util.stream.LongStream$Builder extends LongConsumer { +class LongStream.Builder -> java.util.stream.LongStream$Builder { LongStream.Builder add(long) LongStream build() } -class Stream -> java.util.stream.Stream extends BaseStream { +class Stream -> java.util.stream.Stream { boolean allMatch(Predicate) boolean anyMatch(Predicate) Stream.Builder builder() @@ -221,14 +221,14 @@ class Stream -> java.util.stream.Stream extends BaseStream { def[] toArray(IntFunction) } -class Stream.Builder -> java.util.stream.Stream$Builder extends Consumer { +class Stream.Builder -> java.util.stream.Stream$Builder { Stream.Builder add(def) Stream build() } #### Classes -class Collectors -> java.util.stream.Collectors extends Object { +class Collectors -> java.util.stream.Collectors { Collector averagingDouble(ToDoubleFunction) Collector averagingInt(ToIntFunction) Collector averagingLong(ToLongFunction) @@ -264,7 +264,7 @@ class Collectors -> java.util.stream.Collectors extends Object { #### Enums -class Collector.Characteristics -> java.util.stream.Collector$Characteristics extends Enum,Object { +class Collector.Characteristics -> java.util.stream.Collector$Characteristics { Collector.Characteristics CONCURRENT Collector.Characteristics IDENTITY_FINISH Collector.Characteristics UNORDERED diff --git a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.util.txt b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.util.txt index ba50a30042cd9..164798e68d325 100644 --- a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.util.txt +++ b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/java.util.txt @@ -24,7 +24,7 @@ #### Interfaces -class Collection -> java.util.Collection extends Iterable { +class Collection -> java.util.Collection { boolean add(def) boolean addAll(Collection) void clear() @@ -41,13 +41,13 @@ class Collection -> java.util.Collection extends Iterable { def[] toArray(def[]) # some adaptations of groovy methods - List org.elasticsearch.painless.api.Augmentation.collect(Function) - def org.elasticsearch.painless.api.Augmentation.collect(Collection,Function) - def org.elasticsearch.painless.api.Augmentation.find(Predicate) - List org.elasticsearch.painless.api.Augmentation.findAll(Predicate) - def org.elasticsearch.painless.api.Augmentation.findResult(Function) - def org.elasticsearch.painless.api.Augmentation.findResult(def,Function) - List org.elasticsearch.painless.api.Augmentation.split(Predicate) + List org.elasticsearch.painless.api.Augmentation collect(Function) + def org.elasticsearch.painless.api.Augmentation collect(Collection,Function) + def org.elasticsearch.painless.api.Augmentation find(Predicate) + List org.elasticsearch.painless.api.Augmentation findAll(Predicate) + def org.elasticsearch.painless.api.Augmentation findResult(Function) + def org.elasticsearch.painless.api.Augmentation findResult(def,Function) + List org.elasticsearch.painless.api.Augmentation split(Predicate) } class Comparator -> java.util.Comparator { @@ -70,7 +70,7 @@ class Comparator -> 
java.util.Comparator { Comparator thenComparingLong(ToLongFunction) } -class Deque -> java.util.Deque extends Queue,Collection,Iterable { +class Deque -> java.util.Deque { void addFirst(def) void addLast(def) Iterator descendingIterator() @@ -110,7 +110,7 @@ class Iterator -> java.util.Iterator { void remove() } -class List -> java.util.List extends Collection,Iterable { +class List -> java.util.List { void add(int,def) boolean addAll(int,Collection) boolean equals(Object) @@ -123,12 +123,12 @@ class List -> java.util.List extends Collection,Iterable { def remove(int) void replaceAll(UnaryOperator) def set(int,def) - int org.elasticsearch.painless.api.Augmentation.getLength() + int org.elasticsearch.painless.api.Augmentation getLength() void sort(Comparator) List subList(int,int) } -class ListIterator -> java.util.ListIterator extends Iterator { +class ListIterator -> java.util.ListIterator { void add(def) boolean hasPrevious() int nextIndex() @@ -163,17 +163,17 @@ class Map -> java.util.Map { Collection values() # some adaptations of groovy methods - List org.elasticsearch.painless.api.Augmentation.collect(BiFunction) - def org.elasticsearch.painless.api.Augmentation.collect(Collection,BiFunction) - int org.elasticsearch.painless.api.Augmentation.count(BiPredicate) - def org.elasticsearch.painless.api.Augmentation.each(BiConsumer) - boolean org.elasticsearch.painless.api.Augmentation.every(BiPredicate) - Map.Entry org.elasticsearch.painless.api.Augmentation.find(BiPredicate) - Map org.elasticsearch.painless.api.Augmentation.findAll(BiPredicate) - def org.elasticsearch.painless.api.Augmentation.findResult(BiFunction) - def org.elasticsearch.painless.api.Augmentation.findResult(def,BiFunction) - List org.elasticsearch.painless.api.Augmentation.findResults(BiFunction) - Map org.elasticsearch.painless.api.Augmentation.groupBy(BiFunction) + List org.elasticsearch.painless.api.Augmentation collect(BiFunction) + def org.elasticsearch.painless.api.Augmentation collect(Collection,BiFunction) + int org.elasticsearch.painless.api.Augmentation count(BiPredicate) + def org.elasticsearch.painless.api.Augmentation each(BiConsumer) + boolean org.elasticsearch.painless.api.Augmentation every(BiPredicate) + Map.Entry org.elasticsearch.painless.api.Augmentation find(BiPredicate) + Map org.elasticsearch.painless.api.Augmentation findAll(BiPredicate) + def org.elasticsearch.painless.api.Augmentation findResult(BiFunction) + def org.elasticsearch.painless.api.Augmentation findResult(def,BiFunction) + List org.elasticsearch.painless.api.Augmentation findResults(BiFunction) + Map org.elasticsearch.painless.api.Augmentation groupBy(BiFunction) } class Map.Entry -> java.util.Map$Entry { @@ -188,7 +188,7 @@ class Map.Entry -> java.util.Map$Entry { def setValue(def) } -class NavigableMap -> java.util.NavigableMap extends SortedMap,Map { +class NavigableMap -> java.util.NavigableMap { Map.Entry ceilingEntry(def) def ceilingKey(def) NavigableSet descendingKeySet() @@ -208,7 +208,7 @@ class NavigableMap -> java.util.NavigableMap extends SortedMap,Map { NavigableMap tailMap(def,boolean) } -class NavigableSet -> java.util.NavigableSet extends SortedSet,Set,Collection,Iterable { +class NavigableSet -> java.util.NavigableSet { def ceiling(def) Iterator descendingIterator() NavigableSet descendingSet() @@ -226,21 +226,21 @@ class Observer -> java.util.Observer { void update(Observable,Object) } -class PrimitiveIterator -> java.util.PrimitiveIterator extends Iterator { +class PrimitiveIterator -> java.util.PrimitiveIterator 
{ void forEachRemaining(def) } -class PrimitiveIterator.OfDouble -> java.util.PrimitiveIterator$OfDouble extends PrimitiveIterator,Iterator { +class PrimitiveIterator.OfDouble -> java.util.PrimitiveIterator$OfDouble { Double next() double nextDouble() } -class PrimitiveIterator.OfInt -> java.util.PrimitiveIterator$OfInt extends PrimitiveIterator,Iterator { +class PrimitiveIterator.OfInt -> java.util.PrimitiveIterator$OfInt { Integer next() int nextInt() } -class PrimitiveIterator.OfLong -> java.util.PrimitiveIterator$OfLong extends PrimitiveIterator,Iterator { +class PrimitiveIterator.OfLong -> java.util.PrimitiveIterator$OfLong { Long next() long nextLong() } @@ -264,25 +264,25 @@ class Spliterator -> java.util.Spliterator { Spliterator trySplit() } -class Spliterator.OfPrimitive -> java.util.Spliterator$OfPrimitive extends Spliterator { +class Spliterator.OfPrimitive -> java.util.Spliterator$OfPrimitive { void forEachRemaining(def) boolean tryAdvance(def) Spliterator.OfPrimitive trySplit() } -class Spliterator.OfDouble -> java.util.Spliterator$OfDouble extends Spliterator.OfPrimitive,Spliterator { +class Spliterator.OfDouble -> java.util.Spliterator$OfDouble { Spliterator.OfDouble trySplit() } -class Spliterator.OfInt -> java.util.Spliterator$OfInt extends Spliterator.OfPrimitive,Spliterator { +class Spliterator.OfInt -> java.util.Spliterator$OfInt { Spliterator.OfInt trySplit() } -class Spliterator.OfLong -> java.util.Spliterator$OfLong extends Spliterator.OfPrimitive,Spliterator { +class Spliterator.OfLong -> java.util.Spliterator$OfLong { Spliterator.OfLong trySplit() } -class Queue -> java.util.Queue extends Collection,Iterable { +class Queue -> java.util.Queue { def element() boolean offer(def) def peek() @@ -293,13 +293,13 @@ class Queue -> java.util.Queue extends Collection,Iterable { class RandomAccess -> java.util.RandomAccess { } -class Set -> java.util.Set extends Collection,Iterable { +class Set -> java.util.Set { boolean equals(Object) int hashCode() boolean remove(def) } -class SortedMap -> java.util.SortedMap extends Map { +class SortedMap -> java.util.SortedMap { Comparator comparator() def firstKey() SortedMap headMap(def) @@ -308,7 +308,7 @@ class SortedMap -> java.util.SortedMap extends Map { SortedMap tailMap(def) } -class SortedSet -> java.util.SortedSet extends Set,Collection,Iterable { +class SortedSet -> java.util.SortedSet { Comparator comparator() def first() SortedSet headSet(def) @@ -319,55 +319,55 @@ class SortedSet -> java.util.SortedSet extends Set,Collection,Iterable { #### Classes -class AbstractCollection -> java.util.AbstractCollection extends Collection,Iterable,Object { +class AbstractCollection -> java.util.AbstractCollection { } -class AbstractList -> java.util.AbstractList extends AbstractCollection,List,Collection,Iterable,Object { +class AbstractList -> java.util.AbstractList { } -class AbstractMap -> java.util.AbstractMap extends Map,Object { +class AbstractMap -> java.util.AbstractMap { } -class AbstractMap.SimpleEntry -> java.util.AbstractMap$SimpleEntry extends Map.Entry,Object { - AbstractMap.SimpleEntry (def,def) - AbstractMap.SimpleEntry (Map.Entry) +class AbstractMap.SimpleEntry -> java.util.AbstractMap$SimpleEntry { + (def,def) + (Map.Entry) } -class AbstractMap.SimpleImmutableEntry -> java.util.AbstractMap$SimpleImmutableEntry extends Map.Entry,Object { - AbstractMap.SimpleImmutableEntry (def,def) - AbstractMap.SimpleImmutableEntry (Map.Entry) +class AbstractMap.SimpleImmutableEntry -> java.util.AbstractMap$SimpleImmutableEntry { + 
(def,def) + (Map.Entry) } -class AbstractQueue -> java.util.AbstractQueue extends AbstractCollection,Queue,Collection,Iterable,Object { +class AbstractQueue -> java.util.AbstractQueue { } -class AbstractSequentialList -> java.util.AbstractSequentialList extends AbstractList,AbstractCollection,List,Collection,Iterable,Object { +class AbstractSequentialList -> java.util.AbstractSequentialList { } -class AbstractSet -> java.util.AbstractSet extends AbstractCollection,Set,Collection,Iterable,Object { +class AbstractSet -> java.util.AbstractSet { } -class ArrayDeque -> java.util.ArrayDeque extends AbstractCollection,Deque,Queue,Collection,Iterable,Object { - ArrayDeque () - ArrayDeque (Collection) +class ArrayDeque -> java.util.ArrayDeque { + () + (Collection) ArrayDeque clone() } -class ArrayList -> java.util.ArrayList extends AbstractList,AbstractCollection,List,RandomAccess,Collection,Iterable,Object { - ArrayList () - ArrayList (Collection) +class ArrayList -> java.util.ArrayList { + () + (Collection) def clone() void trimToSize() } -class Arrays -> java.util.Arrays extends Object { +class Arrays -> java.util.Arrays { List asList(Object[]) boolean deepEquals(Object[],Object[]) int deepHashCode(Object[]) String deepToString(Object[]) } -class Base64 -> java.util.Base64 extends Object { +class Base64 -> java.util.Base64 { Base64.Decoder getDecoder() Base64.Encoder getEncoder() Base64.Decoder getMimeDecoder() @@ -377,20 +377,20 @@ class Base64 -> java.util.Base64 extends Object { Base64.Encoder getUrlEncoder() } -class Base64.Decoder -> java.util.Base64$Decoder extends Object { +class Base64.Decoder -> java.util.Base64$Decoder { int decode(byte[],byte[]) byte[] decode(String) } -class Base64.Encoder -> java.util.Base64$Encoder extends Object { +class Base64.Encoder -> java.util.Base64$Encoder { int encode(byte[],byte[]) String encodeToString(byte[]) Base64.Encoder withoutPadding() } -class BitSet -> java.util.BitSet extends Object { - BitSet () - BitSet (int) +class BitSet -> java.util.BitSet { + () + (int) void and(BitSet) void andNot(BitSet) int cardinality() @@ -418,7 +418,7 @@ class BitSet -> java.util.BitSet extends Object { void xor(BitSet) } -class Calendar -> java.util.Calendar extends Comparable,Object { +class Calendar -> java.util.Calendar { int ALL_STYLES int AM int AM_PM @@ -516,8 +516,8 @@ class Calendar -> java.util.Calendar extends Comparable,Object { Instant toInstant() } -class Calendar.Builder -> java.util.Calendar$Builder extends Object { - Calendar.Builder () +class Calendar.Builder -> java.util.Calendar$Builder { + () Calendar build() Calendar.Builder set(int,int) Calendar.Builder setCalendarType(String) @@ -533,7 +533,7 @@ class Calendar.Builder -> java.util.Calendar$Builder extends Object { Calendar.Builder setWeekDefinition(int,int) } -class Collections -> java.util.Collections extends Object { +class Collections -> java.util.Collections { List EMPTY_LIST Map EMPTY_MAP Set EMPTY_SET @@ -588,7 +588,7 @@ class Collections -> java.util.Collections extends Object { SortedSet unmodifiableSortedSet(SortedSet) } -class Currency -> java.util.Currency extends Object { +class Currency -> java.util.Currency { Set getAvailableCurrencies() String getCurrencyCode() int getDefaultFractionDigits() @@ -600,9 +600,9 @@ class Currency -> java.util.Currency extends Object { String getSymbol(Locale) } -class Date -> java.util.Date extends Comparable,Object { - Date () - Date (long) +class Date -> java.util.Date { + () + (long) boolean after(Date) boolean before(Date) def clone() @@ -612,7 
+612,7 @@ class Date -> java.util.Date extends Comparable,Object { void setTime(long) } -class Dictionary -> java.util.Dictionary extends Object { +class Dictionary -> java.util.Dictionary { Enumeration elements() def get(def) boolean isEmpty() @@ -622,8 +622,8 @@ class Dictionary -> java.util.Dictionary extends Object { int size() } -class DoubleSummaryStatistics -> java.util.DoubleSummaryStatistics extends DoubleConsumer,Object { - DoubleSummaryStatistics () +class DoubleSummaryStatistics -> java.util.DoubleSummaryStatistics { + () void combine(DoubleSummaryStatistics) double getAverage() long getCount() @@ -632,40 +632,40 @@ class DoubleSummaryStatistics -> java.util.DoubleSummaryStatistics extends Doubl double getSum() } -class EventListenerProxy -> java.util.EventListenerProxy extends EventListener,Object { +class EventListenerProxy -> java.util.EventListenerProxy { EventListener getListener() } -class EventObject -> java.util.EventObject extends Object { - EventObject (Object) +class EventObject -> java.util.EventObject { + (Object) Object getSource() } -class FormattableFlags -> java.util.FormattableFlags extends Object { +class FormattableFlags -> java.util.FormattableFlags { int ALTERNATE int LEFT_JUSTIFY int UPPERCASE } -class Formatter -> java.util.Formatter extends Object { - Formatter () - Formatter (Appendable) - Formatter (Appendable,Locale) +class Formatter -> java.util.Formatter { + () + (Appendable) + (Appendable,Locale) Formatter format(Locale,String,def[]) Formatter format(String,def[]) Locale locale() Appendable out() } -class GregorianCalendar -> java.util.GregorianCalendar extends Calendar,Comparable,Object { +class GregorianCalendar -> java.util.GregorianCalendar { int AD int BC - GregorianCalendar () - GregorianCalendar (int,int,int) - GregorianCalendar (int,int,int,int,int) - GregorianCalendar (int,int,int,int,int,int) - GregorianCalendar (TimeZone) - GregorianCalendar (TimeZone,Locale) + () + (int,int,int) + (int,int,int,int,int) + (int,int,int,int,int,int) + (TimeZone) + (TimeZone,Locale) GregorianCalendar from(ZonedDateTime) Date getGregorianChange() boolean isLeapYear(int) @@ -673,32 +673,32 @@ class GregorianCalendar -> java.util.GregorianCalendar extends Calendar,Comparab ZonedDateTime toZonedDateTime() } -class HashMap -> java.util.HashMap extends AbstractMap,Map,Object { - HashMap () - HashMap (Map) +class HashMap -> java.util.HashMap { + () + (Map) def clone() } -class HashSet -> java.util.HashSet extends AbstractSet,Set,Collection,Iterable,Object { - HashSet () - HashSet (Collection) +class HashSet -> java.util.HashSet { + () + (Collection) def clone() } -class Hashtable -> java.util.Hashtable extends Dictionary,Map,Object { - Hashtable () - Hashtable (Map) +class Hashtable -> java.util.Hashtable { + () + (Map) def clone() } -class IdentityHashMap -> java.util.IdentityHashMap extends AbstractMap,Map,Object { - IdentityHashMap () - IdentityHashMap (Map) +class IdentityHashMap -> java.util.IdentityHashMap { + () + (Map) def clone() } -class IntSummaryStatistics -> java.util.IntSummaryStatistics extends IntConsumer,Object { - IntSummaryStatistics () +class IntSummaryStatistics -> java.util.IntSummaryStatistics { + () void combine(IntSummaryStatistics) double getAverage() long getCount() @@ -707,23 +707,23 @@ class IntSummaryStatistics -> java.util.IntSummaryStatistics extends IntConsumer long getSum() } -class LinkedHashMap -> java.util.LinkedHashMap extends HashMap,AbstractMap,Map,Object { - LinkedHashMap () - LinkedHashMap (Map) +class LinkedHashMap -> 
java.util.LinkedHashMap { + () + (Map) } -class LinkedHashSet -> java.util.LinkedHashSet extends HashSet,AbstractSet,Set,AbstractCollection,Collection,Iterable,Object { - LinkedHashSet () - LinkedHashSet (Collection) +class LinkedHashSet -> java.util.LinkedHashSet { + () + (Collection) } -class LinkedList -> java.util.LinkedList extends AbstractSequentialList,AbstractList,List,Deque,Queue,AbstractCollection,Collection,Iterable,Object { - LinkedList () - LinkedList (Collection) +class LinkedList -> java.util.LinkedList { + () + (Collection) def clone() } -class Locale -> java.util.Locale extends Object { +class Locale -> java.util.Locale { Locale CANADA Locale CANADA_FRENCH Locale CHINA @@ -748,9 +748,9 @@ class Locale -> java.util.Locale extends Object { Locale UK char UNICODE_LOCALE_EXTENSION Locale US - Locale (String) - Locale (String,String) - Locale (String,String,String) + (String) + (String,String) + (String,String,String) def clone() List filter(List,Collection) List filterTags(List,Collection) @@ -788,8 +788,8 @@ class Locale -> java.util.Locale extends Object { String toLanguageTag() } -class Locale.Builder -> java.util.Locale$Builder extends Object { - Locale.Builder () +class Locale.Builder -> java.util.Locale$Builder { + () Locale.Builder addUnicodeLocaleAttribute(String) Locale build() Locale.Builder clear() @@ -805,11 +805,11 @@ class Locale.Builder -> java.util.Locale$Builder extends Object { Locale.Builder setVariant(String) } -class Locale.LanguageRange -> java.util.Locale$LanguageRange extends Object { +class Locale.LanguageRange -> java.util.Locale$LanguageRange { double MAX_WEIGHT double MIN_WEIGHT - Locale.LanguageRange (String) - Locale.LanguageRange (String,double) + (String) + (String,double) String getRange() double getWeight() List mapEquivalents(List,Map) @@ -817,8 +817,8 @@ class Locale.LanguageRange -> java.util.Locale$LanguageRange extends Object { List parse(String,Map) } -class LongSummaryStatistics -> java.util.LongSummaryStatistics extends LongConsumer,Object { - LongSummaryStatistics () +class LongSummaryStatistics -> java.util.LongSummaryStatistics { + () void combine(LongSummaryStatistics) double getAverage() long getCount() @@ -827,7 +827,7 @@ class LongSummaryStatistics -> java.util.LongSummaryStatistics extends LongConsu long getSum() } -class Objects -> java.util.Objects extends Object { +class Objects -> java.util.Objects { int compare(def,def,Comparator) boolean deepEquals(Object,Object) boolean equals(Object,Object) @@ -841,8 +841,8 @@ class Objects -> java.util.Objects extends Object { String toString(Object,String) } -class Observable -> java.util.Observable extends Object { - Observable () +class Observable -> java.util.Observable { + () void addObserver(Observer) int countObservers() void deleteObserver(Observer) @@ -852,7 +852,7 @@ class Observable -> java.util.Observable extends Object { void notifyObservers(Object) } -class Optional -> java.util.Optional extends Object { +class Optional -> java.util.Optional { Optional empty() Optional filter(Predicate) Optional flatMap(Function) @@ -867,7 +867,7 @@ class Optional -> java.util.Optional extends Object { def orElseThrow(Supplier) } -class OptionalDouble -> java.util.OptionalDouble extends Object { +class OptionalDouble -> java.util.OptionalDouble { OptionalDouble empty() double getAsDouble() void ifPresent(DoubleConsumer) @@ -878,7 +878,7 @@ class OptionalDouble -> java.util.OptionalDouble extends Object { double orElseThrow(Supplier) } -class OptionalInt -> java.util.OptionalInt extends 
Object { +class OptionalInt -> java.util.OptionalInt { OptionalInt empty() int getAsInt() void ifPresent(IntConsumer) @@ -889,7 +889,7 @@ class OptionalInt -> java.util.OptionalInt extends Object { int orElseThrow(Supplier) } -class OptionalLong -> java.util.OptionalLong extends Object { +class OptionalLong -> java.util.OptionalLong { OptionalLong empty() long getAsLong() void ifPresent(LongConsumer) @@ -900,14 +900,14 @@ class OptionalLong -> java.util.OptionalLong extends Object { long orElseThrow(Supplier) } -class PriorityQueue -> java.util.PriorityQueue extends AbstractQueue,Queue,AbstractCollection,Collection,Iterable,Object { - PriorityQueue () - PriorityQueue (Comparator) +class PriorityQueue -> java.util.PriorityQueue { + () + (Comparator) } -class Random -> java.util.Random extends Object { - Random () - Random (long) +class Random -> java.util.Random { + () + (long) DoubleStream doubles(long) DoubleStream doubles(long,double,double) IntStream ints(long) @@ -925,14 +925,14 @@ class Random -> java.util.Random extends Object { void setSeed(long) } -class SimpleTimeZone -> java.util.SimpleTimeZone extends TimeZone,Object { +class SimpleTimeZone -> java.util.SimpleTimeZone { int STANDARD_TIME int UTC_TIME int WALL_TIME - SimpleTimeZone (int,String) - SimpleTimeZone (int,String,int,int,int,int,int,int,int,int) - SimpleTimeZone (int,String,int,int,int,int,int,int,int,int,int) - SimpleTimeZone (int,String,int,int,int,int,int,int,int,int,int,int,int) + (int,String) + (int,String,int,int,int,int,int,int,int,int) + (int,String,int,int,int,int,int,int,int,int,int) + (int,String,int,int,int,int,int,int,int,int,int,int,int) int getDSTSavings() void setDSTSavings(int) void setEndRule(int,int,int) @@ -944,7 +944,7 @@ class SimpleTimeZone -> java.util.SimpleTimeZone extends TimeZone,Object { void setStartYear(int) } -class Spliterators -> java.util.Spliterators extends Object { +class Spliterators -> java.util.Spliterators { Spliterator.OfDouble emptyDoubleSpliterator() Spliterator.OfInt emptyIntSpliterator() Spliterator.OfLong emptyLongSpliterator() @@ -955,8 +955,8 @@ class Spliterators -> java.util.Spliterators extends Object { Spliterator spliteratorUnknownSize(Iterator,int) } -class Stack -> java.util.Stack extends Vector,AbstractList,List,AbstractCollection,Collection,Iterable,RandomAccess,Object { - Stack () +class Stack -> java.util.Stack { + () def push(def) def pop() def peek() @@ -964,26 +964,26 @@ class Stack -> java.util.Stack extends Vector,AbstractList,List,AbstractCollecti int search(def) } -class StringJoiner -> java.util.StringJoiner extends Object { - StringJoiner (CharSequence) - StringJoiner (CharSequence,CharSequence,CharSequence) +class StringJoiner -> java.util.StringJoiner { + (CharSequence) + (CharSequence,CharSequence,CharSequence) StringJoiner add(CharSequence) int length() StringJoiner merge(StringJoiner) StringJoiner setEmptyValue(CharSequence) } -class StringTokenizer -> java.util.StringTokenizer extends Enumeration,Object { - StringTokenizer (String) - StringTokenizer (String,String) - StringTokenizer (String,String,boolean) +class StringTokenizer -> java.util.StringTokenizer { + (String) + (String,String) + (String,String,boolean) int countTokens() boolean hasMoreTokens() String nextToken() String nextToken(String) } -class TimeZone -> java.util.TimeZone extends Object { +class TimeZone -> java.util.TimeZone { int LONG int SHORT def clone() @@ -1008,20 +1008,20 @@ class TimeZone -> java.util.TimeZone extends Object { boolean useDaylightTime() } -class TreeMap -> 
java.util.TreeMap extends AbstractMap,NavigableMap,SortedMap,Map,Object { - TreeMap () - TreeMap (Comparator) +class TreeMap -> java.util.TreeMap { + () + (Comparator) def clone() } -class TreeSet -> java.util.TreeSet extends AbstractSet,NavigableSet,SortedSet,Set,AbstractCollection,Collection,Iterable,Object { - TreeSet () - TreeSet (Comparator) +class TreeSet -> java.util.TreeSet { + () + (Comparator) def clone() } -class UUID -> java.util.UUID extends Comparable,Object { - UUID (long,long) +class UUID -> java.util.UUID { + (long,long) int compareTo(UUID) int clockSequence() UUID fromString(String) @@ -1034,9 +1034,9 @@ class UUID -> java.util.UUID extends Comparable,Object { int version() } -class Vector -> java.util.Vector extends AbstractList,List,AbstractCollection,Collection,Iterable,RandomAccess,Object { - Vector () - Vector (Collection) +class Vector -> java.util.Vector { + () + (Collection) void addElement(def) void copyInto(Object[]) def elementAt(int) @@ -1054,19 +1054,19 @@ class Vector -> java.util.Vector extends AbstractList,List,AbstractCollection,Co #### Enums -class Formatter.BigDecimalLayoutForm -> java.util.Formatter$BigDecimalLayoutForm extends Enum,Comparable,Object { +class Formatter.BigDecimalLayoutForm -> java.util.Formatter$BigDecimalLayoutForm { Formatter.BigDecimalLayoutForm DECIMAL_FLOAT Formatter.BigDecimalLayoutForm SCIENTIFIC } -class Locale.Category -> java.util.Locale$Category extends Enum,Comparable,Object { +class Locale.Category -> java.util.Locale$Category { Locale.Category DISPLAY Locale.Category FORMAT Locale.Category valueOf(String) Locale.Category[] values() } -class Locale.FilteringMode -> java.util.Locale$FilteringMode extends Enum,Comparable,Object { +class Locale.FilteringMode -> java.util.Locale$FilteringMode { Locale.FilteringMode AUTOSELECT_FILTERING Locale.FilteringMode EXTENDED_FILTERING Locale.FilteringMode IGNORE_EXTENDED_RANGES @@ -1078,101 +1078,101 @@ class Locale.FilteringMode -> java.util.Locale$FilteringMode extends Enum,Compar #### Exceptions -class ConcurrentModificationException -> java.util.ConcurrentModificationException extends RuntimeException,Exception,Object { - ConcurrentModificationException () - ConcurrentModificationException (String) +class ConcurrentModificationException -> java.util.ConcurrentModificationException { + () + (String) } -class DuplicateFormatFlagsException -> java.util.DuplicateFormatFlagsException extends IllegalFormatException,IllegalArgumentException,RuntimeException,Exception,Object { - DuplicateFormatFlagsException (String) +class DuplicateFormatFlagsException -> java.util.DuplicateFormatFlagsException { + (String) String getFlags() } -class EmptyStackException -> java.util.EmptyStackException extends RuntimeException,Exception,Object { - EmptyStackException () +class EmptyStackException -> java.util.EmptyStackException { + () } -class FormatFlagsConversionMismatchException -> java.util.FormatFlagsConversionMismatchException extends IllegalFormatException,IllegalArgumentException,RuntimeException,Exception,Object { - FormatFlagsConversionMismatchException (String,char) +class FormatFlagsConversionMismatchException -> java.util.FormatFlagsConversionMismatchException { + (String,char) char getConversion() String getFlags() } -class FormatterClosedException -> java.util.FormatterClosedException extends IllegalStateException,RuntimeException,Exception,Object { - FormatterClosedException () +class FormatterClosedException -> java.util.FormatterClosedException { + () } -class 
IllegalFormatCodePointException -> java.util.IllegalFormatCodePointException extends IllegalFormatException,IllegalArgumentException,RuntimeException,Exception,Object { - IllegalFormatCodePointException (int) +class IllegalFormatCodePointException -> java.util.IllegalFormatCodePointException { + (int) int getCodePoint() } -class IllegalFormatConversionException -> java.util.IllegalFormatConversionException extends IllegalFormatException,IllegalArgumentException,RuntimeException,Exception,Object { +class IllegalFormatConversionException -> java.util.IllegalFormatConversionException { char getConversion() } -class IllegalFormatException -> java.util.IllegalFormatException extends IllegalArgumentException,RuntimeException,Exception,Object { +class IllegalFormatException -> java.util.IllegalFormatException { } -class IllegalFormatFlagsException -> java.util.IllegalFormatFlagsException extends IllegalFormatException,IllegalArgumentException,RuntimeException,Exception,Object { - IllegalFormatFlagsException (String) +class IllegalFormatFlagsException -> java.util.IllegalFormatFlagsException { + (String) String getFlags() } -class IllegalFormatPrecisionException -> java.util.IllegalFormatPrecisionException extends IllegalFormatException,IllegalArgumentException,RuntimeException,Exception,Object { - IllegalFormatPrecisionException (int) +class IllegalFormatPrecisionException -> java.util.IllegalFormatPrecisionException { + (int) int getPrecision() } -class IllegalFormatWidthException -> java.util.IllegalFormatWidthException extends IllegalFormatException,IllegalArgumentException,RuntimeException,Exception,Object { - IllegalFormatWidthException (int) +class IllegalFormatWidthException -> java.util.IllegalFormatWidthException { + (int) int getWidth() } -class IllformedLocaleException -> java.util.IllformedLocaleException extends RuntimeException,Exception,Object { - IllformedLocaleException () - IllformedLocaleException (String) - IllformedLocaleException (String,int) +class IllformedLocaleException -> java.util.IllformedLocaleException { + () + (String) + (String,int) int getErrorIndex() } -class InputMismatchException -> java.util.InputMismatchException extends NoSuchElementException,RuntimeException,Exception,Object { - InputMismatchException () - InputMismatchException (String) +class InputMismatchException -> java.util.InputMismatchException { + () + (String) } -class MissingFormatArgumentException -> java.util.MissingFormatArgumentException extends IllegalFormatException,IllegalArgumentException,RuntimeException,Exception,Object { - MissingFormatArgumentException (String) +class MissingFormatArgumentException -> java.util.MissingFormatArgumentException { + (String) String getFormatSpecifier() } -class MissingFormatWidthException -> java.util.MissingFormatWidthException extends IllegalFormatException,IllegalArgumentException,RuntimeException,Exception,Object { - MissingFormatWidthException (String) +class MissingFormatWidthException -> java.util.MissingFormatWidthException { + (String) String getFormatSpecifier() } -class MissingResourceException -> java.util.MissingResourceException extends RuntimeException,Exception,Object { - MissingResourceException (String,String,String) +class MissingResourceException -> java.util.MissingResourceException { + (String,String,String) String getClassName() String getKey() } -class NoSuchElementException -> java.util.NoSuchElementException extends RuntimeException,Exception,Object { - NoSuchElementException () - NoSuchElementException (String) +class 
NoSuchElementException -> java.util.NoSuchElementException { + () + (String) } -class TooManyListenersException -> java.util.TooManyListenersException extends Exception,Object { - TooManyListenersException () - TooManyListenersException (String) +class TooManyListenersException -> java.util.TooManyListenersException { + () + (String) } -class UnknownFormatConversionException -> java.util.UnknownFormatConversionException extends IllegalFormatException,IllegalArgumentException,RuntimeException,Exception,Object { - UnknownFormatConversionException (String) +class UnknownFormatConversionException -> java.util.UnknownFormatConversionException { + (String) String getConversion() } -class UnknownFormatFlagsException -> java.util.UnknownFormatFlagsException extends IllegalFormatException,IllegalArgumentException,RuntimeException,Exception,Object { - UnknownFormatFlagsException (String) +class UnknownFormatFlagsException -> java.util.UnknownFormatFlagsException { + (String) String getFlags() } diff --git a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/joda.time.txt b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/joda.time.txt index 02d959215345e..6899e2878680c 100644 --- a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/joda.time.txt +++ b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/joda.time.txt @@ -26,7 +26,7 @@ # convenient access via the scripting API. classes are fully qualified to avoid # any confusion with java.time -class org.joda.time.ReadableInstant -> org.joda.time.ReadableInstant extends Comparable { +class org.joda.time.ReadableInstant -> org.joda.time.ReadableInstant { boolean equals(Object) long getMillis() int hashCode() @@ -36,7 +36,7 @@ class org.joda.time.ReadableInstant -> org.joda.time.ReadableInstant extends Com String toString() } -class org.joda.time.ReadableDateTime -> org.joda.time.ReadableDateTime extends org.joda.time.ReadableInstant,Comparable { +class org.joda.time.ReadableDateTime -> org.joda.time.ReadableDateTime { int getCenturyOfEra() int getDayOfMonth() int getDayOfWeek() diff --git a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/org.elasticsearch.txt b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/org.elasticsearch.txt index 8f9b0413c4131..5ff486b0b81f2 100644 --- a/modules/lang-painless/src/main/resources/org/elasticsearch/painless/org.elasticsearch.txt +++ b/modules/lang-painless/src/main/resources/org/elasticsearch/painless/org.elasticsearch.txt @@ -51,33 +51,26 @@ class float -> float { class double -> double { } -class def -> java.lang.Object { - boolean equals(Object) - int hashCode() - String toString() -} - - #### Painless debugging API -class Debug -> org.elasticsearch.painless.api.Debug extends Object { +class Debug -> org.elasticsearch.painless.api.Debug { void explain(Object) } #### ES Scripting API -class org.elasticsearch.common.geo.GeoPoint -> org.elasticsearch.common.geo.GeoPoint extends Object { +class org.elasticsearch.common.geo.GeoPoint -> org.elasticsearch.common.geo.GeoPoint { double getLat() double getLon() } -class org.elasticsearch.index.fielddata.ScriptDocValues.Strings -> org.elasticsearch.index.fielddata.ScriptDocValues$Strings extends List,Collection,Iterable,Object { +class org.elasticsearch.index.fielddata.ScriptDocValues.Strings -> org.elasticsearch.index.fielddata.ScriptDocValues$Strings { String get(int) String getValue() List getValues() } -class 
org.elasticsearch.index.fielddata.ScriptDocValues.Longs -> org.elasticsearch.index.fielddata.ScriptDocValues$Longs extends List,Collection,Iterable,Object { +class org.elasticsearch.index.fielddata.ScriptDocValues.Longs -> org.elasticsearch.index.fielddata.ScriptDocValues$Longs { Long get(int) long getValue() List getValues() @@ -85,7 +78,7 @@ class org.elasticsearch.index.fielddata.ScriptDocValues.Longs -> org.elasticsear List getDates() } -class org.elasticsearch.index.fielddata.ScriptDocValues.Dates -> org.elasticsearch.index.fielddata.ScriptDocValues$Dates extends List,Collection,Iterable,Object { +class org.elasticsearch.index.fielddata.ScriptDocValues.Dates -> org.elasticsearch.index.fielddata.ScriptDocValues$Dates { org.joda.time.ReadableDateTime get(int) org.joda.time.ReadableDateTime getValue() List getValues() @@ -93,13 +86,13 @@ class org.elasticsearch.index.fielddata.ScriptDocValues.Dates -> org.elasticsear List getDates() } -class org.elasticsearch.index.fielddata.ScriptDocValues.Doubles -> org.elasticsearch.index.fielddata.ScriptDocValues$Doubles extends List,Collection,Iterable,Object { +class org.elasticsearch.index.fielddata.ScriptDocValues.Doubles -> org.elasticsearch.index.fielddata.ScriptDocValues$Doubles { Double get(int) double getValue() List getValues() } -class org.elasticsearch.index.fielddata.ScriptDocValues.GeoPoints -> org.elasticsearch.index.fielddata.ScriptDocValues$GeoPoints extends List,Collection,Iterable,Object { +class org.elasticsearch.index.fielddata.ScriptDocValues.GeoPoints -> org.elasticsearch.index.fielddata.ScriptDocValues$GeoPoints { org.elasticsearch.common.geo.GeoPoint get(int) org.elasticsearch.common.geo.GeoPoint getValue() List getValues() @@ -117,19 +110,19 @@ class org.elasticsearch.index.fielddata.ScriptDocValues.GeoPoints -> org.elastic double geohashDistanceWithDefault(String,double) } -class org.elasticsearch.index.fielddata.ScriptDocValues.Booleans -> org.elasticsearch.index.fielddata.ScriptDocValues$Booleans extends List,Collection,Iterable,Object { +class org.elasticsearch.index.fielddata.ScriptDocValues.Booleans -> org.elasticsearch.index.fielddata.ScriptDocValues$Booleans { Boolean get(int) boolean getValue() List getValues() } -class org.elasticsearch.index.fielddata.ScriptDocValues.BytesRefs -> org.elasticsearch.index.fielddata.ScriptDocValues$BytesRefs extends List,Collection,Iterable,Object { +class org.elasticsearch.index.fielddata.ScriptDocValues.BytesRefs -> org.elasticsearch.index.fielddata.ScriptDocValues$BytesRefs { BytesRef get(int) BytesRef getValue() List getValues() } -class BytesRef -> org.apache.lucene.util.BytesRef extends Object { +class BytesRef -> org.apache.lucene.util.BytesRef { byte[] bytes int offset int length @@ -137,7 +130,7 @@ class BytesRef -> org.apache.lucene.util.BytesRef extends Object { String utf8ToString() } -class org.elasticsearch.index.mapper.IpFieldMapper.IpFieldType.IpScriptDocValues -> org.elasticsearch.index.mapper.IpFieldMapper$IpFieldType$IpScriptDocValues extends List,Collection,Iterable,Object { +class org.elasticsearch.index.mapper.IpFieldMapper.IpFieldType.IpScriptDocValues -> org.elasticsearch.index.mapper.IpFieldMapper$IpFieldType$IpScriptDocValues { String get(int) String getValue() List getValues() @@ -145,9 +138,9 @@ class org.elasticsearch.index.mapper.IpFieldMapper.IpFieldType.IpScriptDocValues # for testing. 
# currently FeatureTest exposes overloaded constructor, field load store, and overloaded static methods -class org.elasticsearch.painless.FeatureTest -> org.elasticsearch.painless.FeatureTest extends Object { - org.elasticsearch.painless.FeatureTest () - org.elasticsearch.painless.FeatureTest (int,int) +class org.elasticsearch.painless.FeatureTest -> org.elasticsearch.painless.FeatureTest { + () + (int,int) int getX() int getY() void setX(int) @@ -156,32 +149,32 @@ class org.elasticsearch.painless.FeatureTest -> org.elasticsearch.painless.Featu boolean overloadedStatic(boolean) Object twoFunctionsOfX(Function,Function) void listInput(List) - int org.elasticsearch.painless.FeatureTestAugmentation.getTotal() - int org.elasticsearch.painless.FeatureTestAugmentation.addToTotal(int) + int org.elasticsearch.painless.FeatureTestAugmentation getTotal() + int org.elasticsearch.painless.FeatureTestAugmentation addToTotal(int) } -class org.elasticsearch.search.lookup.FieldLookup -> org.elasticsearch.search.lookup.FieldLookup extends Object { +class org.elasticsearch.search.lookup.FieldLookup -> org.elasticsearch.search.lookup.FieldLookup { def getValue() List getValues() boolean isEmpty() } -class org.elasticsearch.index.similarity.ScriptedSimilarity.Query -> org.elasticsearch.index.similarity.ScriptedSimilarity$Query extends Object { +class org.elasticsearch.index.similarity.ScriptedSimilarity.Query -> org.elasticsearch.index.similarity.ScriptedSimilarity$Query { float getBoost() } -class org.elasticsearch.index.similarity.ScriptedSimilarity.Field -> org.elasticsearch.index.similarity.ScriptedSimilarity$Field extends Object { +class org.elasticsearch.index.similarity.ScriptedSimilarity.Field -> org.elasticsearch.index.similarity.ScriptedSimilarity$Field { long getDocCount() long getSumDocFreq() long getSumTotalTermFreq() } -class org.elasticsearch.index.similarity.ScriptedSimilarity.Term -> org.elasticsearch.index.similarity.ScriptedSimilarity$Term extends Object { +class org.elasticsearch.index.similarity.ScriptedSimilarity.Term -> org.elasticsearch.index.similarity.ScriptedSimilarity$Term { long getDocFreq() long getTotalTermFreq() } -class org.elasticsearch.index.similarity.ScriptedSimilarity.Doc -> org.elasticsearch.index.similarity.ScriptedSimilarity$Doc extends Object { +class org.elasticsearch.index.similarity.ScriptedSimilarity.Doc -> org.elasticsearch.index.similarity.ScriptedSimilarity$Doc { int getLength() float getFreq() } diff --git a/modules/lang-painless/src/test/java/org/elasticsearch/painless/AnalyzerCasterTests.java b/modules/lang-painless/src/test/java/org/elasticsearch/painless/AnalyzerCasterTests.java index e8d2f1332e094..249ce122b43e5 100644 --- a/modules/lang-painless/src/test/java/org/elasticsearch/painless/AnalyzerCasterTests.java +++ b/modules/lang-painless/src/test/java/org/elasticsearch/painless/AnalyzerCasterTests.java @@ -23,13 +23,6 @@ import org.elasticsearch.painless.Definition.Type; import org.elasticsearch.test.ESTestCase; -import static org.elasticsearch.painless.Definition.BYTE_TYPE; -import static org.elasticsearch.painless.Definition.DOUBLE_TYPE; -import static org.elasticsearch.painless.Definition.FLOAT_TYPE; -import static org.elasticsearch.painless.Definition.INT_TYPE; -import static org.elasticsearch.painless.Definition.LONG_TYPE; -import static org.elasticsearch.painless.Definition.SHORT_TYPE; - public class AnalyzerCasterTests extends ESTestCase { private static void assertCast(Type actual, Type expected, boolean mustBeExplicit) { @@ -37,68 +30,68 @@ 
private static void assertCast(Type actual, Type expected, boolean mustBeExplici if (actual.equals(expected)) { assertFalse(mustBeExplicit); - assertNull(AnalyzerCaster.getLegalCast(location, actual, expected, false, false)); - assertNull(AnalyzerCaster.getLegalCast(location, actual, expected, true, false)); + assertNull(Definition.DEFINITION.caster.getLegalCast(location, actual, expected, false, false)); + assertNull(Definition.DEFINITION.caster.getLegalCast(location, actual, expected, true, false)); return; } - Cast cast = AnalyzerCaster.getLegalCast(location, actual, expected, true, false); + Cast cast = Definition.DEFINITION.caster.getLegalCast(location, actual, expected, true, false); assertEquals(actual, cast.from); assertEquals(expected, cast.to); if (mustBeExplicit) { ClassCastException error = expectThrows(ClassCastException.class, - () -> AnalyzerCaster.getLegalCast(location, actual, expected, false, false)); + () -> Definition.DEFINITION.caster.getLegalCast(location, actual, expected, false, false)); assertTrue(error.getMessage().startsWith("Cannot cast")); } else { - cast = AnalyzerCaster.getLegalCast(location, actual, expected, false, false); + cast = Definition.DEFINITION.caster.getLegalCast(location, actual, expected, false, false); assertEquals(actual, cast.from); assertEquals(expected, cast.to); } } public void testNumericCasts() { - assertCast(BYTE_TYPE, BYTE_TYPE, false); - assertCast(BYTE_TYPE, SHORT_TYPE, false); - assertCast(BYTE_TYPE, INT_TYPE, false); - assertCast(BYTE_TYPE, LONG_TYPE, false); - assertCast(BYTE_TYPE, FLOAT_TYPE, false); - assertCast(BYTE_TYPE, DOUBLE_TYPE, false); + assertCast(Definition.DEFINITION.byteType, Definition.DEFINITION.byteType, false); + assertCast(Definition.DEFINITION.byteType, Definition.DEFINITION.shortType, false); + assertCast(Definition.DEFINITION.byteType, Definition.DEFINITION.intType, false); + assertCast(Definition.DEFINITION.byteType, Definition.DEFINITION.longType, false); + assertCast(Definition.DEFINITION.byteType, Definition.DEFINITION.floatType, false); + assertCast(Definition.DEFINITION.byteType, Definition.DEFINITION.doubleType, false); - assertCast(SHORT_TYPE, BYTE_TYPE, true); - assertCast(SHORT_TYPE, SHORT_TYPE, false); - assertCast(SHORT_TYPE, INT_TYPE, false); - assertCast(SHORT_TYPE, LONG_TYPE, false); - assertCast(SHORT_TYPE, FLOAT_TYPE, false); - assertCast(SHORT_TYPE, DOUBLE_TYPE, false); + assertCast(Definition.DEFINITION.shortType, Definition.DEFINITION.byteType, true); + assertCast(Definition.DEFINITION.shortType, Definition.DEFINITION.shortType, false); + assertCast(Definition.DEFINITION.shortType, Definition.DEFINITION.intType, false); + assertCast(Definition.DEFINITION.shortType, Definition.DEFINITION.longType, false); + assertCast(Definition.DEFINITION.shortType, Definition.DEFINITION.floatType, false); + assertCast(Definition.DEFINITION.shortType, Definition.DEFINITION.doubleType, false); - assertCast(INT_TYPE, BYTE_TYPE, true); - assertCast(INT_TYPE, SHORT_TYPE, true); - assertCast(INT_TYPE, INT_TYPE, false); - assertCast(INT_TYPE, LONG_TYPE, false); - assertCast(INT_TYPE, FLOAT_TYPE, false); - assertCast(INT_TYPE, DOUBLE_TYPE, false); + assertCast(Definition.DEFINITION.intType, Definition.DEFINITION.byteType, true); + assertCast(Definition.DEFINITION.intType, Definition.DEFINITION.shortType, true); + assertCast(Definition.DEFINITION.intType, Definition.DEFINITION.intType, false); + assertCast(Definition.DEFINITION.intType, Definition.DEFINITION.longType, false); + 
assertCast(Definition.DEFINITION.intType, Definition.DEFINITION.floatType, false); + assertCast(Definition.DEFINITION.intType, Definition.DEFINITION.doubleType, false); - assertCast(LONG_TYPE, BYTE_TYPE, true); - assertCast(LONG_TYPE, SHORT_TYPE, true); - assertCast(LONG_TYPE, INT_TYPE, true); - assertCast(LONG_TYPE, LONG_TYPE, false); - assertCast(LONG_TYPE, FLOAT_TYPE, false); - assertCast(LONG_TYPE, DOUBLE_TYPE, false); + assertCast(Definition.DEFINITION.longType, Definition.DEFINITION.byteType, true); + assertCast(Definition.DEFINITION.longType, Definition.DEFINITION.shortType, true); + assertCast(Definition.DEFINITION.longType, Definition.DEFINITION.intType, true); + assertCast(Definition.DEFINITION.longType, Definition.DEFINITION.longType, false); + assertCast(Definition.DEFINITION.longType, Definition.DEFINITION.floatType, false); + assertCast(Definition.DEFINITION.longType, Definition.DEFINITION.doubleType, false); - assertCast(FLOAT_TYPE, BYTE_TYPE, true); - assertCast(FLOAT_TYPE, SHORT_TYPE, true); - assertCast(FLOAT_TYPE, INT_TYPE, true); - assertCast(FLOAT_TYPE, LONG_TYPE, true); - assertCast(FLOAT_TYPE, FLOAT_TYPE, false); - assertCast(FLOAT_TYPE, DOUBLE_TYPE, false); + assertCast(Definition.DEFINITION.floatType, Definition.DEFINITION.byteType, true); + assertCast(Definition.DEFINITION.floatType, Definition.DEFINITION.shortType, true); + assertCast(Definition.DEFINITION.floatType, Definition.DEFINITION.intType, true); + assertCast(Definition.DEFINITION.floatType, Definition.DEFINITION.longType, true); + assertCast(Definition.DEFINITION.floatType, Definition.DEFINITION.floatType, false); + assertCast(Definition.DEFINITION.floatType, Definition.DEFINITION.doubleType, false); - assertCast(DOUBLE_TYPE, BYTE_TYPE, true); - assertCast(DOUBLE_TYPE, SHORT_TYPE, true); - assertCast(DOUBLE_TYPE, INT_TYPE, true); - assertCast(DOUBLE_TYPE, LONG_TYPE, true); - assertCast(DOUBLE_TYPE, FLOAT_TYPE, true); - assertCast(DOUBLE_TYPE, DOUBLE_TYPE, false); + assertCast(Definition.DEFINITION.doubleType, Definition.DEFINITION.byteType, true); + assertCast(Definition.DEFINITION.doubleType, Definition.DEFINITION.shortType, true); + assertCast(Definition.DEFINITION.doubleType, Definition.DEFINITION.intType, true); + assertCast(Definition.DEFINITION.doubleType, Definition.DEFINITION.longType, true); + assertCast(Definition.DEFINITION.doubleType, Definition.DEFINITION.floatType, true); + assertCast(Definition.DEFINITION.doubleType, Definition.DEFINITION.doubleType, false); } } diff --git a/modules/lang-painless/src/test/java/org/elasticsearch/painless/BaseClassTests.java b/modules/lang-painless/src/test/java/org/elasticsearch/painless/BaseClassTests.java index e4f04b235297d..cdd51447bc149 100644 --- a/modules/lang-painless/src/test/java/org/elasticsearch/painless/BaseClassTests.java +++ b/modules/lang-painless/src/test/java/org/elasticsearch/painless/BaseClassTests.java @@ -66,7 +66,7 @@ public Map getTestMap() { } public void testGets() { - Compiler compiler = new Compiler(Gets.class, Definition.BUILTINS); + Compiler compiler = new Compiler(Gets.class, Definition.DEFINITION); Map map = new HashMap<>(); map.put("s", 1); @@ -84,7 +84,7 @@ public abstract static class NoArgs { public abstract Object execute(); } public void testNoArgs() { - Compiler compiler = new Compiler(NoArgs.class, Definition.BUILTINS); + Compiler compiler = new Compiler(NoArgs.class, Definition.DEFINITION); assertEquals(1, ((NoArgs)scriptEngine.compile(compiler, null, "1", emptyMap())).execute()); assertEquals("foo", 
((NoArgs)scriptEngine.compile(compiler, null, "'foo'", emptyMap())).execute()); @@ -108,13 +108,13 @@ public abstract static class OneArg { public abstract Object execute(Object arg); } public void testOneArg() { - Compiler compiler = new Compiler(OneArg.class, Definition.BUILTINS); + Compiler compiler = new Compiler(OneArg.class, Definition.DEFINITION); Object rando = randomInt(); assertEquals(rando, ((OneArg)scriptEngine.compile(compiler, null, "arg", emptyMap())).execute(rando)); rando = randomAlphaOfLength(5); assertEquals(rando, ((OneArg)scriptEngine.compile(compiler, null, "arg", emptyMap())).execute(rando)); - Compiler noargs = new Compiler(NoArgs.class, Definition.BUILTINS); + Compiler noargs = new Compiler(NoArgs.class, Definition.DEFINITION); Exception e = expectScriptThrows(IllegalArgumentException.class, () -> scriptEngine.compile(noargs, null, "doc", emptyMap())); assertEquals("Variable [doc] is not defined.", e.getMessage()); @@ -129,7 +129,7 @@ public abstract static class ArrayArg { public abstract Object execute(String[] arg); } public void testArrayArg() { - Compiler compiler = new Compiler(ArrayArg.class, Definition.BUILTINS); + Compiler compiler = new Compiler(ArrayArg.class, Definition.DEFINITION); String rando = randomAlphaOfLength(5); assertEquals(rando, ((ArrayArg)scriptEngine.compile(compiler, null, "arg[0]", emptyMap())).execute(new String[] {rando, "foo"})); } @@ -139,7 +139,7 @@ public abstract static class PrimitiveArrayArg { public abstract Object execute(int[] arg); } public void testPrimitiveArrayArg() { - Compiler compiler = new Compiler(PrimitiveArrayArg.class, Definition.BUILTINS); + Compiler compiler = new Compiler(PrimitiveArrayArg.class, Definition.DEFINITION); int rando = randomInt(); assertEquals(rando, ((PrimitiveArrayArg)scriptEngine.compile(compiler, null, "arg[0]", emptyMap())).execute(new int[] {rando, 10})); } @@ -149,7 +149,7 @@ public abstract static class DefArrayArg { public abstract Object execute(Object[] arg); } public void testDefArrayArg() { - Compiler compiler = new Compiler(DefArrayArg.class, Definition.BUILTINS); + Compiler compiler = new Compiler(DefArrayArg.class, Definition.DEFINITION); Object rando = randomInt(); assertEquals(rando, ((DefArrayArg)scriptEngine.compile(compiler, null, "arg[0]", emptyMap())).execute(new Object[] {rando, 10})); rando = randomAlphaOfLength(5); @@ -167,7 +167,7 @@ public abstract static class ManyArgs { public abstract boolean needsD(); } public void testManyArgs() { - Compiler compiler = new Compiler(ManyArgs.class, Definition.BUILTINS); + Compiler compiler = new Compiler(ManyArgs.class, Definition.DEFINITION); int rando = randomInt(); assertEquals(rando, ((ManyArgs)scriptEngine.compile(compiler, null, "a", emptyMap())).execute(rando, 0, 0, 0)); assertEquals(10, ((ManyArgs)scriptEngine.compile(compiler, null, "a + b + c + d", emptyMap())).execute(1, 2, 3, 4)); @@ -195,7 +195,7 @@ public abstract static class VarargTest { public abstract Object execute(String... 
arg); } public void testVararg() { - Compiler compiler = new Compiler(VarargTest.class, Definition.BUILTINS); + Compiler compiler = new Compiler(VarargTest.class, Definition.DEFINITION); assertEquals("foo bar baz", ((VarargTest)scriptEngine.compile(compiler, null, "String.join(' ', Arrays.asList(arg))", emptyMap())) .execute("foo", "bar", "baz")); } @@ -211,7 +211,7 @@ public Object executeWithASingleOne(int a, int b, int c) { } } public void testDefaultMethods() { - Compiler compiler = new Compiler(DefaultMethods.class, Definition.BUILTINS); + Compiler compiler = new Compiler(DefaultMethods.class, Definition.DEFINITION); int rando = randomInt(); assertEquals(rando, ((DefaultMethods)scriptEngine.compile(compiler, null, "a", emptyMap())).execute(rando, 0, 0, 0)); assertEquals(rando, ((DefaultMethods)scriptEngine.compile(compiler, null, "a", emptyMap())).executeWithASingleOne(rando, 0, 0)); @@ -225,7 +225,7 @@ public abstract static class ReturnsVoid { public abstract void execute(Map map); } public void testReturnsVoid() { - Compiler compiler = new Compiler(ReturnsVoid.class, Definition.BUILTINS); + Compiler compiler = new Compiler(ReturnsVoid.class, Definition.DEFINITION); Map map = new HashMap<>(); ((ReturnsVoid)scriptEngine.compile(compiler, null, "map.a = 'foo'", emptyMap())).execute(map); assertEquals(singletonMap("a", "foo"), map); @@ -244,7 +244,7 @@ public abstract static class ReturnsPrimitiveBoolean { public abstract boolean execute(); } public void testReturnsPrimitiveBoolean() { - Compiler compiler = new Compiler(ReturnsPrimitiveBoolean.class, Definition.BUILTINS); + Compiler compiler = new Compiler(ReturnsPrimitiveBoolean.class, Definition.DEFINITION); assertEquals(true, ((ReturnsPrimitiveBoolean)scriptEngine.compile(compiler, null, "true", emptyMap())).execute()); assertEquals(false, ((ReturnsPrimitiveBoolean)scriptEngine.compile(compiler, null, "false", emptyMap())).execute()); @@ -286,7 +286,7 @@ public abstract static class ReturnsPrimitiveInt { public abstract int execute(); } public void testReturnsPrimitiveInt() { - Compiler compiler = new Compiler(ReturnsPrimitiveInt.class, Definition.BUILTINS); + Compiler compiler = new Compiler(ReturnsPrimitiveInt.class, Definition.DEFINITION); assertEquals(1, ((ReturnsPrimitiveInt)scriptEngine.compile(compiler, null, "1", emptyMap())).execute()); assertEquals(1, ((ReturnsPrimitiveInt)scriptEngine.compile(compiler, null, "(int) 1L", emptyMap())).execute()); @@ -328,7 +328,7 @@ public abstract static class ReturnsPrimitiveFloat { public abstract float execute(); } public void testReturnsPrimitiveFloat() { - Compiler compiler = new Compiler(ReturnsPrimitiveFloat.class, Definition.BUILTINS); + Compiler compiler = new Compiler(ReturnsPrimitiveFloat.class, Definition.DEFINITION); assertEquals(1.1f, ((ReturnsPrimitiveFloat)scriptEngine.compile(compiler, null, "1.1f", emptyMap())).execute(), 0); assertEquals(1.1f, ((ReturnsPrimitiveFloat)scriptEngine.compile(compiler, null, "(float) 1.1d", emptyMap())).execute(), 0); @@ -359,7 +359,7 @@ public abstract static class ReturnsPrimitiveDouble { public abstract double execute(); } public void testReturnsPrimitiveDouble() { - Compiler compiler = new Compiler(ReturnsPrimitiveDouble.class, Definition.BUILTINS); + Compiler compiler = new Compiler(ReturnsPrimitiveDouble.class, Definition.DEFINITION); assertEquals(1.0, ((ReturnsPrimitiveDouble)scriptEngine.compile(compiler, null, "1", emptyMap())).execute(), 0); assertEquals(1.0, ((ReturnsPrimitiveDouble)scriptEngine.compile(compiler, null, "1L", 
emptyMap())).execute(), 0); @@ -393,7 +393,7 @@ public abstract static class NoArgumentsConstant { public abstract Object execute(String foo); } public void testNoArgumentsConstant() { - Compiler compiler = new Compiler(NoArgumentsConstant.class, Definition.BUILTINS); + Compiler compiler = new Compiler(NoArgumentsConstant.class, Definition.DEFINITION); Exception e = expectScriptThrows(IllegalArgumentException.class, false, () -> scriptEngine.compile(compiler, null, "1", emptyMap())); assertThat(e.getMessage(), startsWith( @@ -406,7 +406,7 @@ public abstract static class WrongArgumentsConstant { public abstract Object execute(String foo); } public void testWrongArgumentsConstant() { - Compiler compiler = new Compiler(WrongArgumentsConstant.class, Definition.BUILTINS); + Compiler compiler = new Compiler(WrongArgumentsConstant.class, Definition.DEFINITION); Exception e = expectScriptThrows(IllegalArgumentException.class, false, () -> scriptEngine.compile(compiler, null, "1", emptyMap())); assertThat(e.getMessage(), startsWith( @@ -419,7 +419,7 @@ public abstract static class WrongLengthOfArgumentConstant { public abstract Object execute(String foo); } public void testWrongLengthOfArgumentConstant() { - Compiler compiler = new Compiler(WrongLengthOfArgumentConstant.class, Definition.BUILTINS); + Compiler compiler = new Compiler(WrongLengthOfArgumentConstant.class, Definition.DEFINITION); Exception e = expectScriptThrows(IllegalArgumentException.class, false, () -> scriptEngine.compile(compiler, null, "1", emptyMap())); assertThat(e.getMessage(), startsWith("[" + WrongLengthOfArgumentConstant.class.getName() + "#ARGUMENTS] has length [2] but [" @@ -431,7 +431,7 @@ public abstract static class UnknownArgType { public abstract Object execute(UnknownArgType foo); } public void testUnknownArgType() { - Compiler compiler = new Compiler(UnknownArgType.class, Definition.BUILTINS); + Compiler compiler = new Compiler(UnknownArgType.class, Definition.DEFINITION); Exception e = expectScriptThrows(IllegalArgumentException.class, false, () -> scriptEngine.compile(compiler, null, "1", emptyMap())); assertEquals("[foo] is of unknown type [" + UnknownArgType.class.getName() + ". Painless interfaces can only accept arguments " @@ -443,7 +443,7 @@ public abstract static class UnknownReturnType { public abstract UnknownReturnType execute(String foo); } public void testUnknownReturnType() { - Compiler compiler = new Compiler(UnknownReturnType.class, Definition.BUILTINS); + Compiler compiler = new Compiler(UnknownReturnType.class, Definition.DEFINITION); Exception e = expectScriptThrows(IllegalArgumentException.class, false, () -> scriptEngine.compile(compiler, null, "1", emptyMap())); assertEquals("Painless can only implement execute methods returning a whitelisted type but [" + UnknownReturnType.class.getName() @@ -455,7 +455,7 @@ public abstract static class UnknownArgTypeInArray { public abstract Object execute(UnknownArgTypeInArray[] foo); } public void testUnknownArgTypeInArray() { - Compiler compiler = new Compiler(UnknownArgTypeInArray.class, Definition.BUILTINS); + Compiler compiler = new Compiler(UnknownArgTypeInArray.class, Definition.DEFINITION); Exception e = expectScriptThrows(IllegalArgumentException.class, false, () -> scriptEngine.compile(compiler, null, "1", emptyMap())); assertEquals("[foo] is of unknown type [" + UnknownArgTypeInArray.class.getName() + ". 
Painless interfaces can only accept " @@ -467,7 +467,7 @@ public abstract static class TwoExecuteMethods { public abstract Object execute(boolean foo); } public void testTwoExecuteMethods() { - Compiler compiler = new Compiler(TwoExecuteMethods.class, Definition.BUILTINS); + Compiler compiler = new Compiler(TwoExecuteMethods.class, Definition.DEFINITION); Exception e = expectScriptThrows(IllegalArgumentException.class, false, () -> scriptEngine.compile(compiler, null, "null", emptyMap())); assertEquals("Painless can only implement interfaces that have a single method named [execute] but [" diff --git a/modules/lang-painless/src/test/java/org/elasticsearch/painless/DebugTests.java b/modules/lang-painless/src/test/java/org/elasticsearch/painless/DebugTests.java index c1098a1e7afa4..fe50578df1acb 100644 --- a/modules/lang-painless/src/test/java/org/elasticsearch/painless/DebugTests.java +++ b/modules/lang-painless/src/test/java/org/elasticsearch/painless/DebugTests.java @@ -34,7 +34,7 @@ import static org.hamcrest.Matchers.not; public class DebugTests extends ScriptTestCase { - private final Definition definition = Definition.BUILTINS; + private final Definition definition = Definition.DEFINITION; public void testExplain() { // Debug.explain can explain an object diff --git a/modules/lang-painless/src/test/java/org/elasticsearch/painless/Debugger.java b/modules/lang-painless/src/test/java/org/elasticsearch/painless/Debugger.java index 1169525b9b804..2b4a896fb5e66 100644 --- a/modules/lang-painless/src/test/java/org/elasticsearch/painless/Debugger.java +++ b/modules/lang-painless/src/test/java/org/elasticsearch/painless/Debugger.java @@ -39,7 +39,7 @@ static String toString(Class iface, String source, CompilerSettings settings) PrintWriter outputWriter = new PrintWriter(output); Textifier textifier = new Textifier(); try { - new Compiler(iface, Definition.BUILTINS).compile("", source, settings, textifier); + new Compiler(iface, Definition.DEFINITION).compile("", source, settings, textifier); } catch (Exception e) { textifier.print(outputWriter); e.addSuppressed(new Exception("current bytecode: \n" + output)); diff --git a/modules/lang-painless/src/test/java/org/elasticsearch/painless/DefBootstrapTests.java b/modules/lang-painless/src/test/java/org/elasticsearch/painless/DefBootstrapTests.java index 24e096d03a090..7188caf425197 100644 --- a/modules/lang-painless/src/test/java/org/elasticsearch/painless/DefBootstrapTests.java +++ b/modules/lang-painless/src/test/java/org/elasticsearch/painless/DefBootstrapTests.java @@ -30,14 +30,14 @@ import org.elasticsearch.test.ESTestCase; public class DefBootstrapTests extends ESTestCase { - private final Definition definition = Definition.BUILTINS; - + private final Definition definition = Definition.DEFINITION; + /** calls toString() on integers, twice */ public void testOneType() throws Throwable { CallSite site = DefBootstrap.bootstrap(definition, - MethodHandles.publicLookup(), - "toString", - MethodType.methodType(String.class, Object.class), + MethodHandles.publicLookup(), + "toString", + MethodType.methodType(String.class, Object.class), 0, DefBootstrap.METHOD_CALL, ""); @@ -52,12 +52,12 @@ public void testOneType() throws Throwable { assertEquals("6", (String)handle.invokeExact((Object)6)); assertDepthEquals(site, 1); } - + public void testTwoTypes() throws Throwable { CallSite site = DefBootstrap.bootstrap(definition, - MethodHandles.publicLookup(), - "toString", - MethodType.methodType(String.class, Object.class), + MethodHandles.publicLookup(), 
+ "toString", + MethodType.methodType(String.class, Object.class), 0, DefBootstrap.METHOD_CALL, ""); @@ -75,14 +75,14 @@ public void testTwoTypes() throws Throwable { assertEquals("2.5", (String)handle.invokeExact((Object)2.5f)); assertDepthEquals(site, 2); } - + public void testTooManyTypes() throws Throwable { // if this changes, test must be rewritten assertEquals(5, DefBootstrap.PIC.MAX_DEPTH); CallSite site = DefBootstrap.bootstrap(definition, - MethodHandles.publicLookup(), - "toString", - MethodType.methodType(String.class, Object.class), + MethodHandles.publicLookup(), + "toString", + MethodType.methodType(String.class, Object.class), 0, DefBootstrap.METHOD_CALL, ""); @@ -102,13 +102,13 @@ public void testTooManyTypes() throws Throwable { assertEquals("c", (String)handle.invokeExact((Object)'c')); assertDepthEquals(site, 5); } - + /** test that we revert to the megamorphic classvalue cache and that it works as expected */ public void testMegamorphic() throws Throwable { - DefBootstrap.PIC site = (DefBootstrap.PIC) DefBootstrap.bootstrap(definition, - MethodHandles.publicLookup(), - "size", - MethodType.methodType(int.class, Object.class), + DefBootstrap.PIC site = (DefBootstrap.PIC) DefBootstrap.bootstrap(definition, + MethodHandles.publicLookup(), + "size", + MethodType.methodType(int.class, Object.class), 0, DefBootstrap.METHOD_CALL, ""); @@ -118,12 +118,12 @@ public void testMegamorphic() throws Throwable { assertEquals(1, (int)handle.invokeExact((Object) Collections.singletonMap("a", "b"))); assertEquals(3, (int)handle.invokeExact((Object) Arrays.asList("x", "y", "z"))); assertEquals(2, (int)handle.invokeExact((Object) Arrays.asList("u", "v"))); - + final HashMap map = new HashMap<>(); map.put("x", "y"); map.put("a", "b"); assertEquals(2, (int)handle.invokeExact((Object) map)); - + final IllegalArgumentException iae = expectThrows(IllegalArgumentException.class, () -> { Integer.toString((int)handle.invokeExact(new Object())); }); @@ -133,12 +133,12 @@ public void testMegamorphic() throws Throwable { e.getClassName().startsWith("org.elasticsearch.painless.DefBootstrap$PIC$"); })); } - + // test operators with null guards public void testNullGuardAdd() throws Throwable { DefBootstrap.MIC site = (DefBootstrap.MIC) DefBootstrap.bootstrap(definition, - MethodHandles.publicLookup(), + MethodHandles.publicLookup(), "add", MethodType.methodType(Object.class, Object.class, Object.class), 0, @@ -147,7 +147,7 @@ public void testNullGuardAdd() throws Throwable { MethodHandle handle = site.dynamicInvoker(); assertEquals("nulltest", (Object)handle.invokeExact((Object)null, (Object)"test")); } - + public void testNullGuardAddWhenCached() throws Throwable { DefBootstrap.MIC site = (DefBootstrap.MIC) DefBootstrap.bootstrap(definition, MethodHandles.publicLookup(), @@ -160,11 +160,11 @@ public void testNullGuardAddWhenCached() throws Throwable { assertEquals(2, (Object)handle.invokeExact((Object)1, (Object)1)); assertEquals("nulltest", (Object)handle.invokeExact((Object)null, (Object)"test")); } - + public void testNullGuardEq() throws Throwable { DefBootstrap.MIC site = (DefBootstrap.MIC) DefBootstrap.bootstrap(definition, - MethodHandles.publicLookup(), - "eq", + MethodHandles.publicLookup(), + "eq", MethodType.methodType(boolean.class, Object.class, Object.class), 0, DefBootstrap.BINARY_OPERATOR, @@ -173,7 +173,7 @@ public void testNullGuardEq() throws Throwable { assertFalse((boolean) handle.invokeExact((Object)null, (Object)"test")); assertTrue((boolean) handle.invokeExact((Object)null, 
(Object)null)); } - + public void testNullGuardEqWhenCached() throws Throwable { DefBootstrap.MIC site = (DefBootstrap.MIC) DefBootstrap.bootstrap(definition, MethodHandles.publicLookup(), @@ -187,14 +187,14 @@ public void testNullGuardEqWhenCached() throws Throwable { assertFalse((boolean) handle.invokeExact((Object)null, (Object)"test")); assertTrue((boolean) handle.invokeExact((Object)null, (Object)null)); } - + // make sure these operators work without null guards too // for example, nulls are only legal for + if the other parameter is a String, // and can be disabled in some circumstances. - + public void testNoNullGuardAdd() throws Throwable { DefBootstrap.MIC site = (DefBootstrap.MIC) DefBootstrap.bootstrap(definition, - MethodHandles.publicLookup(), + MethodHandles.publicLookup(), "add", MethodType.methodType(Object.class, int.class, Object.class), 0, @@ -205,7 +205,7 @@ public void testNoNullGuardAdd() throws Throwable { assertNotNull((Object)handle.invokeExact(5, (Object)null)); }); } - + public void testNoNullGuardAddWhenCached() throws Throwable { DefBootstrap.MIC site = (DefBootstrap.MIC) DefBootstrap.bootstrap(definition, MethodHandles.publicLookup(), @@ -220,7 +220,7 @@ public void testNoNullGuardAddWhenCached() throws Throwable { assertNotNull((Object)handle.invokeExact(5, (Object)null)); }); } - + static void assertDepthEquals(CallSite site, int expected) { DefBootstrap.PIC dsite = (DefBootstrap.PIC) site; assertEquals(expected, dsite.depth); diff --git a/modules/lang-painless/src/test/java/org/elasticsearch/painless/ElvisTests.java b/modules/lang-painless/src/test/java/org/elasticsearch/painless/ElvisTests.java index fc760c479a139..da0822c8f7555 100644 --- a/modules/lang-painless/src/test/java/org/elasticsearch/painless/ElvisTests.java +++ b/modules/lang-painless/src/test/java/org/elasticsearch/painless/ElvisTests.java @@ -142,6 +142,6 @@ public void testQuestionSpaceColonIsNotElvis() { private void assertCannotReturnPrimitive(String script) { Exception e = expectScriptThrows(IllegalArgumentException.class, () -> exec(script)); - assertEquals("Evlis operator cannot return primitives", e.getMessage()); + assertEquals("Elvis operator cannot return primitives", e.getMessage()); } } diff --git a/modules/lang-painless/src/test/java/org/elasticsearch/painless/EqualsTests.java b/modules/lang-painless/src/test/java/org/elasticsearch/painless/EqualsTests.java index 9045a390f2ae3..30f098d28b800 100644 --- a/modules/lang-painless/src/test/java/org/elasticsearch/painless/EqualsTests.java +++ b/modules/lang-painless/src/test/java/org/elasticsearch/painless/EqualsTests.java @@ -23,7 +23,6 @@ import static java.util.Collections.singletonMap; -// TODO: Figure out a way to test autobox caching properly from methods such as Integer.valueOf(int); public class EqualsTests extends ScriptTestCase { public void testTypesEquals() { assertEquals(true, exec("return false === false;")); @@ -133,7 +132,7 @@ public void testBranchEquals() { assertEquals(0, exec("def a = 1; Object b = new HashMap(); if (a === (Object)b) return 1; else return 0;")); } - public void testBranchEqualsDefAndPrimitive() { + public void testEqualsDefAndPrimitive() { /* This test needs an Integer that isn't cached by Integer.valueOf so we draw one randomly. We can't use any fixed integer because * we can never be sure that the JVM hasn't configured itself to cache that Integer. It is sneaky like that. 
*/ int uncachedAutoboxedInt = randomValueOtherThanMany(i -> Integer.valueOf(i) == Integer.valueOf(i), ESTestCase::randomInt); @@ -141,6 +140,15 @@ public void testBranchEqualsDefAndPrimitive() { assertEquals(false, exec("def x = params.i; int y = params.i; return x === y;", singletonMap("i", uncachedAutoboxedInt), true)); assertEquals(true, exec("def x = params.i; int y = params.i; return y == x;", singletonMap("i", uncachedAutoboxedInt), true)); assertEquals(false, exec("def x = params.i; int y = params.i; return y === x;", singletonMap("i", uncachedAutoboxedInt), true)); + + /* Now check that we use valueOf with the boxing used for comparing primitives to def. For this we need an + * integer that is cached by Integer.valueOf. The JLS says 0 should always be cached. */ + int cachedAutoboxedInt = 0; + assertSame(Integer.valueOf(cachedAutoboxedInt), Integer.valueOf(cachedAutoboxedInt)); + assertEquals(true, exec("def x = params.i; int y = params.i; return x == y;", singletonMap("i", cachedAutoboxedInt), true)); + assertEquals(true, exec("def x = params.i; int y = params.i; return x === y;", singletonMap("i", cachedAutoboxedInt), true)); + assertEquals(true, exec("def x = params.i; int y = params.i; return y == x;", singletonMap("i", cachedAutoboxedInt), true)); + assertEquals(true, exec("def x = params.i; int y = params.i; return y === x;", singletonMap("i", cachedAutoboxedInt), true)); } public void testBranchNotEquals() { @@ -153,7 +161,7 @@ public void testBranchNotEquals() { assertEquals(1, exec("def a = 1; Object b = new HashMap(); if (a !== (Object)b) return 1; else return 0;")); } - public void testBranchNotEqualsDefAndPrimitive() { + public void testNotEqualsDefAndPrimitive() { /* This test needs an Integer that isn't cached by Integer.valueOf so we draw one randomly. We can't use any fixed integer because * we can never be sure that the JVM hasn't configured itself to cache that Integer. It is sneaky like that. */ int uncachedAutoboxedInt = randomValueOtherThanMany(i -> Integer.valueOf(i) == Integer.valueOf(i), ESTestCase::randomInt); @@ -161,6 +169,15 @@ public void testBranchNotEqualsDefAndPrimitive() { assertEquals(true, exec("def x = params.i; int y = params.i; return x !== y;", singletonMap("i", uncachedAutoboxedInt), true)); assertEquals(false, exec("def x = params.i; int y = params.i; return y != x;", singletonMap("i", uncachedAutoboxedInt), true)); assertEquals(true, exec("def x = params.i; int y = params.i; return y !== x;", singletonMap("i", uncachedAutoboxedInt), true)); + + /* Now check that we use valueOf with the boxing used for comparing primitives to def. For this we need an + * integer that is cached by Integer.valueOf. The JLS says 0 should always be cached. 
*/ + int cachedAutoboxedInt = 0; + assertSame(Integer.valueOf(cachedAutoboxedInt), Integer.valueOf(cachedAutoboxedInt)); + assertEquals(false, exec("def x = params.i; int y = params.i; return x != y;", singletonMap("i", cachedAutoboxedInt), true)); + assertEquals(false, exec("def x = params.i; int y = params.i; return x !== y;", singletonMap("i", cachedAutoboxedInt), true)); + assertEquals(false, exec("def x = params.i; int y = params.i; return y != x;", singletonMap("i", cachedAutoboxedInt), true)); + assertEquals(false, exec("def x = params.i; int y = params.i; return y !== x;", singletonMap("i", cachedAutoboxedInt), true)); } public void testRightHandNull() { diff --git a/modules/lang-painless/src/test/java/org/elasticsearch/painless/OrTests.java b/modules/lang-painless/src/test/java/org/elasticsearch/painless/OrTests.java index 929d9486d8893..72d5af1942a6e 100644 --- a/modules/lang-painless/src/test/java/org/elasticsearch/painless/OrTests.java +++ b/modules/lang-painless/src/test/java/org/elasticsearch/painless/OrTests.java @@ -52,7 +52,7 @@ public void testLongConst() throws Exception { assertEquals(5L | -12L, exec("return 5L | -12L;")); assertEquals(7L | 15L | 3L, exec("return 7L | 15L | 3L;")); } - + public void testIllegal() throws Exception { expectScriptThrows(ClassCastException.class, () -> { exec("float x = (float)4; int y = 1; return x | y"); @@ -61,7 +61,7 @@ public void testIllegal() throws Exception { exec("double x = (double)4; int y = 1; return x | y"); }); } - + public void testDef() { expectScriptThrows(ClassCastException.class, () -> { exec("def x = (float)4; def y = (byte)1; return x | y"); @@ -104,13 +104,13 @@ public void testDef() { assertEquals(5, exec("def x = (char)4; def y = (char)1; return x | y")); assertEquals(5, exec("def x = (int)4; def y = (int)1; return x | y")); assertEquals(5L, exec("def x = (long)4; def y = (long)1; return x | y")); - + assertEquals(true, exec("def x = true; def y = true; return x | y")); assertEquals(true, exec("def x = true; def y = false; return x | y")); assertEquals(true, exec("def x = false; def y = true; return x | y")); assertEquals(false, exec("def x = false; def y = false; return x | y")); } - + public void testDefTypedLHS() { expectScriptThrows(ClassCastException.class, () -> { exec("float x = (float)4; def y = (byte)1; return x | y"); @@ -153,13 +153,13 @@ public void testDefTypedLHS() { assertEquals(5, exec("char x = (char)4; def y = (char)1; return x | y")); assertEquals(5, exec("int x = (int)4; def y = (int)1; return x | y")); assertEquals(5L, exec("long x = (long)4; def y = (long)1; return x | y")); - + assertEquals(true, exec("boolean x = true; def y = true; return x | y")); assertEquals(true, exec("boolean x = true; def y = false; return x | y")); assertEquals(true, exec("boolean x = false; def y = true; return x | y")); assertEquals(false, exec("boolean x = false; def y = false; return x | y")); } - + public void testDefTypedRHS() { expectScriptThrows(ClassCastException.class, () -> { exec("def x = (float)4; byte y = (byte)1; return x | y"); @@ -202,13 +202,13 @@ public void testDefTypedRHS() { assertEquals(5, exec("def x = (char)4; char y = (char)1; return x | y")); assertEquals(5, exec("def x = (int)4; int y = (int)1; return x | y")); assertEquals(5L, exec("def x = (long)4; long y = (long)1; return x | y")); - + assertEquals(true, exec("def x = true; boolean y = true; return x | y")); assertEquals(true, exec("def x = true; boolean y = false; return x | y")); assertEquals(true, exec("def x = false; boolean y = 
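Aside: a minimal standalone Java sketch (not part of the patch; the value 123456 is an arbitrary example) of the Integer.valueOf caching behaviour these equality tests rely on, i.e. why the uncached integer is drawn at random while 0 can be hard-coded as a cached one:

    Integer cachedA = Integer.valueOf(0);
    Integer cachedB = Integer.valueOf(0);
    assert cachedA == cachedB;               // the JLS guarantees caching for [-128, 127], so 0 is always the same instance

    Integer bigA = Integer.valueOf(123456);  // outside the mandatory cache range
    Integer bigB = Integer.valueOf(123456);  // typically a distinct instance, but not guaranteed
    assert bigA.equals(bigB);                // value equality always holds
    // bigA == bigB is JVM-dependent, which is why the tests above pick their uncached value randomly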
true; return x | y")); assertEquals(false, exec("def x = false; boolean y = false; return x | y")); } - + public void testCompoundAssignment() { // boolean assertEquals(true, exec("boolean x = true; x |= true; return x;")); @@ -231,7 +231,7 @@ public void testCompoundAssignment() { // long assertEquals((long) (13 | 14), exec("long x = 13L; x |= 14; return x;")); } - + public void testBogusCompoundAssignment() { expectScriptThrows(ClassCastException.class, () -> { exec("float x = 4; int y = 1; x |= y"); @@ -246,7 +246,7 @@ public void testBogusCompoundAssignment() { exec("int x = 4; double y = 1; x |= y"); }); } - + public void testDefCompoundAssignment() { // boolean assertEquals(true, exec("def x = true; x |= true; return x;")); @@ -269,7 +269,7 @@ public void testDefCompoundAssignment() { // long assertEquals((long) (13 | 14), exec("def x = 13L; x |= 14; return x;")); } - + public void testDefBogusCompoundAssignment() { expectScriptThrows(ClassCastException.class, () -> { exec("def x = 4F; int y = 1; x |= y"); diff --git a/modules/lang-painless/src/test/java/org/elasticsearch/painless/PainlessDocGenerator.java b/modules/lang-painless/src/test/java/org/elasticsearch/painless/PainlessDocGenerator.java index c29260163c099..74e0f90cc1b9f 100644 --- a/modules/lang-painless/src/test/java/org/elasticsearch/painless/PainlessDocGenerator.java +++ b/modules/lang-painless/src/test/java/org/elasticsearch/painless/PainlessDocGenerator.java @@ -44,6 +44,7 @@ import static java.util.Comparator.comparing; import static java.util.stream.Collectors.toList; +import static org.elasticsearch.painless.Definition.DEFINITION; /** * Generates an API reference from the method and type whitelists in {@link Definition}. @@ -67,9 +68,9 @@ public static void main(String[] args) throws IOException { Files.newOutputStream(indexPath, StandardOpenOption.CREATE_NEW, StandardOpenOption.WRITE), false, StandardCharsets.UTF_8.name())) { emitGeneratedWarning(indexStream); - List types = Definition.allSimpleTypes().stream().sorted(comparing(t -> t.name)).collect(toList()); + List types = DEFINITION.allSimpleTypes().stream().sorted(comparing(t -> t.name)).collect(toList()); for (Type type : types) { - if (type.sort.primitive) { + if (type.clazz.isPrimitive()) { // Primitives don't have methods to reference continue; } @@ -268,7 +269,7 @@ private static void emitJavadocLink(PrintStream stream, String root, Method meth stream.print("link:{"); stream.print(root); stream.print("-javadoc}/"); - stream.print((method.augmentation != null ? method.augmentation : method.owner.clazz).getName().replace('.', '/')); + stream.print(classUrlPath(method.augmentation != null ? 
method.augmentation : method.owner.clazz)); stream.print(".html#"); stream.print(methodName(method)); stream.print("%2D"); @@ -300,7 +301,7 @@ private static void emitJavadocLink(PrintStream stream, String root, Field field stream.print("link:{"); stream.print(root); stream.print("-javadoc}/"); - stream.print(field.owner.clazz.getName().replace('.', '/')); + stream.print(classUrlPath(field.owner.clazz)); stream.print(".html#"); stream.print(field.javaName); } @@ -352,4 +353,8 @@ private static void emitGeneratedWarning(PrintStream stream) { stream.println("////"); stream.println(); } + + private static String classUrlPath(Class clazz) { + return clazz.getName().replace('.', '/').replace('$', '.'); + } } diff --git a/modules/lang-painless/src/test/java/org/elasticsearch/painless/ScriptTestCase.java b/modules/lang-painless/src/test/java/org/elasticsearch/painless/ScriptTestCase.java index 2e99f652c0ade..89159c5364798 100644 --- a/modules/lang-painless/src/test/java/org/elasticsearch/painless/ScriptTestCase.java +++ b/modules/lang-painless/src/test/java/org/elasticsearch/painless/ScriptTestCase.java @@ -91,7 +91,7 @@ public Object exec(String script, Map vars, boolean picky) { public Object exec(String script, Map vars, Map compileParams, Scorer scorer, boolean picky) { // test for ambiguity errors before running the actual script if picky is true if (picky) { - Definition definition = Definition.BUILTINS; + Definition definition = Definition.DEFINITION; ScriptClassInfo scriptClassInfo = new ScriptClassInfo(definition, GenericElasticsearchScript.class); CompilerSettings pickySettings = new CompilerSettings(); pickySettings.setPicky(true); diff --git a/modules/lang-painless/src/test/java/org/elasticsearch/painless/node/NodeToStringTests.java b/modules/lang-painless/src/test/java/org/elasticsearch/painless/node/NodeToStringTests.java index ee208991a79b4..b3842615859f8 100644 --- a/modules/lang-painless/src/test/java/org/elasticsearch/painless/node/NodeToStringTests.java +++ b/modules/lang-painless/src/test/java/org/elasticsearch/painless/node/NodeToStringTests.java @@ -48,7 +48,7 @@ * Tests {@link Object#toString} implementations on all extensions of {@link ANode}. 
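Aside: the classUrlPath helper added above maps a Java class to its Javadoc URL path, replacing '$' because nested classes use '.' in Javadoc file names. A minimal sketch of what it produces (java.util.Map.Entry is only an illustrative class):

    // "java.util.Map$Entry" -> "java/util/Map.Entry", which the generator turns into ".../java/util/Map.Entry.html"
    String path = java.util.Map.Entry.class.getName().replace('.', '/').replace('$', '.');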
*/ public class NodeToStringTests extends ESTestCase { - private final Definition definition = Definition.BUILTINS; + private final Definition definition = Definition.DEFINITION; public void testEAssignment() { assertToString( @@ -161,12 +161,12 @@ public void testECapturingFunctionRef() { public void testECast() { Location l = new Location(getTestName(), 0); AExpression child = new EConstant(l, "test"); - Cast cast = new Cast(Definition.STRING_TYPE, Definition.INT_OBJ_TYPE, true); + Cast cast = new Cast(Definition.DEFINITION.StringType, Definition.DEFINITION.IntegerType, true); assertEquals("(ECast Integer (EConstant String 'test'))", new ECast(l, child, cast).toString()); l = new Location(getTestName(), 1); child = new EBinary(l, Operation.ADD, new EConstant(l, "test"), new EConstant(l, 12)); - cast = new Cast(Definition.INT_OBJ_TYPE, Definition.BOOLEAN_OBJ_TYPE, true); + cast = new Cast(Definition.DEFINITION.IntegerType, Definition.DEFINITION.BooleanType, true); assertEquals("(ECast Boolean (EBinary (EConstant String 'test') + (EConstant Integer 12)))", new ECast(l, child, cast).toString()); } @@ -395,7 +395,7 @@ public void testPSubArrayLength() { public void testPSubBrace() { Location l = new Location(getTestName(), 0); - PSubBrace node = new PSubBrace(l, Definition.INT_TYPE, new ENumeric(l, "1", 10)); + PSubBrace node = new PSubBrace(l, Definition.DEFINITION.intType, new ENumeric(l, "1", 10)); node.prefix = new EVariable(l, "a"); assertEquals("(PSubBrace (EVariable a) (ENumeric 1))", node.toString()); } @@ -761,7 +761,7 @@ public void testSIfElse() { public void testSSubEachArray() { Location l = new Location(getTestName(), 0); - Variable v = new Variable(l, "test", Definition.INT_TYPE, 5, false); + Variable v = new Variable(l, "test", Definition.DEFINITION.intType, 5, false); AExpression e = new ENewArray(l, "int", Arrays.asList(new EConstant(l, 1), new EConstant(l, 2), new EConstant(l, 3)), true); SBlock b = new SBlock(l, singletonList(new SReturn(l, new EConstant(l, 5)))); SSubEachArray node = new SSubEachArray(l, v, e, b); @@ -773,7 +773,7 @@ public void testSSubEachArray() { public void testSSubEachIterable() { Location l = new Location(getTestName(), 0); - Variable v = new Variable(l, "test", Definition.INT_TYPE, 5, false); + Variable v = new Variable(l, "test", Definition.DEFINITION.intType, 5, false); AExpression e = new EListInit(l, Arrays.asList(new EConstant(l, 1), new EConstant(l, 2), new EConstant(l, 3))); SBlock b = new SBlock(l, singletonList(new SReturn(l, new EConstant(l, 5)))); SSubEachIterable node = new SSubEachIterable(l, v, e, b); diff --git a/modules/lang-painless/src/test/resources/rest-api-spec/test/painless/15_update.yml b/modules/lang-painless/src/test/resources/rest-api-spec/test/painless/15_update.yml index a64ad904c4963..0e319be97bf0b 100644 --- a/modules/lang-painless/src/test/resources/rest-api-spec/test/painless/15_update.yml +++ b/modules/lang-painless/src/test/resources/rest-api-spec/test/painless/15_update.yml @@ -124,7 +124,7 @@ count: 1 - do: - catch: request + catch: bad_request update: index: test_1 type: test diff --git a/core/src/test/java/org/elasticsearch/common/settings/foo/FooTestClass.java b/modules/mapper-extras/build.gradle similarity index 86% rename from core/src/test/java/org/elasticsearch/common/settings/foo/FooTestClass.java rename to modules/mapper-extras/build.gradle index 6d8ca4a798645..7831de3a68e94 100644 --- a/core/src/test/java/org/elasticsearch/common/settings/foo/FooTestClass.java +++ b/modules/mapper-extras/build.gradle 
@@ -17,8 +17,7 @@ * under the License. */ -package org.elasticsearch.common.settings.foo; - -// used in SettingsTest -public class FooTestClass { +esplugin { + description 'Adds advanced field mappers' + classname 'org.elasticsearch.index.mapper.MapperExtrasPlugin' } diff --git a/core/src/main/java/org/apache/lucene/queries/BinaryDocValuesRangeQuery.java b/modules/mapper-extras/src/main/java/org/apache/lucene/queries/BinaryDocValuesRangeQuery.java similarity index 71% rename from core/src/main/java/org/apache/lucene/queries/BinaryDocValuesRangeQuery.java rename to modules/mapper-extras/src/main/java/org/apache/lucene/queries/BinaryDocValuesRangeQuery.java index c8f78ab616d3f..f5d86849e56d1 100644 --- a/core/src/main/java/org/apache/lucene/queries/BinaryDocValuesRangeQuery.java +++ b/modules/mapper-extras/src/main/java/org/apache/lucene/queries/BinaryDocValuesRangeQuery.java @@ -37,15 +37,18 @@ public final class BinaryDocValuesRangeQuery extends Query { private final String fieldName; private final QueryType queryType; + private final LengthType lengthType; private final BytesRef from; private final BytesRef to; private final Object originalFrom; private final Object originalTo; - public BinaryDocValuesRangeQuery(String fieldName, QueryType queryType, BytesRef from, BytesRef to, + public BinaryDocValuesRangeQuery(String fieldName, QueryType queryType, LengthType lengthType, + BytesRef from, BytesRef to, Object originalFrom, Object originalTo) { this.fieldName = fieldName; this.queryType = queryType; + this.lengthType = lengthType; this.from = from; this.to = to; this.originalFrom = originalFrom; @@ -66,29 +69,34 @@ public Scorer scorer(LeafReaderContext context) throws IOException { final TwoPhaseIterator iterator = new TwoPhaseIterator(values) { ByteArrayDataInput in = new ByteArrayDataInput(); - BytesRef otherFrom = new BytesRef(16); - BytesRef otherTo = new BytesRef(16); + BytesRef otherFrom = new BytesRef(); + BytesRef otherTo = new BytesRef(); @Override public boolean matches() throws IOException { BytesRef encodedRanges = values.binaryValue(); in.reset(encodedRanges.bytes, encodedRanges.offset, encodedRanges.length); int numRanges = in.readVInt(); + final byte[] bytes = encodedRanges.bytes; + otherFrom.bytes = bytes; + otherTo.bytes = bytes; + int offset = in.getPosition(); for (int i = 0; i < numRanges; i++) { - otherFrom.length = in.readVInt(); - otherFrom.bytes = encodedRanges.bytes; - otherFrom.offset = in.getPosition(); - in.skipBytes(otherFrom.length); + int length = lengthType.readLength(bytes, offset); + otherFrom.offset = offset; + otherFrom.length = length; + offset += length; - otherTo.length = in.readVInt(); - otherTo.bytes = encodedRanges.bytes; - otherTo.offset = in.getPosition(); - in.skipBytes(otherTo.length); + length = lengthType.readLength(bytes, offset); + otherTo.offset = offset; + otherTo.length = length; + offset += length; if (queryType.matches(from, to, otherFrom, otherTo)) { return true; } } + assert offset == encodedRanges.offset + encodedRanges.length; return false; } @@ -114,13 +122,14 @@ public boolean equals(Object o) { BinaryDocValuesRangeQuery that = (BinaryDocValuesRangeQuery) o; return Objects.equals(fieldName, that.fieldName) && queryType == that.queryType && + lengthType == that.lengthType && Objects.equals(from, that.from) && Objects.equals(to, that.to); } @Override public int hashCode() { - return Objects.hash(getClass(), fieldName, queryType, from, to); + return Objects.hash(getClass(), fieldName, queryType, lengthType, from, to); } public enum 
QueryType { @@ -161,4 +170,42 @@ boolean matches(BytesRef from, BytesRef to, BytesRef otherFrom, BytesRef otherTo } + public enum LengthType { + FIXED_4 { + @Override + int readLength(byte[] bytes, int offset) { + return 4; + } + }, + FIXED_8 { + @Override + int readLength(byte[] bytes, int offset) { + return 8; + } + }, + FIXED_16 { + @Override + int readLength(byte[] bytes, int offset) { + return 16; + } + }, + VARIABLE { + @Override + int readLength(byte[] bytes, int offset) { + // the first bit encodes the sign and the next 4 bits encode the number + // of additional bytes + int token = Byte.toUnsignedInt(bytes[offset]); + int length = (token >>> 3) & 0x0f; + if ((token & 0x80) == 0) { + length = 0x0f - length; + } + return 1 + length; + } + }; + + /** + * Return the length of the value that starts at {@code offset} in {@code bytes}. + */ + abstract int readLength(byte[] bytes, int offset); + } } diff --git a/modules/mapper-extras/src/main/java/org/elasticsearch/index/mapper/BinaryRangeUtil.java b/modules/mapper-extras/src/main/java/org/elasticsearch/index/mapper/BinaryRangeUtil.java new file mode 100644 index 0000000000000..384ab24a73bf6 --- /dev/null +++ b/modules/mapper-extras/src/main/java/org/elasticsearch/index/mapper/BinaryRangeUtil.java @@ -0,0 +1,161 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
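Aside: to make the VARIABLE length decoding above concrete, here is a small sketch of how the header byte is interpreted. The byte values are worked examples derived from the encoding scheme defined in BinaryRangeUtil below, not additional fixtures from the patch:

    // header layout: sign bit (0x80), then 4 bits holding the number of additional data bytes
    int token = Byte.toUnsignedInt((byte) 0x80);      // header of the one-byte encoding of 0L
    int length = (token >>> 3) & 0x0f;                // -> 0
    // sign bit set, so the count is used as-is: total length = 1 + 0 = 1

    token = Byte.toUnsignedInt((byte) 0x88);          // header of the two-byte encoding of 8L
    length = (token >>> 3) & 0x0f;                    // -> 1, total length = 1 + 1 = 2
    // for negative values the sign bit is 0 and the stored count is complemented (0x0f - length)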
+ */ +package org.elasticsearch.index.mapper; + +import org.apache.lucene.document.HalfFloatPoint; +import org.apache.lucene.store.ByteArrayDataOutput; +import org.apache.lucene.util.BytesRef; +import org.apache.lucene.util.NumericUtils; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Comparator; +import java.util.List; +import java.util.Set; + +enum BinaryRangeUtil { + + ; + + static BytesRef encodeLongRanges(Set ranges) throws IOException { + List sortedRanges = new ArrayList<>(ranges); + Comparator fromComparator = Comparator.comparingLong(range -> ((Number) range.from).longValue()); + Comparator toComparator = Comparator.comparingLong(range -> ((Number) range.to).longValue()); + sortedRanges.sort(fromComparator.thenComparing(toComparator)); + + final byte[] encoded = new byte[5 + (9 * 2) * sortedRanges.size()]; + ByteArrayDataOutput out = new ByteArrayDataOutput(encoded); + out.writeVInt(sortedRanges.size()); + for (RangeFieldMapper.Range range : sortedRanges) { + byte[] encodedFrom = encodeLong(((Number) range.from).longValue()); + out.writeBytes(encodedFrom, encodedFrom.length); + byte[] encodedTo = encodeLong(((Number) range.to).longValue()); + out.writeBytes(encodedTo, encodedTo.length); + } + return new BytesRef(encoded, 0, out.getPosition()); + } + + static BytesRef encodeDoubleRanges(Set ranges) throws IOException { + List sortedRanges = new ArrayList<>(ranges); + Comparator fromComparator = Comparator.comparingDouble(range -> ((Number) range.from).doubleValue()); + Comparator toComparator = Comparator.comparingDouble(range -> ((Number) range.to).doubleValue()); + sortedRanges.sort(fromComparator.thenComparing(toComparator)); + + final byte[] encoded = new byte[5 + (8 * 2) * sortedRanges.size()]; + ByteArrayDataOutput out = new ByteArrayDataOutput(encoded); + out.writeVInt(sortedRanges.size()); + for (RangeFieldMapper.Range range : sortedRanges) { + byte[] encodedFrom = encodeDouble(((Number) range.from).doubleValue()); + out.writeBytes(encodedFrom, encodedFrom.length); + byte[] encodedTo = encodeDouble(((Number) range.to).doubleValue()); + out.writeBytes(encodedTo, encodedTo.length); + } + return new BytesRef(encoded, 0, out.getPosition()); + } + + static BytesRef encodeFloatRanges(Set ranges) throws IOException { + List sortedRanges = new ArrayList<>(ranges); + Comparator fromComparator = Comparator.comparingDouble(range -> ((Number) range.from).floatValue()); + Comparator toComparator = Comparator.comparingDouble(range -> ((Number) range.to).floatValue()); + sortedRanges.sort(fromComparator.thenComparing(toComparator)); + + final byte[] encoded = new byte[5 + (4 * 2) * sortedRanges.size()]; + ByteArrayDataOutput out = new ByteArrayDataOutput(encoded); + out.writeVInt(sortedRanges.size()); + for (RangeFieldMapper.Range range : sortedRanges) { + byte[] encodedFrom = encodeFloat(((Number) range.from).floatValue()); + out.writeBytes(encodedFrom, encodedFrom.length); + byte[] encodedTo = encodeFloat(((Number) range.to).floatValue()); + out.writeBytes(encodedTo, encodedTo.length); + } + return new BytesRef(encoded, 0, out.getPosition()); + } + + static byte[] encodeDouble(double number) { + byte[] encoded = new byte[8]; + NumericUtils.longToSortableBytes(NumericUtils.doubleToSortableLong(number), encoded, 0); + return encoded; + } + + static byte[] encodeFloat(float number) { + byte[] encoded = new byte[4]; + NumericUtils.intToSortableBytes(NumericUtils.floatToSortableInt(number), encoded, 0); + return encoded; + } + + /** + * Encodes the specified 
number of type long in a variable-length byte format. + * The byte format preserves ordering, which means the returned byte array can be used for comparing as is. + * The first bit stores the sign and the 4 subsequent bits encode the number of bytes that are used to + * represent the long value, in addition to the first one. + */ + static byte[] encodeLong(long number) { + int sign = 1; // means positive + if (number < 0) { + number = -1 - number; + sign = 0; + } + return encode(number, sign); + } + + private static byte[] encode(long l, int sign) { + assert l >= 0; + + // the header is formed of: + // - 1 bit for the sign + // - 4 bits for the number of additional bytes + // - up to 3 bits of the value + // additional bytes are data bytes + + int numBits = 64 - Long.numberOfLeadingZeros(l); + int numAdditionalBytes = (numBits + 7 - 3) / 8; + + byte[] encoded = new byte[1 + numAdditionalBytes]; + + // write data bytes + int i = encoded.length; + while (numBits > 0) { + int index = --i; + assert index > 0 || numBits <= 3; // byte 0 can't encode more than 3 bits + encoded[index] = (byte) l; + l >>>= 8; + numBits -= 8; + } + assert Byte.toUnsignedInt(encoded[0]) <= 0x07; + assert encoded.length == 1 || encoded[0] != 0 || Byte.toUnsignedInt(encoded[1]) > 0x07; + + if (sign == 0) { + // reverse the order + for (int j = 0; j < encoded.length; ++j) { + encoded[j] = (byte) ~Byte.toUnsignedInt(encoded[j]); + } + // the first byte only uses 3 bits, we need the 5 upper bits for the header + encoded[0] &= 0x07; + } + + // write the header + encoded[0] |= sign << 7; + if (sign > 0) { + encoded[0] |= numAdditionalBytes << 3; + } else { + encoded[0] |= (15 - numAdditionalBytes) << 3; + } + return encoded; + } +} diff --git a/modules/mapper-extras/src/main/java/org/elasticsearch/index/mapper/MapperExtrasPlugin.java b/modules/mapper-extras/src/main/java/org/elasticsearch/index/mapper/MapperExtrasPlugin.java new file mode 100644 index 0000000000000..d91d2b28df821 --- /dev/null +++ b/modules/mapper-extras/src/main/java/org/elasticsearch/index/mapper/MapperExtrasPlugin.java @@ -0,0 +1,42 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
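Aside: a compact restatement of the size logic of the encoder above (a re-derivation for illustration, not code from the patch): fold negative values onto non-negative ones, count the significant bits, and keep the first three value bits in the header byte. This reproduces the lengths asserted in BinaryRangeUtilTests further down (1 byte for -8..7, 2 bytes for the rest of -2048..2047, 9 bytes for Long.MIN_VALUE and Long.MAX_VALUE):

    static int encodedLength(long number) {
        if (number < 0) {
            number = -1 - number;                         // same sign folding as encodeLong
        }
        int numBits = 64 - Long.numberOfLeadingZeros(number);
        int numAdditionalBytes = (numBits + 7 - 3) / 8;   // 3 value bits fit in the header byte
        return 1 + numAdditionalBytes;                    // header byte + data bytes
    }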
+ */ + +package org.elasticsearch.index.mapper; + +import org.elasticsearch.plugins.MapperPlugin; +import org.elasticsearch.plugins.Plugin; + +import java.util.Collections; +import java.util.LinkedHashMap; +import java.util.Map; + +public class MapperExtrasPlugin extends Plugin implements MapperPlugin { + + @Override + public Map getMappers() { + Map mappers = new LinkedHashMap<>(); + mappers.put(ScaledFloatFieldMapper.CONTENT_TYPE, new ScaledFloatFieldMapper.TypeParser()); + mappers.put(TokenCountFieldMapper.CONTENT_TYPE, new TokenCountFieldMapper.TypeParser()); + for (RangeFieldMapper.RangeType type : RangeFieldMapper.RangeType.values()) { + mappers.put(type.typeName(), new RangeFieldMapper.TypeParser(type)); + } + return Collections.unmodifiableMap(mappers); + } + +} diff --git a/core/src/main/java/org/elasticsearch/index/mapper/RangeFieldMapper.java b/modules/mapper-extras/src/main/java/org/elasticsearch/index/mapper/RangeFieldMapper.java similarity index 87% rename from core/src/main/java/org/elasticsearch/index/mapper/RangeFieldMapper.java rename to modules/mapper-extras/src/main/java/org/elasticsearch/index/mapper/RangeFieldMapper.java index 1f1cdd71e4b11..46d553a472973 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/RangeFieldMapper.java +++ b/modules/mapper-extras/src/main/java/org/elasticsearch/index/mapper/RangeFieldMapper.java @@ -28,17 +28,21 @@ import org.apache.lucene.document.StoredField; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; +import org.apache.lucene.index.Term; import org.apache.lucene.queries.BinaryDocValuesRangeQuery; import org.apache.lucene.queries.BinaryDocValuesRangeQuery.QueryType; import org.apache.lucene.search.BoostQuery; +import org.apache.lucene.search.DocValuesFieldExistsQuery; import org.apache.lucene.search.IndexOrDocValuesQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermQuery; import org.apache.lucene.store.ByteArrayDataOutput; import org.apache.lucene.util.BytesRef; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.Version; import org.elasticsearch.common.Explicit; import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.geo.ShapeRelation; import org.elasticsearch.common.joda.DateMathParser; import org.elasticsearch.common.joda.FormatDateTimeFormatter; @@ -54,6 +58,7 @@ import java.io.IOException; import java.net.InetAddress; +import java.net.UnknownHostException; import java.util.ArrayList; import java.util.HashSet; import java.util.Iterator; @@ -152,7 +157,7 @@ protected void setupFieldType(BuilderContext context) { @Override public RangeFieldMapper build(BuilderContext context) { setupFieldType(context); - return new RangeFieldMapper(name, fieldType, defaultFieldType, coerce(context), includeInAll, + return new RangeFieldMapper(name, fieldType, defaultFieldType, coerce(context), context.indexSettings(), multiFieldsBuilder.build(this, context), copyTo); } } @@ -180,7 +185,13 @@ public Mapper.Builder parse(String name, Map node, builder.coerce(TypeParsers.nodeBooleanValue(name, "coerce", propNode, parserContext)); iterator.remove(); } else if (propName.equals("locale")) { - builder.locale(LocaleUtils.parse(propNode.toString())); + Locale locale; + if (parserContext.indexVersionCreated().onOrAfter(Version.V_6_0_0_beta2)) { + locale = LocaleUtils.parse(propNode.toString()); + } else { + locale = LocaleUtils.parse5x(propNode.toString()); + } + builder.locale(locale); 
iterator.remove(); } else if (propName.equals("format")) { builder.dateTimeFormatter(parseDateTimeFormatter(propNode)); @@ -281,29 +292,33 @@ protected DateMathParser dateMathParser() { return dateMathParser; } + @Override + public Query existsQuery(QueryShardContext context) { + if (hasDocValues()) { + return new DocValuesFieldExistsQuery(name()); + } else { + return new TermQuery(new Term(FieldNamesFieldMapper.NAME, name())); + } + } + @Override public Query termQuery(Object value, QueryShardContext context) { - Query query = rangeQuery(value, value, true, true, ShapeRelation.INTERSECTS, context); + Query query = rangeQuery(value, value, true, true, ShapeRelation.INTERSECTS, null, null, context); if (boost() != 1f) { query = new BoostQuery(query, boost()); } return query; } - public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, - ShapeRelation relation, QueryShardContext context) { - failIfNotIndexed(); - return rangeQuery(lowerTerm, upperTerm, includeLower, includeUpper, relation, null, dateMathParser, context); - } - + @Override public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, ShapeRelation relation, DateTimeZone timeZone, DateMathParser parser, QueryShardContext context) { + failIfNotIndexed(); return rangeType.rangeQuery(name(), hasDocValues(), lowerTerm, upperTerm, includeLower, includeUpper, relation, timeZone, parser, context); } } - private Boolean includeInAll; private Explicit coerce; private RangeFieldMapper( @@ -311,13 +326,11 @@ private RangeFieldMapper( MappedFieldType fieldType, MappedFieldType defaultFieldType, Explicit coerce, - Boolean includeInAll, Settings indexSettings, MultiFields multiFields, CopyTo copyTo) { super(simpleName, fieldType, defaultFieldType, indexSettings, multiFields, copyTo); this.coerce = coerce; - this.includeInAll = includeInAll; } @Override @@ -337,13 +350,13 @@ protected RangeFieldMapper clone() { @Override protected void parseCreateField(ParseContext context, List fields) throws IOException { - final boolean includeInAll = context.includeInAll(this.includeInAll, this); Range range; if (context.externalValueSet()) { range = context.parseExternalValue(Range.class); } else { XContentParser parser = context.parser(); - if (parser.currentToken() == XContentParser.Token.START_OBJECT) { + final XContentParser.Token start = parser.currentToken(); + if (start == XContentParser.Token.START_OBJECT) { RangeFieldType fieldType = fieldType(); RangeType rangeType = fieldType.rangeType; String fieldName = null; @@ -383,25 +396,26 @@ protected void parseCreateField(ParseContext context, List field } } range = new Range(rangeType, from, to, includeFrom, includeTo); + } else if (fieldType().rangeType == RangeType.IP && start == XContentParser.Token.VALUE_STRING) { + range = parseIpRangeFromCidr(parser); } else { throw new MapperParsingException("error parsing field [" + name() + "], expected an object but got " + parser.currentName()); } } - if (includeInAll) { - context.allEntries().addText(fieldType.name(), range.toString(), fieldType.boost()); - } boolean indexed = fieldType.indexOptions() != IndexOptions.NONE; boolean docValued = fieldType.hasDocValues(); boolean stored = fieldType.stored(); fields.addAll(fieldType().rangeType.createFields(context, name(), range, indexed, docValued, stored)); + if (docValued == false && (indexed || stored)) { + createFieldNamesField(context, fields); + } } @Override protected void doMerge(Mapper mergeWith, boolean 
updateAllTypes) { super.doMerge(mergeWith, updateAllTypes); RangeFieldMapper other = (RangeFieldMapper) mergeWith; - this.includeInAll = other.includeInAll; if (other.coerce.explicit()) { this.coerce = other.coerce; } @@ -424,10 +438,22 @@ && fieldType().dateTimeFormatter().locale() != Locale.ROOT))) { if (includeDefaults || coerce.explicit()) { builder.field("coerce", coerce.value()); } - if (includeInAll != null) { - builder.field("include_in_all", includeInAll); - } else if (includeDefaults) { - builder.field("include_in_all", false); + } + + private static Range parseIpRangeFromCidr(final XContentParser parser) throws IOException { + final Tuple cidr = InetAddresses.parseCidr(parser.text()); + // create the lower value by zeroing out the host portion, upper value by filling it with all ones. + byte[] lower = cidr.v1().getAddress(); + byte[] upper = lower.clone(); + for (int i = cidr.v2(); i < 8 * lower.length; i++) { + int m = 1 << 7 - (i & 7); + lower[i >> 3] &= ~m; + upper[i >> 3] |= m; + } + try { + return new Range(RangeType.IP, InetAddress.getByAddress(lower), InetAddress.getByAddress(upper), true, true); + } catch (UnknownHostException bogus) { + throw new AssertionError(bogus); } } @@ -484,12 +510,10 @@ public BytesRef encodeRanges(Set ranges) throws IOException { ByteArrayDataOutput out = new ByteArrayDataOutput(encoded); out.writeVInt(ranges.size()); for (Range range : ranges) { - out.writeVInt(16); InetAddress fromValue = (InetAddress) range.from; byte[] encodedFromValue = InetAddressPoint.encode(fromValue); out.writeBytes(encodedFromValue, 0, encodedFromValue.length); - out.writeVInt(16); InetAddress toValue = (InetAddress) range.to; byte[] encodedToValue = InetAddressPoint.encode(toValue); out.writeBytes(encodedToValue, 0, encodedToValue.length); @@ -498,10 +522,19 @@ public BytesRef encodeRanges(Set ranges) throws IOException { } @Override - BytesRef[] encodeRange(Object from, Object to) { - BytesRef encodedFrom = new BytesRef(InetAddressPoint.encode((InetAddress) from)); - BytesRef encodedTo = new BytesRef(InetAddressPoint.encode((InetAddress) to)); - return new BytesRef[]{encodedFrom, encodedTo}; + public Query dvRangeQuery(String field, QueryType queryType, Object from, Object to, boolean includeFrom, boolean includeTo) { + if (includeFrom == false) { + from = nextUp(from); + } + + if (includeTo == false) { + to = nextDown(to); + } + + byte[] encodedFrom = InetAddressPoint.encode((InetAddress) from); + byte[] encodedTo = InetAddressPoint.encode((InetAddress) to); + return new BinaryDocValuesRangeQuery(field, queryType, BinaryDocValuesRangeQuery.LengthType.FIXED_16, + new BytesRef(encodedFrom), new BytesRef(encodedTo), from, to); } @Override @@ -525,9 +558,6 @@ public Query intersectsQuery(String field, Object from, Object to, boolean inclu return InetAddressRange.newIntersectsQuery(field, includeLower ? lower : nextUp(lower), includeUpper ? 
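Aside: the CIDR expansion in parseIpRangeFromCidr above clears the host bits for the lower bound and sets them for the upper bound. A standalone sketch with a hypothetical 10.0.0.0/8 network, using only java.net.InetAddress (InetAddresses.parseCidr is Elasticsearch-internal) inside a method that declares throws UnknownHostException:

    InetAddress base = InetAddress.getByName("10.0.0.0");  // assumed sample network address
    int prefixLength = 8;                                    // assumed prefix length
    byte[] lower = base.getAddress();
    byte[] upper = lower.clone();
    for (int i = prefixLength; i < 8 * lower.length; i++) {
        int m = 1 << (7 - (i & 7));
        lower[i >> 3] &= ~m;                                 // zero the host bit in the lower bound
        upper[i >> 3] |= m;                                  // set the host bit in the upper bound
    }
    // lower is 10.0.0.0 and upper is 10.255.255.255; both ends are inclusive, matching the Range built above
    InetAddress from = InetAddress.getByAddress(lower);
    InetAddress to = InetAddress.getByAddress(upper);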
upper : nextDown(upper)); } - public String toString(InetAddress address) { - return InetAddresses.toAddrString(address); - } }, DATE("date_range", NumberType.LONG) { @Override @@ -572,8 +602,8 @@ public BytesRef encodeRanges(Set ranges) throws IOException { } @Override - BytesRef[] encodeRange(Object from, Object to) { - return LONG.encodeRange(from, to); + public Query dvRangeQuery(String field, QueryType queryType, Object from, Object to, boolean includeFrom, boolean includeTo) { + return LONG.dvRangeQuery(field, queryType, from, to, includeFrom, includeTo); } @Override @@ -627,12 +657,23 @@ public Float nextDown(Object value) { @Override public BytesRef encodeRanges(Set ranges) throws IOException { - return DOUBLE.encodeRanges(ranges); + return BinaryRangeUtil.encodeFloatRanges(ranges); } @Override - BytesRef[] encodeRange(Object from, Object to) { - return DOUBLE.encodeRange(((Number) from).floatValue(), ((Number) to).floatValue()); + public Query dvRangeQuery(String field, QueryType queryType, Object from, Object to, boolean includeFrom, boolean includeTo) { + if (includeFrom == false) { + from = nextUp(from); + } + + if (includeTo == false) { + to = nextDown(to); + } + + byte[] encodedFrom = BinaryRangeUtil.encodeFloat((Float) from); + byte[] encodedTo = BinaryRangeUtil.encodeFloat((Float) to); + return new BinaryDocValuesRangeQuery(field, queryType, BinaryDocValuesRangeQuery.LengthType.FIXED_4, + new BytesRef(encodedFrom), new BytesRef(encodedTo), from, to); } @Override @@ -682,10 +723,19 @@ public BytesRef encodeRanges(Set ranges) throws IOException { } @Override - BytesRef[] encodeRange(Object from, Object to) { - byte[] fromValue = BinaryRangeUtil.encode(((Number) from).doubleValue()); - byte[] toValue = BinaryRangeUtil.encode(((Number) to).doubleValue()); - return new BytesRef[]{new BytesRef(fromValue), new BytesRef(toValue)}; + public Query dvRangeQuery(String field, QueryType queryType, Object from, Object to, boolean includeFrom, boolean includeTo) { + if (includeFrom == false) { + from = nextUp(from); + } + + if (includeTo == false) { + to = nextDown(to); + } + + byte[] encodedFrom = BinaryRangeUtil.encodeDouble((Double) from); + byte[] encodedTo = BinaryRangeUtil.encodeDouble((Double) to); + return new BinaryDocValuesRangeQuery(field, queryType, BinaryDocValuesRangeQuery.LengthType.FIXED_8, + new BytesRef(encodedFrom), new BytesRef(encodedTo), from, to); } @Override @@ -737,8 +787,8 @@ public BytesRef encodeRanges(Set ranges) throws IOException { } @Override - BytesRef[] encodeRange(Object from, Object to) { - return LONG.encodeRange(from, to); + public Query dvRangeQuery(String field, QueryType queryType, Object from, Object to, boolean includeFrom, boolean includeTo) { + return LONG.dvRangeQuery(field, queryType, from, to, includeFrom, includeTo); } @Override @@ -785,10 +835,19 @@ public BytesRef encodeRanges(Set ranges) throws IOException { } @Override - BytesRef[] encodeRange(Object from, Object to) { - byte[] encodedFrom = BinaryRangeUtil.encode(((Number) from).longValue()); - byte[] encodedTo = BinaryRangeUtil.encode(((Number) to).longValue()); - return new BytesRef[]{new BytesRef(encodedFrom), new BytesRef(encodedTo)}; + public Query dvRangeQuery(String field, QueryType queryType, Object from, Object to, boolean includeFrom, boolean includeTo) { + if (includeFrom == false) { + from = nextUp(from); + } + + if (includeTo == false) { + to = nextDown(to); + } + + byte[] encodedFrom = BinaryRangeUtil.encodeLong(((Number) from).longValue()); + byte[] encodedTo = 
BinaryRangeUtil.encodeLong(((Number) to).longValue()); + return new BinaryDocValuesRangeQuery(field, queryType, BinaryDocValuesRangeQuery.LengthType.VARIABLE, + new BytesRef(encodedFrom), new BytesRef(encodedTo), from, to); } @Override @@ -904,19 +963,8 @@ public Query rangeQuery(String field, boolean hasDocValues, Object from, Object // rounded up via parseFrom and parseTo methods. public abstract BytesRef encodeRanges(Set ranges) throws IOException; - public Query dvRangeQuery(String field, QueryType queryType, Object from, Object to, boolean includeFrom, boolean includeTo) { - if (includeFrom == false) { - from = nextUp(from); - } - - if (includeTo == false) { - to = nextDown(to); - } - BytesRef[] range = encodeRange(from, to); - return new BinaryDocValuesRangeQuery(field, queryType, range[0], range[1], from, to); - } - - abstract BytesRef[] encodeRange(Object from, Object to); + public abstract Query dvRangeQuery(String field, QueryType queryType, Object from, Object to, + boolean includeFrom, boolean includeTo); public final String name; private final NumberType numberType; diff --git a/core/src/main/java/org/elasticsearch/index/mapper/ScaledFloatFieldMapper.java b/modules/mapper-extras/src/main/java/org/elasticsearch/index/mapper/ScaledFloatFieldMapper.java similarity index 94% rename from core/src/main/java/org/elasticsearch/index/mapper/ScaledFloatFieldMapper.java rename to modules/mapper-extras/src/main/java/org/elasticsearch/index/mapper/ScaledFloatFieldMapper.java index eace5f66292ab..96ec29e2aa695 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/ScaledFloatFieldMapper.java +++ b/modules/mapper-extras/src/main/java/org/elasticsearch/index/mapper/ScaledFloatFieldMapper.java @@ -25,9 +25,12 @@ import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.NumericDocValues; import org.apache.lucene.index.SortedNumericDocValues; +import org.apache.lucene.index.Term; import org.apache.lucene.search.BoostQuery; +import org.apache.lucene.search.DocValuesFieldExistsQuery; import org.apache.lucene.search.Query; import org.apache.lucene.search.SortField; +import org.apache.lucene.search.TermQuery; import org.apache.lucene.util.BytesRef; import org.elasticsearch.common.Explicit; import org.elasticsearch.common.Nullable; @@ -88,6 +91,12 @@ public Builder ignoreMalformed(boolean ignoreMalformed) { return builder; } + @Override + public Builder indexOptions(IndexOptions indexOptions) { + throw new MapperParsingException( + "index_options not allowed in field [" + name + "] of type [" + builder.fieldType().typeName() + "]"); + } + protected Explicit ignoreMalformed(BuilderContext context) { if (ignoreMalformed != null) { return new Explicit<>(ignoreMalformed, true); @@ -126,7 +135,7 @@ public ScaledFloatFieldMapper build(BuilderContext context) { } setupFieldType(context); return new ScaledFloatFieldMapper(name, fieldType, defaultFieldType, ignoreMalformed(context), - coerce(context), includeInAll, context.indexSettings(), multiFieldsBuilder.build(this, context), copyTo); + coerce(context), context.indexSettings(), multiFieldsBuilder.build(this, context), copyTo); } } @@ -162,7 +171,7 @@ public Mapper.Builder parse(String name, Map node, } } - public static final class ScaledFloatFieldType extends MappedFieldType { + public static final class ScaledFloatFieldType extends SimpleMappedFieldType { private double scalingFactor; @@ -205,6 +214,15 @@ public void checkCompatibility(MappedFieldType other, List conflicts, bo } } + @Override + public Query 
existsQuery(QueryShardContext context) { + if (hasDocValues()) { + return new DocValuesFieldExistsQuery(name()); + } else { + return new TermQuery(new Term(FieldNamesFieldMapper.NAME, name())); + } + } + @Override public Query termQuery(Object value, QueryShardContext context) { failIfNotIndexed(); @@ -238,19 +256,19 @@ public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower failIfNotIndexed(); Long lo = null; if (lowerTerm != null) { - double dValue = parse(lowerTerm); + double dValue = parse(lowerTerm) * scalingFactor; if (includeLower == false) { dValue = Math.nextUp(dValue); } - lo = Math.round(Math.ceil(dValue * scalingFactor)); + lo = Math.round(Math.ceil(dValue)); } Long hi = null; if (upperTerm != null) { - double dValue = parse(upperTerm); + double dValue = parse(upperTerm) * scalingFactor; if (includeUpper == false) { dValue = Math.nextDown(dValue); } - hi = Math.round(Math.floor(dValue * scalingFactor)); + hi = Math.round(Math.floor(dValue)); } Query query = NumberFieldMapper.NumberType.LONG.rangeQuery(name(), lo, hi, true, true, hasDocValues()); if (boost() != 1f) { @@ -309,8 +327,6 @@ public int hashCode() { } } - private Boolean includeInAll; - private Explicit ignoreMalformed; private Explicit coerce; @@ -321,7 +337,6 @@ private ScaledFloatFieldMapper( MappedFieldType defaultFieldType, Explicit ignoreMalformed, Explicit coerce, - Boolean includeInAll, Settings indexSettings, MultiFields multiFields, CopyTo copyTo) { @@ -332,7 +347,6 @@ private ScaledFloatFieldMapper( } this.ignoreMalformed = ignoreMalformed; this.coerce = coerce; - this.includeInAll = includeInAll; } @Override @@ -352,7 +366,6 @@ protected ScaledFloatFieldMapper clone() { @Override protected void parseCreateField(ParseContext context, List fields) throws IOException { - final boolean includeInAll = context.includeInAll(this.includeInAll, this); XContentParser parser = context.parser(); Object value; @@ -375,11 +388,7 @@ protected void parseCreateField(ParseContext context, List field throw e; } } - if (includeInAll) { - value = parser.textOrNull(); // preserve formatting - } else { - value = numericValue; - } + value = numericValue; } if (value == null) { @@ -394,10 +403,6 @@ protected void parseCreateField(ParseContext context, List field numericValue = parse(value); } - if (includeInAll) { - context.allEntries().addText(fieldType().name(), value.toString(), fieldType().boost()); - } - double doubleValue = numericValue.doubleValue(); if (Double.isFinite(doubleValue) == false) { if (ignoreMalformed.value()) { @@ -413,13 +418,15 @@ protected void parseCreateField(ParseContext context, List field boolean docValued = fieldType().hasDocValues(); boolean stored = fieldType().stored(); fields.addAll(NumberFieldMapper.NumberType.LONG.createFields(fieldType().name(), scaledValue, indexed, docValued, stored)); + if (docValued == false && (indexed || stored)) { + createFieldNamesField(context, fields); + } } @Override protected void doMerge(Mapper mergeWith, boolean updateAllTypes) { super.doMerge(mergeWith, updateAllTypes); ScaledFloatFieldMapper other = (ScaledFloatFieldMapper) mergeWith; - this.includeInAll = other.includeInAll; if (other.ignoreMalformed.explicit()) { this.ignoreMalformed = other.ignoreMalformed; } @@ -444,12 +451,6 @@ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults, if (includeDefaults || fieldType().nullValue() != null) { builder.field("null_value", fieldType().nullValue()); } - - if (includeInAll != null) { - builder.field("include_in_all", 
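Aside: with the rangeQuery change above, the query bounds are scaled before rounding. A small worked example, assuming a hypothetical scaled_float field with scaling_factor 100 and an inclusive query range of [0.015, 0.021]:

    double scalingFactor = 100.0;                              // assumed mapping parameter
    long lo = Math.round(Math.ceil(0.015 * scalingFactor));    // 2
    long hi = Math.round(Math.floor(0.021 * scalingFactor));   // 2
    // the delegated LONG range query then matches stored values in [2, 2], i.e. only 0.02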
includeInAll); - } else if (includeDefaults) { - builder.field("include_in_all", false); - } } static Double parse(Object value) { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/TokenCountFieldMapper.java b/modules/mapper-extras/src/main/java/org/elasticsearch/index/mapper/TokenCountFieldMapper.java similarity index 100% rename from core/src/main/java/org/elasticsearch/index/mapper/TokenCountFieldMapper.java rename to modules/mapper-extras/src/main/java/org/elasticsearch/index/mapper/TokenCountFieldMapper.java diff --git a/core/src/test/java/org/apache/lucene/queries/BaseRandomBinaryDocValuesRangeQueryTestCase.java b/modules/mapper-extras/src/test/java/org/apache/lucene/queries/BaseRandomBinaryDocValuesRangeQueryTestCase.java similarity index 92% rename from core/src/test/java/org/apache/lucene/queries/BaseRandomBinaryDocValuesRangeQueryTestCase.java rename to modules/mapper-extras/src/test/java/org/apache/lucene/queries/BaseRandomBinaryDocValuesRangeQueryTestCase.java index b83dac78d070e..fcc9f67229f87 100644 --- a/core/src/test/java/org/apache/lucene/queries/BaseRandomBinaryDocValuesRangeQueryTestCase.java +++ b/modules/mapper-extras/src/test/java/org/apache/lucene/queries/BaseRandomBinaryDocValuesRangeQueryTestCase.java @@ -49,7 +49,7 @@ public void testRandomBig() throws Exception { @Override protected final Field newRangeField(Range box) { - AbstractRange testRange = (AbstractRange) box; + AbstractRange testRange = (AbstractRange) box; RangeFieldMapper.Range range = new RangeFieldMapper.Range(rangeType(), testRange.getMin(), testRange.getMax(), true , true); try { BytesRef encodeRange = rangeType().encodeRanges(Collections.singleton(range)); @@ -61,25 +61,25 @@ protected final Field newRangeField(Range box) { @Override protected final Query newIntersectsQuery(Range box) { - AbstractRange testRange = (AbstractRange) box; + AbstractRange testRange = (AbstractRange) box; return rangeType().dvRangeQuery(fieldName(), INTERSECTS, testRange.getMin(), testRange.getMax(), true, true); } @Override protected final Query newContainsQuery(Range box) { - AbstractRange testRange = (AbstractRange) box; + AbstractRange testRange = (AbstractRange) box; return rangeType().dvRangeQuery(fieldName(), CONTAINS, testRange.getMin(), testRange.getMax(), true, true); } @Override protected final Query newWithinQuery(Range box) { - AbstractRange testRange = (AbstractRange) box; + AbstractRange testRange = (AbstractRange) box; return rangeType().dvRangeQuery(fieldName(), WITHIN, testRange.getMin(), testRange.getMax(), true, true); } @Override protected final Query newCrossesQuery(Range box) { - AbstractRange testRange = (AbstractRange) box; + AbstractRange testRange = (AbstractRange) box; return rangeType().dvRangeQuery(fieldName(), CROSSES, testRange.getMin(), testRange.getMax(), true, true); } @@ -116,7 +116,7 @@ protected final Object getMax(int dim) { @Override protected final boolean isEqual(Range o) { - AbstractRange other = (AbstractRange) o; + AbstractRange other = (AbstractRange) o; return Objects.equals(getMin(), other.getMin()) && Objects.equals(getMax(), other.getMax()); } diff --git a/core/src/test/java/org/apache/lucene/queries/BinaryDocValuesRangeQueryTests.java b/modules/mapper-extras/src/test/java/org/apache/lucene/queries/BinaryDocValuesRangeQueryTests.java similarity index 100% rename from core/src/test/java/org/apache/lucene/queries/BinaryDocValuesRangeQueryTests.java rename to modules/mapper-extras/src/test/java/org/apache/lucene/queries/BinaryDocValuesRangeQueryTests.java diff 
--git a/core/src/test/java/org/apache/lucene/queries/DoubleRandomBinaryDocValuesRangeQueryTests.java b/modules/mapper-extras/src/test/java/org/apache/lucene/queries/DoubleRandomBinaryDocValuesRangeQueryTests.java similarity index 96% rename from core/src/test/java/org/apache/lucene/queries/DoubleRandomBinaryDocValuesRangeQueryTests.java rename to modules/mapper-extras/src/test/java/org/apache/lucene/queries/DoubleRandomBinaryDocValuesRangeQueryTests.java index aa15a80319510..984b1d72ef843 100644 --- a/core/src/test/java/org/apache/lucene/queries/DoubleRandomBinaryDocValuesRangeQueryTests.java +++ b/modules/mapper-extras/src/test/java/org/apache/lucene/queries/DoubleRandomBinaryDocValuesRangeQueryTests.java @@ -56,7 +56,7 @@ private double nextDoubleInternal() { } } - private static class DoubleTestRange extends AbstractRange { + private static class DoubleTestRange extends AbstractRange { double min; double max; @@ -66,7 +66,7 @@ private static class DoubleTestRange extends AbstractRange { } @Override - public Object getMin() { + public Double getMin() { return min; } @@ -82,7 +82,7 @@ protected void setMin(int dim, Object val) { } @Override - public Object getMax() { + public Double getMax() { return max; } diff --git a/core/src/test/java/org/apache/lucene/queries/FloatRandomBinaryDocValuesRangeQueryTests.java b/modules/mapper-extras/src/test/java/org/apache/lucene/queries/FloatRandomBinaryDocValuesRangeQueryTests.java similarity index 96% rename from core/src/test/java/org/apache/lucene/queries/FloatRandomBinaryDocValuesRangeQueryTests.java rename to modules/mapper-extras/src/test/java/org/apache/lucene/queries/FloatRandomBinaryDocValuesRangeQueryTests.java index 8a04a50448fed..a7f877392cf43 100644 --- a/core/src/test/java/org/apache/lucene/queries/FloatRandomBinaryDocValuesRangeQueryTests.java +++ b/modules/mapper-extras/src/test/java/org/apache/lucene/queries/FloatRandomBinaryDocValuesRangeQueryTests.java @@ -56,7 +56,7 @@ private float nextFloatInternal() { } } - private static class FloatTestRange extends AbstractRange { + private static class FloatTestRange extends AbstractRange { float min; float max; @@ -66,7 +66,7 @@ private static class FloatTestRange extends AbstractRange { } @Override - public Object getMin() { + public Float getMin() { return min; } @@ -82,7 +82,7 @@ protected void setMin(int dim, Object val) { } @Override - public Object getMax() { + public Float getMax() { return max; } diff --git a/core/src/test/java/org/apache/lucene/queries/InetAddressRandomBinaryDocValuesRangeQueryTests.java b/modules/mapper-extras/src/test/java/org/apache/lucene/queries/InetAddressRandomBinaryDocValuesRangeQueryTests.java similarity index 96% rename from core/src/test/java/org/apache/lucene/queries/InetAddressRandomBinaryDocValuesRangeQueryTests.java rename to modules/mapper-extras/src/test/java/org/apache/lucene/queries/InetAddressRandomBinaryDocValuesRangeQueryTests.java index 1592e89d174eb..2def2702d38b3 100644 --- a/core/src/test/java/org/apache/lucene/queries/InetAddressRandomBinaryDocValuesRangeQueryTests.java +++ b/modules/mapper-extras/src/test/java/org/apache/lucene/queries/InetAddressRandomBinaryDocValuesRangeQueryTests.java @@ -67,7 +67,7 @@ private InetAddress nextInetaddress() throws UnknownHostException { } } - private static class IpRange extends AbstractRange { + private static class IpRange extends AbstractRange { InetAddress minAddress; InetAddress maxAddress; byte[] min; @@ -81,7 +81,7 @@ private static class IpRange extends AbstractRange { } @Override - public 
Object getMin() { + public InetAddress getMin() { return minAddress; } @@ -101,7 +101,7 @@ protected void setMin(int dim, Object val) { } @Override - public Object getMax() { + public InetAddress getMax() { return maxAddress; } diff --git a/core/src/test/java/org/apache/lucene/queries/IntegerRandomBinaryDocValuesRangeQueryTests.java b/modules/mapper-extras/src/test/java/org/apache/lucene/queries/IntegerRandomBinaryDocValuesRangeQueryTests.java similarity index 96% rename from core/src/test/java/org/apache/lucene/queries/IntegerRandomBinaryDocValuesRangeQueryTests.java rename to modules/mapper-extras/src/test/java/org/apache/lucene/queries/IntegerRandomBinaryDocValuesRangeQueryTests.java index 6fe59b8827312..1d04cdbaaca86 100644 --- a/core/src/test/java/org/apache/lucene/queries/IntegerRandomBinaryDocValuesRangeQueryTests.java +++ b/modules/mapper-extras/src/test/java/org/apache/lucene/queries/IntegerRandomBinaryDocValuesRangeQueryTests.java @@ -64,7 +64,7 @@ private int nextIntInternal() { } } - private static class IntTestRange extends AbstractRange { + private static class IntTestRange extends AbstractRange { int min; int max; @@ -74,7 +74,7 @@ private static class IntTestRange extends AbstractRange { } @Override - public Object getMin() { + public Integer getMin() { return min; } @@ -90,7 +90,7 @@ protected void setMin(int dim, Object val) { } @Override - public Object getMax() { + public Integer getMax() { return max; } diff --git a/core/src/test/java/org/apache/lucene/queries/LongRandomBinaryDocValuesRangeQueryTests.java b/modules/mapper-extras/src/test/java/org/apache/lucene/queries/LongRandomBinaryDocValuesRangeQueryTests.java similarity index 96% rename from core/src/test/java/org/apache/lucene/queries/LongRandomBinaryDocValuesRangeQueryTests.java rename to modules/mapper-extras/src/test/java/org/apache/lucene/queries/LongRandomBinaryDocValuesRangeQueryTests.java index 139cb3f0b1282..e506c2c269028 100644 --- a/core/src/test/java/org/apache/lucene/queries/LongRandomBinaryDocValuesRangeQueryTests.java +++ b/modules/mapper-extras/src/test/java/org/apache/lucene/queries/LongRandomBinaryDocValuesRangeQueryTests.java @@ -64,7 +64,7 @@ private long nextLongInternal() { } } - private static class LongTestRange extends AbstractRange { + private static class LongTestRange extends AbstractRange { long min; long max; @@ -74,7 +74,7 @@ private static class LongTestRange extends AbstractRange { } @Override - public Object getMin() { + public Long getMin() { return min; } @@ -90,7 +90,7 @@ protected void setMin(int dim, Object val) { } @Override - public Object getMax() { + public Long getMax() { return max; } diff --git a/core/src/test/java/org/elasticsearch/action/admin/indices/template/BWCTemplateTests.java b/modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/BWCTemplateTests.java similarity index 82% rename from core/src/test/java/org/elasticsearch/action/admin/indices/template/BWCTemplateTests.java rename to modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/BWCTemplateTests.java index 40c314edd41cd..1d9671218c456 100644 --- a/core/src/test/java/org/elasticsearch/action/admin/indices/template/BWCTemplateTests.java +++ b/modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/BWCTemplateTests.java @@ -17,11 +17,15 @@ * under the License. 
*/ -package org.elasticsearch.action.admin.indices.template; +package org.elasticsearch.index.mapper; import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.index.mapper.MapperExtrasPlugin; +import org.elasticsearch.plugins.Plugin; import org.elasticsearch.test.ESSingleNodeTestCase; +import java.util.Collection; + import static org.elasticsearch.test.StreamsUtils.copyToBytesFromClasspath; /** @@ -29,11 +33,16 @@ * prior to their 5.x releases work for newly created indices */ public class BWCTemplateTests extends ESSingleNodeTestCase { + @Override + protected Collection> getPlugins() { + return pluginList(MapperExtrasPlugin.class); + } + public void testBeatsTemplatesBWC() throws Exception { - byte[] metricBeat = copyToBytesFromClasspath("/org/elasticsearch/action/admin/indices/template/metricbeat-5.0.template.json"); - byte[] packetBeat = copyToBytesFromClasspath("/org/elasticsearch/action/admin/indices/template/packetbeat-5.0.template.json"); - byte[] fileBeat = copyToBytesFromClasspath("/org/elasticsearch/action/admin/indices/template/filebeat-5.0.template.json"); - byte[] winLogBeat = copyToBytesFromClasspath("/org/elasticsearch/action/admin/indices/template/winlogbeat-5.0.template.json"); + byte[] metricBeat = copyToBytesFromClasspath("/org/elasticsearch/index/mapper/metricbeat-5.0.template.json"); + byte[] packetBeat = copyToBytesFromClasspath("/org/elasticsearch/index/mapper/packetbeat-5.0.template.json"); + byte[] fileBeat = copyToBytesFromClasspath("/org/elasticsearch/index/mapper/filebeat-5.0.template.json"); + byte[] winLogBeat = copyToBytesFromClasspath("/org/elasticsearch/index/mapper/winlogbeat-5.0.template.json"); client().admin().indices().preparePutTemplate("metricbeat").setSource(metricBeat, XContentType.JSON).get(); client().admin().indices().preparePutTemplate("packetbeat").setSource(packetBeat, XContentType.JSON).get(); client().admin().indices().preparePutTemplate("filebeat").setSource(fileBeat, XContentType.JSON).get(); diff --git a/modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/BinaryRangeUtilTests.java b/modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/BinaryRangeUtilTests.java new file mode 100644 index 0000000000000..20d4af1f0b600 --- /dev/null +++ b/modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/BinaryRangeUtilTests.java @@ -0,0 +1,152 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ +package org.elasticsearch.index.mapper; + +import org.apache.lucene.util.BytesRef; +import org.elasticsearch.test.ESTestCase; + +public class BinaryRangeUtilTests extends ESTestCase { + + public void testBasics() { + BytesRef encoded1 = new BytesRef(BinaryRangeUtil.encodeLong(Long.MIN_VALUE)); + BytesRef encoded2 = new BytesRef(BinaryRangeUtil.encodeLong(-1L)); + BytesRef encoded3 = new BytesRef(BinaryRangeUtil.encodeLong(0L)); + BytesRef encoded4 = new BytesRef(BinaryRangeUtil.encodeLong(1L)); + BytesRef encoded5 = new BytesRef(BinaryRangeUtil.encodeLong(Long.MAX_VALUE)); + + assertTrue(encoded1.compareTo(encoded2) < 0); + assertTrue(encoded2.compareTo(encoded1) > 0); + assertTrue(encoded2.compareTo(encoded3) < 0); + assertTrue(encoded3.compareTo(encoded2) > 0); + assertTrue(encoded3.compareTo(encoded4) < 0); + assertTrue(encoded4.compareTo(encoded3) > 0); + assertTrue(encoded4.compareTo(encoded5) < 0); + assertTrue(encoded5.compareTo(encoded4) > 0); + + encoded1 = new BytesRef(BinaryRangeUtil.encodeDouble(Double.NEGATIVE_INFINITY)); + encoded2 = new BytesRef(BinaryRangeUtil.encodeDouble(-1D)); + encoded3 = new BytesRef(BinaryRangeUtil.encodeDouble(-0D)); + encoded4 = new BytesRef(BinaryRangeUtil.encodeDouble(0D)); + encoded5 = new BytesRef(BinaryRangeUtil.encodeDouble(1D)); + BytesRef encoded6 = new BytesRef(BinaryRangeUtil.encodeDouble(Double.POSITIVE_INFINITY)); + + assertTrue(encoded1.compareTo(encoded2) < 0); + assertTrue(encoded2.compareTo(encoded1) > 0); + assertTrue(encoded2.compareTo(encoded3) < 0); + assertTrue(encoded3.compareTo(encoded2) > 0); + assertTrue(encoded3.compareTo(encoded4) < 0); + assertTrue(encoded4.compareTo(encoded3) > 0); + assertTrue(encoded4.compareTo(encoded5) < 0); + assertTrue(encoded5.compareTo(encoded4) > 0); + assertTrue(encoded5.compareTo(encoded6) < 0); + assertTrue(encoded6.compareTo(encoded5) > 0); + + encoded1 = new BytesRef(BinaryRangeUtil.encodeFloat(Float.NEGATIVE_INFINITY)); + encoded2 = new BytesRef(BinaryRangeUtil.encodeFloat(-1F)); + encoded3 = new BytesRef(BinaryRangeUtil.encodeFloat(-0F)); + encoded4 = new BytesRef(BinaryRangeUtil.encodeFloat(0F)); + encoded5 = new BytesRef(BinaryRangeUtil.encodeFloat(1F)); + encoded6 = new BytesRef(BinaryRangeUtil.encodeFloat(Float.POSITIVE_INFINITY)); + + assertTrue(encoded1.compareTo(encoded2) < 0); + assertTrue(encoded2.compareTo(encoded1) > 0); + assertTrue(encoded2.compareTo(encoded3) < 0); + assertTrue(encoded3.compareTo(encoded2) > 0); + assertTrue(encoded3.compareTo(encoded4) < 0); + assertTrue(encoded4.compareTo(encoded3) > 0); + assertTrue(encoded4.compareTo(encoded5) < 0); + assertTrue(encoded5.compareTo(encoded4) > 0); + assertTrue(encoded5.compareTo(encoded6) < 0); + assertTrue(encoded6.compareTo(encoded5) > 0); + } + + public void testEncode_long() { + int iters = randomIntBetween(32, 1024); + for (int i = 0; i < iters; i++) { + long number1 = randomLong(); + BytesRef encodedNumber1 = new BytesRef(BinaryRangeUtil.encodeLong(number1)); + long number2 = randomBoolean() ? 
number1 + 1 : randomLong(); + BytesRef encodedNumber2 = new BytesRef(BinaryRangeUtil.encodeLong(number2)); + + int cmp = normalize(Long.compare(number1, number2)); + assertEquals(cmp, normalize(encodedNumber1.compareTo(encodedNumber2))); + cmp = normalize(Long.compare(number2, number1)); + assertEquals(cmp, normalize(encodedNumber2.compareTo(encodedNumber1))); + } + } + + public void testVariableLengthEncoding() { + for (int i = -8; i <= 7; ++i) { + assertEquals(1, BinaryRangeUtil.encodeLong(i).length); + } + for (int i = -2048; i <= 2047; ++i) { + if (i < -8 ||i > 7) { + assertEquals(2, BinaryRangeUtil.encodeLong(i).length); + } + } + assertEquals(3, BinaryRangeUtil.encodeLong(-2049).length); + assertEquals(3, BinaryRangeUtil.encodeLong(2048).length); + assertEquals(9, BinaryRangeUtil.encodeLong(Long.MIN_VALUE).length); + assertEquals(9, BinaryRangeUtil.encodeLong(Long.MAX_VALUE).length); + } + + public void testEncode_double() { + int iters = randomIntBetween(32, 1024); + for (int i = 0; i < iters; i++) { + double number1 = randomDouble(); + BytesRef encodedNumber1 = new BytesRef(BinaryRangeUtil.encodeDouble(number1)); + double number2 = randomBoolean() ? Math.nextUp(number1) : randomDouble(); + BytesRef encodedNumber2 = new BytesRef(BinaryRangeUtil.encodeDouble(number2)); + + assertEquals(8, encodedNumber1.length); + assertEquals(8, encodedNumber2.length); + int cmp = normalize(Double.compare(number1, number2)); + assertEquals(cmp, normalize(encodedNumber1.compareTo(encodedNumber2))); + cmp = normalize(Double.compare(number2, number1)); + assertEquals(cmp, normalize(encodedNumber2.compareTo(encodedNumber1))); + } + } + + public void testEncode_Float() { + int iters = randomIntBetween(32, 1024); + for (int i = 0; i < iters; i++) { + float number1 = randomFloat(); + BytesRef encodedNumber1 = new BytesRef(BinaryRangeUtil.encodeFloat(number1)); + float number2 = randomBoolean() ? Math.nextUp(number1) : randomFloat(); + BytesRef encodedNumber2 = new BytesRef(BinaryRangeUtil.encodeFloat(number2)); + + assertEquals(4, encodedNumber1.length); + assertEquals(4, encodedNumber2.length); + int cmp = normalize(Double.compare(number1, number2)); + assertEquals(cmp, normalize(encodedNumber1.compareTo(encodedNumber2))); + cmp = normalize(Double.compare(number2, number1)); + assertEquals(cmp, normalize(encodedNumber2.compareTo(encodedNumber1))); + } + } + + private static int normalize(int cmp) { + if (cmp < 0) { + return -1; + } else if (cmp > 0) { + return 1; + } + return 0; + } + +} diff --git a/modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/IpRangeFieldMapperTests.java b/modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/IpRangeFieldMapperTests.java new file mode 100644 index 0000000000000..63ebe7a6cb3c0 --- /dev/null +++ b/modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/IpRangeFieldMapperTests.java @@ -0,0 +1,87 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
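The assertions above all verify one property: comparing two encoded values byte by byte and unsigned (which is what `BytesRef.compareTo` does) must give the same answer as comparing the original numbers, and `encodeLong` additionally uses a variable-length form so that small magnitudes (-8..7) take one byte while `Long.MIN_VALUE` and `Long.MAX_VALUE` take nine. The sketch below illustrates an order-preserving double encoding of this kind; it demonstrates the property the tests check and is not necessarily how `BinaryRangeUtil` actually implements it.

-------------------------------------------------
// Illustrative only: maps a double to 8 bytes whose unsigned, big-endian
// lexicographic order matches Double.compare. Positive values get the sign
// bit flipped, negative values have all bits inverted, so the encoded order
// runs -Infinity < ... < -0.0 < 0.0 < ... < +Infinity.
static byte[] encodeDoubleSketch(double value) {
    long bits = Double.doubleToLongBits(value);
    bits = bits >= 0 ? bits ^ 0x8000000000000000L : ~bits;
    byte[] out = new byte[8];
    for (int i = 0; i < 8; i++) {
        out[i] = (byte) (bits >>> (56 - 8 * i)); // big-endian: byte order == numeric order
    }
    return out;
}
-------------------------------------------------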
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.elasticsearch.index.mapper;
+
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.lucene.index.DocValuesType;
+import org.apache.lucene.index.IndexableField;
+import org.elasticsearch.common.compress.CompressedXContent;
+import org.elasticsearch.common.network.InetAddresses;
+import org.elasticsearch.common.xcontent.XContentBuilder;
+import org.elasticsearch.common.xcontent.XContentFactory;
+import org.elasticsearch.common.xcontent.XContentType;
+import org.elasticsearch.index.IndexService;
+import org.elasticsearch.plugins.Plugin;
+import org.elasticsearch.test.ESSingleNodeTestCase;
+import org.junit.Before;
+
+import static org.hamcrest.Matchers.containsString;
+
+public class IpRangeFieldMapperTests extends ESSingleNodeTestCase {
+
+    private IndexService indexService;
+    private DocumentMapperParser parser;
+
+    @Override
+    protected Collection<Class<? extends Plugin>> getPlugins() {
+        return pluginList(MapperExtrasPlugin.class);
+    }
+
+    @Before
+    public void setup() {
+        indexService = createIndex("test");
+        parser = indexService.mapperService().documentMapperParser();
+    }
+
+    public void testStoreCidr() throws Exception {
+        XContentBuilder mapping = XContentFactory.jsonBuilder().startObject().startObject("type")
+            .startObject("properties").startObject("field").field("type", "ip_range")
+            .field("store", true);
+        mapping = mapping.endObject().endObject().endObject().endObject();
+        DocumentMapper mapper = parser.parse("type", new CompressedXContent(mapping.string()));
+        assertEquals(mapping.string(), mapper.mappingSource().toString());
+        final Map<String, String> cases = new HashMap<>();
+        cases.put("192.168.0.0/15", "192.169.255.255");
+        cases.put("192.168.0.0/16", "192.168.255.255");
+        cases.put("192.168.0.0/17", "192.168.127.255");
+        for (final Map.Entry<String, String> entry : cases.entrySet()) {
+            ParsedDocument doc =
+                mapper.parse(SourceToParse.source("test", "type", "1", XContentFactory.jsonBuilder()
+                    .startObject()
+                    .field("field", entry.getKey())
+                    .endObject().bytes(),
+                    XContentType.JSON
+                ));
+            IndexableField[] fields = doc.rootDoc().getFields("field");
+            assertEquals(3, fields.length);
+            IndexableField dvField = fields[0];
+            assertEquals(DocValuesType.BINARY, dvField.fieldType().docValuesType());
+            IndexableField pointField = fields[1];
+            assertEquals(2, pointField.fieldType().pointDimensionCount());
+            IndexableField storedField = fields[2];
+            assertTrue(storedField.fieldType().stored());
+            String strVal =
+                InetAddresses.toAddrString(InetAddresses.forString("192.168.0.0")) + " : " +
+                InetAddresses.toAddrString(InetAddresses.forString(entry.getValue()));
+            assertThat(storedField.stringValue(), containsString(strVal));
+        }
+    }
+}
diff --git a/modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/MapperExtrasClientYamlTestSuiteIT.java b/modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/MapperExtrasClientYamlTestSuiteIT.java
new file mode 100644
index 0000000000000..e2f10791739f8
--- /dev/null
+++ b/modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/MapperExtrasClientYamlTestSuiteIT.java
@@ -0,0
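The `testStoreCidr` cases above pair a CIDR block with its inclusive upper address (for example `192.168.0.0/16` with `192.168.255.255`), and the stored value is checked as a "lower : upper" string. The hypothetical helper below, which is not part of the mapper, only makes that arithmetic concrete: the upper bound of an IPv4 block is the base address with every host bit set.

-------------------------------------------------
import java.net.InetAddress;
import java.net.UnknownHostException;

// Hypothetical helper: "192.168.0.0/16" -> 192.168.255.255,
// "192.168.0.0/15" -> 192.169.255.255, "192.168.0.0/17" -> 192.168.127.255.
static InetAddress upperBound(String cidr) throws UnknownHostException {
    String[] parts = cidr.split("/");
    byte[] addr = InetAddress.getByName(parts[0]).getAddress(); // literal, no DNS lookup
    int prefix = Integer.parseInt(parts[1]);
    for (int bit = prefix; bit < addr.length * 8; bit++) {
        addr[bit / 8] |= (byte) (1 << (7 - bit % 8)); // set every host bit to 1
    }
    return InetAddress.getByAddress(addr);
}
-------------------------------------------------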
+1,40 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.index.mapper; + +import com.carrotsearch.randomizedtesting.annotations.Name; +import com.carrotsearch.randomizedtesting.annotations.ParametersFactory; + +import org.elasticsearch.test.rest.yaml.ClientYamlTestCandidate; +import org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase; + +/** Runs yaml rest tests */ +public class MapperExtrasClientYamlTestSuiteIT extends ESClientYamlSuiteTestCase { + + public MapperExtrasClientYamlTestSuiteIT(@Name("yaml") ClientYamlTestCandidate testCandidate) { + super(testCandidate); + } + + @ParametersFactory + public static Iterable parameters() throws Exception { + return ESClientYamlSuiteTestCase.createParameters(); + } +} + diff --git a/core/src/test/java/org/elasticsearch/index/mapper/RangeFieldMapperTests.java b/modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/RangeFieldMapperTests.java similarity index 98% rename from core/src/test/java/org/elasticsearch/index/mapper/RangeFieldMapperTests.java rename to modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/RangeFieldMapperTests.java index 7bae878b92459..0742aeadcb58a 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/RangeFieldMapperTests.java +++ b/modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/RangeFieldMapperTests.java @@ -27,10 +27,13 @@ import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.plugins.Plugin; +import org.elasticsearch.test.InternalSettingsPlugin; import java.io.IOException; import java.net.InetAddress; import java.util.Arrays; +import java.util.Collection; import java.util.HashSet; import java.util.Locale; @@ -42,6 +45,12 @@ import static org.hamcrest.Matchers.containsString; public class RangeFieldMapperTests extends AbstractNumericFieldMapperTestCase { + + @Override + protected Collection> getPlugins() { + return pluginList(InternalSettingsPlugin.class, MapperExtrasPlugin.class); + } + private static String FROM_DATE = "2016-10-31"; private static String TO_DATE = "2016-11-01 20:00:00"; private static String FROM_IP = "::ffff:c0a8:107"; diff --git a/core/src/test/java/org/elasticsearch/index/mapper/RangeFieldTypeTests.java b/modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/RangeFieldTypeTests.java similarity index 99% rename from core/src/test/java/org/elasticsearch/index/mapper/RangeFieldTypeTests.java rename to modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/RangeFieldTypeTests.java index 328e61c233091..810563555969a 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/RangeFieldTypeTests.java +++ 
b/modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/RangeFieldTypeTests.java @@ -95,7 +95,7 @@ public void testRangeQuery() throws Exception { Object to = nextTo(from); assertEquals(getExpectedRangeQuery(relation, from, to, includeLower, includeUpper), - ft.rangeQuery(from, to, includeLower, includeUpper, relation, context)); + ft.rangeQuery(from, to, includeLower, includeUpper, relation, null, null, context)); } private Query getExpectedRangeQuery(ShapeRelation relation, Object from, Object to, boolean includeLower, boolean includeUpper) { diff --git a/core/src/test/java/org/elasticsearch/index/mapper/ScaledFloatFieldMapperTests.java b/modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/ScaledFloatFieldMapperTests.java similarity index 94% rename from core/src/test/java/org/elasticsearch/index/mapper/ScaledFloatFieldMapperTests.java rename to modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/ScaledFloatFieldMapperTests.java index c0650c9c72e5b..29e68c85db5e4 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/ScaledFloatFieldMapperTests.java +++ b/modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/ScaledFloatFieldMapperTests.java @@ -50,7 +50,7 @@ public void setup() { @Override protected Collection> getPlugins() { - return pluginList(InternalSettingsPlugin.class); + return pluginList(InternalSettingsPlugin.class, MapperExtrasPlugin.class); } public void testDefaults() throws Exception { @@ -336,4 +336,19 @@ public void testEmptyName() throws IOException { ); assertThat(e.getMessage(), containsString("name cannot be empty string")); } + + /** + * `index_options` was deprecated and is rejected as of 7.0 + */ + public void testRejectIndexOptions() throws IOException { + String mapping = XContentFactory.jsonBuilder().startObject().startObject("type") + .startObject("properties") + .startObject("foo") + .field("type", "scaled_float") + .field("index_options", randomFrom(new String[] { "docs", "freqs", "positions", "offsets" })) + .endObject() + .endObject().endObject().endObject().string(); + MapperParsingException e = expectThrows(MapperParsingException.class, () -> parser.parse("type", new CompressedXContent(mapping))); + assertThat(e.getMessage(), containsString("index_options not allowed in field [foo] of type [scaled_float]")); + } } diff --git a/core/src/test/java/org/elasticsearch/index/mapper/ScaledFloatFieldTypeTests.java b/modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/ScaledFloatFieldTypeTests.java similarity index 77% rename from core/src/test/java/org/elasticsearch/index/mapper/ScaledFloatFieldTypeTests.java rename to modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/ScaledFloatFieldTypeTests.java index 811bac82bbce9..83039ebd88319 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/ScaledFloatFieldTypeTests.java +++ b/modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/ScaledFloatFieldTypeTests.java @@ -124,6 +124,42 @@ public void testRangeQuery() throws IOException { IOUtils.close(reader, dir); } + public void testRoundsUpperBoundCorrectly() { + ScaledFloatFieldMapper.ScaledFloatFieldType ft = new ScaledFloatFieldMapper.ScaledFloatFieldType(); + ft.setName("scaled_float"); + ft.setScalingFactor(100.0); + Query scaledFloatQ = ft.rangeQuery(null, 0.1, true, false, null); + assertEquals("scaled_float:[-9223372036854775808 TO 9]", scaledFloatQ.toString()); + scaledFloatQ = ft.rangeQuery(null, 0.1, true, true, null); + 
assertEquals("scaled_float:[-9223372036854775808 TO 10]", scaledFloatQ.toString()); + scaledFloatQ = ft.rangeQuery(null, 0.095, true, false, null); + assertEquals("scaled_float:[-9223372036854775808 TO 9]", scaledFloatQ.toString()); + scaledFloatQ = ft.rangeQuery(null, 0.095, true, true, null); + assertEquals("scaled_float:[-9223372036854775808 TO 9]", scaledFloatQ.toString()); + scaledFloatQ = ft.rangeQuery(null, 0.105, true, false, null); + assertEquals("scaled_float:[-9223372036854775808 TO 10]", scaledFloatQ.toString()); + scaledFloatQ = ft.rangeQuery(null, 0.105, true, true, null); + assertEquals("scaled_float:[-9223372036854775808 TO 10]", scaledFloatQ.toString()); + } + + public void testRoundsLowerBoundCorrectly() { + ScaledFloatFieldMapper.ScaledFloatFieldType ft = new ScaledFloatFieldMapper.ScaledFloatFieldType(); + ft.setName("scaled_float"); + ft.setScalingFactor(100.0); + Query scaledFloatQ = ft.rangeQuery(-0.1, null, false, true, null); + assertEquals("scaled_float:[-9 TO 9223372036854775807]", scaledFloatQ.toString()); + scaledFloatQ = ft.rangeQuery(-0.1, null, true, true, null); + assertEquals("scaled_float:[-10 TO 9223372036854775807]", scaledFloatQ.toString()); + scaledFloatQ = ft.rangeQuery(-0.095, null, false, true, null); + assertEquals("scaled_float:[-9 TO 9223372036854775807]", scaledFloatQ.toString()); + scaledFloatQ = ft.rangeQuery(-0.095, null, true, true, null); + assertEquals("scaled_float:[-9 TO 9223372036854775807]", scaledFloatQ.toString()); + scaledFloatQ = ft.rangeQuery(-0.105, null, false, true, null); + assertEquals("scaled_float:[-10 TO 9223372036854775807]", scaledFloatQ.toString()); + scaledFloatQ = ft.rangeQuery(-0.105, null, true, true, null); + assertEquals("scaled_float:[-10 TO 9223372036854775807]", scaledFloatQ.toString()); + } + public void testValueForSearch() { ScaledFloatFieldMapper.ScaledFloatFieldType ft = new ScaledFloatFieldMapper.ScaledFloatFieldType(); ft.setName("scaled_float"); diff --git a/core/src/test/java/org/elasticsearch/index/mapper/TokenCountFieldMapperIntegrationIT.java b/modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/TokenCountFieldMapperIntegrationIT.java similarity index 99% rename from core/src/test/java/org/elasticsearch/index/mapper/TokenCountFieldMapperIntegrationIT.java rename to modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/TokenCountFieldMapperIntegrationIT.java index 75b588df85ad3..3d69b0d013e29 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/TokenCountFieldMapperIntegrationIT.java +++ b/modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/TokenCountFieldMapperIntegrationIT.java @@ -220,7 +220,8 @@ private void assertSearchHit(SearchHit hit, int[] standardTermCounts, int[] engl assertThat(hit.field("foo.token_count_without_position_increments"), not(nullValue())); assertThat(hit.field("foo.token_count_without_position_increments").getValues().size(), equalTo(englishTermCounts.length)); for (int i = 0; i < englishTermCounts.length; i++) { - assertThat((Integer) hit.field("foo.token_count_without_position_increments").getValues().get(i), equalTo(englishTermCounts[i])); + assertThat((Integer) hit.field("foo.token_count_without_position_increments").getValues().get(i), + equalTo(englishTermCounts[i])); } if (loadCountedFields && storeCountedFields) { diff --git a/core/src/test/java/org/elasticsearch/index/mapper/TokenCountFieldMapperTests.java b/modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/TokenCountFieldMapperTests.java 
similarity index 99% rename from core/src/test/java/org/elasticsearch/index/mapper/TokenCountFieldMapperTests.java rename to modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/TokenCountFieldMapperTests.java index 861586370aef8..633f10276096c 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/TokenCountFieldMapperTests.java +++ b/modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/TokenCountFieldMapperTests.java @@ -45,6 +45,12 @@ * Test for {@link TokenCountFieldMapper}. */ public class TokenCountFieldMapperTests extends ESSingleNodeTestCase { + + @Override + protected Collection> getPlugins() { + return pluginList(InternalSettingsPlugin.class, MapperExtrasPlugin.class); + } + public void testMerge() throws IOException { String stage1Mapping = XContentFactory.jsonBuilder().startObject() .startObject("person") @@ -122,11 +128,6 @@ public TokenStreamComponents createComponents(String fieldName) { return analyzer; } - @Override - protected Collection> getPlugins() { - return pluginList(InternalSettingsPlugin.class); - } - public void testEmptyName() throws IOException { IndexService indexService = createIndex("test"); DocumentMapperParser parser = indexService.mapperService().documentMapperParser(); diff --git a/core/src/test/resources/org/elasticsearch/action/admin/indices/template/filebeat-5.0.template.json b/modules/mapper-extras/src/test/resources/org/elasticsearch/index/mapper/filebeat-5.0.template.json similarity index 100% rename from core/src/test/resources/org/elasticsearch/action/admin/indices/template/filebeat-5.0.template.json rename to modules/mapper-extras/src/test/resources/org/elasticsearch/index/mapper/filebeat-5.0.template.json diff --git a/core/src/test/resources/org/elasticsearch/action/admin/indices/template/metricbeat-5.0.template.json b/modules/mapper-extras/src/test/resources/org/elasticsearch/index/mapper/metricbeat-5.0.template.json similarity index 100% rename from core/src/test/resources/org/elasticsearch/action/admin/indices/template/metricbeat-5.0.template.json rename to modules/mapper-extras/src/test/resources/org/elasticsearch/index/mapper/metricbeat-5.0.template.json diff --git a/core/src/test/resources/org/elasticsearch/action/admin/indices/template/packetbeat-5.0.template.json b/modules/mapper-extras/src/test/resources/org/elasticsearch/index/mapper/packetbeat-5.0.template.json similarity index 100% rename from core/src/test/resources/org/elasticsearch/action/admin/indices/template/packetbeat-5.0.template.json rename to modules/mapper-extras/src/test/resources/org/elasticsearch/index/mapper/packetbeat-5.0.template.json diff --git a/core/src/test/resources/org/elasticsearch/action/admin/indices/template/winlogbeat-5.0.template.json b/modules/mapper-extras/src/test/resources/org/elasticsearch/index/mapper/winlogbeat-5.0.template.json similarity index 100% rename from core/src/test/resources/org/elasticsearch/action/admin/indices/template/winlogbeat-5.0.template.json rename to modules/mapper-extras/src/test/resources/org/elasticsearch/index/mapper/winlogbeat-5.0.template.json diff --git a/modules/mapper-extras/src/test/resources/rest-api-spec/test/range/10_basic.yml b/modules/mapper-extras/src/test/resources/rest-api-spec/test/range/10_basic.yml new file mode 100644 index 0000000000000..9fd54d6342d54 --- /dev/null +++ b/modules/mapper-extras/src/test/resources/rest-api-spec/test/range/10_basic.yml @@ -0,0 +1,334 @@ +setup: + - do: + indices.create: + index: test + body: + settings: + number_of_replicas: 0 + mappings: 
+ doc: + "properties": + "integer_range": + "type" : "integer_range" + "long_range": + "type" : "long_range" + "float_range": + "type" : "float_range" + "double_range": + "type" : "double_range" + "date_range": + "type" : "date_range" + "ip_range": + "type" : "ip_range" + +--- +"Integer range": + + - do: + index: + index: test + type: doc + id: 1 + body: { "integer_range" : { "gte": 1, "lte": 5 } } + + - do: + index: + index: test + type: doc + id: 2 + body: { "integer_range" : { "gte": 1, "lte": 3 } } + + - do: + index: + index: test + type: doc + id: 3 + body: { "integer_range" : { "gte": 4, "lte": 5 } } + + + - do: + indices.refresh: {} + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "integer_range" : { "gte": 3, "lte" : 4 } } } } + + - match: { hits.total: 3 } + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "integer_range" : { "gte": 3, "lte" : 4, "relation": "intersects" } } } } + + - match: { hits.total: 3 } + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "integer_range" : { "gte": 3, "lte" : 4, "relation": "contains" } } } } + + - match: { hits.total: 1 } + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "integer_range" : { "gte": 3, "lte" : 4, "relation": "within" } } } } + + - match: { hits.total: 0 } + +--- +"Long range": + + - do: + index: + index: test + type: doc + id: 1 + body: { "long_range" : { "gte": 1, "lte": 5 } } + + - do: + index: + index: test + type: doc + id: 2 + body: { "long_range" : { "gte": 1, "lte": 3 } } + + - do: + index: + index: test + type: doc + id: 3 + body: { "long_range" : { "gte": 4, "lte": 5 } } + + + - do: + indices.refresh: {} + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "long_range" : { "gte": 3, "lte" : 4 } } } } + + - match: { hits.total: 3 } + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "long_range" : { "gte": 3, "lte" : 4, "relation": "intersects" } } } } + + - match: { hits.total: 3 } + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "long_range" : { "gte": 3, "lte" : 4, "relation": "contains" } } } } + + - match: { hits.total: 1 } + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "long_range" : { "gte": 3, "lte" : 4, "relation": "within" } } } } + + - match: { hits.total: 0 } + +--- +"Float range": + + - do: + index: + index: test + type: doc + id: 1 + body: { "float_range" : { "gte": 1, "lte": 5 } } + + - do: + index: + index: test + type: doc + id: 2 + body: { "float_range" : { "gte": 1, "lte": 3 } } + + - do: + index: + index: test + type: doc + id: 3 + body: { "float_range" : { "gte": 4, "lte": 5 } } + + + - do: + indices.refresh: {} + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "float_range" : { "gte": 3, "lte" : 4 } } } } + + - match: { hits.total: 3 } + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "float_range" : { "gte": 3, "lte" : 4, "relation": "intersects" } } } } + + - match: { hits.total: 3 } + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "float_range" : { "gte": 3, "lte" : 4, "relation": "contains" } } } } + + - match: { hits.total: 1 } + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "float_range" : { "gte": 3, "lte" : 4, "relation": "within" } } } } + + - match: { hits.total: 0 } + +--- +"Double range": + + - do: + index: + index: test + type: doc + id: 1 + body: { "double_range" : { "gte": 1, "lte": 5 } } + + - do: + index: + index: test + type: doc + id: 2 + body: { "double_range" : { "gte": 1, "lte": 3 } } + 
+ - do: + index: + index: test + type: doc + id: 3 + body: { "double_range" : { "gte": 4, "lte": 5 } } + + + - do: + indices.refresh: {} + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "double_range" : { "gte": 3, "lte" : 4 } } } } + + - match: { hits.total: 3 } + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "double_range" : { "gte": 3, "lte" : 4, "relation": "intersects" } } } } + + - match: { hits.total: 3 } + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "double_range" : { "gte": 3, "lte" : 4, "relation": "contains" } } } } + + - match: { hits.total: 1 } + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "double_range" : { "gte": 3, "lte" : 4, "relation": "within" } } } } + + - match: { hits.total: 0 } + +--- +"IP range": + + - do: + index: + index: test + type: doc + id: 1 + body: { "ip_range" : { "gte": "192.168.0.1", "lte": "192.168.0.5" } } + + - do: + index: + index: test + type: doc + id: 2 + body: { "ip_range" : { "gte": "192.168.0.1", "lte": "192.168.0.3" } } + + - do: + index: + index: test + type: doc + id: 3 + body: { "ip_range" : { "gte": "192.168.0.4", "lte": "192.168.0.5" } } + + + - do: + indices.refresh: {} + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "ip_range" : { "gte": "192.168.0.3", "lte" : "192.168.0.4" } } } } + + - match: { hits.total: 3 } + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "ip_range" : { "gte": "192.168.0.3", "lte" : "192.168.0.4", "relation": "intersects" } } } } + + - match: { hits.total: 3 } + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "ip_range" : { "gte": "192.168.0.3", "lte" : "192.168.0.4", "relation": "contains" } } } } + + - match: { hits.total: 1 } + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "ip_range" : { "gte": "192.168.0.3", "lte" : "192.168.0.4", "relation": "within" } } } } + + - match: { hits.total: 0 } + +--- +"Date range": + + - do: + index: + index: test + type: doc + id: 1 + body: { "date_range" : { "gte": "2017-09-01", "lte": "2017-09-05" } } + + - do: + index: + index: test + type: doc + id: 2 + body: { "date_range" : { "gte": "2017-09-01", "lte": "2017-09-03" } } + + - do: + index: + index: test + type: doc + id: 3 + body: { "date_range" : { "gte": "2017-09-04", "lte": "2017-09-05" } } + + + - do: + indices.refresh: {} + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "date_range" : { "gte": "2017-09-03", "lte" : "2017-09-04" } } } } + + - match: { hits.total: 3 } + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "date_range" : { "gte": "2017-09-03", "lte" : "2017-09-04", "relation": "intersects" } } } } + + - match: { hits.total: 3 } + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "date_range" : { "gte": "2017-09-03", "lte" : "2017-09-04", "relation": "contains" } } } } + + - match: { hits.total: 1 } + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "date_range" : { "gte": "2017-09-03", "lte" : "2017-09-04", "relation": "within" } } } } + + - match: { hits.total: 0 } diff --git a/modules/mapper-extras/src/test/resources/rest-api-spec/test/scaled_float/10_basic.yml b/modules/mapper-extras/src/test/resources/rest-api-spec/test/scaled_float/10_basic.yml new file mode 100644 index 0000000000000..6840d8aae20d6 --- /dev/null +++ b/modules/mapper-extras/src/test/resources/rest-api-spec/test/scaled_float/10_basic.yml @@ -0,0 +1,105 @@ +setup: + - do: + indices.create: + index: test + body: + settings: + 
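The range REST tests above exercise the three relations a range query supports against range fields: the default `intersects`, plus `contains` (the indexed range must enclose the query range) and `within` (the indexed range must lie inside the query range). Assuming the Java query builders used elsewhere in this PR, roughly equivalent queries would look like:

-------------------------------------------------
// Assumes org.elasticsearch.index.query.QueryBuilder and QueryBuilders;
// "integer_range" is the field from the YAML fixture above.
QueryBuilder intersects = QueryBuilders.rangeQuery("integer_range").gte(3).lte(4); // default relation
QueryBuilder contains = QueryBuilders.rangeQuery("integer_range").gte(3).lte(4).relation("contains");
QueryBuilder within = QueryBuilders.rangeQuery("integer_range").gte(3).lte(4).relation("within");
-------------------------------------------------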
number_of_replicas: 0 + mappings: + doc: + "properties": + "number": + "type" : "scaled_float" + "scaling_factor": 100 + + - do: + index: + index: test + type: doc + id: 1 + body: { "number" : 1 } + + - do: + index: + index: test + type: doc + id: 2 + body: { "number" : 1.53 } + + - do: + index: + index: test + type: doc + id: 3 + body: { "number" : -2.1 } + + - do: + index: + index: test + type: doc + id: 4 + body: { "number" : 1.53 } + + - do: + indices.refresh: {} + +--- +"Aggregations": + + - do: + search: + body: { "size" : 0, "aggs" : { "my_terms" : { "terms" : { "field" : "number" } } } } + + - match: { hits.total: 4 } + + - length: { aggregations.my_terms.buckets: 3 } + + - match: { aggregations.my_terms.buckets.0.key: 1.53 } + + - is_false: aggregations.my_terms.buckets.0.key_as_string + + - match: { aggregations.my_terms.buckets.0.doc_count: 2 } + + - match: { aggregations.my_terms.buckets.1.key: -2.1 } + + - is_false: aggregations.my_terms.buckets.1.key_as_string + + - match: { aggregations.my_terms.buckets.1.doc_count: 1 } + + - match: { aggregations.my_terms.buckets.2.key: 1 } + + - is_false: aggregations.my_terms.buckets.2.key_as_string + + - match: { aggregations.my_terms.buckets.2.doc_count: 1 } + +--- +"Search": + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "number" : { "gte" : -2 } } } } + + - match: { hits.total: 3 } + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "number" : { "gte" : 0 } } } } + + - match: { hits.total: 3 } + + - do: + search: + body: { "size" : 0, "query" : { "range" : { "number" : { "lt" : 1.5 } } } } + + - match: { hits.total: 2 } + +--- +"Sort": + + - do: + search: + body: { "size" : 1, "sort" : { "number" : { "order" : "asc" } } } + + - match: { hits.total: 4 } + - match: { hits.hits.0._id: "3" } + diff --git a/modules/parent-join/src/main/java/org/elasticsearch/join/aggregations/ParentToChildrenAggregator.java b/modules/parent-join/src/main/java/org/elasticsearch/join/aggregations/ParentToChildrenAggregator.java index fa025e994f6f7..b555afce67ae7 100644 --- a/modules/parent-join/src/main/java/org/elasticsearch/join/aggregations/ParentToChildrenAggregator.java +++ b/modules/parent-join/src/main/java/org/elasticsearch/join/aggregations/ParentToChildrenAggregator.java @@ -36,6 +36,7 @@ import org.elasticsearch.search.aggregations.AggregatorFactories; import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.LeafBucketCollector; +import org.elasticsearch.search.aggregations.bucket.BucketsAggregator; import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; import org.elasticsearch.search.aggregations.support.ValuesSource; @@ -48,7 +49,7 @@ // The RecordingPerReaderBucketCollector assumes per segment recording which isn't the case for this // aggregation, for this reason that collector can't be used -public class ParentToChildrenAggregator extends SingleBucketAggregator { +public class ParentToChildrenAggregator extends BucketsAggregator implements SingleBucketAggregator { static final ParseField TYPE_FIELD = new ParseField("type"); diff --git a/modules/parent-join/src/main/java/org/elasticsearch/join/mapper/MetaJoinFieldMapper.java b/modules/parent-join/src/main/java/org/elasticsearch/join/mapper/MetaJoinFieldMapper.java index 9c67d90b0dce1..388d4ca833ff4 100644 --- a/modules/parent-join/src/main/java/org/elasticsearch/join/mapper/MetaJoinFieldMapper.java +++ 
b/modules/parent-join/src/main/java/org/elasticsearch/join/mapper/MetaJoinFieldMapper.java @@ -21,6 +21,7 @@ import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; +import org.apache.lucene.search.Query; import org.apache.lucene.util.BytesRef; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.fielddata.IndexFieldData; @@ -29,6 +30,7 @@ import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.mapper.ParseContext; import org.elasticsearch.index.mapper.StringFieldType; +import org.elasticsearch.index.query.QueryShardContext; import java.io.IOException; import java.util.List; @@ -105,6 +107,11 @@ public Object valueForDisplay(Object value) { public ParentJoinFieldMapper getMapper() { return mapper; } + + @Override + public Query existsQuery(QueryShardContext context) { + throw new UnsupportedOperationException("Exists query not supported for fields of type" + typeName()); + } } MetaJoinFieldMapper(String name, MappedFieldType fieldType, Settings indexSettings) { diff --git a/modules/parent-join/src/main/java/org/elasticsearch/join/mapper/ParentIdFieldMapper.java b/modules/parent-join/src/main/java/org/elasticsearch/join/mapper/ParentIdFieldMapper.java index 58c30a05788a9..21078c2763f1c 100644 --- a/modules/parent-join/src/main/java/org/elasticsearch/join/mapper/ParentIdFieldMapper.java +++ b/modules/parent-join/src/main/java/org/elasticsearch/join/mapper/ParentIdFieldMapper.java @@ -27,6 +27,7 @@ import org.apache.lucene.search.BooleanClause; import org.apache.lucene.search.BooleanQuery; import org.apache.lucene.search.ConstantScoreQuery; +import org.apache.lucene.search.DocValuesFieldExistsQuery; import org.apache.lucene.search.Query; import org.apache.lucene.search.TermQuery; import org.apache.lucene.util.BytesRef; @@ -39,6 +40,7 @@ import org.elasticsearch.index.mapper.Mapper; import org.elasticsearch.index.mapper.ParseContext; import org.elasticsearch.index.mapper.StringFieldType; +import org.elasticsearch.index.query.QueryShardContext; import java.io.IOException; import java.util.Collection; @@ -124,6 +126,11 @@ public Object valueForDisplay(Object value) { BytesRef binaryValue = (BytesRef) value; return binaryValue.utf8ToString(); } + + @Override + public Query existsQuery(QueryShardContext context) { + return new DocValuesFieldExistsQuery(name()); + } } private final String parentName; diff --git a/modules/parent-join/src/main/java/org/elasticsearch/join/mapper/ParentJoinFieldMapper.java b/modules/parent-join/src/main/java/org/elasticsearch/join/mapper/ParentJoinFieldMapper.java index 61f2ae1a89966..b2ec28cf0c86b 100644 --- a/modules/parent-join/src/main/java/org/elasticsearch/join/mapper/ParentJoinFieldMapper.java +++ b/modules/parent-join/src/main/java/org/elasticsearch/join/mapper/ParentJoinFieldMapper.java @@ -23,6 +23,8 @@ import org.apache.lucene.document.SortedDocValuesField; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; +import org.apache.lucene.search.DocValuesFieldExistsQuery; +import org.apache.lucene.search.Query; import org.apache.lucene.util.BytesRef; import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.settings.Settings; @@ -40,6 +42,7 @@ import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.mapper.ParseContext; import org.elasticsearch.index.mapper.StringFieldType; +import org.elasticsearch.index.query.QueryShardContext; import java.io.IOException; import 
java.util.ArrayList; @@ -236,6 +239,11 @@ public Object valueForDisplay(Object value) { BytesRef binaryValue = (BytesRef) value; return binaryValue.utf8ToString(); } + + @Override + public Query existsQuery(QueryShardContext context) { + return new DocValuesFieldExistsQuery(name()); + } } // The meta field that ensures that there is no other parent-join in the mapping diff --git a/modules/parent-join/src/main/java/org/elasticsearch/join/query/HasChildQueryBuilder.java b/modules/parent-join/src/main/java/org/elasticsearch/join/query/HasChildQueryBuilder.java index ee362b9bca78a..dbbb98af65af8 100644 --- a/modules/parent-join/src/main/java/org/elasticsearch/join/query/HasChildQueryBuilder.java +++ b/modules/parent-join/src/main/java/org/elasticsearch/join/query/HasChildQueryBuilder.java @@ -20,7 +20,7 @@ import org.apache.lucene.index.DirectoryReader; import org.apache.lucene.index.IndexReader; -import org.apache.lucene.index.MultiDocValues; +import org.apache.lucene.index.OrdinalMap; import org.apache.lucene.search.IndexSearcher; import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.Query; @@ -76,8 +76,8 @@ public class HasChildQueryBuilder extends AbstractQueryBuilder requests = new ArrayList<>(); requests.add(createIndexRequest("index1", "parent", "1", null)); requests.add(createIndexRequest("index1", "child", "2", "1", "field", "value1")); @@ -585,4 +590,56 @@ public void testInnerHitsWithIgnoreUnmapped() throws Exception { assertHitCount(response, 2); assertSearchHits(response, "1", "3"); } + + public void testTooHighResultWindow() throws Exception { + if (legacy()) { + assertAcked(prepareCreate("index1") + .addMapping("parent_type", "nested_type", "type=nested") + .addMapping("child_type", "_parent", "type=parent_type") + ); + } else { + assertAcked(prepareCreate("index1") + .addMapping("doc", addFieldMappings( + buildParentJoinFieldMappingFromSimplifiedDef("join_field", true, "parent_type", "child_type"), + "nested_type", "nested")) + ); + } + createIndexRequest("index1", "parent_type", "1", null, "nested_type", Collections.singletonMap("key", "value")).get(); + createIndexRequest("index1", "child_type", "2", "1").get(); + refresh(); + + SearchResponse response = client().prepareSearch("index1") + .setQuery(hasChildQuery("child_type", matchAllQuery(), ScoreMode.None).ignoreUnmapped(true) + .innerHit(new InnerHitBuilder().setFrom(50).setSize(10).setName("_name"))) + .get(); + assertNoFailures(response); + assertHitCount(response, 1); + + Exception e = expectThrows(SearchPhaseExecutionException.class, () -> client().prepareSearch("index1") + .setQuery(hasChildQuery("child_type", matchAllQuery(), ScoreMode.None).ignoreUnmapped(true) + .innerHit(new InnerHitBuilder().setFrom(100).setSize(10).setName("_name"))) + .get()); + assertThat(e.getCause().getMessage(), + containsString("the inner hit definition's [_name]'s from + size must be less than or equal to: [100] but was [110]")); + e = expectThrows(SearchPhaseExecutionException.class, () -> client().prepareSearch("index1") + .setQuery(hasChildQuery("child_type", matchAllQuery(), ScoreMode.None).ignoreUnmapped(true) + .innerHit(new InnerHitBuilder().setFrom(10).setSize(100).setName("_name"))) + .get()); + assertThat(e.getCause().getMessage(), + containsString("the inner hit definition's [_name]'s from + size must be less than or equal to: [100] but was [110]")); + + client().admin().indices().prepareUpdateSettings("index1") + 
.setSettings(Collections.singletonMap(IndexSettings.MAX_INNER_RESULT_WINDOW_SETTING.getKey(), 110)) + .get(); + response = client().prepareSearch("index1") + .setQuery(hasChildQuery("child_type", matchAllQuery(), ScoreMode.None).ignoreUnmapped(true) + .innerHit(new InnerHitBuilder().setFrom(100).setSize(10).setName("_name"))) + .get(); + assertNoFailures(response); + response = client().prepareSearch("index1") + .setQuery(hasChildQuery("child_type", matchAllQuery(), ScoreMode.None).ignoreUnmapped(true) + .innerHit(new InnerHitBuilder().setFrom(10).setSize(100).setName("_name"))) + .get(); + assertNoFailures(response); + } } diff --git a/modules/parent-join/src/test/java/org/elasticsearch/join/query/LegacyHasParentQueryBuilderTests.java b/modules/parent-join/src/test/java/org/elasticsearch/join/query/LegacyHasParentQueryBuilderTests.java index c1a3c51a46641..8517348721e30 100644 --- a/modules/parent-join/src/test/java/org/elasticsearch/join/query/LegacyHasParentQueryBuilderTests.java +++ b/modules/parent-join/src/test/java/org/elasticsearch/join/query/LegacyHasParentQueryBuilderTests.java @@ -26,9 +26,6 @@ import org.elasticsearch.action.admin.indices.mapping.put.PutMappingRequest; import org.elasticsearch.common.compress.CompressedXContent; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.xcontent.ToXContent; -import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.query.IdsQueryBuilder; import org.elasticsearch.index.query.InnerHitBuilder; @@ -186,20 +183,6 @@ public void testIllegalValues() throws IOException { assertThat(qse.getMessage(), equalTo("[has_parent] no child types found for type [just_a_type]")); } - public void testDeprecatedXContent() throws IOException { - XContentBuilder builder = XContentFactory.jsonBuilder().prettyPrint(); - builder.startObject(); - builder.startObject("has_parent"); - builder.field("query"); - new TermQueryBuilder("a", "a").toXContent(builder, ToXContent.EMPTY_PARAMS); - builder.field("type", "foo"); // deprecated - builder.endObject(); - builder.endObject(); - HasParentQueryBuilder queryBuilder = (HasParentQueryBuilder) parseQuery(builder.string()); - assertEquals("foo", queryBuilder.type()); - assertWarnings("Deprecated field [type] used, expected [parent_type] instead"); - } - public void testToQueryInnerQueryType() throws IOException { String[] searchTypes = new String[]{CHILD_TYPE}; QueryShardContext shardContext = createShardContext(); diff --git a/modules/percolator/build.gradle b/modules/percolator/build.gradle index cf55368861aef..36b93fd4d866f 100644 --- a/modules/percolator/build.gradle +++ b/modules/percolator/build.gradle @@ -25,7 +25,16 @@ esplugin { dependencies { // for testing hasChild and hasParent rejections + compile project(path: ':modules:mapper-extras', configuration: 'runtime') testCompile project(path: ':modules:parent-join', configuration: 'runtime') } + +dependencyLicenses { + // Don't check the client's license. We know it. 
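`testTooHighResultWindow` above hinges on the `index.max_inner_result_window` setting: an inner hit's `from + size` must not exceed it, and with the default of 100 a request with `from=100, size=10` (110) is rejected until the limit is raised to 110. A small sketch, assuming the same `ESIntegTestCase` helpers the test uses, of setting the limit up front at index creation instead of updating it afterwards:

-------------------------------------------------
// Sketch only; client() and Settings come from the ESIntegTestCase base class.
client().admin().indices().prepareCreate("index1")
        .setSettings(Settings.builder()
                .put("index.max_inner_result_window", 110))
        .get();
-------------------------------------------------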
+ dependencies = project.configurations.runtime.fileCollection { + it.group.startsWith('org.elasticsearch') == false + } - project.configurations.provided +} + compileJava.options.compilerArgs << "-Xlint:-deprecation,-rawtypes" compileTestJava.options.compilerArgs << "-Xlint:-deprecation,-rawtypes" diff --git a/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolateQuery.java b/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolateQuery.java index a7ca013ec22b9..910c716db6934 100644 --- a/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolateQuery.java +++ b/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolateQuery.java @@ -38,6 +38,7 @@ import org.elasticsearch.common.lucene.Lucene; import java.io.IOException; +import java.util.List; import java.util.Objects; import java.util.Set; @@ -46,15 +47,17 @@ final class PercolateQuery extends Query implements Accountable { // cost of matching the query against the document, arbitrary as it would be really complex to estimate private static final float MATCH_COST = 1000; + private final String name; private final QueryStore queryStore; - private final BytesReference documentSource; + private final List documents; private final Query candidateMatchesQuery; private final Query verifiedMatchesQuery; private final IndexSearcher percolatorIndexSearcher; - PercolateQuery(QueryStore queryStore, BytesReference documentSource, + PercolateQuery(String name, QueryStore queryStore, List documents, Query candidateMatchesQuery, IndexSearcher percolatorIndexSearcher, Query verifiedMatchesQuery) { - this.documentSource = Objects.requireNonNull(documentSource); + this.name = name; + this.documents = Objects.requireNonNull(documents); this.candidateMatchesQuery = Objects.requireNonNull(candidateMatchesQuery); this.queryStore = Objects.requireNonNull(queryStore); this.percolatorIndexSearcher = Objects.requireNonNull(percolatorIndexSearcher); @@ -65,7 +68,7 @@ final class PercolateQuery extends Query implements Accountable { public Query rewrite(IndexReader reader) throws IOException { Query rewritten = candidateMatchesQuery.rewrite(reader); if (rewritten != candidateMatchesQuery) { - return new PercolateQuery(queryStore, documentSource, rewritten, percolatorIndexSearcher, verifiedMatchesQuery); + return new PercolateQuery(name, queryStore, documents, rewritten, percolatorIndexSearcher, verifiedMatchesQuery); } else { return this; } @@ -164,12 +167,16 @@ boolean matchDocId(int docId) throws IOException { }; } + String getName() { + return name; + } + IndexSearcher getPercolatorIndexSearcher() { return percolatorIndexSearcher; } - BytesReference getDocumentSource() { - return documentSource; + List getDocuments() { + return documents; } QueryStore getQueryStore() { @@ -193,13 +200,22 @@ public int hashCode() { @Override public String toString(String s) { - return "PercolateQuery{document_source={" + documentSource.utf8ToString() + "},inner={" + + StringBuilder sources = new StringBuilder(); + for (BytesReference document : documents) { + sources.append(document.utf8ToString()); + sources.append('\n'); + } + return "PercolateQuery{document_sources={" + sources + "},inner={" + candidateMatchesQuery.toString(s) + "}}"; } @Override public long ramBytesUsed() { - return documentSource.ramBytesUsed(); + long ramUsed = 0L; + for (BytesReference document : documents) { + ramUsed += document.ramBytesUsed(); + } + return ramUsed; } @FunctionalInterface diff --git 
a/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolateQueryBuilder.java b/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolateQueryBuilder.java index 30da327c81f32..db1b444dcd28e 100644 --- a/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolateQueryBuilder.java +++ b/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolateQueryBuilder.java @@ -57,9 +57,11 @@ import org.elasticsearch.common.logging.DeprecationLogger; import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.lucene.search.Queries; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.analysis.FieldNameAnalyzer; @@ -69,6 +71,7 @@ import org.elasticsearch.index.mapper.DocumentMapperForType; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.mapper.MapperService; +import org.elasticsearch.index.mapper.ParseContext; import org.elasticsearch.index.mapper.ParsedDocument; import org.elasticsearch.index.query.AbstractQueryBuilder; import org.elasticsearch.index.query.QueryBuilder; @@ -81,7 +84,10 @@ import java.io.ByteArrayInputStream; import java.io.IOException; import java.io.InputStream; +import java.util.ArrayList; import java.util.Collection; +import java.util.Collections; +import java.util.List; import java.util.Objects; import java.util.function.Supplier; @@ -94,6 +100,8 @@ public class PercolateQueryBuilder extends AbstractQueryBuilder documents; private final XContentType documentXContentType; private final String indexedDocumentIndex; @@ -123,7 +132,7 @@ public class PercolateQueryBuilder extends AbstractQueryBuilder documents, XContentType documentXContentType) { + this(field, null, documents, documentXContentType); } @Deprecated - public PercolateQueryBuilder(String field, String documentType, BytesReference document, XContentType documentXContentType) { + public PercolateQueryBuilder(String field, String documentType, List documents, XContentType documentXContentType) { if (field == null) { throw new IllegalArgumentException("[field] is a required argument"); } - if (document == null) { + if (documents == null) { throw new IllegalArgumentException("[document] is a required argument"); } this.field = field; this.documentType = documentType; - this.document = document; + this.documents = documents; this.documentXContentType = Objects.requireNonNull(documentXContentType); indexedDocumentIndex = null; indexedDocumentType = null; @@ -164,7 +184,7 @@ private PercolateQueryBuilder(String field, String documentType, Supplier_percolator_document_slot response field + * when multiple percolate queries have been specified in the main query. 
+ */ + public PercolateQueryBuilder setName(String name) { + this.name = name; + return this; + } + @Override protected void doWriteTo(StreamOutput out) throws IOException { if (documentSupplier != null) { throw new IllegalStateException("supplier must be null, can't serialize suppliers, missing a rewriteAndFetch?"); } out.writeString(field); + if (out.getVersion().onOrAfter(Version.V_6_1_0)) { + out.writeOptionalString(name); + } if (out.getVersion().before(Version.V_6_0_0_beta1)) { out.writeString(documentType); } else { @@ -277,8 +317,19 @@ protected void doWriteTo(StreamOutput out) throws IOException { } else { out.writeBoolean(false); } - out.writeOptionalBytesReference(document); - if (document != null && out.getVersion().onOrAfter(Version.V_5_3_0)) { + if (out.getVersion().onOrAfter(Version.V_6_1_0)) { + out.writeVInt(documents.size()); + for (BytesReference document : documents) { + out.writeBytesReference(document); + } + } else { + if (documents.size() > 1) { + throw new IllegalArgumentException("Nodes prior to 6.1.0 cannot accept multiple documents"); + } + BytesReference doc = documents.isEmpty() ? null : documents.iterator().next(); + out.writeOptionalBytesReference(doc); + } + if (documents.isEmpty() == false && out.getVersion().onOrAfter(Version.V_5_3_0)) { documentXContentType.writeTo(out); } } @@ -288,8 +339,18 @@ protected void doXContent(XContentBuilder builder, Params params) throws IOExcep builder.startObject(NAME); builder.field(DOCUMENT_TYPE_FIELD.getPreferredName(), documentType); builder.field(QUERY_FIELD.getPreferredName(), field); - if (document != null) { - builder.rawField(DOCUMENT_FIELD.getPreferredName(), document); + if (name != null) { + builder.field(NAME_FIELD.getPreferredName(), name); + } + if (documents.isEmpty() == false) { + builder.startArray(DOCUMENTS_FIELD.getPreferredName()); + for (BytesReference document : documents) { + try (XContentParser parser = XContentHelper.createParser(NamedXContentRegistry.EMPTY, document)) { + parser.nextToken(); + XContentHelper.copyCurrentStructure(builder.generator(), parser); + } + } + builder.endArray(); } if (indexedDocumentIndex != null || indexedDocumentType != null || indexedDocumentId != null) { if (indexedDocumentIndex != null) { @@ -319,6 +380,7 @@ public static PercolateQueryBuilder fromXContent(XContentParser parser) throws I float boost = AbstractQueryBuilder.DEFAULT_BOOST; String field = null; + String name = null; String documentType = null; String indexedDocumentIndex = null; @@ -328,29 +390,62 @@ public static PercolateQueryBuilder fromXContent(XContentParser parser) throws I String indexedDocumentPreference = null; Long indexedDocumentVersion = null; - BytesReference source = null; + List documents = new ArrayList<>(); String queryName = null; String currentFieldName = null; + boolean documentsSpecified = false; + boolean documentSpecified = false; + XContentParser.Token token; while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); + } else if (token == XContentParser.Token.START_ARRAY) { + if (DOCUMENTS_FIELD.match(currentFieldName)) { + if (documentSpecified) { + throw new IllegalArgumentException("[" + PercolateQueryBuilder.NAME + + "] Either specified [document] or [documents], not both"); + } + documentsSpecified = true; + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + if (token == XContentParser.Token.START_OBJECT) { + try (XContentBuilder builder = 
XContentFactory.jsonBuilder()) { + builder.copyCurrentStructure(parser); + builder.flush(); + documents.add(builder.bytes()); + } + } else { + throw new ParsingException(parser.getTokenLocation(), "[" + PercolateQueryBuilder.NAME + + "] query does not support [" + token + "]"); + } + } + } else { + throw new ParsingException(parser.getTokenLocation(), "[" + PercolateQueryBuilder.NAME + + "] query does not field name [" + currentFieldName + "]"); + } } else if (token == XContentParser.Token.START_OBJECT) { if (DOCUMENT_FIELD.match(currentFieldName)) { + if (documentsSpecified) { + throw new IllegalArgumentException("[" + PercolateQueryBuilder.NAME + + "] Either specified [document] or [documents], not both"); + } + documentSpecified = true; try (XContentBuilder builder = XContentFactory.jsonBuilder()) { builder.copyCurrentStructure(parser); builder.flush(); - source = builder.bytes(); + documents.add(builder.bytes()); } } else { throw new ParsingException(parser.getTokenLocation(), "[" + PercolateQueryBuilder.NAME + - "] query does not support [" + token + "]"); + "] query does not support field name [" + currentFieldName + "]"); } } else if (token.isValue() || token == XContentParser.Token.VALUE_NULL) { if (QUERY_FIELD.match(currentFieldName)) { field = parser.text(); + } else if (NAME_FIELD.match(currentFieldName)) { + name = parser.textOrNull(); } else if (DOCUMENT_TYPE_FIELD.match(currentFieldName)) { documentType = parser.textOrNull(); } else if (INDEXED_DOCUMENT_FIELD_INDEX.match(currentFieldName)) { @@ -380,14 +475,17 @@ public static PercolateQueryBuilder fromXContent(XContentParser parser) throws I } PercolateQueryBuilder queryBuilder; - if (source != null) { - queryBuilder = new PercolateQueryBuilder(field, documentType, source, XContentType.JSON); + if (documents.isEmpty() == false) { + queryBuilder = new PercolateQueryBuilder(field, documentType, documents, XContentType.JSON); } else if (indexedDocumentId != null) { queryBuilder = new PercolateQueryBuilder(field, documentType, indexedDocumentIndex, indexedDocumentType, indexedDocumentId, indexedDocumentRouting, indexedDocumentPreference, indexedDocumentVersion); } else { throw new IllegalArgumentException("[" + PercolateQueryBuilder.NAME + "] query, nothing to percolate"); } + if (name != null) { + queryBuilder.setName(name); + } queryBuilder.queryName(queryName); queryBuilder.boost(boost); return queryBuilder; @@ -397,7 +495,7 @@ public static PercolateQueryBuilder fromXContent(XContentParser parser) throws I protected boolean doEquals(PercolateQueryBuilder other) { return Objects.equals(field, other.field) && Objects.equals(documentType, other.documentType) - && Objects.equals(document, other.document) + && Objects.equals(documents, other.documents) && Objects.equals(indexedDocumentIndex, other.indexedDocumentIndex) && Objects.equals(indexedDocumentType, other.indexedDocumentType) && Objects.equals(documentSupplier, other.documentSupplier) @@ -407,7 +505,7 @@ protected boolean doEquals(PercolateQueryBuilder other) { @Override protected int doHashCode() { - return Objects.hash(field, documentType, document, indexedDocumentIndex, indexedDocumentType, indexedDocumentId, documentSupplier); + return Objects.hash(field, documentType, documents, indexedDocumentIndex, indexedDocumentType, indexedDocumentId, documentSupplier); } @Override @@ -417,14 +515,15 @@ public String getWriteableName() { @Override protected QueryBuilder doRewrite(QueryRewriteContext queryShardContext) { - if (document != null) { + if (documents.isEmpty() == 
false) { return this; } else if (documentSupplier != null) { final BytesReference source = documentSupplier.get(); if (source == null) { return this; // not executed yet } else { - return new PercolateQueryBuilder(field, documentType, source, XContentFactory.xContentType(source)); + return new PercolateQueryBuilder(field, documentType, Collections.singletonList(source), + XContentFactory.xContentType(source)); } } GetRequest getRequest = new GetRequest(indexedDocumentIndex, indexedDocumentType, indexedDocumentId); @@ -464,7 +563,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException { throw new IllegalStateException("query builder must be rewritten first"); } - if (document == null) { + if (documents.isEmpty()) { throw new IllegalStateException("no document to percolate"); } @@ -478,7 +577,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException { "] to be of type [percolator], but is of type [" + fieldType.typeName() + "]"); } - final ParsedDocument doc; + final List docs = new ArrayList<>(); final DocumentMapper docMapper; final MapperService mapperService = context.getMapperService(); if (context.getIndexSettings().isSingleType()) { @@ -495,14 +594,18 @@ protected Query doToQuery(QueryShardContext context) throws IOException { } } docMapper = mapperService.documentMapper(type); - doc = docMapper.parse(source(context.index().getName(), type, "_temp_id", document, documentXContentType)); + for (BytesReference document : documents) { + docs.add(docMapper.parse(source(context.index().getName(), type, "_temp_id", document, documentXContentType))); + } } else { if (documentType == null) { throw new IllegalArgumentException("[percolate] query is missing required [document_type] parameter"); } DocumentMapperForType docMapperForType = mapperService.documentMapperWithAutoCreate(documentType); docMapper = docMapperForType.getDocumentMapper(); - doc = docMapper.parse(source(context.index().getName(), documentType, "_temp_id", document, documentXContentType)); + for (BytesReference document : documents) { + docs.add(docMapper.parse(source(context.index().getName(), documentType, "_temp_id", document, documentXContentType))); + } } FieldNameAnalyzer fieldNameAnalyzer = (FieldNameAnalyzer) docMapper.mappers().indexAnalyzer(); @@ -520,22 +623,23 @@ protected Analyzer getWrappedAnalyzer(String fieldName) { } }; final IndexSearcher docSearcher; - if (doc.docs().size() > 1) { - assert docMapper.hasNestedObjects(); - docSearcher = createMultiDocumentSearcher(analyzer, doc); + if (docs.size() > 1 || docs.get(0).docs().size() > 1) { + assert docs.size() != 1 || docMapper.hasNestedObjects(); + docSearcher = createMultiDocumentSearcher(analyzer, docs); } else { - MemoryIndex memoryIndex = MemoryIndex.fromDocument(doc.rootDoc(), analyzer, true, false); + MemoryIndex memoryIndex = MemoryIndex.fromDocument(docs.get(0).rootDoc(), analyzer, true, false); docSearcher = memoryIndex.createSearcher(); docSearcher.setQueryCache(null); } - boolean mapUnmappedFieldsAsString = context.getIndexSettings() - .getValue(PercolatorFieldMapper.INDEX_MAP_UNMAPPED_FIELDS_AS_STRING_SETTING); + PercolatorFieldMapper percolatorFieldMapper = (PercolatorFieldMapper) docMapper.mappers().getMapper(field); + boolean mapUnmappedFieldsAsString = percolatorFieldMapper.isMapUnmappedFieldAsText(); QueryShardContext percolateShardContext = wrap(context); + String name = this.name != null ? 
this.name : field; PercolatorFieldMapper.FieldType pft = (PercolatorFieldMapper.FieldType) fieldType; PercolateQuery.QueryStore queryStore = createStore(pft.queryBuilderField, percolateShardContext, mapUnmappedFieldsAsString); - return pft.percolateQuery(queryStore, document, docSearcher); + return pft.percolateQuery(name, queryStore, documents, docSearcher); } public String getField() { @@ -546,8 +650,8 @@ public String getDocumentType() { return documentType; } - public BytesReference getDocument() { - return document; + public List getDocuments() { + return documents; } //pkg-private for testing @@ -555,12 +659,17 @@ XContentType getXContentType() { return documentXContentType; } - static IndexSearcher createMultiDocumentSearcher(Analyzer analyzer, ParsedDocument doc) { + static IndexSearcher createMultiDocumentSearcher(Analyzer analyzer, Collection docs) { RAMDirectory ramDirectory = new RAMDirectory(); try (IndexWriter indexWriter = new IndexWriter(ramDirectory, new IndexWriterConfig(analyzer))) { - indexWriter.addDocuments(doc.docs()); - indexWriter.commit(); - DirectoryReader directoryReader = DirectoryReader.open(ramDirectory); + // Indexing in order here, so that the user provided order matches with the docid sequencing: + Iterable iterable = () -> docs.stream() + .map(ParsedDocument::docs) + .flatMap(Collection::stream) + .iterator(); + indexWriter.addDocuments(iterable); + + DirectoryReader directoryReader = DirectoryReader.open(indexWriter); assert directoryReader.leaves().size() == 1 : "Expected single leaf, but got [" + directoryReader.leaves().size() + "]"; final IndexSearcher slowSearcher = new IndexSearcher(directoryReader) { @@ -596,7 +705,8 @@ static PercolateQuery.QueryStore createStore(MappedFieldType queryBuilderFieldTy if (binaryDocValues.advanceExact(docId)) { BytesRef qbSource = binaryDocValues.binaryValue(); try (InputStream in = new ByteArrayInputStream(qbSource.bytes, qbSource.offset, qbSource.length)) { - try (StreamInput input = new NamedWriteableAwareStreamInput(new InputStreamStreamInput(in), registry)) { + try (StreamInput input = new NamedWriteableAwareStreamInput( + new InputStreamStreamInput(in, qbSource.length), registry)) { input.setVersion(indexVersion); // Query builder's content is stored via BinaryFieldMapper, which has a custom encoding // to encode multiple binary values into a single binary doc values field. 
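
The PercolateQueryBuilder changes above index every percolated document into a single in-memory Lucene index, in the order the user supplied them, so that a Lucene doc id can later be read back as a document "slot". Below is a minimal, self-contained Lucene sketch of that idea; the class name, field name and values are invented for illustration, and this is a standalone example rather than the module's actual code path:

-------------------------------------------------
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.RAMDirectory;

public class SlotMappingSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical documents to percolate, in the order the user supplied them.
        String[] values = {"apple", "banana", "cherry"};
        RAMDirectory directory = new RAMDirectory();
        try (IndexWriter writer = new IndexWriter(directory, new IndexWriterConfig(new StandardAnalyzer()))) {
            for (String value : values) {
                Document doc = new Document();
                doc.add(new StringField("field", value, Field.Store.NO));
                // Adding the documents one by one keeps Lucene doc ids aligned
                // with the user-provided order, i.e. slots 0, 1, 2, ...
                writer.addDocument(doc);
            }
            try (DirectoryReader reader = DirectoryReader.open(writer)) {
                IndexSearcher searcher = new IndexSearcher(reader);
                // Stand-in for a stored percolator query being run against the in-memory index.
                TopDocs hits = searcher.search(new TermQuery(new Term("field", "banana")), 10);
                for (ScoreDoc hit : hits.scoreDocs) {
                    System.out.println("matched document slot: " + hit.doc); // prints 1
                }
            }
        }
    }
}
-------------------------------------------------

The slot recovered this way is what the new `_percolator_document_slot` hit field (added further down in PercolatorMatchedSlotSubFetchPhase) reports per matching query, and what the highlight phase uses to pick the correct source document when several documents are percolated at once.
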
diff --git a/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolatorFieldMapper.java b/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolatorFieldMapper.java index ee8c7ff44fbe6..06c1423eb238d 100644 --- a/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolatorFieldMapper.java +++ b/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolatorFieldMapper.java @@ -32,6 +32,7 @@ import org.apache.lucene.index.TermsEnum; import org.apache.lucene.search.BooleanClause.Occur; import org.apache.lucene.search.BooleanQuery; +import org.apache.lucene.search.DocValuesFieldExistsQuery; import org.apache.lucene.search.IndexSearcher; import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.Query; @@ -45,6 +46,8 @@ import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.hash.MurmurHash3; import org.elasticsearch.common.io.stream.OutputStreamStreamOutput; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -54,6 +57,7 @@ import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.mapper.BinaryFieldMapper; import org.elasticsearch.index.mapper.FieldMapper; +import org.elasticsearch.index.mapper.FieldNamesFieldMapper; import org.elasticsearch.index.mapper.KeywordFieldMapper; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.mapper.Mapper; @@ -91,9 +95,13 @@ public class PercolatorFieldMapper extends FieldMapper { static final XContentType QUERY_BUILDER_CONTENT_TYPE = XContentType.SMILE; - static final Setting INDEX_MAP_UNMAPPED_FIELDS_AS_STRING_SETTING = - Setting.boolSetting("index.percolator.map_unmapped_fields_as_string", false, Setting.Property.IndexScope); + @Deprecated + static final Setting INDEX_MAP_UNMAPPED_FIELDS_AS_STRING_SETTING = Setting.boolSetting( + "index.percolator.map_unmapped_fields_as_string", false, Setting.Property.IndexScope, Setting.Property.Deprecated); + static final Setting INDEX_MAP_UNMAPPED_FIELDS_AS_TEXT_SETTING = Setting.boolSetting( + "index.percolator.map_unmapped_fields_as_text", false, Setting.Property.IndexScope); static final String CONTENT_TYPE = "percolator"; + private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(PercolatorFieldMapper.class)); private static final FieldType FIELD_TYPE = new FieldType(); static final byte FIELD_VALUE_SEPARATOR = 0; // nul code point @@ -224,12 +232,21 @@ public String typeName() { return CONTENT_TYPE; } + @Override + public Query existsQuery(QueryShardContext context) { + if (hasDocValues()) { + return new DocValuesFieldExistsQuery(name()); + } else { + return new TermQuery(new Term(FieldNamesFieldMapper.NAME, name())); + } + } + @Override public Query termQuery(Object value, QueryShardContext context) { throw new QueryShardException(context, "Percolator fields are not searchable directly, use a percolate query instead"); } - Query percolateQuery(PercolateQuery.QueryStore queryStore, BytesReference documentSource, + Query percolateQuery(String name, PercolateQuery.QueryStore queryStore, List documents, IndexSearcher searcher) throws IOException { IndexReader indexReader = searcher.getIndexReader(); Query candidateMatchesQuery = createCandidateQuery(indexReader); @@ -241,9 +258,9 @@ Query 
percolateQuery(PercolateQuery.QueryStore queryStore, BytesReference docume if (indexReader.maxDoc() == 1) { verifiedMatchesQuery = new TermQuery(new Term(extractionResultField.name(), EXTRACTION_COMPLETE)); } else { - verifiedMatchesQuery = new MatchNoDocsQuery("nested docs, so no verified matches"); + verifiedMatchesQuery = new MatchNoDocsQuery("multiple/nested docs, so no verified matches"); } - return new PercolateQuery(queryStore, documentSource, candidateMatchesQuery, searcher, verifiedMatchesQuery); + return new PercolateQuery(name, queryStore, documents, candidateMatchesQuery, searcher, verifiedMatchesQuery); } Query createCandidateQuery(IndexReader indexReader) throws IOException { @@ -295,7 +312,7 @@ Query createCandidateQuery(IndexReader indexReader) throws IOException { } - private final boolean mapUnmappedFieldAsString; + private final boolean mapUnmappedFieldAsText; private final Supplier queryShardContext; private KeywordFieldMapper queryTermsField; private KeywordFieldMapper extractionResultField; @@ -315,11 +332,28 @@ Query createCandidateQuery(IndexReader indexReader) throws IOException { this.queryTermsField = queryTermsField; this.extractionResultField = extractionResultField; this.queryBuilderField = queryBuilderField; - this.mapUnmappedFieldAsString = INDEX_MAP_UNMAPPED_FIELDS_AS_STRING_SETTING.get(indexSettings); + this.mapUnmappedFieldAsText = getMapUnmappedFieldAsText(indexSettings); this.rangeFieldMapper = rangeFieldMapper; this.boostFields = boostFields; } + private static boolean getMapUnmappedFieldAsText(Settings indexSettings) { + if (INDEX_MAP_UNMAPPED_FIELDS_AS_TEXT_SETTING.exists(indexSettings) && + INDEX_MAP_UNMAPPED_FIELDS_AS_STRING_SETTING.exists(indexSettings)) { + throw new IllegalArgumentException("Either specify [" + INDEX_MAP_UNMAPPED_FIELDS_AS_STRING_SETTING.getKey() + + "] or [" + INDEX_MAP_UNMAPPED_FIELDS_AS_TEXT_SETTING.getKey() + "] setting, not both"); + } + + if (INDEX_MAP_UNMAPPED_FIELDS_AS_STRING_SETTING.exists(indexSettings)) { + DEPRECATION_LOGGER.deprecatedAndMaybeLog(INDEX_MAP_UNMAPPED_FIELDS_AS_STRING_SETTING.getKey(), + "The [" + INDEX_MAP_UNMAPPED_FIELDS_AS_STRING_SETTING.getKey() + + "] setting is deprecated in favour for the [" + INDEX_MAP_UNMAPPED_FIELDS_AS_TEXT_SETTING.getKey() + "] setting"); + return INDEX_MAP_UNMAPPED_FIELDS_AS_STRING_SETTING.get(indexSettings); + } else { + return INDEX_MAP_UNMAPPED_FIELDS_AS_TEXT_SETTING.get(indexSettings); + } + } + @Override public FieldMapper updateFieldType(Map fullNameToFieldType) { PercolatorFieldMapper updated = (PercolatorFieldMapper) super.updateFieldType(fullNameToFieldType); @@ -364,7 +398,7 @@ public Mapper parse(ParseContext context) throws IOException { Version indexVersion = context.mapperService().getIndexSettings().getIndexVersionCreated(); createQueryBuilderField(indexVersion, queryBuilderField, queryBuilder, context); - Query query = toQuery(queryShardContext, mapUnmappedFieldAsString, queryBuilder); + Query query = toQuery(queryShardContext, mapUnmappedFieldAsText, queryBuilder); processQuery(query, context); return null; } @@ -418,6 +452,11 @@ void processQuery(Query query, ParseContext context) { } else { doc.add(new Field(extractionResultField.name(), EXTRACTION_PARTIAL, extractionResultField.fieldType())); } + List fields = new ArrayList<>(1); + createFieldNamesField(context, fields); + for (IndexableField field : fields) { + context.doc().add(field); + } } static Query parseQuery(QueryShardContext context, boolean mapUnmappedFieldsAsString, XContentParser parser) 
throws IOException { @@ -487,6 +526,10 @@ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults, } } + boolean isMapUnmappedFieldAsText() { + return mapUnmappedFieldAsText; + } + /** * Fails if a percolator contains an unsupported query. The following queries are not supported: * 1) a has_child query diff --git a/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolatorHighlightSubFetchPhase.java b/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolatorHighlightSubFetchPhase.java index 5d26993c0341c..44823f9aa012b 100644 --- a/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolatorHighlightSubFetchPhase.java +++ b/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolatorHighlightSubFetchPhase.java @@ -29,12 +29,14 @@ import org.apache.lucene.search.IndexSearcher; import org.apache.lucene.search.Query; import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.document.DocumentField; import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.text.Text; import org.elasticsearch.index.query.ParsedQuery; import org.elasticsearch.search.SearchHit; import org.elasticsearch.search.fetch.FetchSubPhase; +import org.elasticsearch.search.fetch.subphase.highlight.HighlightField; import org.elasticsearch.search.fetch.subphase.highlight.HighlightPhase; import org.elasticsearch.search.fetch.subphase.highlight.Highlighter; import org.elasticsearch.search.fetch.subphase.highlight.SearchContextHighlight; @@ -42,6 +44,7 @@ import org.elasticsearch.search.internal.SubSearchContext; import java.io.IOException; +import java.util.ArrayList; import java.util.Collections; import java.util.List; import java.util.Map; @@ -50,76 +53,111 @@ * Highlighting in the case of the percolate query is a bit different, because the PercolateQuery itself doesn't get highlighted, * but the source of the PercolateQuery gets highlighted by each hit containing a query. */ -final class PercolatorHighlightSubFetchPhase extends HighlightPhase { +final class PercolatorHighlightSubFetchPhase implements FetchSubPhase { + private final HighlightPhase highlightPhase; PercolatorHighlightSubFetchPhase(Settings settings, Map highlighters) { - super(settings, highlighters); + this.highlightPhase = new HighlightPhase(settings, highlighters); } - boolean hitsExecutionNeeded(SearchContext context) { // for testing - return context.highlight() != null && locatePercolatorQuery(context.query()) != null; + return context.highlight() != null && locatePercolatorQuery(context.query()).isEmpty() == false; } @Override - public void hitsExecute(SearchContext context, SearchHit[] hits) { + public void hitsExecute(SearchContext context, SearchHit[] hits) throws IOException { if (hitsExecutionNeeded(context) == false) { return; } - PercolateQuery percolateQuery = locatePercolatorQuery(context.query()); - if (percolateQuery == null) { + List percolateQueries = locatePercolatorQuery(context.query()); + if (percolateQueries.isEmpty()) { // shouldn't happen as we checked for the existence of a percolator query in hitsExecutionNeeded(...) 
throw new IllegalStateException("couldn't locate percolator query"); } - List ctxs = context.searcher().getIndexReader().leaves(); - IndexSearcher percolatorIndexSearcher = percolateQuery.getPercolatorIndexSearcher(); - PercolateQuery.QueryStore queryStore = percolateQuery.getQueryStore(); + boolean singlePercolateQuery = percolateQueries.size() == 1; + for (PercolateQuery percolateQuery : percolateQueries) { + String fieldName = singlePercolateQuery ? PercolatorMatchedSlotSubFetchPhase.FIELD_NAME_PREFIX : + PercolatorMatchedSlotSubFetchPhase.FIELD_NAME_PREFIX + "_" + percolateQuery.getName(); + List ctxs = context.searcher().getIndexReader().leaves(); + IndexSearcher percolatorIndexSearcher = percolateQuery.getPercolatorIndexSearcher(); + PercolateQuery.QueryStore queryStore = percolateQuery.getQueryStore(); - LeafReaderContext percolatorLeafReaderContext = percolatorIndexSearcher.getIndexReader().leaves().get(0); - FetchSubPhase.HitContext hitContext = new FetchSubPhase.HitContext(); - SubSearchContext subSearchContext = - createSubSearchContext(context, percolatorLeafReaderContext, percolateQuery.getDocumentSource()); + LeafReaderContext percolatorLeafReaderContext = percolatorIndexSearcher.getIndexReader().leaves().get(0); + FetchSubPhase.HitContext hitContext = new FetchSubPhase.HitContext(); - for (SearchHit hit : hits) { - final Query query; - try { + for (SearchHit hit : hits) { LeafReaderContext ctx = ctxs.get(ReaderUtil.subIndex(hit.docId(), ctxs)); int segmentDocId = hit.docId() - ctx.docBase; - query = queryStore.getQueries(ctx).apply(segmentDocId); - } catch (IOException e) { - throw new RuntimeException(e); - } - if (query != null) { - subSearchContext.parsedQuery(new ParsedQuery(query)); - hitContext.reset( - new SearchHit(0, "unknown", new Text(hit.getType()), Collections.emptyMap()), - percolatorLeafReaderContext, 0, percolatorIndexSearcher - ); - hitContext.cache().clear(); - super.hitExecute(subSearchContext, hitContext); - hit.getHighlightFields().putAll(hitContext.hit().getHighlightFields()); + final Query query = queryStore.getQueries(ctx).apply(segmentDocId); + if (query != null) { + DocumentField field = hit.field(fieldName); + if (field == null) { + // It possible that a hit did not match with a particular percolate query, + // so then continue highlighting with the next hit. + continue; + } + + for (Object matchedSlot : field.getValues()) { + int slot = (int) matchedSlot; + BytesReference document = percolateQuery.getDocuments().get(slot); + SubSearchContext subSearchContext = + createSubSearchContext(context, percolatorLeafReaderContext, document, slot); + subSearchContext.parsedQuery(new ParsedQuery(query)); + hitContext.reset( + new SearchHit(slot, "unknown", new Text(hit.getType()), Collections.emptyMap()), + percolatorLeafReaderContext, slot, percolatorIndexSearcher + ); + hitContext.cache().clear(); + highlightPhase.hitExecute(subSearchContext, hitContext); + for (Map.Entry entry : hitContext.hit().getHighlightFields().entrySet()) { + if (percolateQuery.getDocuments().size() == 1) { + String hlFieldName; + if (singlePercolateQuery) { + hlFieldName = entry.getKey(); + } else { + hlFieldName = percolateQuery.getName() + "_" + entry.getKey(); + } + hit.getHighlightFields().put(hlFieldName, new HighlightField(hlFieldName, entry.getValue().fragments())); + } else { + // In case multiple documents are being percolated we need to identify to which document + // a highlight belongs to. 
+ String hlFieldName; + if (singlePercolateQuery) { + hlFieldName = slot + "_" + entry.getKey(); + } else { + hlFieldName = percolateQuery.getName() + "_" + slot + "_" + entry.getKey(); + } + hit.getHighlightFields().put(hlFieldName, new HighlightField(hlFieldName, entry.getValue().fragments())); + } + } + } + } } } } - static PercolateQuery locatePercolatorQuery(Query query) { + static List locatePercolatorQuery(Query query) { if (query instanceof PercolateQuery) { - return (PercolateQuery) query; + return Collections.singletonList((PercolateQuery) query); } else if (query instanceof BooleanQuery) { + List percolateQueries = new ArrayList<>(); for (BooleanClause clause : ((BooleanQuery) query).clauses()) { - PercolateQuery result = locatePercolatorQuery(clause.getQuery()); - if (result != null) { - return result; + List result = locatePercolatorQuery(clause.getQuery()); + if (result.isEmpty() == false) { + percolateQueries.addAll(result); } } + return percolateQueries; } else if (query instanceof DisjunctionMaxQuery) { + List percolateQueries = new ArrayList<>(); for (Query disjunct : ((DisjunctionMaxQuery) query).getDisjuncts()) { - PercolateQuery result = locatePercolatorQuery(disjunct); - if (result != null) { - return result; + List result = locatePercolatorQuery(disjunct); + if (result.isEmpty() == false) { + percolateQueries.addAll(result); } } + return percolateQueries; } else if (query instanceof ConstantScoreQuery) { return locatePercolatorQuery(((ConstantScoreQuery) query).getQuery()); } else if (query instanceof BoostQuery) { @@ -127,16 +165,16 @@ static PercolateQuery locatePercolatorQuery(Query query) { } else if (query instanceof FunctionScoreQuery) { return locatePercolatorQuery(((FunctionScoreQuery) query).getSubQuery()); } - - return null; + return Collections.emptyList(); } - private SubSearchContext createSubSearchContext(SearchContext context, LeafReaderContext leafReaderContext, BytesReference source) { + private SubSearchContext createSubSearchContext(SearchContext context, LeafReaderContext leafReaderContext, + BytesReference source, int docId) { SubSearchContext subSearchContext = new SubSearchContext(context); subSearchContext.highlight(new SearchContextHighlight(context.highlight().fields())); // Enforce highlighting by source, because MemoryIndex doesn't support stored fields. subSearchContext.highlight().globalForceSource(true); - subSearchContext.lookup().source().setSegmentAndDocument(leafReaderContext, 0); + subSearchContext.lookup().source().setSegmentAndDocument(leafReaderContext, docId); subSearchContext.lookup().source().setSource(source); return subSearchContext; } diff --git a/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolatorMatchedSlotSubFetchPhase.java b/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolatorMatchedSlotSubFetchPhase.java new file mode 100644 index 0000000000000..163c4183dd48e --- /dev/null +++ b/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolatorMatchedSlotSubFetchPhase.java @@ -0,0 +1,121 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.percolator; + +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.index.ReaderUtil; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.Scorer; +import org.apache.lucene.search.Sort; +import org.apache.lucene.search.SortField; +import org.apache.lucene.search.TopDocs; +import org.apache.lucene.search.Weight; +import org.apache.lucene.util.BitSet; +import org.apache.lucene.util.BitSetIterator; +import org.elasticsearch.common.document.DocumentField; +import org.elasticsearch.common.lucene.search.Queries; +import org.elasticsearch.search.SearchHit; +import org.elasticsearch.search.fetch.FetchSubPhase; +import org.elasticsearch.search.internal.SearchContext; + +import java.io.IOException; +import java.util.Arrays; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.stream.Collectors; +import java.util.stream.IntStream; + +import static org.apache.lucene.search.DocIdSetIterator.NO_MORE_DOCS; +import static org.elasticsearch.percolator.PercolatorHighlightSubFetchPhase.locatePercolatorQuery; + +/** + * Adds a special field to the a percolator query hit to indicate which documents matched with the percolator query. + * This is useful when multiple documents are being percolated in a single request. + */ +final class PercolatorMatchedSlotSubFetchPhase implements FetchSubPhase { + + static final String FIELD_NAME_PREFIX = "_percolator_document_slot"; + + @Override + public void hitsExecute(SearchContext context, SearchHit[] hits) throws IOException { + List percolateQueries = locatePercolatorQuery(context.query()); + if (percolateQueries.isEmpty()) { + return; + } + + boolean singlePercolateQuery = percolateQueries.size() == 1; + for (PercolateQuery percolateQuery : percolateQueries) { + String fieldName = singlePercolateQuery ? 
FIELD_NAME_PREFIX : FIELD_NAME_PREFIX + "_" + percolateQuery.getName(); + IndexSearcher percolatorIndexSearcher = percolateQuery.getPercolatorIndexSearcher(); + Weight weight = percolatorIndexSearcher.createNormalizedWeight(Queries.newNonNestedFilter(), false); + Scorer s = weight.scorer(percolatorIndexSearcher.getIndexReader().leaves().get(0)); + int memoryIndexMaxDoc = percolatorIndexSearcher.getIndexReader().maxDoc(); + BitSet rootDocs = BitSet.of(s.iterator(), memoryIndexMaxDoc); + int[] rootDocsBySlot = null; + boolean hasNestedDocs = rootDocs.cardinality() != percolatorIndexSearcher.getIndexReader().numDocs(); + if (hasNestedDocs) { + rootDocsBySlot = buildRootDocsSlots(rootDocs); + } + + PercolateQuery.QueryStore queryStore = percolateQuery.getQueryStore(); + List ctxs = context.searcher().getIndexReader().leaves(); + for (SearchHit hit : hits) { + LeafReaderContext ctx = ctxs.get(ReaderUtil.subIndex(hit.docId(), ctxs)); + int segmentDocId = hit.docId() - ctx.docBase; + Query query = queryStore.getQueries(ctx).apply(segmentDocId); + + TopDocs topDocs = percolatorIndexSearcher.search(query, memoryIndexMaxDoc, new Sort(SortField.FIELD_DOC)); + if (topDocs.totalHits == 0) { + // This hit didn't match with a percolate query, + // likely to happen when percolating multiple documents + continue; + } + + Map fields = hit.fieldsOrNull(); + if (fields == null) { + fields = new HashMap<>(); + hit.fields(fields); + } + IntStream slots = convertTopDocsToSlots(topDocs, rootDocsBySlot); + fields.put(fieldName, new DocumentField(fieldName, slots.boxed().collect(Collectors.toList()))); + } + } + } + + static IntStream convertTopDocsToSlots(TopDocs topDocs, int[] rootDocsBySlot) { + IntStream stream = Arrays.stream(topDocs.scoreDocs) + .mapToInt(scoreDoc -> scoreDoc.doc); + if (rootDocsBySlot != null) { + stream = stream.map(docId -> Arrays.binarySearch(rootDocsBySlot, docId)); + } + return stream; + } + + static int[] buildRootDocsSlots(BitSet rootDocs) { + int slot = 0; + int[] rootDocsBySlot = new int[rootDocs.cardinality()]; + BitSetIterator iterator = new BitSetIterator(rootDocs, 0); + for (int rootDocId = iterator.nextDoc(); rootDocId != NO_MORE_DOCS; rootDocId = iterator.nextDoc()) { + rootDocsBySlot[slot++] = rootDocId; + } + return rootDocsBySlot; + } +} diff --git a/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolatorPlugin.java b/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolatorPlugin.java index d09599a7af43c..7128060448cf1 100644 --- a/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolatorPlugin.java +++ b/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolatorPlugin.java @@ -27,11 +27,12 @@ import org.elasticsearch.plugins.SearchPlugin; import org.elasticsearch.search.fetch.FetchSubPhase; -import java.util.Collections; +import java.util.Arrays; import java.util.List; import java.util.Map; import static java.util.Collections.singletonList; +import static java.util.Collections.singletonMap; public class PercolatorPlugin extends Plugin implements MapperPlugin, SearchPlugin { @@ -48,17 +49,21 @@ public List> getQueries() { @Override public List getFetchSubPhases(FetchPhaseConstructionContext context) { - return singletonList(new PercolatorHighlightSubFetchPhase(settings, context.getHighlighters())); + return Arrays.asList( + new PercolatorMatchedSlotSubFetchPhase(), + new PercolatorHighlightSubFetchPhase(settings, context.getHighlighters()) + ); } @Override public List> getSettings() { - return 
Collections.singletonList(PercolatorFieldMapper.INDEX_MAP_UNMAPPED_FIELDS_AS_STRING_SETTING); + return Arrays.asList(PercolatorFieldMapper.INDEX_MAP_UNMAPPED_FIELDS_AS_TEXT_SETTING, + PercolatorFieldMapper.INDEX_MAP_UNMAPPED_FIELDS_AS_STRING_SETTING); } @Override public Map getMappers() { - return Collections.singletonMap(PercolatorFieldMapper.CONTENT_TYPE, new PercolatorFieldMapper.TypeParser()); + return singletonMap(PercolatorFieldMapper.CONTENT_TYPE, new PercolatorFieldMapper.TypeParser()); } } diff --git a/modules/percolator/src/main/java/org/elasticsearch/percolator/QueryAnalyzer.java b/modules/percolator/src/main/java/org/elasticsearch/percolator/QueryAnalyzer.java index 77f937e680cf4..8c2a6d7a4553b 100644 --- a/modules/percolator/src/main/java/org/elasticsearch/percolator/QueryAnalyzer.java +++ b/modules/percolator/src/main/java/org/elasticsearch/percolator/QueryAnalyzer.java @@ -47,6 +47,7 @@ import org.apache.lucene.util.NumericUtils; import org.elasticsearch.common.logging.LoggerMessageFormat; import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery; +import org.elasticsearch.index.search.ESToParentBlockJoinQuery; import java.util.ArrayList; import java.util.Arrays; @@ -88,6 +89,7 @@ final class QueryAnalyzer { map.put(FunctionScoreQuery.class, functionScoreQuery()); map.put(PointRangeQuery.class, pointRangeQuery()); map.put(IndexOrDocValuesQuery.class, indexOrDocValuesQuery()); + map.put(ESToParentBlockJoinQuery.class, toParentBlockJoinQuery()); queryProcessors = Collections.unmodifiableMap(map); } @@ -355,8 +357,19 @@ private static BiFunction, Result> functionScoreQuery( private static BiFunction, Result> pointRangeQuery() { return (query, boosts) -> { PointRangeQuery pointRangeQuery = (PointRangeQuery) query; + if (pointRangeQuery.getNumDims() != 1) { + throw new UnsupportedQueryException(query); + } + byte[] lowerPoint = pointRangeQuery.getLowerPoint(); byte[] upperPoint = pointRangeQuery.getUpperPoint(); + + // Need to check whether upper is not smaller than lower, otherwise NumericUtils.subtract(...) fails IAE + // If upper is really smaller than lower then we deal with like MatchNoDocsQuery. (verified and no extractions) + if (new BytesRef(lowerPoint).compareTo(new BytesRef(upperPoint)) > 0) { + return new Result(true, Collections.emptySet()); + } + byte[] interval = new byte[16]; NumericUtils.subtract(16, 0, prepad(upperPoint), prepad(lowerPoint), interval); return new Result(false, Collections.singleton(new QueryExtraction( @@ -379,6 +392,14 @@ private static BiFunction, Result> indexOrDocValuesQue }; } + private static BiFunction, Result> toParentBlockJoinQuery() { + return (query, boosts) -> { + ESToParentBlockJoinQuery toParentBlockJoinQuery = (ESToParentBlockJoinQuery) query; + Result result = analyze(toParentBlockJoinQuery.getChildQuery(), boosts); + return new Result(false, result.extractions); + }; + } + private static Result handleDisjunction(List disjunctions, int minimumShouldMatch, boolean otherClauses, Map boosts) { boolean verified = minimumShouldMatch <= 1 && otherClauses == false; @@ -449,6 +470,12 @@ static Set selectBestExtraction(Map boostFields, if (onlyRangeBasedExtractions) { BytesRef extraction1SmallestRange = smallestRange(filtered1); BytesRef extraction2SmallestRange = smallestRange(filtered2); + if (extraction1SmallestRange == null) { + return extractions2; + } else if (extraction2SmallestRange == null) { + return extractions1; + } + // Keep the clause with smallest range, this is likely to be the rarest. 
if (extraction1SmallestRange.compareTo(extraction2SmallestRange) <= 0) { return extractions1; @@ -496,10 +523,10 @@ private static int minTermLength(Set extractions) { } private static BytesRef smallestRange(Set terms) { - BytesRef min = terms.iterator().next().range.interval; + BytesRef min = null; for (QueryExtraction qt : terms) { if (qt.range != null) { - if (qt.range.interval.compareTo(min) < 0) { + if (min == null || qt.range.interval.compareTo(min) < 0) { min = qt.range.interval; } } diff --git a/modules/percolator/src/test/java/org/elasticsearch/percolator/CandidateQueryTests.java b/modules/percolator/src/test/java/org/elasticsearch/percolator/CandidateQueryTests.java index 2d78a0db63d23..61f3fd14f9533 100644 --- a/modules/percolator/src/test/java/org/elasticsearch/percolator/CandidateQueryTests.java +++ b/modules/percolator/src/test/java/org/elasticsearch/percolator/CandidateQueryTests.java @@ -309,7 +309,7 @@ public void testRangeQueries() throws Exception { MemoryIndex memoryIndex = MemoryIndex.fromDocument(Collections.singleton(new IntPoint("int_field", 3)), new WhitespaceAnalyzer()); IndexSearcher percolateSearcher = memoryIndex.createSearcher(); - Query query = fieldType.percolateQuery(queryStore, new BytesArray("{}"), percolateSearcher); + Query query = fieldType.percolateQuery("_name", queryStore, Collections.singletonList(new BytesArray("{}")), percolateSearcher); TopDocs topDocs = shardSearcher.search(query, 1); assertEquals(1L, topDocs.totalHits); assertEquals(1, topDocs.scoreDocs.length); @@ -317,7 +317,7 @@ public void testRangeQueries() throws Exception { memoryIndex = MemoryIndex.fromDocument(Collections.singleton(new LongPoint("long_field", 7L)), new WhitespaceAnalyzer()); percolateSearcher = memoryIndex.createSearcher(); - query = fieldType.percolateQuery(queryStore, new BytesArray("{}"), percolateSearcher); + query = fieldType.percolateQuery("_name", queryStore, Collections.singletonList(new BytesArray("{}")), percolateSearcher); topDocs = shardSearcher.search(query, 1); assertEquals(1L, topDocs.totalHits); assertEquals(1, topDocs.scoreDocs.length); @@ -326,7 +326,7 @@ public void testRangeQueries() throws Exception { memoryIndex = MemoryIndex.fromDocument(Collections.singleton(new HalfFloatPoint("half_float_field", 12)), new WhitespaceAnalyzer()); percolateSearcher = memoryIndex.createSearcher(); - query = fieldType.percolateQuery(queryStore, new BytesArray("{}"), percolateSearcher); + query = fieldType.percolateQuery("_name", queryStore, Collections.singletonList(new BytesArray("{}")), percolateSearcher); topDocs = shardSearcher.search(query, 1); assertEquals(1L, topDocs.totalHits); assertEquals(1, topDocs.scoreDocs.length); @@ -334,7 +334,7 @@ public void testRangeQueries() throws Exception { memoryIndex = MemoryIndex.fromDocument(Collections.singleton(new FloatPoint("float_field", 17)), new WhitespaceAnalyzer()); percolateSearcher = memoryIndex.createSearcher(); - query = fieldType.percolateQuery(queryStore, new BytesArray("{}"), percolateSearcher); + query = fieldType.percolateQuery("_name", queryStore, Collections.singletonList(new BytesArray("{}")), percolateSearcher); topDocs = shardSearcher.search(query, 1); assertEquals(1, topDocs.totalHits); assertEquals(1, topDocs.scoreDocs.length); @@ -342,7 +342,7 @@ public void testRangeQueries() throws Exception { memoryIndex = MemoryIndex.fromDocument(Collections.singleton(new DoublePoint("double_field", 21)), new WhitespaceAnalyzer()); percolateSearcher = memoryIndex.createSearcher(); - query = 
fieldType.percolateQuery(queryStore, new BytesArray("{}"), percolateSearcher); + query = fieldType.percolateQuery("_name", queryStore, Collections.singletonList(new BytesArray("{}")), percolateSearcher); topDocs = shardSearcher.search(query, 1); assertEquals(1, topDocs.totalHits); assertEquals(1, topDocs.scoreDocs.length); @@ -351,7 +351,7 @@ public void testRangeQueries() throws Exception { memoryIndex = MemoryIndex.fromDocument(Collections.singleton(new InetAddressPoint("ip_field", forString("192.168.0.4"))), new WhitespaceAnalyzer()); percolateSearcher = memoryIndex.createSearcher(); - query = fieldType.percolateQuery(queryStore, new BytesArray("{}"), percolateSearcher); + query = fieldType.percolateQuery("_name", queryStore, Collections.singletonList(new BytesArray("{}")), percolateSearcher); topDocs = shardSearcher.search(query, 1); assertEquals(1, topDocs.totalHits); assertEquals(1, topDocs.scoreDocs.length); @@ -464,7 +464,8 @@ public void testDuelRangeQueries() throws Exception { private void duelRun(PercolateQuery.QueryStore queryStore, MemoryIndex memoryIndex, IndexSearcher shardSearcher) throws IOException { boolean requireScore = randomBoolean(); IndexSearcher percolateSearcher = memoryIndex.createSearcher(); - Query percolateQuery = fieldType.percolateQuery(queryStore, new BytesArray("{}"), percolateSearcher); + Query percolateQuery = fieldType.percolateQuery("_name", queryStore, + Collections.singletonList(new BytesArray("{}")), percolateSearcher); Query query = requireScore ? percolateQuery : new ConstantScoreQuery(percolateQuery); TopDocs topDocs = shardSearcher.search(query, 10); @@ -497,7 +498,8 @@ private TopDocs executeQuery(PercolateQuery.QueryStore queryStore, MemoryIndex memoryIndex, IndexSearcher shardSearcher) throws IOException { IndexSearcher percolateSearcher = memoryIndex.createSearcher(); - Query percolateQuery = fieldType.percolateQuery(queryStore, new BytesArray("{}"), percolateSearcher); + Query percolateQuery = fieldType.percolateQuery("_name", queryStore, + Collections.singletonList(new BytesArray("{}")), percolateSearcher); return shardSearcher.search(percolateQuery, 10); } diff --git a/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolateQueryBuilderTests.java b/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolateQueryBuilderTests.java index 30713d61abe07..655c0d508ec5f 100644 --- a/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolateQueryBuilderTests.java +++ b/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolateQueryBuilderTests.java @@ -35,6 +35,7 @@ import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.compress.CompressedXContent; import org.elasticsearch.common.io.stream.BytesStreamOutput; +import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentType; @@ -53,9 +54,13 @@ import java.io.IOException; import java.util.ArrayList; +import java.util.Arrays; +import java.util.Base64; import java.util.Collection; import java.util.Collections; +import java.util.HashSet; import java.util.List; +import java.util.Map; import java.util.Set; import static org.hamcrest.Matchers.equalTo; @@ -63,7 +68,10 @@ public class PercolateQueryBuilderTests extends AbstractQueryTestCase { - private static final String[] SHUFFLE_PROTECTED_FIELDS = new String[] { 
PercolateQueryBuilder.DOCUMENT_FIELD.getPreferredName()}; + private static final String[] SHUFFLE_PROTECTED_FIELDS = new String[] { + PercolateQueryBuilder.DOCUMENT_FIELD.getPreferredName(), + PercolateQueryBuilder.DOCUMENTS_FIELD.getPreferredName() + }; private static String queryField; private static String docType; @@ -74,7 +82,7 @@ public class PercolateQueryBuilderTests extends AbstractQueryTestCase documentSource; private boolean indexedDocumentExists = true; @@ -104,7 +112,18 @@ protected PercolateQueryBuilder doCreateTestQueryBuilder() { } private PercolateQueryBuilder doCreateTestQueryBuilder(boolean indexedDocument) { - documentSource = randomSource(); + if (indexedDocument) { + documentSource = Collections.singletonList(randomSource(new HashSet<>())); + } else { + int numDocs = randomIntBetween(1, 8); + documentSource = new ArrayList<>(numDocs); + Set usedFields = new HashSet<>(); + for (int i = 0; i < numDocs; i++) { + documentSource.add(randomSource(usedFields)); + } + } + + PercolateQueryBuilder queryBuilder; if (indexedDocument) { indexedDocumentIndex = randomAlphaOfLength(4); indexedDocumentType = "doc"; @@ -112,11 +131,15 @@ private PercolateQueryBuilder doCreateTestQueryBuilder(boolean indexedDocument) indexedDocumentRouting = randomAlphaOfLength(4); indexedDocumentPreference = randomAlphaOfLength(4); indexedDocumentVersion = (long) randomIntBetween(0, Integer.MAX_VALUE); - return new PercolateQueryBuilder(queryField, docType, indexedDocumentIndex, indexedDocumentType, indexedDocumentId, + queryBuilder = new PercolateQueryBuilder(queryField, docType, indexedDocumentIndex, indexedDocumentType, indexedDocumentId, indexedDocumentRouting, indexedDocumentPreference, indexedDocumentVersion); } else { - return new PercolateQueryBuilder(queryField, docType, documentSource, XContentType.JSON); + queryBuilder = new PercolateQueryBuilder(queryField, docType, documentSource, XContentType.JSON); } + if (randomBoolean()) { + queryBuilder.setName(randomAlphaOfLength(4)); + } + return queryBuilder; } /** @@ -139,8 +162,8 @@ protected GetResponse executeGet(GetRequest getRequest) { assertThat(getRequest.version(), Matchers.equalTo(indexedDocumentVersion)); if (indexedDocumentExists) { return new GetResponse( - new GetResult(indexedDocumentIndex, indexedDocumentType, indexedDocumentId, 0L, true, documentSource, - Collections.emptyMap()) + new GetResult(indexedDocumentIndex, indexedDocumentType, indexedDocumentId, 0L, true, + documentSource.iterator().next(), Collections.emptyMap()) ); } else { return new GetResponse( @@ -154,7 +177,7 @@ protected void doAssertLuceneQuery(PercolateQueryBuilder queryBuilder, Query que assertThat(query, Matchers.instanceOf(PercolateQuery.class)); PercolateQuery percolateQuery = (PercolateQuery) query; assertThat(docType, Matchers.equalTo(queryBuilder.getDocumentType())); - assertThat(percolateQuery.getDocumentSource(), Matchers.equalTo(documentSource)); + assertThat(percolateQuery.getDocuments(), Matchers.equalTo(documentSource)); } @Override @@ -181,12 +204,13 @@ public void testIndexedDocumentDoesNotExist() throws IOException { @Override protected Set getObjectsHoldingArbitraryContent() { //document contains arbitrary content, no error expected when an object is added to it - return Collections.singleton(PercolateQueryBuilder.DOCUMENT_FIELD.getPreferredName()); + return new HashSet<>(Arrays.asList(PercolateQueryBuilder.DOCUMENT_FIELD.getPreferredName(), + PercolateQueryBuilder.DOCUMENTS_FIELD.getPreferredName())); } public void testRequiredParameters() { 
IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> { - new PercolateQueryBuilder(null, null, new BytesArray("{}"), XContentType.JSON); + new PercolateQueryBuilder(null, new BytesArray("{}"), XContentType.JSON); }); assertThat(e.getMessage(), equalTo("[field] is a required argument")); @@ -227,16 +251,42 @@ public void testFromJsonNoDocumentType() throws IOException { } } - public void testCreateMultiDocumentSearcher() throws Exception { - int numDocs = randomIntBetween(2, 8); - List docs = new ArrayList<>(numDocs); - for (int i = 0; i < numDocs; i++) { + public void testBothDocumentAndDocumentsSpecified() throws IOException { + expectThrows(IllegalArgumentException.class, + () -> parseQuery("{\"percolate\" : { \"document\": {}, \"documents\": [{}, {}], \"field\":\"" + queryField + "\"}}")); + } + + public void testCreateNestedDocumentSearcher() throws Exception { + int numNestedDocs = randomIntBetween(2, 8); + List docs = new ArrayList<>(numNestedDocs); + for (int i = 0; i < numNestedDocs; i++) { docs.add(new ParseContext.Document()); } + Collection parsedDocument = Collections.singleton( + new ParsedDocument(null, null, "_id", "_type", null, docs, null, null, null)); Analyzer analyzer = new WhitespaceAnalyzer(); - ParsedDocument parsedDocument = new ParsedDocument(null, null, "_id", "_type", null, docs, null, null, null); IndexSearcher indexSearcher = PercolateQueryBuilder.createMultiDocumentSearcher(analyzer, parsedDocument); + assertThat(indexSearcher.getIndexReader().numDocs(), equalTo(numNestedDocs)); + + // ensure that any query get modified so that the nested docs are never included as hits: + Query query = new MatchAllDocsQuery(); + BooleanQuery result = (BooleanQuery) indexSearcher.createNormalizedWeight(query, true).getQuery(); + assertThat(result.clauses().size(), equalTo(2)); + assertThat(result.clauses().get(0).getQuery(), sameInstance(query)); + assertThat(result.clauses().get(0).getOccur(), equalTo(BooleanClause.Occur.MUST)); + assertThat(result.clauses().get(1).getOccur(), equalTo(BooleanClause.Occur.MUST_NOT)); + } + + public void testCreateMultiDocumentSearcher() throws Exception { + int numDocs = randomIntBetween(2, 8); + List docs = new ArrayList<>(); + for (int i = 0; i < numDocs; i++) { + docs.add(new ParsedDocument(null, null, "_id", "_type", null, + Collections.singletonList(new ParseContext.Document()), null, null, null)); + } + Analyzer analyzer = new WhitespaceAnalyzer(); + IndexSearcher indexSearcher = PercolateQueryBuilder.createMultiDocumentSearcher(analyzer, docs); assertThat(indexSearcher.getIndexReader().numDocs(), equalTo(numDocs)); // ensure that any query get modified so that the nested docs are never included as hits: @@ -248,10 +298,46 @@ public void testCreateMultiDocumentSearcher() throws Exception { assertThat(result.clauses().get(1).getOccur(), equalTo(BooleanClause.Occur.MUST_NOT)); } - private static BytesReference randomSource() { + public void testSerializationBwc() throws IOException { + final byte[] data = Base64.getDecoder().decode("P4AAAAAFZmllbGQEdHlwZQAAAAAAAA57ImZvbyI6ImJhciJ9AAAAAA=="); + final Version version = randomFrom(Version.V_5_0_0, Version.V_5_0_1, Version.V_5_0_2, + Version.V_5_1_1, Version.V_5_1_2, Version.V_5_2_0); + try (StreamInput in = StreamInput.wrap(data)) { + in.setVersion(version); + PercolateQueryBuilder queryBuilder = new PercolateQueryBuilder(in); + assertEquals("type", queryBuilder.getDocumentType()); + assertEquals("field", queryBuilder.getField()); + assertEquals("{\"foo\":\"bar\"}", 
queryBuilder.getDocuments().iterator().next().utf8ToString()); + assertEquals(XContentType.JSON, queryBuilder.getXContentType()); + + try (BytesStreamOutput out = new BytesStreamOutput()) { + out.setVersion(version); + queryBuilder.writeTo(out); + assertArrayEquals(data, out.bytes().toBytesRef().bytes); + } + } + } + + private static BytesReference randomSource(Set usedFields) { try { + // If we create two source that have the same field, but these fields have different kind of values (str vs. lng) then + // when these source get indexed, indexing can fail. To solve this test issue, we should generate source that + // always have unique fields: + Map source; + boolean duplicateField; + do { + duplicateField = false; + source = RandomDocumentPicks.randomSource(random()); + for (String field : source.keySet()) { + if (usedFields.add(field) == false) { + duplicateField = true; + break; + } + } + } while (duplicateField); + XContentBuilder xContent = XContentFactory.jsonBuilder(); - xContent.map(RandomDocumentPicks.randomSource(random())); + xContent.map(source); return xContent.bytes(); } catch (IOException e) { throw new RuntimeException(e); diff --git a/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolateQueryTests.java b/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolateQueryTests.java index c76ac14cffbb2..ac9cc97499ce6 100644 --- a/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolateQueryTests.java +++ b/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolateQueryTests.java @@ -116,7 +116,7 @@ public void testPercolateQuery() throws Exception { memoryIndex.addField("field", "the quick brown fox jumps over the lazy dog", new WhitespaceAnalyzer()); IndexSearcher percolateSearcher = memoryIndex.createSearcher(); // no scoring, wrapping it in a constant score query: - Query query = new ConstantScoreQuery(new PercolateQuery(queryStore, new BytesArray("a"), + Query query = new ConstantScoreQuery(new PercolateQuery("_name", queryStore, Collections.singletonList(new BytesArray("a")), new TermQuery(new Term("select", "a")), percolateSearcher, new MatchNoDocsQuery(""))); TopDocs topDocs = shardSearcher.search(query, 10); assertThat(topDocs.totalHits, equalTo(1L)); @@ -126,7 +126,7 @@ public void testPercolateQuery() throws Exception { assertThat(explanation.isMatch(), is(true)); assertThat(explanation.getValue(), equalTo(topDocs.scoreDocs[0].score)); - query = new ConstantScoreQuery(new PercolateQuery(queryStore, new BytesArray("b"), + query = new ConstantScoreQuery(new PercolateQuery("_name", queryStore, Collections.singletonList(new BytesArray("b")), new TermQuery(new Term("select", "b")), percolateSearcher, new MatchNoDocsQuery(""))); topDocs = shardSearcher.search(query, 10); assertThat(topDocs.totalHits, equalTo(3L)); @@ -146,13 +146,13 @@ public void testPercolateQuery() throws Exception { assertThat(explanation.isMatch(), is(true)); assertThat(explanation.getValue(), equalTo(topDocs.scoreDocs[2].score)); - query = new ConstantScoreQuery(new PercolateQuery(queryStore, new BytesArray("c"), + query = new ConstantScoreQuery(new PercolateQuery("_name", queryStore, Collections.singletonList(new BytesArray("c")), new MatchAllDocsQuery(), percolateSearcher, new MatchAllDocsQuery())); topDocs = shardSearcher.search(query, 10); assertThat(topDocs.totalHits, equalTo(4L)); - query = new PercolateQuery(queryStore, new BytesArray("{}"), new TermQuery(new Term("select", "b")), - percolateSearcher, new MatchNoDocsQuery("")); + query 
= new PercolateQuery("_name", queryStore, Collections.singletonList(new BytesArray("{}")), + new TermQuery(new Term("select", "b")), percolateSearcher, new MatchNoDocsQuery("")); topDocs = shardSearcher.search(query, 10); assertThat(topDocs.totalHits, equalTo(3L)); assertThat(topDocs.scoreDocs.length, equalTo(3)); diff --git a/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolatorFieldMapperTests.java b/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolatorFieldMapperTests.java index 0f6b60354e690..441278d23f87a 100644 --- a/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolatorFieldMapperTests.java +++ b/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolatorFieldMapperTests.java @@ -195,10 +195,10 @@ public void testExtractRanges() throws Exception { addQueryFieldMappings(); BooleanQuery.Builder bq = new BooleanQuery.Builder(); Query rangeQuery1 = mapperService.documentMapper("doc").mappers().getMapper("number_field1").fieldType() - .rangeQuery(10, 20, true, true, null); + .rangeQuery(10, 20, true, true, null, null, null, null); bq.add(rangeQuery1, Occur.MUST); Query rangeQuery2 = mapperService.documentMapper("doc").mappers().getMapper("number_field1").fieldType() - .rangeQuery(15, 20, true, true, null); + .rangeQuery(15, 20, true, true, null, null, null, null); bq.add(rangeQuery2, Occur.MUST); DocumentMapper documentMapper = mapperService.documentMapper("doc"); diff --git a/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolatorHighlightSubFetchPhaseTests.java b/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolatorHighlightSubFetchPhaseTests.java index fb2a5f8bdc16f..f1b89d92ab11e 100644 --- a/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolatorHighlightSubFetchPhaseTests.java +++ b/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolatorHighlightSubFetchPhaseTests.java @@ -38,16 +38,15 @@ import java.util.Collections; import static java.util.Collections.emptyMap; +import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.is; -import static org.hamcrest.Matchers.nullValue; import static org.hamcrest.Matchers.sameInstance; public class PercolatorHighlightSubFetchPhaseTests extends ESTestCase { public void testHitsExecutionNeeded() { - PercolateQuery percolateQuery = new PercolateQuery( - ctx -> null, new BytesArray("{}"), new MatchAllDocsQuery(), Mockito.mock(IndexSearcher.class), new MatchAllDocsQuery() - ); + PercolateQuery percolateQuery = new PercolateQuery("_name", ctx -> null, Collections.singletonList(new BytesArray("{}")), + new MatchAllDocsQuery(), Mockito.mock(IndexSearcher.class), new MatchAllDocsQuery()); PercolatorHighlightSubFetchPhase subFetchPhase = new PercolatorHighlightSubFetchPhase(Settings.EMPTY, emptyMap()); SearchContext searchContext = Mockito.mock(SearchContext.class); @@ -60,35 +59,50 @@ public void testHitsExecutionNeeded() { } public void testLocatePercolatorQuery() { - PercolateQuery percolateQuery = new PercolateQuery( - ctx -> null, new BytesArray("{}"), new MatchAllDocsQuery(), Mockito.mock(IndexSearcher.class), new MatchAllDocsQuery() - ); - assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(new MatchAllDocsQuery()), nullValue()); + PercolateQuery percolateQuery = new PercolateQuery("_name", ctx -> null, Collections.singletonList(new BytesArray("{}")), + new MatchAllDocsQuery(), Mockito.mock(IndexSearcher.class), new MatchAllDocsQuery()); + 
assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(new MatchAllDocsQuery()).size(), equalTo(0)); BooleanQuery.Builder bq = new BooleanQuery.Builder(); bq.add(new MatchAllDocsQuery(), BooleanClause.Occur.FILTER); - assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(bq.build()), nullValue()); + assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(bq.build()).size(), equalTo(0)); bq.add(percolateQuery, BooleanClause.Occur.FILTER); - assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(bq.build()), sameInstance(percolateQuery)); + assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(bq.build()).size(), equalTo(1)); + assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(bq.build()).get(0), sameInstance(percolateQuery)); ConstantScoreQuery constantScoreQuery = new ConstantScoreQuery(new MatchAllDocsQuery()); - assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(constantScoreQuery), nullValue()); + assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(constantScoreQuery).size(), equalTo(0)); constantScoreQuery = new ConstantScoreQuery(percolateQuery); - assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(constantScoreQuery), sameInstance(percolateQuery)); + assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(constantScoreQuery).size(), equalTo(1)); + assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(constantScoreQuery).get(0), sameInstance(percolateQuery)); BoostQuery boostQuery = new BoostQuery(new MatchAllDocsQuery(), 1f); - assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(boostQuery), nullValue()); + assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(boostQuery).size(), equalTo(0)); boostQuery = new BoostQuery(percolateQuery, 1f); - assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(boostQuery), sameInstance(percolateQuery)); + assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(boostQuery).size(), equalTo(1)); + assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(boostQuery).get(0), sameInstance(percolateQuery)); FunctionScoreQuery functionScoreQuery = new FunctionScoreQuery(new MatchAllDocsQuery(), new RandomScoreFunction(0, 0, null)); - assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(functionScoreQuery), nullValue()); + assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(functionScoreQuery).size(), equalTo(0)); functionScoreQuery = new FunctionScoreQuery(percolateQuery, new RandomScoreFunction(0, 0, null)); - assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(functionScoreQuery), sameInstance(percolateQuery)); + assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(functionScoreQuery).size(), equalTo(1)); + assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(functionScoreQuery).get(0), sameInstance(percolateQuery)); - DisjunctionMaxQuery disjunctionMaxQuery = new DisjunctionMaxQuery(Arrays.asList(new MatchAllDocsQuery()), 1f); - assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(disjunctionMaxQuery), nullValue()); + DisjunctionMaxQuery disjunctionMaxQuery = new DisjunctionMaxQuery(Collections.singleton(new MatchAllDocsQuery()), 1f); + assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(disjunctionMaxQuery).size(), equalTo(0)); disjunctionMaxQuery = new DisjunctionMaxQuery(Arrays.asList(percolateQuery, new MatchAllDocsQuery()), 1f); - 
assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(disjunctionMaxQuery), sameInstance(percolateQuery)); + assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(disjunctionMaxQuery).size(), equalTo(1)); + assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(disjunctionMaxQuery).get(0), sameInstance(percolateQuery)); + + PercolateQuery percolateQuery2 = new PercolateQuery("_name", ctx -> null, Collections.singletonList(new BytesArray("{}")), + new MatchAllDocsQuery(), Mockito.mock(IndexSearcher.class), new MatchAllDocsQuery()); + bq = new BooleanQuery.Builder(); + bq.add(new MatchAllDocsQuery(), BooleanClause.Occur.FILTER); + assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(bq.build()).size(), equalTo(0)); + bq.add(percolateQuery, BooleanClause.Occur.FILTER); + bq.add(percolateQuery2, BooleanClause.Occur.FILTER); + assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(bq.build()).size(), equalTo(2)); + assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(bq.build()).get(0), sameInstance(percolateQuery)); + assertThat(PercolatorHighlightSubFetchPhase.locatePercolatorQuery(bq.build()).get(1), sameInstance(percolateQuery2)); } } diff --git a/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolatorMatchedSlotSubFetchPhaseTests.java b/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolatorMatchedSlotSubFetchPhaseTests.java new file mode 100644 index 0000000000000..d4b48174d76d1 --- /dev/null +++ b/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolatorMatchedSlotSubFetchPhaseTests.java @@ -0,0 +1,72 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ +package org.elasticsearch.percolator; + +import org.apache.lucene.search.ScoreDoc; +import org.apache.lucene.search.TopDocs; +import org.apache.lucene.util.FixedBitSet; +import org.elasticsearch.test.ESTestCase; + +import java.util.stream.IntStream; + +public class PercolatorMatchedSlotSubFetchPhaseTests extends ESTestCase { + + public void testConvertTopDocsToSlots() { + ScoreDoc[] scoreDocs = new ScoreDoc[randomInt(128)]; + for (int i = 0; i < scoreDocs.length; i++) { + scoreDocs[i] = new ScoreDoc(i, 1f); + } + + TopDocs topDocs = new TopDocs(scoreDocs.length, scoreDocs, 1f); + IntStream stream = PercolatorMatchedSlotSubFetchPhase.convertTopDocsToSlots(topDocs, null); + + int[] result = stream.toArray(); + assertEquals(scoreDocs.length, result.length); + for (int i = 0; i < scoreDocs.length; i++) { + assertEquals(scoreDocs[i].doc, result[i]); + } + } + + public void testConvertTopDocsToSlots_nestedDocs() { + ScoreDoc[] scoreDocs = new ScoreDoc[5]; + scoreDocs[0] = new ScoreDoc(2, 1f); + scoreDocs[1] = new ScoreDoc(5, 1f); + scoreDocs[2] = new ScoreDoc(8, 1f); + scoreDocs[3] = new ScoreDoc(11, 1f); + scoreDocs[4] = new ScoreDoc(14, 1f); + TopDocs topDocs = new TopDocs(scoreDocs.length, scoreDocs, 1f); + + FixedBitSet bitSet = new FixedBitSet(15); + bitSet.set(2); + bitSet.set(5); + bitSet.set(8); + bitSet.set(11); + bitSet.set(14); + + int[] rootDocsBySlot = PercolatorMatchedSlotSubFetchPhase.buildRootDocsSlots(bitSet); + int[] result = PercolatorMatchedSlotSubFetchPhase.convertTopDocsToSlots(topDocs, rootDocsBySlot).toArray(); + assertEquals(scoreDocs.length, result.length); + assertEquals(0, result[0]); + assertEquals(1, result[1]); + assertEquals(2, result[2]); + assertEquals(3, result[3]); + assertEquals(4, result[4]); + } + +} diff --git a/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolatorQuerySearchIT.java b/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolatorQuerySearchIT.java index 17833864a42e3..54d6c69112571 100644 --- a/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolatorQuerySearchIT.java +++ b/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolatorQuerySearchIT.java @@ -23,6 +23,8 @@ import org.elasticsearch.action.search.SearchResponse; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.geo.GeoPoint; +import org.elasticsearch.common.unit.DistanceUnit; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentType; @@ -35,11 +37,17 @@ import org.elasticsearch.search.sort.SortOrder; import org.elasticsearch.test.ESIntegTestCase; +import java.util.Arrays; +import java.util.Collections; + import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder; import static org.elasticsearch.common.xcontent.XContentFactory.smileBuilder; import static org.elasticsearch.common.xcontent.XContentFactory.yamlBuilder; import static org.elasticsearch.index.query.QueryBuilders.boolQuery; import static org.elasticsearch.index.query.QueryBuilders.commonTermsQuery; +import static org.elasticsearch.index.query.QueryBuilders.geoBoundingBoxQuery; +import static org.elasticsearch.index.query.QueryBuilders.geoDistanceQuery; +import static org.elasticsearch.index.query.QueryBuilders.geoPolygonQuery; import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery; import static 
org.elasticsearch.index.query.QueryBuilders.matchQuery; import static org.elasticsearch.index.query.QueryBuilders.multiMatchQuery; @@ -93,7 +101,9 @@ public void testPercolatorQuery() throws Exception { .get(); assertHitCount(response, 2); assertThat(response.getHits().getAt(0).getId(), equalTo("1")); + assertThat(response.getHits().getAt(0).getFields().get("_percolator_document_slot").getValue(), equalTo(0)); assertThat(response.getHits().getAt(1).getId(), equalTo("2")); + assertThat(response.getHits().getAt(1).getFields().get("_percolator_document_slot").getValue(), equalTo(0)); source = jsonBuilder().startObject().field("field1", "value").field("field2", "value").endObject().bytes(); logger.info("percolating doc with 2 fields"); @@ -103,8 +113,27 @@ public void testPercolatorQuery() throws Exception { .get(); assertHitCount(response, 3); assertThat(response.getHits().getAt(0).getId(), equalTo("1")); + assertThat(response.getHits().getAt(0).getFields().get("_percolator_document_slot").getValue(), equalTo(0)); assertThat(response.getHits().getAt(1).getId(), equalTo("2")); + assertThat(response.getHits().getAt(1).getFields().get("_percolator_document_slot").getValue(), equalTo(0)); assertThat(response.getHits().getAt(2).getId(), equalTo("3")); + assertThat(response.getHits().getAt(2).getFields().get("_percolator_document_slot").getValue(), equalTo(0)); + + logger.info("percolating doc with 2 fields"); + response = client().prepareSearch() + .setQuery(new PercolateQueryBuilder("query", Arrays.asList( + jsonBuilder().startObject().field("field1", "value").endObject().bytes(), + jsonBuilder().startObject().field("field1", "value").field("field2", "value").endObject().bytes() + ), XContentType.JSON)) + .addSort("_uid", SortOrder.ASC) + .get(); + assertHitCount(response, 3); + assertThat(response.getHits().getAt(0).getId(), equalTo("1")); + assertThat(response.getHits().getAt(0).getFields().get("_percolator_document_slot").getValues(), equalTo(Arrays.asList(0, 1))); + assertThat(response.getHits().getAt(1).getId(), equalTo("2")); + assertThat(response.getHits().getAt(1).getFields().get("_percolator_document_slot").getValues(), equalTo(Arrays.asList(0, 1))); + assertThat(response.getHits().getAt(2).getId(), equalTo("3")); + assertThat(response.getHits().getAt(2).getFields().get("_percolator_document_slot").getValues(), equalTo(Arrays.asList(1))); } public void testPercolatorRangeQueries() throws Exception { @@ -216,6 +245,40 @@ public void testPercolatorRangeQueries() throws Exception { assertThat(response.getHits().getAt(0).getId(), equalTo("10")); } + public void testPercolatorGeoQueries() throws Exception { + assertAcked(client().admin().indices().prepareCreate("test") + .addMapping("type", "field1", "type=geo_point", "field2", "type=geo_shape", "query", "type=percolator") + ); + + client().prepareIndex("test", "type", "1") + .setSource(jsonBuilder().startObject().field("query", + geoDistanceQuery("field1").point(52.18, 4.38).distance(50, DistanceUnit.KILOMETERS)) + .endObject()).get(); + + client().prepareIndex("test", "type", "2") + .setSource(jsonBuilder().startObject().field("query", + geoBoundingBoxQuery("field1").setCorners(52.3, 4.4, 52.1, 4.6)) + .endObject()).get(); + + client().prepareIndex("test", "type", "3") + .setSource(jsonBuilder().startObject().field("query", + geoPolygonQuery("field1", Arrays.asList(new GeoPoint(52.1, 4.4), new GeoPoint(52.3, 4.5), new GeoPoint(52.1, 4.6)))) + .endObject()).get(); + refresh(); + + BytesReference source = jsonBuilder().startObject() + 
.startObject("field1").field("lat", 52.20).field("lon", 4.51).endObject() + .endObject().bytes(); + SearchResponse response = client().prepareSearch() + .setQuery(new PercolateQueryBuilder("query", source, XContentType.JSON)) + .addSort("_id", SortOrder.ASC) + .get(); + assertHitCount(response, 3); + assertThat(response.getHits().getAt(0).getId(), equalTo("1")); + assertThat(response.getHits().getAt(1).getId(), equalTo("2")); + assertThat(response.getHits().getAt(2).getId(), equalTo("3")); + } + public void testPercolatorQueryExistingDocument() throws Exception { assertAcked(client().admin().indices().prepareCreate("test") .addMapping("type", "field1", "type=keyword", "field2", "type=keyword", "query", "type=percolator") @@ -405,6 +468,119 @@ public void testPercolatorQueryWithHighlighting() throws Exception { equalTo("The quick brown fox jumps over the lazy dog")); assertThat(searchResponse.getHits().getAt(4).getHighlightFields().get("field1").fragments()[0].string(), equalTo("The quick brown fox jumps over the lazy dog")); + + BytesReference document1 = jsonBuilder().startObject() + .field("field1", "The quick brown fox jumps") + .endObject().bytes(); + BytesReference document2 = jsonBuilder().startObject() + .field("field1", "over the lazy dog") + .endObject().bytes(); + searchResponse = client().prepareSearch() + .setQuery(boolQuery() + .should(new PercolateQueryBuilder("query", document1, XContentType.JSON).setName("query1")) + .should(new PercolateQueryBuilder("query", document2, XContentType.JSON).setName("query2")) + ) + .highlighter(new HighlightBuilder().field("field1")) + .addSort("_uid", SortOrder.ASC) + .get(); + logger.info("searchResponse={}", searchResponse); + assertHitCount(searchResponse, 5); + + assertThat(searchResponse.getHits().getAt(0).getHighlightFields().get("query1_field1").fragments()[0].string(), + equalTo("The quick brown fox jumps")); + assertThat(searchResponse.getHits().getAt(1).getHighlightFields().get("query2_field1").fragments()[0].string(), + equalTo("over the lazy dog")); + assertThat(searchResponse.getHits().getAt(2).getHighlightFields().get("query1_field1").fragments()[0].string(), + equalTo("The quick brown fox jumps")); + assertThat(searchResponse.getHits().getAt(3).getHighlightFields().get("query2_field1").fragments()[0].string(), + equalTo("over the lazy dog")); + assertThat(searchResponse.getHits().getAt(4).getHighlightFields().get("query1_field1").fragments()[0].string(), + equalTo("The quick brown fox jumps")); + + searchResponse = client().prepareSearch() + .setQuery(new PercolateQueryBuilder("query", Arrays.asList( + jsonBuilder().startObject().field("field1", "dog").endObject().bytes(), + jsonBuilder().startObject().field("field1", "fox").endObject().bytes(), + jsonBuilder().startObject().field("field1", "jumps").endObject().bytes(), + jsonBuilder().startObject().field("field1", "brown fox").endObject().bytes() + ), XContentType.JSON)) + .highlighter(new HighlightBuilder().field("field1")) + .addSort("_uid", SortOrder.ASC) + .get(); + assertHitCount(searchResponse, 5); + assertThat(searchResponse.getHits().getAt(0).getFields().get("_percolator_document_slot").getValues(), + equalTo(Arrays.asList(1, 3))); + assertThat(searchResponse.getHits().getAt(0).getHighlightFields().get("1_field1").fragments()[0].string(), + equalTo("fox")); + assertThat(searchResponse.getHits().getAt(0).getHighlightFields().get("3_field1").fragments()[0].string(), + equalTo("brown fox")); + 
assertThat(searchResponse.getHits().getAt(1).getFields().get("_percolator_document_slot").getValues(), + equalTo(Collections.singletonList(0))); + assertThat(searchResponse.getHits().getAt(1).getHighlightFields().get("0_field1").fragments()[0].string(), + equalTo("dog")); + assertThat(searchResponse.getHits().getAt(2).getFields().get("_percolator_document_slot").getValues(), + equalTo(Collections.singletonList(2))); + assertThat(searchResponse.getHits().getAt(2).getHighlightFields().get("2_field1").fragments()[0].string(), + equalTo("jumps")); + assertThat(searchResponse.getHits().getAt(3).getFields().get("_percolator_document_slot").getValues(), + equalTo(Collections.singletonList(0))); + assertThat(searchResponse.getHits().getAt(3).getHighlightFields().get("0_field1").fragments()[0].string(), + equalTo("dog")); + assertThat(searchResponse.getHits().getAt(4).getFields().get("_percolator_document_slot").getValues(), + equalTo(Arrays.asList(1, 3))); + assertThat(searchResponse.getHits().getAt(4).getHighlightFields().get("1_field1").fragments()[0].string(), + equalTo("fox")); + assertThat(searchResponse.getHits().getAt(4).getHighlightFields().get("3_field1").fragments()[0].string(), + equalTo("brown fox")); + + searchResponse = client().prepareSearch() + .setQuery(boolQuery() + .should(new PercolateQueryBuilder("query", Arrays.asList( + jsonBuilder().startObject().field("field1", "dog").endObject().bytes(), + jsonBuilder().startObject().field("field1", "fox").endObject().bytes() + ), XContentType.JSON).setName("query1")) + .should(new PercolateQueryBuilder("query", Arrays.asList( + jsonBuilder().startObject().field("field1", "jumps").endObject().bytes(), + jsonBuilder().startObject().field("field1", "brown fox").endObject().bytes() + ), XContentType.JSON).setName("query2")) + ) + .highlighter(new HighlightBuilder().field("field1")) + .addSort("_uid", SortOrder.ASC) + .get(); + logger.info("searchResponse={}", searchResponse); + assertHitCount(searchResponse, 5); + assertThat(searchResponse.getHits().getAt(0).getFields().get("_percolator_document_slot_query1").getValues(), + equalTo(Collections.singletonList(1))); + assertThat(searchResponse.getHits().getAt(0).getFields().get("_percolator_document_slot_query2").getValues(), + equalTo(Collections.singletonList(1))); + assertThat(searchResponse.getHits().getAt(0).getHighlightFields().get("query1_1_field1").fragments()[0].string(), + equalTo("fox")); + assertThat(searchResponse.getHits().getAt(0).getHighlightFields().get("query2_1_field1").fragments()[0].string(), + equalTo("brown fox")); + + assertThat(searchResponse.getHits().getAt(1).getFields().get("_percolator_document_slot_query1").getValues(), + equalTo(Collections.singletonList(0))); + assertThat(searchResponse.getHits().getAt(1).getHighlightFields().get("query1_0_field1").fragments()[0].string(), + equalTo("dog")); + + assertThat(searchResponse.getHits().getAt(2).getFields().get("_percolator_document_slot_query2").getValues(), + equalTo(Collections.singletonList(0))); + assertThat(searchResponse.getHits().getAt(2).getHighlightFields().get("query2_0_field1").fragments()[0].string(), + equalTo("jumps")); + + assertThat(searchResponse.getHits().getAt(3).getFields().get("_percolator_document_slot_query1").getValues(), + equalTo(Collections.singletonList(0))); + assertThat(searchResponse.getHits().getAt(3).getHighlightFields().get("query1_0_field1").fragments()[0].string(), + equalTo("dog")); + + 
assertThat(searchResponse.getHits().getAt(4).getFields().get("_percolator_document_slot_query1").getValues(), + equalTo(Collections.singletonList(1))); + assertThat(searchResponse.getHits().getAt(4).getFields().get("_percolator_document_slot_query2").getValues(), + equalTo(Collections.singletonList(1))); + assertThat(searchResponse.getHits().getAt(4).getHighlightFields().get("query1_1_field1").fragments()[0].string(), + equalTo("fox")); + assertThat(searchResponse.getHits().getAt(4).getHighlightFields().get("query2_1_field1").fragments()[0].string(), + equalTo("brown fox")); } public void testTakePositionOffsetGapIntoAccount() throws Exception { @@ -422,7 +598,7 @@ public void testTakePositionOffsetGapIntoAccount() throws Exception { client().admin().indices().prepareRefresh().get(); SearchResponse response = client().prepareSearch().setQuery( - new PercolateQueryBuilder("query", null, new BytesArray("{\"field\" : [\"brown\", \"fox\"]}"), XContentType.JSON) + new PercolateQueryBuilder("query", new BytesArray("{\"field\" : [\"brown\", \"fox\"]}"), XContentType.JSON) ).get(); assertHitCount(response, 1); assertThat(response.getHits().getAt(0).getId(), equalTo("2")); @@ -573,6 +749,29 @@ public void testPercolateQueryWithNestedDocuments() throws Exception { .addSort("_doc", SortOrder.ASC) .get(); assertHitCount(response, 0); + + response = client().prepareSearch() + .setQuery(new PercolateQueryBuilder("query", Arrays.asList( + XContentFactory.jsonBuilder() + .startObject().field("companyname", "stark") + .startArray("employee") + .startObject().field("name", "virginia potts").endObject() + .startObject().field("name", "tony stark").endObject() + .endArray() + .endObject().bytes(), + XContentFactory.jsonBuilder() + .startObject().field("companyname", "stark") + .startArray("employee") + .startObject().field("name", "peter parker").endObject() + .startObject().field("name", "virginia potts").endObject() + .endArray() + .endObject().bytes() + ), XContentType.JSON)) + .addSort("_doc", SortOrder.ASC) + .get(); + assertHitCount(response, 1); + assertThat(response.getHits().getAt(0).getId(), equalTo("q1")); + assertThat(response.getHits().getAt(0).getFields().get("_percolator_document_slot").getValues(), equalTo(Arrays.asList(0, 1))); } public void testPercolatorQueryViaMultiSearch() throws Exception { @@ -606,10 +805,10 @@ public void testPercolatorQueryViaMultiSearch() throws Exception { jsonBuilder().startObject().field("field1", "b").endObject().bytes(), XContentType.JSON))) .add(client().prepareSearch("test") .setQuery(new PercolateQueryBuilder("query", - yamlBuilder().startObject().field("field1", "c").endObject().bytes(), XContentType.JSON))) + yamlBuilder().startObject().field("field1", "c").endObject().bytes(), XContentType.YAML))) .add(client().prepareSearch("test") .setQuery(new PercolateQueryBuilder("query", - smileBuilder().startObject().field("field1", "b c").endObject().bytes(), XContentType.JSON))) + smileBuilder().startObject().field("field1", "b c").endObject().bytes(), XContentType.SMILE))) .add(client().prepareSearch("test") .setQuery(new PercolateQueryBuilder("query", jsonBuilder().startObject().field("field1", "d").endObject().bytes(), XContentType.JSON))) diff --git a/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolatorQuerySearchTests.java b/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolatorQuerySearchTests.java index 020280670c45f..15a33f2090b9f 100644 --- 
a/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolatorQuerySearchTests.java +++ b/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolatorQuerySearchTests.java @@ -21,6 +21,7 @@ import org.apache.lucene.search.join.ScoreMode; import org.elasticsearch.action.search.SearchResponse; import org.elasticsearch.action.support.WriteRequest; +import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; @@ -46,6 +47,7 @@ import java.util.function.Function; import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder; +import static org.elasticsearch.index.query.QueryBuilders.matchQuery; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchHits; @@ -99,7 +101,7 @@ public void testPercolateQueryWithNestedDocuments_doNotLeakBitsetCacheEntries() ); client().prepareIndex("test", "employee", "q1").setSource(jsonBuilder().startObject() .field("query", QueryBuilders.nestedQuery("employee", - QueryBuilders.matchQuery("employee.name", "virginia potts").operator(Operator.AND), ScoreMode.Avg) + matchQuery("employee.name", "virginia potts").operator(Operator.AND), ScoreMode.Avg) ).endObject()) .get(); client().admin().indices().prepareRefresh().get(); @@ -202,4 +204,37 @@ public void testPercolateQueryWithNestedDocuments_doLeakFieldDataCacheEntries() assertEquals("The percolator works with in-memory index and therefor shouldn't use field-data cache", 0L, fieldDataSize); } + public void testMapUnmappedFieldAsText() throws IOException { + Settings.Builder settings = Settings.builder() + .put("index.percolator.map_unmapped_fields_as_text", true); + createIndex("test", settings.build(), "query", "query", "type=percolator"); + client().prepareIndex("test", "query", "1") + .setSource(jsonBuilder().startObject().field("query", matchQuery("field1", "value")).endObject()).get(); + client().admin().indices().prepareRefresh().get(); + + SearchResponse response = client().prepareSearch("test") + .setQuery(new PercolateQueryBuilder("query", jsonBuilder().startObject().field("field1", "value").endObject().bytes(), + XContentType.JSON)) + .get(); + assertHitCount(response, 1); + assertSearchHits(response, "1"); + } + + public void testMapUnmappedFieldAsString() throws IOException { + Settings.Builder settings = Settings.builder() + .put("index.percolator.map_unmapped_fields_as_string", true); + createIndex("test", settings.build(), "query", "query", "type=percolator"); + client().prepareIndex("test", "query", "1") + .setSource(jsonBuilder().startObject().field("query", matchQuery("field1", "value")).endObject()).get(); + client().admin().indices().prepareRefresh().get(); + + SearchResponse response = client().prepareSearch("test") + .setQuery(new PercolateQueryBuilder("query", jsonBuilder().startObject().field("field1", "value").endObject().bytes(), + XContentType.JSON)) + .get(); + assertHitCount(response, 1); + assertSearchHits(response, "1"); + assertSettingDeprecationsAndWarnings(new Setting[]{PercolatorFieldMapper.INDEX_MAP_UNMAPPED_FIELDS_AS_STRING_SETTING}); + } + } diff --git a/modules/percolator/src/test/java/org/elasticsearch/percolator/QueryAnalyzerTests.java b/modules/percolator/src/test/java/org/elasticsearch/percolator/QueryAnalyzerTests.java index e1e28b2bbee3b..a6af5fb9dfe38 100644 --- 
a/modules/percolator/src/test/java/org/elasticsearch/percolator/QueryAnalyzerTests.java +++ b/modules/percolator/src/test/java/org/elasticsearch/percolator/QueryAnalyzerTests.java @@ -23,6 +23,7 @@ import org.apache.lucene.document.HalfFloatPoint; import org.apache.lucene.document.InetAddressPoint; import org.apache.lucene.document.IntPoint; +import org.apache.lucene.document.LatLonPoint; import org.apache.lucene.document.LongPoint; import org.apache.lucene.document.SortedNumericDocValuesField; import org.apache.lucene.index.Term; @@ -43,6 +44,8 @@ import org.apache.lucene.search.TermInSetQuery; import org.apache.lucene.search.TermQuery; import org.apache.lucene.search.TermRangeQuery; +import org.apache.lucene.search.join.QueryBitSetProducer; +import org.apache.lucene.search.join.ScoreMode; import org.apache.lucene.search.spans.SpanFirstQuery; import org.apache.lucene.search.spans.SpanNearQuery; import org.apache.lucene.search.spans.SpanNotQuery; @@ -53,6 +56,7 @@ import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery; import org.elasticsearch.common.lucene.search.function.RandomScoreFunction; import org.elasticsearch.common.network.InetAddresses; +import org.elasticsearch.index.search.ESToParentBlockJoinQuery; import org.elasticsearch.percolator.QueryAnalyzer.QueryExtraction; import org.elasticsearch.percolator.QueryAnalyzer.Result; import org.elasticsearch.test.ESTestCase; @@ -588,6 +592,21 @@ public void testSelectBestExtraction() { queryTerms2 = terms(new int[]{2, 3, 4}, "1", "456"); result = selectBestExtraction(Collections.emptyMap(), queryTerms1, queryTerms2); assertSame("Ignoring ranges, so then prefer queryTerms1, because it has the longest shortest term", queryTerms1, result); + + queryTerms1 = terms(new int[]{}); + queryTerms2 = terms(new int[]{}); + result = selectBestExtraction(Collections.emptyMap(), queryTerms1, queryTerms2); + assertSame("In case query extractions are empty", queryTerms2, result); + + queryTerms1 = terms(new int[]{1}); + queryTerms2 = terms(new int[]{}); + result = selectBestExtraction(Collections.emptyMap(), queryTerms1, queryTerms2); + assertSame("In case query a single extraction is empty", queryTerms1, result); + + queryTerms1 = terms(new int[]{}); + queryTerms2 = terms(new int[]{1}); + result = selectBestExtraction(Collections.emptyMap(), queryTerms1, queryTerms2); + assertSame("In case query a single extraction is empty", queryTerms2, result); } public void testSelectBestExtraction_boostFields() { @@ -743,6 +762,22 @@ public void testPointRangeQuery() { assertArrayEquals(ranges.get(0).range.upperPoint, InetAddressPoint.encode(InetAddresses.forString("192.168.1.255"))); } + public void testTooManyPointDimensions() { + // For now no extraction support for geo queries: + Query query1 = LatLonPoint.newBoxQuery("_field", 0, 1, 0, 1); + expectThrows(UnsupportedQueryException.class, () -> analyze(query1, Collections.emptyMap())); + + Query query2 = LongPoint.newRangeQuery("_field", new long[]{0, 0, 0}, new long[]{1, 1, 1}); + expectThrows(UnsupportedQueryException.class, () -> analyze(query2, Collections.emptyMap())); + } + + public void testPointRangeQuery_lowerUpperReversed() { + Query query = IntPoint.newRangeQuery("_field", 20, 10); + Result result = analyze(query, Collections.emptyMap()); + assertTrue(result.verified); + assertThat(result.extractions.size(), equalTo(0)); + } + public void testIndexOrDocValuesQuery() { Query query = new IndexOrDocValuesQuery(IntPoint.newRangeQuery("_field", 10, 20), 
SortedNumericDocValuesField.newSlowRangeQuery("_field", 10, 20)); @@ -756,6 +791,17 @@ public void testIndexOrDocValuesQuery() { assertDimension(ranges.get(0).range.upperPoint, bytes -> IntPoint.encodeDimension(20, bytes, 0)); } + public void testToParentBlockJoinQuery() { + TermQuery termQuery = new TermQuery(new Term("field", "value")); + QueryBitSetProducer queryBitSetProducer = new QueryBitSetProducer(new TermQuery(new Term("_type", "child"))); + ESToParentBlockJoinQuery query = new ESToParentBlockJoinQuery(termQuery, queryBitSetProducer, ScoreMode.None, "child"); + Result result = analyze(query, Collections.emptyMap()); + assertFalse(result.verified); + assertEquals(1, result.extractions.size()); + assertNull(result.extractions.toArray(new QueryExtraction[0])[0].range); + assertEquals(new Term("field", "value"), result.extractions.toArray(new QueryExtraction[0])[0].term); + } + public void testPointRangeQuerySelectShortestRange() { BooleanQuery.Builder boolQuery = new BooleanQuery.Builder(); boolQuery.add(LongPoint.newRangeQuery("_field1", 10, 20), BooleanClause.Occur.FILTER); diff --git a/modules/reindex/build.gradle b/modules/reindex/build.gradle index 3642dbdd57e2c..f29daf799122d 100644 --- a/modules/reindex/build.gradle +++ b/modules/reindex/build.gradle @@ -26,15 +26,24 @@ esplugin { } integTestCluster { - // Whitelist reindexing from the local node so we can test it. + // Whitelist reindexing from the local node so we can test reindex-from-remote. setting 'reindex.remote.whitelist', '127.0.0.1:*' } run { - // Whitelist reindexing from the local node so we can test it. + // Whitelist reindexing from the local node so we can test reindex-from-remote. setting 'reindex.remote.whitelist', '127.0.0.1:*' } +test { + /* + * We have to disable setting the number of available processors as tests in the + * same JVM randomize processors and will step on each other if we allow them to + * set the number of available processors as it's set-once in Netty. 
+ */ + systemProperty 'es.set.netty.runtime.available.processors', 'false' +} + dependencies { compile "org.elasticsearch.client:elasticsearch-rest-client:${version}" // for http - testing reindex from remote @@ -54,7 +63,7 @@ thirdPartyAudit.excludes = [ // Commons logging 'javax.servlet.ServletContextEvent', 'javax.servlet.ServletContextListener', - 'org.elasticsearch.client.avalon.framework.logger.Logger', - 'org.elasticsearch.client.log.Hierarchy', - 'org.elasticsearch.client.log.Logger', + 'org.apache.avalon.framework.logger.Logger', + 'org.apache.log.Hierarchy', + 'org.apache.log.Logger', ] diff --git a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/AbstractBulkByQueryRestHandler.java b/modules/reindex/src/main/java/org/elasticsearch/index/reindex/AbstractBulkByQueryRestHandler.java index e9973c9950053..d1dc5de831fc7 100644 --- a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/AbstractBulkByQueryRestHandler.java +++ b/modules/reindex/src/main/java/org/elasticsearch/index/reindex/AbstractBulkByQueryRestHandler.java @@ -49,14 +49,12 @@ protected void parseInternalRequest(Request internal, RestRequest restRequest, assert restRequest != null : "RestRequest should not be null"; SearchRequest searchRequest = internal.getSearchRequest(); - int scrollSize = searchRequest.source().size(); try (XContentParser parser = extractRequestSpecificFields(restRequest, bodyConsumers)) { - RestSearchAction.parseSearchRequest(searchRequest, restRequest, parser); + RestSearchAction.parseSearchRequest(searchRequest, restRequest, parser, internal::setSize); } - internal.setSize(searchRequest.source().size()); - searchRequest.source().size(restRequest.paramAsInt("scroll_size", scrollSize)); + searchRequest.source().size(restRequest.paramAsInt("scroll_size", searchRequest.source().size())); String conflicts = restRequest.param("conflicts"); if (conflicts != null) { diff --git a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/RestDeleteByQueryAction.java b/modules/reindex/src/main/java/org/elasticsearch/index/reindex/RestDeleteByQueryAction.java index 6573cf4fc6e84..e0ebaa85193db 100644 --- a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/RestDeleteByQueryAction.java +++ b/modules/reindex/src/main/java/org/elasticsearch/index/reindex/RestDeleteByQueryAction.java @@ -19,7 +19,6 @@ package org.elasticsearch.index.reindex; -import org.elasticsearch.ElasticsearchException; import org.elasticsearch.action.search.SearchRequest; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.settings.Settings; diff --git a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/RestUpdateByQueryAction.java b/modules/reindex/src/main/java/org/elasticsearch/index/reindex/RestUpdateByQueryAction.java index 3ddd3618ab812..a3f7111053bb5 100644 --- a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/RestUpdateByQueryAction.java +++ b/modules/reindex/src/main/java/org/elasticsearch/index/reindex/RestUpdateByQueryAction.java @@ -67,7 +67,7 @@ protected UpdateByQueryRequest buildRequest(RestRequest request) throws IOExcept Map> consumers = new HashMap<>(); consumers.put("conflicts", o -> internal.setConflicts((String) o)); - consumers.put("script", o -> internal.setScript(parseScript((Map)o))); + consumers.put("script", o -> internal.setScript(parseScript(o))); parseInternalRequest(internal, request, consumers); @@ -76,49 +76,58 @@ protected UpdateByQueryRequest buildRequest(RestRequest request) throws IOExcept } 
@SuppressWarnings("unchecked") - private static Script parseScript(Map config) { - String script = null; - ScriptType type = null; - String lang = DEFAULT_SCRIPT_LANG; - Map params = Collections.emptyMap(); - for (Iterator> itr = config.entrySet().iterator(); itr.hasNext();) { - Map.Entry entry = itr.next(); - String parameterName = entry.getKey(); - Object parameterValue = entry.getValue(); - if (Script.LANG_PARSE_FIELD.match(parameterName)) { - if (parameterValue instanceof String || parameterValue == null) { - lang = (String) parameterValue; - } else { - throw new ElasticsearchParseException("Value must be of type String: [" + parameterName + "]"); - } - } else if (Script.PARAMS_PARSE_FIELD.match(parameterName)) { - if (parameterValue instanceof Map || parameterValue == null) { - params = (Map) parameterValue; - } else { - throw new ElasticsearchParseException("Value must be of type String: [" + parameterName + "]"); - } - } else if (ScriptType.INLINE.getParseField().match(parameterName)) { - if (parameterValue instanceof String || parameterValue == null) { - script = (String) parameterValue; - type = ScriptType.INLINE; - } else { - throw new ElasticsearchParseException("Value must be of type String: [" + parameterName + "]"); - } - } else if (ScriptType.STORED.getParseField().match(parameterName)) { - if (parameterValue instanceof String || parameterValue == null) { - script = (String) parameterValue; - type = ScriptType.STORED; - } else { - throw new ElasticsearchParseException("Value must be of type String: [" + parameterName + "]"); + private static Script parseScript(Object config) { + assert config != null : "Script should not be null"; + + if (config instanceof String) { + return new Script((String) config); + } else if (config instanceof Map) { + Map configMap = (Map) config; + String script = null; + ScriptType type = null; + String lang = DEFAULT_SCRIPT_LANG; + Map params = Collections.emptyMap(); + for (Iterator> itr = configMap.entrySet().iterator(); itr.hasNext();) { + Map.Entry entry = itr.next(); + String parameterName = entry.getKey(); + Object parameterValue = entry.getValue(); + if (Script.LANG_PARSE_FIELD.match(parameterName)) { + if (parameterValue instanceof String || parameterValue == null) { + lang = (String) parameterValue; + } else { + throw new ElasticsearchParseException("Value must be of type String: [" + parameterName + "]"); + } + } else if (Script.PARAMS_PARSE_FIELD.match(parameterName)) { + if (parameterValue instanceof Map || parameterValue == null) { + params = (Map) parameterValue; + } else { + throw new ElasticsearchParseException("Value must be of type String: [" + parameterName + "]"); + } + } else if (ScriptType.INLINE.getParseField().match(parameterName)) { + if (parameterValue instanceof String || parameterValue == null) { + script = (String) parameterValue; + type = ScriptType.INLINE; + } else { + throw new ElasticsearchParseException("Value must be of type String: [" + parameterName + "]"); + } + } else if (ScriptType.STORED.getParseField().match(parameterName)) { + if (parameterValue instanceof String || parameterValue == null) { + script = (String) parameterValue; + type = ScriptType.STORED; + } else { + throw new ElasticsearchParseException("Value must be of type String: [" + parameterName + "]"); + } } } - } - if (script == null) { - throw new ElasticsearchParseException("expected one of [{}] or [{}] fields, but found none", + if (script == null) { + throw new ElasticsearchParseException("expected one of [{}] or [{}] fields, but found 
none", ScriptType.INLINE.getParseField().getPreferredName(), ScriptType.STORED.getParseField().getPreferredName()); - } - assert type != null : "if script is not null, type should definitely not be null"; + } + assert type != null : "if script is not null, type should definitely not be null"; - return new Script(type, lang, script, params); + return new Script(type, lang, script, params); + } else { + throw new IllegalArgumentException("Script value should be a String or a Map"); + } } } diff --git a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/TransportReindexAction.java b/modules/reindex/src/main/java/org/elasticsearch/index/reindex/TransportReindexAction.java index 92d7c9ee51fb2..06b683821fd88 100644 --- a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/TransportReindexAction.java +++ b/modules/reindex/src/main/java/org/elasticsearch/index/reindex/TransportReindexAction.java @@ -19,14 +19,14 @@ package org.elasticsearch.index.reindex; -import org.elasticsearch.client.http.Header; -import org.elasticsearch.client.http.HttpHost; -import org.elasticsearch.client.http.auth.AuthScope; -import org.elasticsearch.client.http.auth.UsernamePasswordCredentials; -import org.elasticsearch.client.http.client.CredentialsProvider; -import org.elasticsearch.client.http.impl.client.BasicCredentialsProvider; -import org.elasticsearch.client.http.impl.nio.reactor.IOReactorConfig; -import org.elasticsearch.client.http.message.BasicHeader; +import org.apache.http.Header; +import org.apache.http.HttpHost; +import org.apache.http.auth.AuthScope; +import org.apache.http.auth.UsernamePasswordCredentials; +import org.apache.http.client.CredentialsProvider; +import org.apache.http.impl.client.BasicCredentialsProvider; +import org.apache.http.impl.nio.reactor.IOReactorConfig; +import org.apache.http.message.BasicHeader; import org.apache.logging.log4j.Logger; import org.apache.lucene.util.automaton.Automata; import org.apache.lucene.util.automaton.Automaton; @@ -201,7 +201,7 @@ static RestClient buildRestClient(RemoteInfo remoteInfo, long taskId, List header : remoteInfo.getHeaders().entrySet()) { - clientHeaders[i] = new BasicHeader(header.getKey(), header.getValue()); + clientHeaders[i++] = new BasicHeader(header.getKey(), header.getValue()); } return RestClient.builder(new HttpHost(remoteInfo.getHost(), remoteInfo.getPort(), remoteInfo.getScheme())) .setDefaultHeaders(clientHeaders) diff --git a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/TransportRethrottleAction.java b/modules/reindex/src/main/java/org/elasticsearch/index/reindex/TransportRethrottleAction.java index d8105e4a6ec86..ade9fe0a332d0 100644 --- a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/TransportRethrottleAction.java +++ b/modules/reindex/src/main/java/org/elasticsearch/index/reindex/TransportRethrottleAction.java @@ -70,7 +70,8 @@ static void rethrottle(Logger logger, String localNodeId, Client client, BulkByS return; } - throw new IllegalArgumentException("task [" + task.getId() + "] must be set as a child or parent"); + throw new IllegalArgumentException("task [" + task.getId() + "] has not yet been initialized to the point where it knows how to " + + "rethrottle itself"); } private static void rethrottleParentTask(Logger logger, String localNodeId, Client client, BulkByScrollTask task, diff --git a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/remote/RemoteRequestBuilders.java 
b/modules/reindex/src/main/java/org/elasticsearch/index/reindex/remote/RemoteRequestBuilders.java index 9f0beb2295a18..50769cc92310b 100644 --- a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/remote/RemoteRequestBuilders.java +++ b/modules/reindex/src/main/java/org/elasticsearch/index/reindex/remote/RemoteRequestBuilders.java @@ -19,10 +19,10 @@ package org.elasticsearch.index.reindex.remote; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.entity.ByteArrayEntity; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.entity.StringEntity; +import org.apache.http.HttpEntity; +import org.apache.http.entity.ByteArrayEntity; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; import org.apache.lucene.util.BytesRef; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.Version; diff --git a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/remote/RemoteScrollableHitSource.java b/modules/reindex/src/main/java/org/elasticsearch/index/reindex/remote/RemoteScrollableHitSource.java index 700341b8a813a..85173b7d89962 100644 --- a/modules/reindex/src/main/java/org/elasticsearch/index/reindex/remote/RemoteScrollableHitSource.java +++ b/modules/reindex/src/main/java/org/elasticsearch/index/reindex/remote/RemoteScrollableHitSource.java @@ -19,10 +19,10 @@ package org.elasticsearch.index.reindex.remote; -import org.elasticsearch.client.http.ContentTooLongException; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.util.EntityUtils; +import org.apache.http.ContentTooLongException; +import org.apache.http.HttpEntity; +import org.apache.http.entity.ContentType; +import org.apache.http.util.EntityUtils; import org.apache.logging.log4j.Logger; import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; diff --git a/modules/reindex/src/main/plugin-metadata/plugin-security.policy b/modules/reindex/src/main/plugin-metadata/plugin-security.policy index 39c1d77277169..70fb51b845ce1 100644 --- a/modules/reindex/src/main/plugin-metadata/plugin-security.policy +++ b/modules/reindex/src/main/plugin-metadata/plugin-security.policy @@ -22,7 +22,7 @@ grant { permission java.net.SocketPermission "*", "connect"; }; -grant codeBase "${codebase.elasticsearch-rest-client-7.0.0-alpha1-SNAPSHOT.jar}" { +grant codeBase "${codebase.elasticsearch-rest-client}" { // rest client uses system properties which gets the default proxy permission java.net.NetPermission "getProxySelector"; }; diff --git a/modules/reindex/src/test/java/org/elasticsearch/index/reindex/DeleteByQueryBasicTests.java b/modules/reindex/src/test/java/org/elasticsearch/index/reindex/DeleteByQueryBasicTests.java index 276dc955f8277..0b8dea6ea41f2 100644 --- a/modules/reindex/src/test/java/org/elasticsearch/index/reindex/DeleteByQueryBasicTests.java +++ b/modules/reindex/src/test/java/org/elasticsearch/index/reindex/DeleteByQueryBasicTests.java @@ -22,6 +22,7 @@ import org.elasticsearch.Version; import org.elasticsearch.action.admin.indices.alias.Alias; import org.elasticsearch.action.index.IndexRequestBuilder; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.IndexNotFoundException; import org.elasticsearch.index.query.QueryBuilders; import org.elasticsearch.plugins.Plugin; @@ -144,7 +145,7 @@ public void 
testDeleteByQueryWithMissingIndex() throws Exception { } public void testDeleteByQueryWithRouting() throws Exception { - assertAcked(prepareCreate("test").setSettings("number_of_shards", 2)); + assertAcked(prepareCreate("test").setSettings(Settings.builder().put("number_of_shards", 2))); ensureGreen("test"); final int docs = randomIntBetween(2, 10); @@ -313,7 +314,7 @@ public void testMultipleSources() throws Exception { */ public void testFilterByType() throws Exception { assertAcked(client().admin().indices().prepareCreate("test") - .setSettings("index.version.created", Version.V_5_6_0.id)); // allows for multiple types + .setSettings(Settings.builder().put("index.version.created", Version.V_5_6_0.id))); // allows for multiple types indexRandom(true, client().prepareIndex("test", "test1", "1").setSource("foo", "a"), client().prepareIndex("test", "test2", "2").setSource("foo", "a"), diff --git a/modules/reindex/src/test/java/org/elasticsearch/index/reindex/ManyDocumentsIT.java b/modules/reindex/src/test/java/org/elasticsearch/index/reindex/ManyDocumentsIT.java new file mode 100644 index 0000000000000..e9082c96fd163 --- /dev/null +++ b/modules/reindex/src/test/java/org/elasticsearch/index/reindex/ManyDocumentsIT.java @@ -0,0 +1,97 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.index.reindex; + +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; +import org.elasticsearch.client.Response; +import org.elasticsearch.common.xcontent.XContentHelper; +import org.elasticsearch.common.xcontent.json.JsonXContent; +import org.elasticsearch.test.rest.ESRestTestCase; +import org.junit.Before; + +import java.io.IOException; +import java.util.Map; + +import static java.util.Collections.emptyMap; +import static java.util.Collections.singletonMap; +import static org.hamcrest.Matchers.hasEntry; + +/** + * Tests {@code _update_by_query}, {@code _delete_by_query}, and {@code _reindex} + * of many documents over REST. It is important to test many documents to make + * sure that we don't change the default behavior of touching all + * documents in the request. 
+ */ +public class ManyDocumentsIT extends ESRestTestCase { + private final int count = between(150, 2000); + + @Before + public void setupTestIndex() throws IOException { + StringBuilder bulk = new StringBuilder(); + for (int i = 0; i < count; i++) { + bulk.append("{\"index\":{}}\n"); + bulk.append("{\"test\":\"test\"}\n"); + } + client().performRequest("POST", "/test/test/_bulk", singletonMap("refresh", "true"), + new StringEntity(bulk.toString(), ContentType.APPLICATION_JSON)); + } + + public void testReindex() throws IOException { + Map response = toMap(client().performRequest("POST", "/_reindex", emptyMap(), new StringEntity( + "{\"source\":{\"index\":\"test\"}, \"dest\":{\"index\":\"des\"}}", + ContentType.APPLICATION_JSON))); + assertThat(response, hasEntry("total", count)); + assertThat(response, hasEntry("created", count)); + } + + public void testReindexFromRemote() throws IOException { + Map nodesInfo = toMap(client().performRequest("GET", "/_nodes/http")); + nodesInfo = (Map) nodesInfo.get("nodes"); + Map nodeInfo = (Map) nodesInfo.values().iterator().next(); + Map http = (Map) nodeInfo.get("http"); + String remote = "http://"+ http.get("publish_address"); + Map response = toMap(client().performRequest("POST", "/_reindex", emptyMap(), new StringEntity( + "{\"source\":{\"index\":\"test\",\"remote\":{\"host\":\"" + remote + "\"}}, \"dest\":{\"index\":\"des\"}}", + ContentType.APPLICATION_JSON))); + assertThat(response, hasEntry("total", count)); + assertThat(response, hasEntry("created", count)); + } + + + public void testUpdateByQuery() throws IOException { + Map response = toMap(client().performRequest("POST", "/test/_update_by_query")); + assertThat(response, hasEntry("total", count)); + assertThat(response, hasEntry("updated", count)); + } + + public void testDeleteByQuery() throws IOException { + Map response = toMap(client().performRequest("POST", "/test/_delete_by_query", emptyMap(), new StringEntity( + "{\"query\":{\"match_all\":{}}}", + ContentType.APPLICATION_JSON))); + assertThat(response, hasEntry("total", count)); + assertThat(response, hasEntry("deleted", count)); + } + + static Map toMap(Response response) throws IOException { + return XContentHelper.convertToMap(JsonXContent.jsonXContent, response.getEntity().getContent(), false); + } + +} diff --git a/modules/reindex/src/test/java/org/elasticsearch/index/reindex/ReindexFailureTests.java b/modules/reindex/src/test/java/org/elasticsearch/index/reindex/ReindexFailureTests.java index 09af00ca36664..f101b12538289 100644 --- a/modules/reindex/src/test/java/org/elasticsearch/index/reindex/ReindexFailureTests.java +++ b/modules/reindex/src/test/java/org/elasticsearch/index/reindex/ReindexFailureTests.java @@ -63,7 +63,7 @@ public void testFailuresCauseAbortDefault() throws Exception { .batches(1) .failures(both(greaterThan(0)).and(lessThanOrEqualTo(maximumNumberOfShards())))); for (Failure failure: response.getBulkFailures()) { - assertThat(failure.getMessage(), containsString("NumberFormatException[For input string: \"words words\"]")); + assertThat(failure.getMessage(), containsString("IllegalArgumentException[For input string: \"words words\"]")); } } diff --git a/modules/reindex/src/test/java/org/elasticsearch/index/reindex/ReindexFromRemoteBuildRestClientTests.java b/modules/reindex/src/test/java/org/elasticsearch/index/reindex/ReindexFromRemoteBuildRestClientTests.java index d7d9cfe051b8c..c5957ef8be5a7 100644 --- 
a/modules/reindex/src/test/java/org/elasticsearch/index/reindex/ReindexFromRemoteBuildRestClientTests.java +++ b/modules/reindex/src/test/java/org/elasticsearch/index/reindex/ReindexFromRemoteBuildRestClientTests.java @@ -20,17 +20,21 @@ package org.elasticsearch.index.reindex; import org.elasticsearch.client.RestClient; +import org.elasticsearch.client.RestClientBuilderTestCase; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.test.ESTestCase; import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; import java.util.List; +import java.util.Map; import static java.util.Collections.emptyMap; import static java.util.Collections.synchronizedList; import static org.hamcrest.Matchers.hasSize; -public class ReindexFromRemoteBuildRestClientTests extends ESTestCase { +public class ReindexFromRemoteBuildRestClientTests extends RestClientBuilderTestCase { public void testBuildRestClient() throws Exception { RemoteInfo remoteInfo = new RemoteInfo("https", "localhost", 9200, new BytesArray("ignored"), null, null, emptyMap(), RemoteInfo.DEFAULT_SOCKET_TIMEOUT, RemoteInfo.DEFAULT_CONNECT_TIMEOUT); @@ -48,4 +52,22 @@ public void testBuildRestClient() throws Exception { client.close(); } } + + public void testHeaders() throws Exception { + Map headers = new HashMap<>(); + int numHeaders = randomIntBetween(1, 5); + for (int i = 0; i < numHeaders; ++i) { + headers.put("header" + i, Integer.toString(i)); + } + RemoteInfo remoteInfo = new RemoteInfo("https", "localhost", 9200, new BytesArray("ignored"), null, null, + headers, RemoteInfo.DEFAULT_SOCKET_TIMEOUT, RemoteInfo.DEFAULT_CONNECT_TIMEOUT); + long taskId = randomLong(); + List threads = synchronizedList(new ArrayList<>()); + RestClient client = TransportReindexAction.buildRestClient(remoteInfo, taskId, threads); + try { + assertHeaders(client, headers); + } finally { + client.close(); + } + } } diff --git a/modules/reindex/src/test/java/org/elasticsearch/index/reindex/ReindexFromRemoteWithAuthTests.java b/modules/reindex/src/test/java/org/elasticsearch/index/reindex/ReindexFromRemoteWithAuthTests.java index c4b5c26e5c4ef..31077c405d8e1 100644 --- a/modules/reindex/src/test/java/org/elasticsearch/index/reindex/ReindexFromRemoteWithAuthTests.java +++ b/modules/reindex/src/test/java/org/elasticsearch/index/reindex/ReindexFromRemoteWithAuthTests.java @@ -19,6 +19,7 @@ package org.elasticsearch.index.reindex; +import org.apache.lucene.util.SetOnce; import org.elasticsearch.ElasticsearchSecurityException; import org.elasticsearch.ElasticsearchStatusException; import org.elasticsearch.action.ActionListener; @@ -29,23 +30,31 @@ import org.elasticsearch.action.support.ActionFilter; import org.elasticsearch.action.support.ActionFilterChain; import org.elasticsearch.action.support.WriteRequest.RefreshPolicy; +import org.elasticsearch.client.Client; +import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.bytes.BytesArray; -import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.io.stream.NamedWriteableRegistry; import org.elasticsearch.common.network.NetworkModule; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.transport.TransportAddress; import org.elasticsearch.common.util.concurrent.ThreadContext; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; +import org.elasticsearch.env.Environment; +import org.elasticsearch.env.NodeEnvironment; import org.elasticsearch.plugins.ActionPlugin; import 
org.elasticsearch.plugins.Plugin; import org.elasticsearch.rest.RestStatus; +import org.elasticsearch.script.ScriptService; import org.elasticsearch.tasks.Task; import org.elasticsearch.test.ESSingleNodeTestCase; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.Netty4Plugin; +import org.elasticsearch.watcher.ResourceWatcherService; import org.junit.Before; import java.util.Arrays; import java.util.Collection; +import java.util.Collections; import java.util.List; import java.util.Map; @@ -129,9 +138,21 @@ public void testReindexWithBadAuthentication() throws Exception { * Plugin that demands authentication. */ public static class TestPlugin extends Plugin implements ActionPlugin { + + private final SetOnce testFilter = new SetOnce<>(); + + @Override + public Collection createComponents(Client client, ClusterService clusterService, ThreadPool threadPool, + ResourceWatcherService resourceWatcherService, ScriptService scriptService, + NamedXContentRegistry xContentRegistry, Environment environment, + NodeEnvironment nodeEnvironment, NamedWriteableRegistry namedWriteableRegistry) { + testFilter.set(new ReindexFromRemoteWithAuthTests.TestFilter(threadPool)); + return Collections.emptyList(); + } + @Override - public List> getActionFilters() { - return singletonList(ReindexFromRemoteWithAuthTests.TestFilter.class); + public List getActionFilters() { + return singletonList(testFilter.get()); } @Override @@ -153,7 +174,6 @@ public static class TestFilter implements ActionFilter { private static final String EXAMPLE_HEADER = "Example-Header"; private final ThreadContext context; - @Inject public TestFilter(ThreadPool threadPool) { context = threadPool.getThreadContext(); } diff --git a/modules/reindex/src/test/java/org/elasticsearch/index/reindex/ReindexParentChildTests.java b/modules/reindex/src/test/java/org/elasticsearch/index/reindex/ReindexParentChildTests.java index 14eb9245939fe..b7737beb33af6 100644 --- a/modules/reindex/src/test/java/org/elasticsearch/index/reindex/ReindexParentChildTests.java +++ b/modules/reindex/src/test/java/org/elasticsearch/index/reindex/ReindexParentChildTests.java @@ -21,6 +21,7 @@ import org.elasticsearch.Version; import org.elasticsearch.action.admin.indices.create.CreateIndexRequestBuilder; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.join.ParentJoinPlugin; @@ -113,7 +114,7 @@ public void testParentChild() throws Exception { */ public void testScriptAddsParent() throws Exception { assertAcked(client().admin().indices().prepareCreate("source") - .setSettings("index.version.created", Version.V_5_6_0.id)); // allows for multiple types + .setSettings(Settings.builder().put("index.version.created", Version.V_5_6_0.id))); // allows for multiple types createParentChildIndex("dest"); createParentChildDocs("source", false); @@ -149,7 +150,7 @@ public void testErrorMessageWhenBadParentChild() throws Exception { */ private void createParentChildIndex(String indexName) throws Exception { CreateIndexRequestBuilder create = client().admin().indices().prepareCreate(indexName); - create.setSettings("index.version.created", Version.V_5_6_0.id); // allows for multiple types + create.setSettings(Settings.builder().put("index.version.created", Version.V_5_6_0.id)); // allows for multiple types create.addMapping("city", "{\"_parent\": {\"type\": \"country\"}}", XContentType.JSON); create.addMapping("neighborhood", 
"{\"_parent\": {\"type\": \"city\"}}", XContentType.JSON); assertAcked(create); diff --git a/modules/reindex/src/test/java/org/elasticsearch/index/reindex/RestDeleteByQueryActionTests.java b/modules/reindex/src/test/java/org/elasticsearch/index/reindex/RestDeleteByQueryActionTests.java new file mode 100644 index 0000000000000..1f972cd282425 --- /dev/null +++ b/modules/reindex/src/test/java/org/elasticsearch/index/reindex/RestDeleteByQueryActionTests.java @@ -0,0 +1,41 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.index.reindex; + +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; +import org.elasticsearch.rest.RestController; +import org.elasticsearch.test.ESTestCase; +import org.elasticsearch.test.rest.FakeRestRequest; + +import java.io.IOException; + +import static java.util.Collections.emptyList; +import static org.mockito.Mockito.mock; + +public class RestDeleteByQueryActionTests extends ESTestCase { + public void testParseEmpty() throws IOException { + RestDeleteByQueryAction action = new RestDeleteByQueryAction(Settings.EMPTY, mock(RestController.class)); + DeleteByQueryRequest request = action.buildRequest(new FakeRestRequest.Builder(new NamedXContentRegistry(emptyList())) + .build()); + assertEquals(AbstractBulkByScrollRequest.SIZE_ALL_MATCHES, request.getSize()); + assertEquals(AbstractBulkByScrollRequest.DEFAULT_SCROLL_SIZE, request.getSearchRequest().source().size()); + } +} diff --git a/modules/reindex/src/test/java/org/elasticsearch/index/reindex/RestUpdateByQueryActionTests.java b/modules/reindex/src/test/java/org/elasticsearch/index/reindex/RestUpdateByQueryActionTests.java new file mode 100644 index 0000000000000..efb6e20a20089 --- /dev/null +++ b/modules/reindex/src/test/java/org/elasticsearch/index/reindex/RestUpdateByQueryActionTests.java @@ -0,0 +1,41 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.index.reindex; + +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; +import org.elasticsearch.rest.RestController; +import org.elasticsearch.test.ESTestCase; +import org.elasticsearch.test.rest.FakeRestRequest; + +import java.io.IOException; + +import static java.util.Collections.emptyList; +import static org.mockito.Mockito.mock; + +public class RestUpdateByQueryActionTests extends ESTestCase { + public void testParseEmpty() throws IOException { + RestUpdateByQueryAction action = new RestUpdateByQueryAction(Settings.EMPTY, mock(RestController.class)); + UpdateByQueryRequest request = action.buildRequest(new FakeRestRequest.Builder(new NamedXContentRegistry(emptyList())) + .build()); + assertEquals(AbstractBulkByScrollRequest.SIZE_ALL_MATCHES, request.getSize()); + assertEquals(AbstractBulkByScrollRequest.DEFAULT_SCROLL_SIZE, request.getSearchRequest().source().size()); + } +} diff --git a/modules/reindex/src/test/java/org/elasticsearch/index/reindex/RethrottleTests.java b/modules/reindex/src/test/java/org/elasticsearch/index/reindex/RethrottleTests.java index 5fd44a0063166..3ebd674a81eab 100644 --- a/modules/reindex/src/test/java/org/elasticsearch/index/reindex/RethrottleTests.java +++ b/modules/reindex/src/test/java/org/elasticsearch/index/reindex/RethrottleTests.java @@ -19,18 +19,20 @@ package org.elasticsearch.index.reindex; -import org.apache.lucene.util.LuceneTestCase.AwaitsFix; +import org.elasticsearch.ElasticsearchException; import org.elasticsearch.action.ActionFuture; import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksResponse; import org.elasticsearch.action.admin.cluster.node.tasks.list.TaskGroup; import org.elasticsearch.action.index.IndexRequestBuilder; import org.elasticsearch.index.query.QueryBuilders; import org.elasticsearch.tasks.TaskId; +import org.elasticsearch.test.junit.annotations.TestLogging; import java.util.ArrayList; import java.util.List; import java.util.Objects; import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicReference; import static org.hamcrest.Matchers.allOf; import static org.hamcrest.Matchers.both; @@ -46,7 +48,7 @@ * too but this is the only place that tests running against multiple nodes so it is the only integration tests that checks for * serialization. */ -@AwaitsFix(bugUrl = "https://github.com/elastic/elasticsearch/issues/26192") +@TestLogging("org.elasticsearch.index.reindex:TRACE,org.elasticsearch.action.bulk:TRACE,org.elasticsearch.search.SearchService:TRACE") public class RethrottleTests extends ReindexTestCase { public void testReindex() throws Exception { @@ -116,9 +118,7 @@ private void testCase(AbstractBulkByScrollRequestBuilder request, String a // Now rethrottle it so it'll finish float newRequestsPerSecond = randomBoolean() ? Float.POSITIVE_INFINITY : between(1, 1000) * 100000; // No throttle or "very fast" - ListTasksResponse rethrottleResponse = rethrottle().setTaskId(taskToRethrottle).setRequestsPerSecond(newRequestsPerSecond).get(); - rethrottleResponse.rethrowFailures("Rethrottle"); - assertThat(rethrottleResponse.getTasks(), hasSize(1)); + ListTasksResponse rethrottleResponse = rethrottleTask(taskToRethrottle, newRequestsPerSecond); BulkByScrollTask.Status status = (BulkByScrollTask.Status) rethrottleResponse.getTasks().get(0).getStatus(); // Now check the resulting requests per second. 
@@ -174,6 +174,33 @@ private void testCase(AbstractBulkByScrollRequestBuilder request, String a response.getBatches(), greaterThanOrEqualTo(numSlices)); } + private ListTasksResponse rethrottleTask(TaskId taskToRethrottle, float newRequestsPerSecond) throws Exception { + // the task isn't ready to be rethrottled until it has figured out how many slices it will use. if we rethrottle when the task is + // in this state, the request will fail. so we try a few times + AtomicReference response = new AtomicReference<>(); + + assertBusy(() -> { + try { + ListTasksResponse rethrottleResponse = rethrottle() + .setTaskId(taskToRethrottle) + .setRequestsPerSecond(newRequestsPerSecond) + .get(); + rethrottleResponse.rethrowFailures("Rethrottle"); + assertThat(rethrottleResponse.getTasks(), hasSize(1)); + response.set(rethrottleResponse); + } catch (ElasticsearchException e) { + // if it's the error we're expecting, rethrow as AssertionError so awaitBusy doesn't exit early + if (e.getCause() instanceof IllegalArgumentException) { + throw new AssertionError("Rethrottle request for task [" + taskToRethrottle.getId() + "] failed", e); + } else { + throw e; + } + } + }); + + return response.get(); + } + private TaskGroup findTaskToRethrottle(String actionName, int sliceCount) { long start = System.nanoTime(); do { diff --git a/modules/reindex/src/test/java/org/elasticsearch/index/reindex/RetryTests.java b/modules/reindex/src/test/java/org/elasticsearch/index/reindex/RetryTests.java index 9b19c572c0b3e..da0dbf2aae345 100644 --- a/modules/reindex/src/test/java/org/elasticsearch/index/reindex/RetryTests.java +++ b/modules/reindex/src/test/java/org/elasticsearch/index/reindex/RetryTests.java @@ -26,6 +26,7 @@ import org.elasticsearch.action.bulk.BulkRequestBuilder; import org.elasticsearch.action.bulk.BulkResponse; import org.elasticsearch.action.bulk.Retry; +import org.elasticsearch.client.Client; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.network.NetworkModule; import org.elasticsearch.common.settings.Settings; @@ -33,16 +34,17 @@ import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException; import org.elasticsearch.index.query.QueryBuilders; import org.elasticsearch.plugins.Plugin; -import org.elasticsearch.test.ESSingleNodeTestCase; +import org.elasticsearch.test.ESIntegTestCase; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.Netty4Plugin; import org.junit.After; -import org.junit.Before; import java.util.ArrayList; +import java.util.Arrays; import java.util.Collection; import java.util.List; import java.util.concurrent.CyclicBarrier; +import java.util.function.Function; import static java.util.Collections.emptyMap; import static org.elasticsearch.index.reindex.ReindexTestCase.matcher; @@ -51,32 +53,15 @@ import static org.hamcrest.Matchers.hasSize; /** - * Integration test for retry behavior. Useful because retrying relies on the way that the rest of Elasticsearch throws exceptions and unit - * tests won't verify that. + * Integration test for retry behavior. Useful because retrying relies on the way that the + * rest of Elasticsearch throws exceptions and unit tests won't verify that. */ -public class RetryTests extends ESSingleNodeTestCase { +public class RetryTests extends ESIntegTestCase { private static final int DOC_COUNT = 20; private List blockedExecutors = new ArrayList<>(); - - @Before - public void setUp() throws Exception { - super.setUp(); - createIndex("source"); - // Build the test data. 
Don't use indexRandom because that won't work consistently with such small thread pools. - BulkRequestBuilder bulk = client().prepareBulk(); - for (int i = 0; i < DOC_COUNT; i++) { - bulk.add(client().prepareIndex("source", "test").setSource("foo", "bar " + i)); - } - - Retry retry = new Retry(EsRejectedExecutionException.class, BackoffPolicy.exponentialBackoff(), client().threadPool()); - BulkResponse response = retry.withBackoff(client()::bulk, bulk.request(), client().settings()).actionGet(); - assertFalse(response.buildFailureMessage(), response.hasFailures()); - client().admin().indices().prepareRefresh("source").get(); - } - @After public void forceUnblockAllExecutors() { for (CyclicBarrier barrier: blockedExecutors) { @@ -85,8 +70,15 @@ public void forceUnblockAllExecutors() { } @Override - protected Collection> getPlugins() { - return pluginList( + protected Collection> nodePlugins() { + return Arrays.asList( + ReindexPlugin.class, + Netty4Plugin.class); + } + + @Override + protected Collection> transportClientPlugins() { + return Arrays.asList( ReindexPlugin.class, Netty4Plugin.class); } @@ -95,63 +87,123 @@ protected Collection> getPlugins() { * Lower the queue sizes to be small enough that both bulk and searches will time out and have to be retried. */ @Override - protected Settings nodeSettings() { - Settings.Builder settings = Settings.builder().put(super.nodeSettings()); - // Use pools of size 1 so we can block them - settings.put("thread_pool.bulk.size", 1); - settings.put("thread_pool.search.size", 1); - // Use queues of size 1 because size 0 is broken and because search requests need the queue to function - settings.put("thread_pool.bulk.queue_size", 1); - settings.put("thread_pool.search.queue_size", 1); - // Enable http so we can test retries on reindex from remote. In this case the "remote" cluster is just this cluster. 
- settings.put(NetworkModule.HTTP_ENABLED.getKey(), true); - // Whitelist reindexing from the http host we're going to use - settings.put(TransportReindexAction.REMOTE_CLUSTER_WHITELIST.getKey(), "127.0.0.1:*"); - return settings.build(); + protected Settings nodeSettings(int nodeOrdinal) { + return Settings.builder().put(super.nodeSettings(nodeOrdinal)).put(nodeSettings()).build(); + } + + final Settings nodeSettings() { + return Settings.builder() + // enable HTTP so we can test retries on reindex from remote; in this case the "remote" cluster is just this cluster + .put(NetworkModule.HTTP_ENABLED.getKey(), true) + // whitelist reindexing from the HTTP host we're going to use + .put(TransportReindexAction.REMOTE_CLUSTER_WHITELIST.getKey(), "127.0.0.1:*") + .build(); } public void testReindex() throws Exception { - testCase(ReindexAction.NAME, ReindexAction.INSTANCE.newRequestBuilder(client()).source("source").destination("dest"), + testCase( + ReindexAction.NAME, + client -> ReindexAction.INSTANCE.newRequestBuilder(client).source("source").destination("dest"), matcher().created(DOC_COUNT)); } public void testReindexFromRemote() throws Exception { - NodeInfo nodeInfo = client().admin().cluster().prepareNodesInfo().get().getNodes().get(0); - TransportAddress address = nodeInfo.getHttp().getAddress().publishAddress(); - RemoteInfo remote = new RemoteInfo("http", address.getAddress(), address.getPort(), new BytesArray("{\"match_all\":{}}"), null, - null, emptyMap(), RemoteInfo.DEFAULT_SOCKET_TIMEOUT, RemoteInfo.DEFAULT_CONNECT_TIMEOUT); - ReindexRequestBuilder request = ReindexAction.INSTANCE.newRequestBuilder(client()).source("source").destination("dest") - .setRemoteInfo(remote); - testCase(ReindexAction.NAME, request, matcher().created(DOC_COUNT)); + Function> function = client -> { + /* + * Use the master node for the reindex from remote because that node + * doesn't have a copy of the data on it. 
+ */ + NodeInfo masterNode = null; + for (NodeInfo candidate : client.admin().cluster().prepareNodesInfo().get().getNodes()) { + if (candidate.getNode().isMasterNode()) { + masterNode = candidate; + } + } + assertNotNull(masterNode); + + TransportAddress address = masterNode.getHttp().getAddress().publishAddress(); + RemoteInfo remote = new RemoteInfo("http", address.getAddress(), address.getPort(), new BytesArray("{\"match_all\":{}}"), null, + null, emptyMap(), RemoteInfo.DEFAULT_SOCKET_TIMEOUT, RemoteInfo.DEFAULT_CONNECT_TIMEOUT); + ReindexRequestBuilder request = ReindexAction.INSTANCE.newRequestBuilder(client).source("source").destination("dest") + .setRemoteInfo(remote); + return request; + }; + testCase(ReindexAction.NAME, function, matcher().created(DOC_COUNT)); } public void testUpdateByQuery() throws Exception { - testCase(UpdateByQueryAction.NAME, UpdateByQueryAction.INSTANCE.newRequestBuilder(client()).source("source"), + testCase(UpdateByQueryAction.NAME, client -> UpdateByQueryAction.INSTANCE.newRequestBuilder(client).source("source"), matcher().updated(DOC_COUNT)); } public void testDeleteByQuery() throws Exception { - testCase(DeleteByQueryAction.NAME, DeleteByQueryAction.INSTANCE.newRequestBuilder(client()).source("source") + testCase(DeleteByQueryAction.NAME, client -> DeleteByQueryAction.INSTANCE.newRequestBuilder(client).source("source") .filter(QueryBuilders.matchAllQuery()), matcher().deleted(DOC_COUNT)); } - private void testCase(String action, AbstractBulkByScrollRequestBuilder request, BulkIndexByScrollResponseMatcher matcher) + private void testCase( + String action, + Function> request, + BulkIndexByScrollResponseMatcher matcher) throws Exception { + /* + * These test cases work by stuffing the search and bulk queues of a single node and + * making sure that we read and write from that node. Because of some "fun" with the + * way that searches work, we need at least one more node to act as the coordinating + * node for the search request. If we didn't do this then the searches would get stuck + * in the queue anyway because we force queue portions of the coordinating node's + * actions. This is not a big deal in normal operations but a real pain when you are + * intentionally stuffing queues hoping for a failure. + */ + + final Settings nodeSettings = Settings.builder() + // use pools of size 1 so we can block them + .put("thread_pool.bulk.size", 1) + .put("thread_pool.search.size", 1) + // use queues of size 1 because size 0 is broken and because search requests need the queue to function + .put("thread_pool.bulk.queue_size", 1) + .put("thread_pool.search.queue_size", 1) + .put("node.attr.color", "blue") + .build(); + final String node = internalCluster().startDataOnlyNode(nodeSettings); + final Settings indexSettings = + Settings.builder() + .put("index.number_of_shards", 1) + .put("index.number_of_replicas", 0) + .put("index.routing.allocation.include.color", "blue") + .build(); + + // Create the source index on the node with small thread pools so we can block them. + client().admin().indices().prepareCreate("source").setSettings(indexSettings).execute().actionGet(); + // Not all test cases use the dest index but those that do require that it be on the node with small thread pools + client().admin().indices().prepareCreate("dest").setSettings(indexSettings).execute().actionGet(); + // Build the test data. Don't use indexRandom because that won't work consistently with such small thread pools.
+ BulkRequestBuilder bulk = client().prepareBulk(); + for (int i = 0; i < DOC_COUNT; i++) { + bulk.add(client().prepareIndex("source", "test").setSource("foo", "bar " + i)); + } + + Retry retry = new Retry(EsRejectedExecutionException.class, BackoffPolicy.exponentialBackoff(), client().threadPool()); + BulkResponse initialBulkResponse = retry.withBackoff(client()::bulk, bulk.request(), client().settings()).actionGet(); + assertFalse(initialBulkResponse.buildFailureMessage(), initialBulkResponse.hasFailures()); + client().admin().indices().prepareRefresh("source").get(); + logger.info("Blocking search"); - CyclicBarrier initialSearchBlock = blockExecutor(ThreadPool.Names.SEARCH); + CyclicBarrier initialSearchBlock = blockExecutor(ThreadPool.Names.SEARCH, node); + AbstractBulkByScrollRequestBuilder builder = request.apply(internalCluster().masterClient()); // Make sure we use more than one batch so we have to scroll - request.source().setSize(DOC_COUNT / randomIntBetween(2, 10)); + builder.source().setSize(DOC_COUNT / randomIntBetween(2, 10)); logger.info("Starting request"); - ActionFuture responseListener = request.execute(); + ActionFuture responseListener = builder.execute(); try { logger.info("Waiting for search rejections on the initial search"); assertBusy(() -> assertThat(taskStatus(action).getSearchRetries(), greaterThan(0L))); logger.info("Blocking bulk and unblocking search so we start to get bulk rejections"); - CyclicBarrier bulkBlock = blockExecutor(ThreadPool.Names.BULK); + CyclicBarrier bulkBlock = blockExecutor(ThreadPool.Names.BULK, node); initialSearchBlock.await(); logger.info("Waiting for bulk rejections"); @@ -161,7 +213,7 @@ private void testCase(String action, AbstractBulkByScrollRequestBuilder re long initialSearchRejections = taskStatus(action).getSearchRetries(); logger.info("Blocking search and unblocking bulk so we should get search rejections for the scroll"); - CyclicBarrier scrollBlock = blockExecutor(ThreadPool.Names.SEARCH); + CyclicBarrier scrollBlock = blockExecutor(ThreadPool.Names.SEARCH, node); bulkBlock.await(); logger.info("Waiting for search rejections for the scroll"); @@ -187,8 +239,8 @@ private void testCase(String action, AbstractBulkByScrollRequestBuilder re * Blocks the named executor by getting its only thread running a task blocked on a CyclicBarrier and fills the queue with a noop task. * So requests to use this queue should get {@link EsRejectedExecutionException}s. */ - private CyclicBarrier blockExecutor(String name) throws Exception { - ThreadPool threadPool = getInstanceFromNode(ThreadPool.class); + private CyclicBarrier blockExecutor(String name, String node) throws Exception { + ThreadPool threadPool = internalCluster().getInstance(ThreadPool.class, node); CyclicBarrier barrier = new CyclicBarrier(2); logger.info("Blocking the [{}] executor", name); threadPool.executor(name).execute(() -> { @@ -211,6 +263,11 @@ private CyclicBarrier blockExecutor(String name) throws Exception { * Fetch the status for a task of type "action". Fails if there aren't exactly one of that type of task running. */ private BulkByScrollTask.Status taskStatus(String action) { + /* + * We always use the master client because we always start the test requests on the + * master. We do this simply to make sure that the test request is not started on the + * node whose queue we're manipulating.
+ */ ListTasksResponse response = client().admin().cluster().prepareListTasks().setActions(action).setDetailed(true).get(); assertThat(response.getTasks(), hasSize(1)); return (BulkByScrollTask.Status) response.getTasks().get(0).getStatus(); diff --git a/modules/reindex/src/test/java/org/elasticsearch/index/reindex/remote/RemoteRequestBuildersTests.java b/modules/reindex/src/test/java/org/elasticsearch/index/reindex/remote/RemoteRequestBuildersTests.java index a65ef21663f0e..9cb644162da40 100644 --- a/modules/reindex/src/test/java/org/elasticsearch/index/reindex/remote/RemoteRequestBuildersTests.java +++ b/modules/reindex/src/test/java/org/elasticsearch/index/reindex/remote/RemoteRequestBuildersTests.java @@ -19,8 +19,8 @@ package org.elasticsearch.index.reindex.remote; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.entity.ContentType; +import org.apache.http.HttpEntity; +import org.apache.http.entity.ContentType; import org.elasticsearch.Version; import org.elasticsearch.action.search.SearchRequest; import org.elasticsearch.common.bytes.BytesArray; diff --git a/modules/reindex/src/test/java/org/elasticsearch/index/reindex/remote/RemoteScrollableHitSourceTests.java b/modules/reindex/src/test/java/org/elasticsearch/index/reindex/remote/RemoteScrollableHitSourceTests.java index 211dd5d0d0062..f67a5b627fb4c 100644 --- a/modules/reindex/src/test/java/org/elasticsearch/index/reindex/remote/RemoteScrollableHitSourceTests.java +++ b/modules/reindex/src/test/java/org/elasticsearch/index/reindex/remote/RemoteScrollableHitSourceTests.java @@ -19,24 +19,24 @@ package org.elasticsearch.index.reindex.remote; -import org.elasticsearch.client.http.ContentTooLongException; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.HttpEntityEnclosingRequest; -import org.elasticsearch.client.http.HttpHost; -import org.elasticsearch.client.http.HttpResponse; -import org.elasticsearch.client.http.ProtocolVersion; -import org.elasticsearch.client.http.StatusLine; -import org.elasticsearch.client.http.client.protocol.HttpClientContext; -import org.elasticsearch.client.http.concurrent.FutureCallback; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.entity.InputStreamEntity; -import org.elasticsearch.client.http.entity.StringEntity; -import org.elasticsearch.client.http.impl.nio.client.CloseableHttpAsyncClient; -import org.elasticsearch.client.http.impl.nio.client.HttpAsyncClientBuilder; -import org.elasticsearch.client.http.message.BasicHttpResponse; -import org.elasticsearch.client.http.message.BasicStatusLine; -import org.elasticsearch.client.http.nio.protocol.HttpAsyncRequestProducer; -import org.elasticsearch.client.http.nio.protocol.HttpAsyncResponseConsumer; +import org.apache.http.ContentTooLongException; +import org.apache.http.HttpEntity; +import org.apache.http.HttpEntityEnclosingRequest; +import org.apache.http.HttpHost; +import org.apache.http.HttpResponse; +import org.apache.http.ProtocolVersion; +import org.apache.http.StatusLine; +import org.apache.http.client.protocol.HttpClientContext; +import org.apache.http.concurrent.FutureCallback; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.InputStreamEntity; +import org.apache.http.entity.StringEntity; +import org.apache.http.impl.nio.client.CloseableHttpAsyncClient; +import org.apache.http.impl.nio.client.HttpAsyncClientBuilder; +import org.apache.http.message.BasicHttpResponse; +import 
org.apache.http.message.BasicStatusLine; +import org.apache.http.nio.protocol.HttpAsyncRequestProducer; +import org.apache.http.nio.protocol.HttpAsyncResponseConsumer; import org.elasticsearch.ElasticsearchStatusException; import org.elasticsearch.Version; import org.elasticsearch.action.bulk.BackoffPolicy; diff --git a/modules/repository-url/src/test/java/org/elasticsearch/repositories/url/URLRepositoryTests.java b/modules/repository-url/src/test/java/org/elasticsearch/repositories/url/URLRepositoryTests.java index ea274eeae602a..1af4c1eaba9ad 100644 --- a/modules/repository-url/src/test/java/org/elasticsearch/repositories/url/URLRepositoryTests.java +++ b/modules/repository-url/src/test/java/org/elasticsearch/repositories/url/URLRepositoryTests.java @@ -23,6 +23,7 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.env.Environment; +import org.elasticsearch.env.TestEnvironment; import org.elasticsearch.repositories.RepositoryException; import org.elasticsearch.test.ESTestCase; @@ -40,7 +41,8 @@ public void testWhiteListingRepoURL() throws IOException { .put(URLRepository.REPOSITORIES_URL_SETTING.getKey(), repoPath) .build(); RepositoryMetaData repositoryMetaData = new RepositoryMetaData("url", URLRepository.TYPE, baseSettings); - new URLRepository(repositoryMetaData, new Environment(baseSettings), new NamedXContentRegistry(Collections.emptyList())); + new URLRepository(repositoryMetaData, TestEnvironment.newEnvironment(baseSettings), + new NamedXContentRegistry(Collections.emptyList())); } public void testIfNotWhiteListedMustSetRepoURL() throws IOException { @@ -51,7 +53,8 @@ public void testIfNotWhiteListedMustSetRepoURL() throws IOException { .build(); RepositoryMetaData repositoryMetaData = new RepositoryMetaData("url", URLRepository.TYPE, baseSettings); try { - new URLRepository(repositoryMetaData, new Environment(baseSettings), new NamedXContentRegistry(Collections.emptyList())); + new URLRepository(repositoryMetaData, TestEnvironment.newEnvironment(baseSettings), + new NamedXContentRegistry(Collections.emptyList())); fail("RepositoryException should have been thrown."); } catch (RepositoryException e) { String msg = "[url] file url [" + repoPath @@ -71,7 +74,8 @@ public void testMustBeSupportedProtocol() throws IOException { .build(); RepositoryMetaData repositoryMetaData = new RepositoryMetaData("url", URLRepository.TYPE, baseSettings); try { - new URLRepository(repositoryMetaData, new Environment(baseSettings), new NamedXContentRegistry(Collections.emptyList())); + new URLRepository(repositoryMetaData, TestEnvironment.newEnvironment(baseSettings), + new NamedXContentRegistry(Collections.emptyList())); fail("RepositoryException should have been thrown."); } catch (RepositoryException e) { assertEquals("[url] unsupported url protocol [file] from URL [" + repoPath +"]", e.getMessage()); diff --git a/modules/repository-url/src/test/java/org/elasticsearch/repositories/url/URLSnapshotRestoreTests.java b/modules/repository-url/src/test/java/org/elasticsearch/repositories/url/URLSnapshotRestoreTests.java index 035eb2a26d64c..9eb5e37a51c81 100644 --- a/modules/repository-url/src/test/java/org/elasticsearch/repositories/url/URLSnapshotRestoreTests.java +++ b/modules/repository-url/src/test/java/org/elasticsearch/repositories/url/URLSnapshotRestoreTests.java @@ -99,7 +99,7 @@ public void testUrlRepository() throws Exception { logger.info("--> create read-only URL repository"); 
assertAcked(client.admin().cluster().preparePutRepository("url-repo") .setType(URLRepository.TYPE).setSettings(Settings.builder() - .put(URLRepository.URL_SETTING.getKey(), repositoryLocation.toUri().toURL()) + .put(URLRepository.URL_SETTING.getKey(), repositoryLocation.toUri().toURL().toString()) .put("list_directories", randomBoolean()))); logger.info("--> restore index after deletion"); RestoreSnapshotResponse restoreSnapshotResponse = client diff --git a/modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/Netty4HttpRequestHandler.java b/modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/Netty4HttpRequestHandler.java index b31c412920aa1..6da0f5433bae6 100644 --- a/modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/Netty4HttpRequestHandler.java +++ b/modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/Netty4HttpRequestHandler.java @@ -64,7 +64,15 @@ protected void channelRead0(ChannelHandlerContext ctx, Object msg) throws Except Unpooled.copiedBuffer(request.content()), request.headers(), request.trailingHeaders()); - final Netty4HttpRequest httpRequest = new Netty4HttpRequest(serverTransport.xContentRegistry, copy, ctx.channel()); + final Netty4HttpRequest httpRequest; + try { + httpRequest = new Netty4HttpRequest(serverTransport.xContentRegistry, copy, ctx.channel()); + } catch (Exception ex) { + if (pipelinedRequest != null) { + pipelinedRequest.release(); + } + throw ex; + } final Netty4HttpChannel channel = new Netty4HttpChannel(serverTransport, httpRequest, pipelinedRequest, detailedErrorsEnabled, threadContext); diff --git a/modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/Netty4HttpServerTransport.java b/modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/Netty4HttpServerTransport.java index 83907c56e64ab..893efeb6957ae 100644 --- a/modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/Netty4HttpServerTransport.java +++ b/modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/Netty4HttpServerTransport.java @@ -22,7 +22,6 @@ import com.carrotsearch.hppc.IntHashSet; import com.carrotsearch.hppc.IntSet; import io.netty.bootstrap.ServerBootstrap; -import io.netty.channel.AdaptiveRecvByteBufAllocator; import io.netty.channel.Channel; import io.netty.channel.ChannelFuture; import io.netty.channel.ChannelHandler; @@ -88,7 +87,6 @@ import java.util.regex.Pattern; import static org.elasticsearch.common.settings.Setting.boolSetting; -import static org.elasticsearch.common.settings.Setting.byteSizeSetting; import static org.elasticsearch.common.util.concurrent.EsExecutors.daemonThreadFactory; import static org.elasticsearch.http.HttpTransportSettings.SETTING_CORS_ALLOW_CREDENTIALS; import static org.elasticsearch.http.HttpTransportSettings.SETTING_CORS_ALLOW_HEADERS; @@ -108,6 +106,11 @@ import static org.elasticsearch.http.HttpTransportSettings.SETTING_HTTP_PUBLISH_HOST; import static org.elasticsearch.http.HttpTransportSettings.SETTING_HTTP_PUBLISH_PORT; import static org.elasticsearch.http.HttpTransportSettings.SETTING_HTTP_RESET_COOKIES; +import static org.elasticsearch.http.HttpTransportSettings.SETTING_HTTP_TCP_KEEP_ALIVE; +import static org.elasticsearch.http.HttpTransportSettings.SETTING_HTTP_TCP_NO_DELAY; +import static org.elasticsearch.http.HttpTransportSettings.SETTING_HTTP_TCP_RECEIVE_BUFFER_SIZE; +import static org.elasticsearch.http.HttpTransportSettings.SETTING_HTTP_TCP_REUSE_ADDRESS; +import static 
org.elasticsearch.http.HttpTransportSettings.SETTING_HTTP_TCP_SEND_BUFFER_SIZE; import static org.elasticsearch.http.HttpTransportSettings.SETTING_PIPELINING; import static org.elasticsearch.http.HttpTransportSettings.SETTING_PIPELINING_MAX_EVENTS; import static org.elasticsearch.http.netty4.cors.Netty4CorsHandler.ANY_ORIGIN; @@ -125,23 +128,8 @@ public class Netty4HttpServerTransport extends AbstractLifecycleComponent implem (s) -> Integer.toString(EsExecutors.numberOfProcessors(s) * 2), (s) -> Setting.parseInt(s, 1, "http.netty.worker_count"), Property.NodeScope); - public static final Setting SETTING_HTTP_TCP_NO_DELAY = - boolSetting("http.tcp_no_delay", NetworkService.TCP_NO_DELAY, Property.NodeScope); - public static final Setting SETTING_HTTP_TCP_KEEP_ALIVE = - boolSetting("http.tcp.keep_alive", NetworkService.TCP_KEEP_ALIVE, Property.NodeScope); - public static final Setting SETTING_HTTP_TCP_REUSE_ADDRESS = - boolSetting("http.tcp.reuse_address", NetworkService.TCP_REUSE_ADDRESS, Property.NodeScope); - - public static final Setting SETTING_HTTP_TCP_SEND_BUFFER_SIZE = - Setting.byteSizeSetting("http.tcp.send_buffer_size", NetworkService.TCP_SEND_BUFFER_SIZE, Property.NodeScope); - public static final Setting SETTING_HTTP_TCP_RECEIVE_BUFFER_SIZE = - Setting.byteSizeSetting("http.tcp.receive_buffer_size", NetworkService.TCP_RECEIVE_BUFFER_SIZE, Property.NodeScope); public static final Setting SETTING_HTTP_NETTY_RECEIVE_PREDICTOR_SIZE = Setting.byteSizeSetting("http.netty.receive_predictor_size", new ByteSizeValue(64, ByteSizeUnit.KB), Property.NodeScope); - public static final Setting SETTING_HTTP_NETTY_RECEIVE_PREDICTOR_MIN = - byteSizeSetting("http.netty.receive_predictor_min", SETTING_HTTP_NETTY_RECEIVE_PREDICTOR_SIZE, Property.NodeScope); - public static final Setting SETTING_HTTP_NETTY_RECEIVE_PREDICTOR_MAX = - byteSizeSetting("http.netty.receive_predictor_max", SETTING_HTTP_NETTY_RECEIVE_PREDICTOR_SIZE, Property.NodeScope); protected final NetworkService networkService; @@ -233,17 +221,8 @@ public Netty4HttpServerTransport(Settings settings, NetworkService networkServic this.tcpReceiveBufferSize = SETTING_HTTP_TCP_RECEIVE_BUFFER_SIZE.get(settings); this.detailedErrorsEnabled = SETTING_HTTP_DETAILED_ERRORS_ENABLED.get(settings); - // See AdaptiveReceiveBufferSizePredictor#DEFAULT_XXX for default values in netty..., we can use higher ones for us, even fixed one - ByteSizeValue receivePredictorMin = SETTING_HTTP_NETTY_RECEIVE_PREDICTOR_MIN.get(settings); - ByteSizeValue receivePredictorMax = SETTING_HTTP_NETTY_RECEIVE_PREDICTOR_MAX.get(settings); - if (receivePredictorMax.getBytes() == receivePredictorMin.getBytes()) { - recvByteBufAllocator = new FixedRecvByteBufAllocator(Math.toIntExact(receivePredictorMax.getBytes())); - } else { - recvByteBufAllocator = new AdaptiveRecvByteBufAllocator( - Math.toIntExact(receivePredictorMin.getBytes()), - Math.toIntExact(receivePredictorMin.getBytes()), - Math.toIntExact(receivePredictorMax.getBytes())); - } + ByteSizeValue receivePredictor = SETTING_HTTP_NETTY_RECEIVE_PREDICTOR_SIZE.get(settings); + recvByteBufAllocator = new FixedRecvByteBufAllocator(receivePredictor.bytesAsInt()); this.compression = SETTING_HTTP_COMPRESSION.get(settings); this.compressionLevel = SETTING_HTTP_COMPRESSION_LEVEL.get(settings); @@ -259,9 +238,8 @@ public Netty4HttpServerTransport(Settings settings, NetworkService networkServic this.maxContentLength = maxContentLength; logger.debug("using max_chunk_size[{}], max_header_size[{}], max_initial_line_length[{}], 
max_content_length[{}], " + - "receive_predictor[{}->{}], pipelining[{}], pipelining_max_events[{}]", - maxChunkSize, maxHeaderSize, maxInitialLineLength, this.maxContentLength, - receivePredictorMin, receivePredictorMax, pipelining, pipeliningMaxEvents); + "receive_predictor[{}], pipelining[{}], pipelining_max_events[{}]", + maxChunkSize, maxHeaderSize, maxInitialLineLength, this.maxContentLength, receivePredictor, pipelining, pipeliningMaxEvents); } public Settings settings() { @@ -566,7 +544,7 @@ protected void initChannel(Channel ch) throws Exception { ch.pipeline().addLast("cors", new Netty4CorsHandler(transport.getCorsConfig())); } if (transport.pipelining) { - ch.pipeline().addLast("pipelining", new HttpPipeliningHandler(transport.pipeliningMaxEvents)); + ch.pipeline().addLast("pipelining", new HttpPipeliningHandler(transport.logger, transport.pipeliningMaxEvents)); } ch.pipeline().addLast("handler", requestHandler); } diff --git a/modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/pipelining/HttpPipeliningHandler.java b/modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/pipelining/HttpPipeliningHandler.java index 54cdbd3ba9d47..a90027c81482b 100644 --- a/modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/pipelining/HttpPipeliningHandler.java +++ b/modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/pipelining/HttpPipeliningHandler.java @@ -23,8 +23,10 @@ import io.netty.channel.ChannelHandlerContext; import io.netty.channel.ChannelPromise; import io.netty.handler.codec.http.LastHttpContent; +import org.apache.logging.log4j.Logger; import org.elasticsearch.transport.netty4.Netty4Utils; +import java.nio.channels.ClosedChannelException; import java.util.Collections; import java.util.PriorityQueue; @@ -36,6 +38,7 @@ public class HttpPipeliningHandler extends ChannelDuplexHandler { // we use a priority queue so that responses are ordered by their sequence number private final PriorityQueue holdingQueue; + private final Logger logger; private final int maxEventsHeld; /* @@ -49,10 +52,12 @@ public class HttpPipeliningHandler extends ChannelDuplexHandler { /** * Construct a new pipelining handler; this handler should be used downstream of HTTP decoding/aggregation. 
* + * @param logger for logging unexpected errors * @param maxEventsHeld the maximum number of channel events that will be retained prior to aborting the channel connection; this is * required as events cannot queue up indefinitely */ - public HttpPipeliningHandler(final int maxEventsHeld) { + public HttpPipeliningHandler(Logger logger, final int maxEventsHeld) { + this.logger = logger; this.maxEventsHeld = maxEventsHeld; this.holdingQueue = new PriorityQueue<>(1); } @@ -120,4 +125,20 @@ public void write(final ChannelHandlerContext ctx, final Object msg, final Chann } } + @Override + public void close(ChannelHandlerContext ctx, ChannelPromise promise) throws Exception { + if (holdingQueue.isEmpty() == false) { + ClosedChannelException closedChannelException = new ClosedChannelException(); + HttpPipelinedResponse pipelinedResponse; + while ((pipelinedResponse = holdingQueue.poll()) != null) { + try { + pipelinedResponse.release(); + pipelinedResponse.promise().setFailure(closedChannelException); + } catch (Exception e) { + logger.error("unexpected error while releasing pipelined http responses", e); + } + } + } + ctx.close(promise); + } } diff --git a/modules/transport-netty4/src/main/java/org/elasticsearch/transport/Netty4Plugin.java b/modules/transport-netty4/src/main/java/org/elasticsearch/transport/Netty4Plugin.java index 61d818c0d323a..4c842d5a4dca7 100644 --- a/modules/transport-netty4/src/main/java/org/elasticsearch/transport/Netty4Plugin.java +++ b/modules/transport-netty4/src/main/java/org/elasticsearch/transport/Netty4Plugin.java @@ -55,14 +55,7 @@ public List> getSettings() { return Arrays.asList( Netty4HttpServerTransport.SETTING_HTTP_NETTY_MAX_COMPOSITE_BUFFER_COMPONENTS, Netty4HttpServerTransport.SETTING_HTTP_WORKER_COUNT, - Netty4HttpServerTransport.SETTING_HTTP_TCP_NO_DELAY, - Netty4HttpServerTransport.SETTING_HTTP_TCP_KEEP_ALIVE, - Netty4HttpServerTransport.SETTING_HTTP_TCP_REUSE_ADDRESS, - Netty4HttpServerTransport.SETTING_HTTP_TCP_SEND_BUFFER_SIZE, - Netty4HttpServerTransport.SETTING_HTTP_TCP_RECEIVE_BUFFER_SIZE, Netty4HttpServerTransport.SETTING_HTTP_NETTY_RECEIVE_PREDICTOR_SIZE, - Netty4HttpServerTransport.SETTING_HTTP_NETTY_RECEIVE_PREDICTOR_MIN, - Netty4HttpServerTransport.SETTING_HTTP_NETTY_RECEIVE_PREDICTOR_MAX, Netty4Transport.WORKER_COUNT, Netty4Transport.NETTY_RECEIVE_PREDICTOR_SIZE, Netty4Transport.NETTY_RECEIVE_PREDICTOR_MIN, diff --git a/modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/ByteBufStreamInput.java b/modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/ByteBufStreamInput.java index f29ca9c10ff2d..2713f34308575 100644 --- a/modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/ByteBufStreamInput.java +++ b/modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/ByteBufStreamInput.java @@ -33,7 +33,6 @@ class ByteBufStreamInput extends StreamInput { private final ByteBuf buffer; - private final int startIndex; private final int endIndex; ByteBufStreamInput(ByteBuf buffer, int length) { @@ -41,26 +40,27 @@ class ByteBufStreamInput extends StreamInput { throw new IndexOutOfBoundsException(); } this.buffer = buffer; - startIndex = buffer.readerIndex(); + int startIndex = buffer.readerIndex(); endIndex = startIndex + length; buffer.markReaderIndex(); } @Override public BytesReference readBytesReference(int length) throws IOException { - BytesReference ref = Netty4Utils.toBytesReference(buffer.slice(buffer.readerIndex(), length)); - buffer.skipBytes(length); - return ref; 
+ // NOTE: It is unsafe to share a reference of the internal structure, so we + // use the default implementation which will copy the bytes. It is unsafe because + // a netty ByteBuf might be pooled which requires a manual release to prevent + // memory leaks. + return super.readBytesReference(length); } @Override public BytesRef readBytesRef(int length) throws IOException { - if (!buffer.hasArray()) { - return super.readBytesRef(length); - } - BytesRef bytesRef = new BytesRef(buffer.array(), buffer.arrayOffset() + buffer.readerIndex(), length); - buffer.skipBytes(length); - return bytesRef; + // NOTE: It is unsafe to share a reference of the internal structure, so we + // use the default implementation which will copy the bytes. It is unsafe because + // a netty ByteBuf might be pooled which requires a manual release to prevent + // memory leaks. + return super.readBytesRef(length); } @Override diff --git a/modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/ESLoggingHandler.java b/modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/ESLoggingHandler.java new file mode 100644 index 0000000000000..47a31f268a6a8 --- /dev/null +++ b/modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/ESLoggingHandler.java @@ -0,0 +1,131 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.transport.netty4; + +import io.netty.buffer.ByteBuf; +import io.netty.channel.ChannelHandlerContext; +import io.netty.handler.logging.LogLevel; +import io.netty.handler.logging.LoggingHandler; +import io.netty.util.internal.StringUtil; +import org.elasticsearch.Version; +import org.elasticsearch.common.compress.Compressor; +import org.elasticsearch.common.compress.CompressorFactory; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.ThreadContext; +import org.elasticsearch.transport.TcpHeader; +import org.elasticsearch.transport.TcpTransport; +import org.elasticsearch.transport.TransportStatus; + +import java.io.IOException; +import java.io.UncheckedIOException; + +final class ESLoggingHandler extends LoggingHandler { + + ESLoggingHandler() { + super(LogLevel.TRACE); + } + + @Override + protected String format(final ChannelHandlerContext ctx, final String eventName, final Object arg) { + if (arg instanceof ByteBuf) { + try { + return format(ctx, eventName, (ByteBuf) arg); + } catch (final Exception e) { + // we really do not want to allow a bug in the formatting handling to escape + logger.trace("an exception occurred formatting a trace message", e); + // we are going to let this be formatted via the default formatting + return super.format(ctx, eventName, arg); + } + } else { + return super.format(ctx, eventName, arg); + } + } + + private static final int MESSAGE_LENGTH_OFFSET = TcpHeader.MARKER_BYTES_SIZE; + private static final int REQUEST_ID_OFFSET = MESSAGE_LENGTH_OFFSET + TcpHeader.MESSAGE_LENGTH_SIZE; + private static final int STATUS_OFFSET = REQUEST_ID_OFFSET + TcpHeader.REQUEST_ID_SIZE; + private static final int VERSION_ID_OFFSET = STATUS_OFFSET + TcpHeader.STATUS_SIZE; + private static final int ACTION_OFFSET = VERSION_ID_OFFSET + TcpHeader.VERSION_ID_SIZE; + + private String format(final ChannelHandlerContext ctx, final String eventName, final ByteBuf arg) throws IOException { + final int readableBytes = arg.readableBytes(); + if (readableBytes == 0) { + return super.format(ctx, eventName, arg); + } else if (readableBytes >= 2) { + final StringBuilder sb = new StringBuilder(); + sb.append(ctx.channel().toString()); + final int offset = arg.readerIndex(); + // this might be an ES message, check the header + if (arg.getByte(offset) == (byte) 'E' && arg.getByte(offset + 1) == (byte) 'S') { + if (readableBytes == TcpHeader.MARKER_BYTES_SIZE + TcpHeader.MESSAGE_LENGTH_SIZE) { + final int length = arg.getInt(offset + MESSAGE_LENGTH_OFFSET); + if (length == TcpTransport.PING_DATA_SIZE) { + sb.append(" [ping]").append(' ').append(eventName).append(": ").append(readableBytes).append('B'); + return sb.toString(); + } + } + else if (readableBytes >= TcpHeader.HEADER_SIZE) { + // we are going to try to decode this as an ES message + final int length = arg.getInt(offset + MESSAGE_LENGTH_OFFSET); + final long requestId = arg.getLong(offset + REQUEST_ID_OFFSET); + final byte status = arg.getByte(offset + STATUS_OFFSET); + final boolean isRequest = TransportStatus.isRequest(status); + final String type = isRequest ? 
"request" : "response"; + final String version = Version.fromId(arg.getInt(offset + VERSION_ID_OFFSET)).toString(); + sb.append(" [length: ").append(length); + sb.append(", request id: ").append(requestId); + sb.append(", type: ").append(type); + sb.append(", version: ").append(version); + if (isRequest) { + // it looks like an ES request, try to decode the action + final int remaining = readableBytes - ACTION_OFFSET; + final ByteBuf slice = arg.slice(offset + ACTION_OFFSET, remaining); + // the stream might be compressed + try (StreamInput in = in(status, slice, remaining)) { + // the first bytes in the message is the context headers + try (ThreadContext context = new ThreadContext(Settings.EMPTY)) { + context.readHeaders(in); + } + // now we can decode the action name + sb.append(", action: ").append(in.readString()); + } + } + sb.append(']'); + sb.append(' ').append(eventName).append(": ").append(readableBytes).append('B'); + return sb.toString(); + } + } + } + // we could not decode this as an ES message, use the default formatting + return super.format(ctx, eventName, arg); + } + + private StreamInput in(final Byte status, final ByteBuf slice, final int remaining) throws IOException { + final ByteBufStreamInput in = new ByteBufStreamInput(slice, remaining); + if (TransportStatus.isCompress(status)) { + final Compressor compressor = CompressorFactory.compressor(Netty4Utils.toBytesReference(slice)); + return compressor.streamInput(in); + } else { + return in; + } + } + +} diff --git a/modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/Netty4Transport.java b/modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/Netty4Transport.java index d92066e1fcc63..11e5d2f44a81a 100644 --- a/modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/Netty4Transport.java +++ b/modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/Netty4Transport.java @@ -320,18 +320,30 @@ protected void sendMessage(Channel channel, BytesReference reference, ActionList if (f.isSuccess()) { listener.onResponse(channel); } else { - Throwable cause = f.cause(); - // If the Throwable is an Error something has gone very wrong and Netty4MessageChannelHandler is - // going to cause that to bubble up and kill the process. - if (cause instanceof Exception) { - listener.onFailure((Exception) cause); - } + final Throwable cause = f.cause(); + Netty4Utils.maybeDie(cause); + logger.warn((Supplier) () -> + new ParameterizedMessage("write and flush on the network layer failed (channel: {})", channel), cause); + assert cause instanceof Exception; + listener.onFailure((Exception) cause); } }); } @Override - protected void closeChannels(final List channels, boolean blocking) throws IOException { + protected void closeChannels(final List channels, boolean blocking, boolean doNotLinger) throws IOException { + if (doNotLinger) { + for (Channel channel : channels) { + /* We set SO_LINGER timeout to 0 to ensure that when we shutdown the node we don't have a gazillion connections sitting + * in TIME_WAIT to free up resources quickly. This is really the only part where we close the connection from the server + * side otherwise the client (node) initiates the TCP closing sequence which doesn't cause these issues. 
Setting this + * by default from the beginning can have unexpected side-effects and should be avoided; our protocol is designed + * in a way that clients close the connection, which is how it should be. */ + if (channel.isOpen()) { + channel.config().setOption(ChannelOption.SO_LINGER, 0); + } + } + } if (blocking) { Netty4Utils.closeChannels(channels); } else { @@ -397,6 +409,7 @@ protected class ClientChannelInitializer extends ChannelInitializer { @Override protected void initChannel(Channel ch) throws Exception { + ch.pipeline().addLast("logging", new ESLoggingHandler()); ch.pipeline().addLast("size", new Netty4SizeHeaderFrameDecoder()); // using a dot as a prefix means this cannot come from any settings parsed ch.pipeline().addLast("dispatcher", new Netty4MessageChannelHandler(Netty4Transport.this, ".client")); @@ -420,6 +433,7 @@ protected ServerChannelInitializer(String name) { @Override protected void initChannel(Channel ch) throws Exception { + ch.pipeline().addLast("logging", new ESLoggingHandler()); ch.pipeline().addLast("open_channels", Netty4Transport.this.serverOpenChannels); ch.pipeline().addLast("size", new Netty4SizeHeaderFrameDecoder()); ch.pipeline().addLast("dispatcher", new Netty4MessageChannelHandler(Netty4Transport.this, name)); diff --git a/modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/Netty4Utils.java b/modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/Netty4Utils.java index d71e1ee937690..05295c1d4da4e 100644 --- a/modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/Netty4Utils.java +++ b/modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/Netty4Utils.java @@ -27,6 +27,7 @@ import io.netty.util.NettyRuntime; import io.netty.util.internal.logging.InternalLogger; import io.netty.util.internal.logging.InternalLoggerFactory; +import org.apache.logging.log4j.Logger; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.BytesRefIterator; import org.elasticsearch.common.Booleans; @@ -37,8 +38,12 @@ import java.util.ArrayList; import java.util.Arrays; import java.util.Collection; +import java.util.Collections; +import java.util.LinkedList; import java.util.List; import java.util.Locale; +import java.util.Optional; +import java.util.Queue; import java.util.concurrent.atomic.AtomicBoolean; import java.util.stream.Collectors; @@ -167,7 +172,8 @@ public static void closeChannels(final Collection channels) throws IOEx * @param cause the throwable to test */ public static void maybeDie(final Throwable cause) { - if (cause instanceof Error) { + final Optional maybeError = maybeError(cause); + if (maybeError.isPresent()) { /* * Here be dragons. We want to rethrow this so that it bubbles up to the uncaught exception handler. Yet, Netty wraps too many * invocations of user-code in try/catch blocks that swallow all throwables.
This means that a rethrow here will not bubble up @@ -178,15 +184,52 @@ public static void maybeDie(final Throwable cause) { // try to log the current stack trace final StackTraceElement[] stackTrace = Thread.currentThread().getStackTrace(); final String formatted = Arrays.stream(stackTrace).skip(1).map(e -> "\tat " + e).collect(Collectors.joining("\n")); - ESLoggerFactory.getLogger(Netty4Utils.class).error("fatal error on the network layer\n{}", formatted); + final Logger logger = ESLoggerFactory.getLogger(Netty4Utils.class); + logger.error("fatal error on the network layer\n{}", formatted); } finally { new Thread( () -> { - throw (Error) cause; + throw maybeError.get(); }) .start(); } } } + static final int MAX_ITERATIONS = 1024; + + /** + * Unwrap the specified throwable looking for any suppressed errors or errors as a root cause of the specified throwable. + * + * @param cause the root throwable + * + * @return an optional error if one is found suppressed or a root cause in the tree rooted at the specified throwable + */ + static Optional maybeError(final Throwable cause) { + // early terminate if the cause is already an error + if (cause instanceof Error) { + return Optional.of((Error) cause); + } + + final Queue queue = new LinkedList<>(); + queue.add(cause); + int iterations = 0; + while (!queue.isEmpty()) { + iterations++; + if (iterations > MAX_ITERATIONS) { + ESLoggerFactory.getLogger(Netty4Utils.class).warn("giving up looking for fatal errors on the network layer", cause); + break; + } + final Throwable current = queue.remove(); + if (current instanceof Error) { + return Optional.of((Error) current); + } + Collections.addAll(queue, current.getSuppressed()); + if (current.getCause() != null) { + queue.add(current.getCause()); + } + } + return Optional.empty(); + } + } diff --git a/modules/transport-netty4/src/test/java/org/elasticsearch/http/netty4/pipelining/Netty4HttpPipeliningHandlerTests.java b/modules/transport-netty4/src/test/java/org/elasticsearch/http/netty4/pipelining/Netty4HttpPipeliningHandlerTests.java index ce8e840e246ce..ffb6c8fb3569d 100644 --- a/modules/transport-netty4/src/test/java/org/elasticsearch/http/netty4/pipelining/Netty4HttpPipeliningHandlerTests.java +++ b/modules/transport-netty4/src/test/java/org/elasticsearch/http/netty4/pipelining/Netty4HttpPipeliningHandlerTests.java @@ -40,6 +40,7 @@ import org.elasticsearch.test.ESTestCase; import org.junit.After; +import java.nio.channels.ClosedChannelException; import java.nio.charset.StandardCharsets; import java.util.ArrayList; import java.util.List; @@ -61,9 +62,9 @@ public class Netty4HttpPipeliningHandlerTests extends ESTestCase { - private ExecutorService executorService = Executors.newFixedThreadPool(randomIntBetween(4, 8)); - private Map waitingRequests = new ConcurrentHashMap<>(); - private Map finishingRequests = new ConcurrentHashMap<>(); + private final ExecutorService executorService = Executors.newFixedThreadPool(randomIntBetween(4, 8)); + private final Map waitingRequests = new ConcurrentHashMap<>(); + private final Map finishingRequests = new ConcurrentHashMap<>(); @After public void tearDown() throws Exception { @@ -86,7 +87,8 @@ private void shutdownExecutorService() throws InterruptedException { public void testThatPipeliningWorksWithFastSerializedRequests() throws InterruptedException { final int numberOfRequests = randomIntBetween(2, 128); - final EmbeddedChannel embeddedChannel = new EmbeddedChannel(new HttpPipeliningHandler(numberOfRequests), new WorkEmulatorHandler()); + final 
EmbeddedChannel embeddedChannel = new EmbeddedChannel(new HttpPipeliningHandler(logger, numberOfRequests), + new WorkEmulatorHandler()); for (int i = 0; i < numberOfRequests; i++) { embeddedChannel.writeInbound(createHttpRequest("/" + String.valueOf(i))); @@ -112,7 +114,8 @@ public void testThatPipeliningWorksWithFastSerializedRequests() throws Interrupt public void testThatPipeliningWorksWhenSlowRequestsInDifferentOrder() throws InterruptedException { final int numberOfRequests = randomIntBetween(2, 128); - final EmbeddedChannel embeddedChannel = new EmbeddedChannel(new HttpPipeliningHandler(numberOfRequests), new WorkEmulatorHandler()); + final EmbeddedChannel embeddedChannel = new EmbeddedChannel(new HttpPipeliningHandler(logger, numberOfRequests), + new WorkEmulatorHandler()); for (int i = 0; i < numberOfRequests; i++) { embeddedChannel.writeInbound(createHttpRequest("/" + String.valueOf(i))); @@ -144,7 +147,7 @@ public void testThatPipeliningWorksWithChunkedRequests() throws InterruptedExcep final EmbeddedChannel embeddedChannel = new EmbeddedChannel( new AggregateUrisAndHeadersHandler(), - new HttpPipeliningHandler(numberOfRequests), + new HttpPipeliningHandler(logger, numberOfRequests), new WorkEmulatorHandler()); for (int i = 0; i < numberOfRequests; i++) { @@ -173,7 +176,8 @@ public void testThatPipeliningWorksWithChunkedRequests() throws InterruptedExcep public void testThatPipeliningClosesConnectionWithTooManyEvents() throws InterruptedException { final int numberOfRequests = randomIntBetween(2, 128); - final EmbeddedChannel embeddedChannel = new EmbeddedChannel(new HttpPipeliningHandler(numberOfRequests), new WorkEmulatorHandler()); + final EmbeddedChannel embeddedChannel = new EmbeddedChannel(new HttpPipeliningHandler(logger, numberOfRequests), + new WorkEmulatorHandler()); for (int i = 0; i < 1 + numberOfRequests + 1; i++) { embeddedChannel.writeInbound(createHttpRequest("/" + Integer.toString(i))); @@ -198,6 +202,40 @@ public void testThatPipeliningClosesConnectionWithTooManyEvents() throws Interru assertFalse(embeddedChannel.isOpen()); } + public void testPipeliningRequestsAreReleased() throws InterruptedException { + final int numberOfRequests = 10; + final EmbeddedChannel embeddedChannel = + new EmbeddedChannel(new HttpPipeliningHandler(logger, numberOfRequests + 1)); + + for (int i = 0; i < numberOfRequests; i++) { + embeddedChannel.writeInbound(createHttpRequest("/" + i)); + } + + HttpPipelinedRequest inbound; + ArrayList requests = new ArrayList<>(); + while ((inbound = embeddedChannel.readInbound()) != null) { + requests.add(inbound); + } + + ArrayList promises = new ArrayList<>(); + for (int i = 1; i < requests.size(); ++i) { + final DefaultFullHttpResponse httpResponse = new DefaultFullHttpResponse(HTTP_1_1, OK); + ChannelPromise promise = embeddedChannel.newPromise(); + promises.add(promise); + HttpPipelinedResponse response = requests.get(i).createHttpResponse(httpResponse, promise); + embeddedChannel.writeAndFlush(response, promise); + } + + for (ChannelPromise promise : promises) { + assertFalse(promise.isDone()); + } + embeddedChannel.close().syncUninterruptibly(); + for (ChannelPromise promise : promises) { + assertTrue(promise.isDone()); + assertTrue(promise.cause() instanceof ClosedChannelException); + } + } + private void assertReadHttpMessageHasContent(EmbeddedChannel embeddedChannel, String expectedContent) { FullHttpResponse response = (FullHttpResponse) embeddedChannel.outboundMessages().poll(); @@ -255,7 +293,5 @@ protected void channelRead0(final 
ChannelHandlerContext ctx, final HttpPipelined } }); } - } - } diff --git a/modules/transport-netty4/src/test/java/org/elasticsearch/rest/Netty4HeadBodyIsEmptyIT.java b/modules/transport-netty4/src/test/java/org/elasticsearch/rest/Netty4HeadBodyIsEmptyIT.java index 0079f147d9fe1..d1a5c15a29c22 100644 --- a/modules/transport-netty4/src/test/java/org/elasticsearch/rest/Netty4HeadBodyIsEmptyIT.java +++ b/modules/transport-netty4/src/test/java/org/elasticsearch/rest/Netty4HeadBodyIsEmptyIT.java @@ -19,8 +19,8 @@ package org.elasticsearch.rest; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.entity.StringEntity; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; import org.elasticsearch.client.Response; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.test.rest.ESRestTestCase; diff --git a/modules/transport-netty4/src/test/java/org/elasticsearch/transport/netty4/ESLoggingHandlerIT.java b/modules/transport-netty4/src/test/java/org/elasticsearch/transport/netty4/ESLoggingHandlerIT.java new file mode 100644 index 0000000000000..acd71749e2333 --- /dev/null +++ b/modules/transport-netty4/src/test/java/org/elasticsearch/transport/netty4/ESLoggingHandlerIT.java @@ -0,0 +1,83 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.transport.netty4; + +import org.apache.logging.log4j.Level; +import org.elasticsearch.ESNetty4IntegTestCase; +import org.elasticsearch.action.admin.cluster.node.hotthreads.NodesHotThreadsRequest; +import org.elasticsearch.common.logging.Loggers; +import org.elasticsearch.test.ESIntegTestCase; +import org.elasticsearch.test.MockLogAppender; +import org.elasticsearch.test.junit.annotations.TestLogging; + +@ESIntegTestCase.ClusterScope(numDataNodes = 2) +@TestLogging(value = "org.elasticsearch.transport.netty4.ESLoggingHandler:trace") +public class ESLoggingHandlerIT extends ESNetty4IntegTestCase { + + private MockLogAppender appender; + + public void setUp() throws Exception { + super.setUp(); + appender = new MockLogAppender(); + Loggers.addAppender(Loggers.getLogger(ESLoggingHandler.class), appender); + appender.start(); + } + + public void tearDown() throws Exception { + Loggers.removeAppender(Loggers.getLogger(ESLoggingHandler.class), appender); + appender.stop(); + super.tearDown(); + } + + public void testLoggingHandler() throws IllegalAccessException { + final String writePattern = + ".*\\[length: \\d+" + + ", request id: \\d+" + + ", type: request" + + ", version: .*" + + ", action: cluster:monitor/nodes/hot_threads\\[n\\]\\]" + + " WRITE: \\d+B"; + final MockLogAppender.LoggingExpectation writeExpectation = + new MockLogAppender.PatternSeenEventExcpectation( + "hot threads request", ESLoggingHandler.class.getCanonicalName(), Level.TRACE, writePattern); + + final MockLogAppender.LoggingExpectation flushExpectation = + new MockLogAppender.SeenEventExpectation("flush", ESLoggingHandler.class.getCanonicalName(), Level.TRACE, "*FLUSH*"); + + final String readPattern = + ".*\\[length: \\d+" + + ", request id: \\d+" + + ", type: request" + + ", version: .*" + + ", action: cluster:monitor/nodes/hot_threads\\[n\\]\\]" + + " READ: \\d+B"; + + final MockLogAppender.LoggingExpectation readExpectation = + new MockLogAppender.PatternSeenEventExcpectation( + "hot threads request", ESLoggingHandler.class.getCanonicalName(), Level.TRACE, readPattern); + + appender.addExpectation(writeExpectation); + appender.addExpectation(flushExpectation); + appender.addExpectation(readExpectation); + client().admin().cluster().nodesHotThreads(new NodesHotThreadsRequest()).actionGet(); + appender.assertAllExpectationsMatched(); + } + +} diff --git a/modules/transport-netty4/src/test/java/org/elasticsearch/transport/netty4/Netty4UtilsTests.java b/modules/transport-netty4/src/test/java/org/elasticsearch/transport/netty4/Netty4UtilsTests.java index 8372a8540b8be..43be6f0efdda0 100644 --- a/modules/transport-netty4/src/test/java/org/elasticsearch/transport/netty4/Netty4UtilsTests.java +++ b/modules/transport-netty4/src/test/java/org/elasticsearch/transport/netty4/Netty4UtilsTests.java @@ -22,6 +22,7 @@ import io.netty.buffer.ByteBuf; import io.netty.buffer.CompositeByteBuf; import io.netty.buffer.Unpooled; +import io.netty.handler.codec.DecoderException; import org.apache.lucene.util.BytesRef; import org.elasticsearch.common.bytes.AbstractBytesReferenceTestCase; import org.elasticsearch.common.bytes.BytesArray; @@ -32,6 +33,9 @@ import org.elasticsearch.test.ESTestCase; import java.io.IOException; +import java.util.Optional; + +import static org.hamcrest.CoreMatchers.equalTo; public class Netty4UtilsTests extends ESTestCase { @@ -75,6 +79,60 @@ public void testToChannelBuffer() throws IOException { assertArrayEquals(BytesReference.toBytes(ref), BytesReference.toBytes(bytesReference)); 
} + public void testMaybeError() { + final Error outOfMemoryError = new OutOfMemoryError(); + assertError(outOfMemoryError, outOfMemoryError); + + final DecoderException decoderException = new DecoderException(outOfMemoryError); + assertError(decoderException, outOfMemoryError); + + final Exception e = new Exception(); + e.addSuppressed(decoderException); + assertError(e, outOfMemoryError); + + final int depth = randomIntBetween(1, 16); + Throwable cause = new Exception(); + boolean fatal = false; + Error error = null; + for (int i = 0; i < depth; i++) { + final int length = randomIntBetween(1, 4); + for (int j = 0; j < length; j++) { + if (!fatal && rarely()) { + error = new Error(); + cause.addSuppressed(error); + fatal = true; + } else { + cause.addSuppressed(new Exception()); + } + } + if (!fatal && rarely()) { + cause = error = new Error(cause); + fatal = true; + } else { + cause = new Exception(cause); + } + } + if (fatal) { + assertError(cause, error); + } else { + assertFalse(Netty4Utils.maybeError(cause).isPresent()); + } + + assertFalse(Netty4Utils.maybeError(new Exception(new DecoderException())).isPresent()); + + Throwable chain = outOfMemoryError; + for (int i = 0; i < Netty4Utils.MAX_ITERATIONS; i++) { + chain = new Exception(chain); + } + assertFalse(Netty4Utils.maybeError(chain).isPresent()); + } + + private void assertError(final Throwable cause, final Error error) { + final Optional maybeError = Netty4Utils.maybeError(cause); + assertTrue(maybeError.isPresent()); + assertThat(maybeError.get(), equalTo(error)); + } + private BytesReference getRandomizedBytesReference(int length) throws IOException { // we know bytes stream output always creates a paged bytes reference, we use it to create randomized content ReleasableBytesStreamOutput out = new ReleasableBytesStreamOutput(length, bigarrays); diff --git a/modules/transport-netty4/src/test/java/org/elasticsearch/transport/netty4/SimpleNetty4TransportTests.java b/modules/transport-netty4/src/test/java/org/elasticsearch/transport/netty4/SimpleNetty4TransportTests.java index 92c21f942c292..bdf4adb5ea91c 100644 --- a/modules/transport-netty4/src/test/java/org/elasticsearch/transport/netty4/SimpleNetty4TransportTests.java +++ b/modules/transport-netty4/src/test/java/org/elasticsearch/transport/netty4/SimpleNetty4TransportTests.java @@ -52,7 +52,7 @@ public class SimpleNetty4TransportTests extends AbstractSimpleTransportTestCase { public static MockTransportService nettyFromThreadPool(Settings settings, ThreadPool threadPool, final Version version, - ClusterSettings clusterSettings, boolean doHandshake) { + ClusterSettings clusterSettings, boolean doHandshake) { NamedWriteableRegistry namedWriteableRegistry = new NamedWriteableRegistry(Collections.emptyList()); Transport transport = new Netty4Transport(settings, threadPool, new NetworkService(Collections.emptyList()), BigArrays.NON_RECYCLING_INSTANCE, namedWriteableRegistry, new NoneCircuitBreakerService()) { @@ -86,6 +86,13 @@ protected MockTransportService build(Settings settings, Version version, Cluster return transportService; } + @Override + protected void closeConnectionChannel(Transport transport, Transport.Connection connection) throws IOException { + final Netty4Transport t = (Netty4Transport) transport; + @SuppressWarnings("unchecked") final TcpTransport.NodeChannels channels = (TcpTransport.NodeChannels) connection; + t.closeChannels(channels.getChannels().subList(0, randomIntBetween(1, channels.getChannels().size())), true, false); + } + public void 
testConnectException() throws UnknownHostException { try { serviceA.connectToNode(new DiscoveryNode("C", new TransportAddress(InetAddress.getByName("localhost"), 9876), @@ -108,7 +115,8 @@ public void testBindUnavailableAddress() { .build(); ClusterSettings clusterSettings = new ClusterSettings(settings, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); BindTransportException bindTransportException = expectThrows(BindTransportException.class, () -> { - MockTransportService transportService = nettyFromThreadPool(settings, threadPool, Version.CURRENT, clusterSettings, true); + MockTransportService transportService = + nettyFromThreadPool(settings, threadPool, Version.CURRENT, clusterSettings, true); try { transportService.start(); } finally { diff --git a/modules/tribe/src/main/java/org/elasticsearch/tribe/TribePlugin.java b/modules/tribe/src/main/java/org/elasticsearch/tribe/TribePlugin.java index f77dea1dd306d..7479e60a95cba 100644 --- a/modules/tribe/src/main/java/org/elasticsearch/tribe/TribePlugin.java +++ b/modules/tribe/src/main/java/org/elasticsearch/tribe/TribePlugin.java @@ -137,7 +137,7 @@ public Settings additionalSettings() { return sb.build(); } else { - for (String s : settings.getAsMap().keySet()) { + for (String s : settings.keySet()) { if (s.startsWith("tribe.") && !s.equals(TribeService.TRIBE_NAME_SETTING.getKey())) { throw new IllegalArgumentException("tribe cannot contain inner tribes: " + s); } diff --git a/modules/tribe/src/main/java/org/elasticsearch/tribe/TribeService.java b/modules/tribe/src/main/java/org/elasticsearch/tribe/TribeService.java index 714749b94782c..d32df242e487c 100644 --- a/modules/tribe/src/main/java/org/elasticsearch/tribe/TribeService.java +++ b/modules/tribe/src/main/java/org/elasticsearch/tribe/TribeService.java @@ -195,7 +195,7 @@ public TribeService(Settings settings, NodeEnvironment nodeEnvironment, ClusterS * combined with tribe specific settings. 
*/ static Settings buildClientSettings(String tribeName, String parentNodeId, Settings globalSettings, Settings tribeSettings) { - for (String tribeKey : tribeSettings.getAsMap().keySet()) { + for (String tribeKey : tribeSettings.keySet()) { if (tribeKey.startsWith("path.")) { throw new IllegalArgumentException("Setting [" + tribeKey + "] not allowed in tribe client [" + tribeName + "]"); } diff --git a/modules/tribe/src/test/java/org/elasticsearch/tribe/TribeIntegrationTests.java b/modules/tribe/src/test/java/org/elasticsearch/tribe/TribeIntegrationTests.java index cd86d7216a74a..9957ad6bae9ee 100644 --- a/modules/tribe/src/test/java/org/elasticsearch/tribe/TribeIntegrationTests.java +++ b/modules/tribe/src/test/java/org/elasticsearch/tribe/TribeIntegrationTests.java @@ -110,7 +110,7 @@ public class TribeIntegrationTests extends ESIntegTestCase { private static final Predicate CLUSTER2_ONLY = c -> c.getClusterName().equals(cluster2.getClusterName()); /** - * A predicate that is used to select the the two remote clusters + * A predicate that is used to select the two remote clusters **/ private static final Predicate ALL = c -> true; diff --git a/plugins/analysis-icu/build.gradle b/plugins/analysis-icu/build.gradle index 2a8905e080f0f..123db9fc4a575 100644 --- a/plugins/analysis-icu/build.gradle +++ b/plugins/analysis-icu/build.gradle @@ -22,6 +22,12 @@ esplugin { classname 'org.elasticsearch.plugin.analysis.icu.AnalysisICUPlugin' } +forbiddenApis { + signatures += [ + "com.ibm.icu.text.Collator#getInstance() @ Don't use default locale, use getInstance(ULocale) instead" + ] +} + dependencies { compile "org.apache.lucene:lucene-analyzers-icu:${versions.lucene}" compile 'com.ibm.icu:icu4j:59.1' diff --git a/plugins/analysis-icu/licenses/lucene-NOTICE.txt b/plugins/analysis-icu/licenses/lucene-NOTICE.txt index ecf08201a5ee6..1a1d51572432a 100644 --- a/plugins/analysis-icu/licenses/lucene-NOTICE.txt +++ b/plugins/analysis-icu/licenses/lucene-NOTICE.txt @@ -54,13 +54,14 @@ The KStem stemmer in was developed by Bob Krovetz and Sergio Guzman-Lara (CIIR-UMass Amherst) under the BSD-license. -The Arabic,Persian,Romanian,Bulgarian, and Hindi analyzers (common) come with a default +The Arabic,Persian,Romanian,Bulgarian, Hindi and Bengali analyzers (common) come with a default stopword list that is BSD-licensed created by Jacques Savoy. These files reside in: analysis/common/src/resources/org/apache/lucene/analysis/ar/stopwords.txt, analysis/common/src/resources/org/apache/lucene/analysis/fa/stopwords.txt, analysis/common/src/resources/org/apache/lucene/analysis/ro/stopwords.txt, analysis/common/src/resources/org/apache/lucene/analysis/bg/stopwords.txt, -analysis/common/src/resources/org/apache/lucene/analysis/hi/stopwords.txt +analysis/common/src/resources/org/apache/lucene/analysis/hi/stopwords.txt, +analysis/common/src/resources/org/apache/lucene/analysis/bn/stopwords.txt See http://members.unine.ch/jacques.savoy/clef/index.html. 
The German,Spanish,Finnish,French,Hungarian,Italian,Portuguese,Russian and Swedish light stemmers diff --git a/plugins/analysis-icu/licenses/lucene-analyzers-icu-7.0.0-snapshot-a128fcb.jar.sha1 b/plugins/analysis-icu/licenses/lucene-analyzers-icu-7.0.0-snapshot-a128fcb.jar.sha1 deleted file mode 100644 index 4b982b80cf582..0000000000000 --- a/plugins/analysis-icu/licenses/lucene-analyzers-icu-7.0.0-snapshot-a128fcb.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -aebac165a4545e727d13cb5016c7530b08599546 \ No newline at end of file diff --git a/plugins/analysis-icu/licenses/lucene-analyzers-icu-7.1.0.jar.sha1 b/plugins/analysis-icu/licenses/lucene-analyzers-icu-7.1.0.jar.sha1 new file mode 100644 index 0000000000000..d1619f9fc1952 --- /dev/null +++ b/plugins/analysis-icu/licenses/lucene-analyzers-icu-7.1.0.jar.sha1 @@ -0,0 +1 @@ +d9a640081289c9c50da08479ff198b579df71c26 \ No newline at end of file diff --git a/plugins/analysis-icu/src/main/java/org/elasticsearch/index/analysis/IcuCollationTokenFilterFactory.java b/plugins/analysis-icu/src/main/java/org/elasticsearch/index/analysis/IcuCollationTokenFilterFactory.java index 7710c2e671524..220e4448559d7 100644 --- a/plugins/analysis-icu/src/main/java/org/elasticsearch/index/analysis/IcuCollationTokenFilterFactory.java +++ b/plugins/analysis-icu/src/main/java/org/elasticsearch/index/analysis/IcuCollationTokenFilterFactory.java @@ -84,7 +84,7 @@ public IcuCollationTokenFilterFactory(IndexSettings indexSettings, Environment e } collator = Collator.getInstance(locale); } else { - collator = Collator.getInstance(); + collator = Collator.getInstance(ULocale.ROOT); } } @@ -131,7 +131,7 @@ public IcuCollationTokenFilterFactory(IndexSettings indexSettings, Environment e } } - Boolean caseLevel = settings.getAsBooleanLenientForPreEs6Indices(indexSettings.getIndexVersionCreated(), "caseLevel", null, deprecationLogger); + Boolean caseLevel = settings.getAsBoolean("caseLevel", null); if (caseLevel != null) { rbc.setCaseLevel(caseLevel); } @@ -147,7 +147,7 @@ public IcuCollationTokenFilterFactory(IndexSettings indexSettings, Environment e } } - Boolean numeric = settings.getAsBooleanLenientForPreEs6Indices(indexSettings.getIndexVersionCreated(), "numeric", null, deprecationLogger); + Boolean numeric = settings.getAsBoolean("numeric", null); if (numeric != null) { rbc.setNumericCollation(numeric); } @@ -157,8 +157,7 @@ public IcuCollationTokenFilterFactory(IndexSettings indexSettings, Environment e rbc.setVariableTop(variableTop); } - Boolean hiraganaQuaternaryMode = settings - .getAsBooleanLenientForPreEs6Indices(indexSettings.getIndexVersionCreated(), "hiraganaQuaternaryMode", null, deprecationLogger); + Boolean hiraganaQuaternaryMode = settings.getAsBoolean("hiraganaQuaternaryMode", null); if (hiraganaQuaternaryMode != null) { rbc.setHiraganaQuaternary(hiraganaQuaternaryMode); } diff --git a/plugins/analysis-icu/src/main/java/org/elasticsearch/index/analysis/IcuTokenizerFactory.java b/plugins/analysis-icu/src/main/java/org/elasticsearch/index/analysis/IcuTokenizerFactory.java index 14fa5922c1d90..fa1999cf17e39 100644 --- a/plugins/analysis-icu/src/main/java/org/elasticsearch/index/analysis/IcuTokenizerFactory.java +++ b/plugins/analysis-icu/src/main/java/org/elasticsearch/index/analysis/IcuTokenizerFactory.java @@ -37,6 +37,7 @@ import java.nio.file.Files; import java.nio.file.Path; import java.util.HashMap; +import java.util.List; import java.util.Map; import java.util.stream.Collectors; @@ -63,7 +64,7 @@ private ICUTokenizerConfig getIcuConfig(Environment env, 
Settings settings) { Map tailored = new HashMap<>(); try { - String[] ruleFiles = settings.getAsArray(RULE_FILES); + List ruleFiles = settings.getAsList(RULE_FILES); for (String scriptAndResourcePath : ruleFiles) { int colonPos = scriptAndResourcePath.indexOf(":"); diff --git a/plugins/analysis-icu/src/main/java/org/elasticsearch/index/mapper/ICUCollationKeywordFieldMapper.java b/plugins/analysis-icu/src/main/java/org/elasticsearch/index/mapper/ICUCollationKeywordFieldMapper.java index 54b79334ba70a..f927f920f9097 100644 --- a/plugins/analysis-icu/src/main/java/org/elasticsearch/index/mapper/ICUCollationKeywordFieldMapper.java +++ b/plugins/analysis-icu/src/main/java/org/elasticsearch/index/mapper/ICUCollationKeywordFieldMapper.java @@ -23,13 +23,19 @@ import com.ibm.icu.text.RawCollationKey; import com.ibm.icu.text.RuleBasedCollator; import com.ibm.icu.util.ULocale; + import org.apache.lucene.document.Field; import org.apache.lucene.document.SortedDocValuesField; +import org.apache.lucene.document.SortedSetDocValuesField; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; +import org.apache.lucene.index.Term; +import org.apache.lucene.search.DocValuesFieldExistsQuery; import org.apache.lucene.search.MultiTermQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermQuery; import org.apache.lucene.util.BytesRef; +import org.elasticsearch.Version; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.settings.Settings; @@ -50,6 +56,7 @@ import java.util.List; import java.util.Map; import java.util.Objects; +import java.util.function.BiFunction; import java.util.function.LongSupplier; public class ICUCollationKeywordFieldMapper extends FieldMapper { @@ -119,6 +126,15 @@ public void setCollator(Collator collator) { this.collator = collator.isFrozen() ? 
collator : collator.freeze(); } + @Override + public Query existsQuery(QueryShardContext context) { + if (hasDocValues()) { + return new DocValuesFieldExistsQuery(name()); + } else { + return new TermQuery(new Term(FieldNamesFieldMapper.NAME, name())); + } + } + @Override public Query nullValueQuery() { if (nullValue() == null) { @@ -386,7 +402,7 @@ public Collator buildCollator() { } collator = Collator.getInstance(locale); } else { - collator = Collator.getInstance(); + collator = Collator.getInstance(ULocale.ROOT); } } @@ -563,6 +579,7 @@ public static class TypeParser implements Mapper.TypeParser { private final String variableTop; private final boolean hiraganaQuaternaryMode; private final Collator collator; + private final BiFunction getDVField; protected ICUCollationKeywordFieldMapper(String simpleName, MappedFieldType fieldType, MappedFieldType defaultFieldType, Settings indexSettings, MultiFields multiFields, CopyTo copyTo, String rules, String language, @@ -584,6 +601,11 @@ protected ICUCollationKeywordFieldMapper(String simpleName, MappedFieldType fiel this.variableTop = variableTop; this.hiraganaQuaternaryMode = hiraganaQuaternaryMode; this.collator = collator; + if (indexCreatedVersion.onOrAfter(Version.V_5_6_0)) { + getDVField = SortedSetDocValuesField::new; + } else { + getDVField = SortedDocValuesField::new; + } } @Override @@ -740,7 +762,9 @@ protected void parseCreateField(ParseContext context, List field } if (fieldType().hasDocValues()) { - fields.add(new SortedDocValuesField(fieldType().name(), binaryValue)); + fields.add(getDVField.apply(fieldType().name(), binaryValue)); + } else if (fieldType().indexOptions() != IndexOptions.NONE || fieldType().stored()) { + createFieldNamesField(context, fields); } } } diff --git a/plugins/analysis-icu/src/test/java/org/elasticsearch/index/analysis/IcuTokenizerFactoryTests.java b/plugins/analysis-icu/src/test/java/org/elasticsearch/index/analysis/IcuTokenizerFactoryTests.java index ffc6cab6a256a..8cce4c13542c6 100644 --- a/plugins/analysis-icu/src/test/java/org/elasticsearch/index/analysis/IcuTokenizerFactoryTests.java +++ b/plugins/analysis-icu/src/test/java/org/elasticsearch/index/analysis/IcuTokenizerFactoryTests.java @@ -95,7 +95,7 @@ private static TestAnalysis createTestAnalysis() throws IOException { String json = "/org/elasticsearch/index/analysis/icu_analysis.json"; Settings settings = Settings.builder() - .loadFromStream(json, IcuTokenizerFactoryTests.class.getResourceAsStream(json)) + .loadFromStream(json, IcuTokenizerFactoryTests.class.getResourceAsStream(json), false) .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) .build(); Settings nodeSettings = Settings.builder().put(Environment.PATH_HOME_SETTING.getKey(), home).build(); diff --git a/plugins/analysis-icu/src/test/java/org/elasticsearch/index/analysis/SimpleIcuCollationTokenFilterTests.java b/plugins/analysis-icu/src/test/java/org/elasticsearch/index/analysis/SimpleIcuCollationTokenFilterTests.java index 8f9a38dc8f18c..f0689bd1db9ea 100644 --- a/plugins/analysis-icu/src/test/java/org/elasticsearch/index/analysis/SimpleIcuCollationTokenFilterTests.java +++ b/plugins/analysis-icu/src/test/java/org/elasticsearch/index/analysis/SimpleIcuCollationTokenFilterTests.java @@ -38,6 +38,19 @@ // Tests borrowed from Solr's Icu collation key filter factory test. 
public class SimpleIcuCollationTokenFilterTests extends ESTestCase { + /* + * Tests usage where we do not provide a language or locale + */ + public void testDefaultUsage() throws Exception { + Settings settings = Settings.builder() + .put("index.analysis.filter.myCollator.type", "icu_collation") + .put("index.analysis.filter.myCollator.strength", "primary") + .build(); + TestAnalysis analysis = createTestAnalysis(new Index("test", "_na_"), settings, new AnalysisICUPlugin()); + + TokenFilterFactory filterFactory = analysis.tokenFilter.get("myCollator"); + assertCollatesToSame(filterFactory, "FOO", "foo"); + } /* * Turkish has some funny casing. * This test shows how you can solve this kind of thing easily with collation. diff --git a/plugins/analysis-icu/src/test/java/org/elasticsearch/index/mapper/CollationFieldTypeTests.java b/plugins/analysis-icu/src/test/java/org/elasticsearch/index/mapper/CollationFieldTypeTests.java index 94634fc79c893..71d8f25bf9f3b 100644 --- a/plugins/analysis-icu/src/test/java/org/elasticsearch/index/mapper/CollationFieldTypeTests.java +++ b/plugins/analysis-icu/src/test/java/org/elasticsearch/index/mapper/CollationFieldTypeTests.java @@ -78,7 +78,7 @@ public void testTermsQuery() { ft.setName("field"); ft.setIndexOptions(IndexOptions.DOCS); - Collator collator = Collator.getInstance().freeze(); + Collator collator = Collator.getInstance(ULocale.ROOT).freeze(); ((CollationFieldType) ft).setCollator(collator); RawCollationKey fooKey = collator.getRawCollationKey("foo", null); @@ -126,7 +126,7 @@ public void testRangeQuery() { ft.setName("field"); ft.setIndexOptions(IndexOptions.DOCS); - Collator collator = Collator.getInstance().freeze(); + Collator collator = Collator.getInstance(ULocale.ROOT).freeze(); ((CollationFieldType) ft).setCollator(collator); RawCollationKey aKey = collator.getRawCollationKey("a", null); @@ -135,11 +135,11 @@ public void testRangeQuery() { TermRangeQuery expected = new TermRangeQuery("field", new BytesRef(aKey.bytes, 0, aKey.size), new BytesRef(bKey.bytes, 0, bKey.size), false, false); - assertEquals(expected, ft.rangeQuery("a", "b", false, false, null)); + assertEquals(expected, ft.rangeQuery("a", "b", false, false, null, null, null, null)); ft.setIndexOptions(IndexOptions.NONE); IllegalArgumentException e = expectThrows(IllegalArgumentException.class, - () -> ft.rangeQuery("a", "b", false, false, null)); + () -> ft.rangeQuery("a", "b", false, false, null, null, null, null)); assertEquals("Cannot search on field [field] since it is not indexed.", e.getMessage()); } } diff --git a/plugins/analysis-icu/src/test/java/org/elasticsearch/index/mapper/ICUCollationKeywordFieldMapperIT.java b/plugins/analysis-icu/src/test/java/org/elasticsearch/index/mapper/ICUCollationKeywordFieldMapperIT.java index 8a6e9b49ac974..5220d44dca308 100644 --- a/plugins/analysis-icu/src/test/java/org/elasticsearch/index/mapper/ICUCollationKeywordFieldMapperIT.java +++ b/plugins/analysis-icu/src/test/java/org/elasticsearch/index/mapper/ICUCollationKeywordFieldMapperIT.java @@ -35,6 +35,8 @@ import org.elasticsearch.plugin.analysis.icu.AnalysisICUPlugin; import org.elasticsearch.plugins.Plugin; import org.elasticsearch.search.builder.SearchSourceBuilder; +import org.elasticsearch.search.sort.SortBuilders; +import org.elasticsearch.search.sort.SortMode; import org.elasticsearch.search.sort.SortOrder; import org.elasticsearch.test.ESIntegTestCase; @@ -94,6 +96,64 @@ public void testBasicUsage() throws Exception { assertOrderedSearchHits(response, "2", "1"); } + public 
void testMultipleValues() throws Exception { + String index = "foo"; + String type = "mytype"; + + String[] equivalent = {"a", "C", "a", "B"}; + + XContentBuilder builder = jsonBuilder() + .startObject().startObject("properties") + .startObject("collate") + .field("type", "icu_collation_keyword") + .field("language", "en") + .endObject() + .endObject().endObject(); + + assertAcked(client().admin().indices().prepareCreate(index).addMapping(type, builder)); + + // everything should be indexed fine, no exceptions + indexRandom(true, + client().prepareIndex(index, type, "1").setSource("{\"collate\":[\"" + equivalent[0] + "\", \"" + + equivalent[1] + "\"]}", XContentType.JSON), + client().prepareIndex(index, type, "2").setSource("{\"collate\":\"" + equivalent[2] + "\"}", XContentType.JSON) + ); + + // using sort mode = max, the values "C" and "a" will be used for the sort + SearchRequest request = new SearchRequest() + .indices(index) + .types(type) + .source(new SearchSourceBuilder() + .fetchSource(false) + .query(QueryBuilders.termQuery("collate", "a")) + // if mode max we use "C" and "a" as sort values, if min we use "a" for both + .sort(SortBuilders.fieldSort("collate").sortMode(SortMode.MAX).order(SortOrder.DESC)) + .sort("_uid", SortOrder.DESC) // will be ignored + ); + + SearchResponse response = client().search(request).actionGet(); + assertNoFailures(response); + assertHitCount(response, 2L); + assertOrderedSearchHits(response, "1", "2"); + + // same thing, using different sort mode that will use "a" for both docs + request = new SearchRequest() + .indices(index) + .types(type) + .source(new SearchSourceBuilder() + .fetchSource(false) + .query(QueryBuilders.termQuery("collate", "a")) + // if mode max we use "C" and "a" as sort values, if min we use "a" for both + .sort(SortBuilders.fieldSort("collate").sortMode(SortMode.MIN).order(SortOrder.DESC)) + .sort("_uid", SortOrder.DESC) // will NOT be ignored and will determine order + ); + + response = client().search(request).actionGet(); + assertNoFailures(response); + assertHitCount(response, 2L); + assertOrderedSearchHits(response, "2", "1"); + } + /* * Test usage of the decomposition option for unicode normalization.
*/ diff --git a/plugins/analysis-icu/src/test/java/org/elasticsearch/index/mapper/ICUCollationKeywordFieldMapperTests.java b/plugins/analysis-icu/src/test/java/org/elasticsearch/index/mapper/ICUCollationKeywordFieldMapperTests.java index ebe909837e999..060a94a9d27b4 100644 --- a/plugins/analysis-icu/src/test/java/org/elasticsearch/index/mapper/ICUCollationKeywordFieldMapperTests.java +++ b/plugins/analysis-icu/src/test/java/org/elasticsearch/index/mapper/ICUCollationKeywordFieldMapperTests.java @@ -28,7 +28,9 @@ import org.apache.lucene.index.IndexableField; import org.apache.lucene.index.IndexableFieldType; import org.apache.lucene.util.BytesRef; +import org.elasticsearch.Version; import org.elasticsearch.common.compress.CompressedXContent; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.IndexService; @@ -80,7 +82,51 @@ public void testDefaults() throws Exception { IndexableField[] fields = doc.rootDoc().getFields("field"); assertEquals(2, fields.length); - Collator collator = Collator.getInstance(); + Collator collator = Collator.getInstance(ULocale.ROOT); + RawCollationKey key = collator.getRawCollationKey("1234", null); + BytesRef expected = new BytesRef(key.bytes, 0, key.size); + + assertEquals(expected, fields[0].binaryValue()); + IndexableFieldType fieldType = fields[0].fieldType(); + assertThat(fieldType.omitNorms(), equalTo(true)); + assertFalse(fieldType.tokenized()); + assertFalse(fieldType.stored()); + assertThat(fieldType.indexOptions(), equalTo(IndexOptions.DOCS)); + assertThat(fieldType.storeTermVectors(), equalTo(false)); + assertThat(fieldType.storeTermVectorOffsets(), equalTo(false)); + assertThat(fieldType.storeTermVectorPositions(), equalTo(false)); + assertThat(fieldType.storeTermVectorPayloads(), equalTo(false)); + assertEquals(DocValuesType.NONE, fieldType.docValuesType()); + + assertEquals(expected, fields[1].binaryValue()); + fieldType = fields[1].fieldType(); + assertThat(fieldType.indexOptions(), equalTo(IndexOptions.NONE)); + assertEquals(DocValuesType.SORTED_SET, fieldType.docValuesType()); + } + + public void testBackCompat() throws Exception { + indexService = createIndex("oldindex", Settings.builder().put("index.version.created", Version.V_5_5_0).build()); + parser = indexService.mapperService().documentMapperParser(); + + String mapping = XContentFactory.jsonBuilder().startObject().startObject("type") + .startObject("properties").startObject("field").field("type", FIELD_TYPE).endObject().endObject() + .endObject().endObject().string(); + + DocumentMapper mapper = parser.parse("type", new CompressedXContent(mapping)); + + assertEquals(mapping, mapper.mappingSource().toString()); + + ParsedDocument doc = mapper.parse(SourceToParse.source("oldindex", "type", "1", XContentFactory.jsonBuilder() + .startObject() + .field("field", "1234") + .endObject() + .bytes(), + XContentType.JSON)); + + IndexableField[] fields = doc.rootDoc().getFields("field"); + assertEquals(2, fields.length); + + Collator collator = Collator.getInstance(ULocale.ROOT); RawCollationKey key = collator.getRawCollationKey("1234", null); BytesRef expected = new BytesRef(key.bytes, 0, key.size); @@ -143,7 +189,7 @@ public void testNullValue() throws IOException { .bytes(), XContentType.JSON)); - Collator collator = Collator.getInstance(); + Collator collator = Collator.getInstance(ULocale.ROOT); RawCollationKey key = 
collator.getRawCollationKey("1234", null); BytesRef expected = new BytesRef(key.bytes, 0, key.size); @@ -194,7 +240,7 @@ public void testDisableIndex() throws IOException { IndexableField[] fields = doc.rootDoc().getFields("field"); assertEquals(1, fields.length); assertEquals(IndexOptions.NONE, fields[0].fieldType().indexOptions()); - assertEquals(DocValuesType.SORTED, fields[0].fieldType().docValuesType()); + assertEquals(DocValuesType.SORTED_SET, fields[0].fieldType().docValuesType()); } public void testDisableDocValues() throws IOException { @@ -219,6 +265,68 @@ public void testDisableDocValues() throws IOException { assertEquals(DocValuesType.NONE, fields[0].fieldType().docValuesType()); } + public void testMultipleValues() throws IOException { + String mapping = XContentFactory.jsonBuilder().startObject().startObject("type") + .startObject("properties").startObject("field").field("type", FIELD_TYPE).endObject().endObject() + .endObject().endObject().string(); + + DocumentMapper mapper = parser.parse("type", new CompressedXContent(mapping)); + + assertEquals(mapping, mapper.mappingSource().toString()); + + ParsedDocument doc = mapper.parse(SourceToParse.source("test", "type", "1", XContentFactory.jsonBuilder() + .startObject() + .field("field", Arrays.asList("1234", "5678")) + .endObject() + .bytes(), + XContentType.JSON)); + + IndexableField[] fields = doc.rootDoc().getFields("field"); + assertEquals(4, fields.length); + + Collator collator = Collator.getInstance(ULocale.ROOT); + RawCollationKey key = collator.getRawCollationKey("1234", null); + BytesRef expected = new BytesRef(key.bytes, 0, key.size); + + assertEquals(expected, fields[0].binaryValue()); + IndexableFieldType fieldType = fields[0].fieldType(); + assertThat(fieldType.omitNorms(), equalTo(true)); + assertFalse(fieldType.tokenized()); + assertFalse(fieldType.stored()); + assertThat(fieldType.indexOptions(), equalTo(IndexOptions.DOCS)); + assertThat(fieldType.storeTermVectors(), equalTo(false)); + assertThat(fieldType.storeTermVectorOffsets(), equalTo(false)); + assertThat(fieldType.storeTermVectorPositions(), equalTo(false)); + assertThat(fieldType.storeTermVectorPayloads(), equalTo(false)); + assertEquals(DocValuesType.NONE, fieldType.docValuesType()); + + assertEquals(expected, fields[1].binaryValue()); + fieldType = fields[1].fieldType(); + assertThat(fieldType.indexOptions(), equalTo(IndexOptions.NONE)); + assertEquals(DocValuesType.SORTED_SET, fieldType.docValuesType()); + + collator = Collator.getInstance(ULocale.ROOT); + key = collator.getRawCollationKey("5678", null); + expected = new BytesRef(key.bytes, 0, key.size); + + assertEquals(expected, fields[2].binaryValue()); + fieldType = fields[2].fieldType(); + assertThat(fieldType.omitNorms(), equalTo(true)); + assertFalse(fieldType.tokenized()); + assertFalse(fieldType.stored()); + assertThat(fieldType.indexOptions(), equalTo(IndexOptions.DOCS)); + assertThat(fieldType.storeTermVectors(), equalTo(false)); + assertThat(fieldType.storeTermVectorOffsets(), equalTo(false)); + assertThat(fieldType.storeTermVectorPositions(), equalTo(false)); + assertThat(fieldType.storeTermVectorPayloads(), equalTo(false)); + assertEquals(DocValuesType.NONE, fieldType.docValuesType()); + + assertEquals(expected, fields[3].binaryValue()); + fieldType = fields[3].fieldType(); + assertThat(fieldType.indexOptions(), equalTo(IndexOptions.NONE)); + assertEquals(DocValuesType.SORTED_SET, fieldType.docValuesType()); + } + public void testIndexOptions() throws IOException { String mapping = 
XContentFactory.jsonBuilder().startObject().startObject("type") .startObject("properties").startObject("field").field("type", FIELD_TYPE) @@ -316,7 +424,7 @@ public void testCollator() throws IOException { assertEquals(expected, fields[1].binaryValue()); fieldType = fields[1].fieldType(); assertThat(fieldType.indexOptions(), equalTo(IndexOptions.NONE)); - assertEquals(DocValuesType.SORTED, fieldType.docValuesType()); + assertEquals(DocValuesType.SORTED_SET, fieldType.docValuesType()); } public void testUpdateCollator() throws IOException { diff --git a/plugins/analysis-kuromoji/licenses/lucene-NOTICE.txt b/plugins/analysis-kuromoji/licenses/lucene-NOTICE.txt index ecf08201a5ee6..1a1d51572432a 100644 --- a/plugins/analysis-kuromoji/licenses/lucene-NOTICE.txt +++ b/plugins/analysis-kuromoji/licenses/lucene-NOTICE.txt @@ -54,13 +54,14 @@ The KStem stemmer in was developed by Bob Krovetz and Sergio Guzman-Lara (CIIR-UMass Amherst) under the BSD-license. -The Arabic,Persian,Romanian,Bulgarian, and Hindi analyzers (common) come with a default +The Arabic,Persian,Romanian,Bulgarian, Hindi and Bengali analyzers (common) come with a default stopword list that is BSD-licensed created by Jacques Savoy. These files reside in: analysis/common/src/resources/org/apache/lucene/analysis/ar/stopwords.txt, analysis/common/src/resources/org/apache/lucene/analysis/fa/stopwords.txt, analysis/common/src/resources/org/apache/lucene/analysis/ro/stopwords.txt, analysis/common/src/resources/org/apache/lucene/analysis/bg/stopwords.txt, -analysis/common/src/resources/org/apache/lucene/analysis/hi/stopwords.txt +analysis/common/src/resources/org/apache/lucene/analysis/hi/stopwords.txt, +analysis/common/src/resources/org/apache/lucene/analysis/bn/stopwords.txt See http://members.unine.ch/jacques.savoy/clef/index.html. 
The German,Spanish,Finnish,French,Hungarian,Italian,Portuguese,Russian and Swedish light stemmers diff --git a/plugins/analysis-kuromoji/licenses/lucene-analyzers-kuromoji-7.0.0-snapshot-a128fcb.jar.sha1 b/plugins/analysis-kuromoji/licenses/lucene-analyzers-kuromoji-7.0.0-snapshot-a128fcb.jar.sha1 deleted file mode 100644 index 5a523d254376f..0000000000000 --- a/plugins/analysis-kuromoji/licenses/lucene-analyzers-kuromoji-7.0.0-snapshot-a128fcb.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -51683881146dd8de3cbeb528cfaa002ff4a49cd6 \ No newline at end of file diff --git a/plugins/analysis-kuromoji/licenses/lucene-analyzers-kuromoji-7.1.0.jar.sha1 b/plugins/analysis-kuromoji/licenses/lucene-analyzers-kuromoji-7.1.0.jar.sha1 new file mode 100644 index 0000000000000..89ca7249ffb5b --- /dev/null +++ b/plugins/analysis-kuromoji/licenses/lucene-analyzers-kuromoji-7.1.0.jar.sha1 @@ -0,0 +1 @@ +a2ca81efc31d857fa2ade104dcdb3fed20c95ea0 \ No newline at end of file diff --git a/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/JapaneseStopTokenFilterFactory.java b/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/JapaneseStopTokenFilterFactory.java index 712ad0add1533..d10fe4089f2a6 100644 --- a/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/JapaneseStopTokenFilterFactory.java +++ b/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/JapaneseStopTokenFilterFactory.java @@ -45,9 +45,8 @@ public class JapaneseStopTokenFilterFactory extends AbstractTokenFilterFactory{ public JapaneseStopTokenFilterFactory(IndexSettings indexSettings, Environment env, String name, Settings settings) { super(indexSettings, name, settings); - this.ignoreCase = settings.getAsBooleanLenientForPreEs6Indices(indexSettings.getIndexVersionCreated(), "ignore_case", false, deprecationLogger); - this.removeTrailing = settings - .getAsBooleanLenientForPreEs6Indices(indexSettings.getIndexVersionCreated(), "remove_trailing", true, deprecationLogger); + this.ignoreCase = settings.getAsBoolean("ignore_case", false); + this.removeTrailing = settings.getAsBoolean("remove_trailing", true); this.stopWords = Analysis.parseWords(env, settings, "stopwords", JapaneseAnalyzer.getDefaultStopSet(), NAMED_STOP_WORDS, ignoreCase); } diff --git a/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/KuromojiAnalyzerProvider.java b/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/KuromojiAnalyzerProvider.java index 3749f957a3c0e..1776977c8e2a5 100644 --- a/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/KuromojiAnalyzerProvider.java +++ b/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/KuromojiAnalyzerProvider.java @@ -35,8 +35,7 @@ public class KuromojiAnalyzerProvider extends AbstractIndexAnalyzerProvider stopWords = Analysis.parseStopWords( - env, indexSettings.getIndexVersionCreated(), settings, JapaneseAnalyzer.getDefaultStopSet()); + final Set stopWords = Analysis.parseStopWords(env, settings, JapaneseAnalyzer.getDefaultStopSet()); final JapaneseTokenizer.Mode mode = KuromojiTokenizerFactory.getMode(settings); final UserDictionary userDictionary = KuromojiTokenizerFactory.getUserDictionary(env, settings); analyzer = new JapaneseAnalyzer(userDictionary, mode, CharArraySet.copy(stopWords), JapaneseAnalyzer.getDefaultStopTags()); diff --git a/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/KuromojiIterationMarkCharFilterFactory.java 
b/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/KuromojiIterationMarkCharFilterFactory.java index 5bdafbe39e46b..836dbbdfae219 100644 --- a/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/KuromojiIterationMarkCharFilterFactory.java +++ b/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/KuromojiIterationMarkCharFilterFactory.java @@ -33,10 +33,8 @@ public class KuromojiIterationMarkCharFilterFactory extends AbstractCharFilterFa public KuromojiIterationMarkCharFilterFactory(IndexSettings indexSettings, Environment env, String name, Settings settings) { super(indexSettings, name); - normalizeKanji = settings.getAsBooleanLenientForPreEs6Indices(indexSettings.getIndexVersionCreated(), "normalize_kanji", - JapaneseIterationMarkCharFilter.NORMALIZE_KANJI_DEFAULT, deprecationLogger); - normalizeKana = settings.getAsBooleanLenientForPreEs6Indices(indexSettings.getIndexVersionCreated(), "normalize_kana", - JapaneseIterationMarkCharFilter.NORMALIZE_KANA_DEFAULT, deprecationLogger); + normalizeKanji = settings.getAsBoolean("normalize_kanji", JapaneseIterationMarkCharFilter.NORMALIZE_KANJI_DEFAULT); + normalizeKana = settings.getAsBoolean("normalize_kana", JapaneseIterationMarkCharFilter.NORMALIZE_KANA_DEFAULT); } @Override diff --git a/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/KuromojiPartOfSpeechFilterFactory.java b/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/KuromojiPartOfSpeechFilterFactory.java index e3a58360e9b5f..bea12470cb026 100644 --- a/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/KuromojiPartOfSpeechFilterFactory.java +++ b/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/KuromojiPartOfSpeechFilterFactory.java @@ -20,6 +20,7 @@ package org.elasticsearch.index.analysis; import org.apache.lucene.analysis.TokenStream; +import org.apache.lucene.analysis.ja.JapaneseAnalyzer; import org.apache.lucene.analysis.ja.JapanesePartOfSpeechStopFilter; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; @@ -38,6 +39,8 @@ public KuromojiPartOfSpeechFilterFactory(IndexSettings indexSettings, Environmen List wordList = Analysis.getWordList(env, settings, "stoptags"); if (wordList != null) { stopTags.addAll(wordList); + } else { + stopTags.addAll(JapaneseAnalyzer.getDefaultStopTags()); } } diff --git a/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/KuromojiReadingFormFilterFactory.java b/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/KuromojiReadingFormFilterFactory.java index 4b85368da930a..d0eb0cecdb93a 100644 --- a/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/KuromojiReadingFormFilterFactory.java +++ b/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/KuromojiReadingFormFilterFactory.java @@ -31,8 +31,7 @@ public class KuromojiReadingFormFilterFactory extends AbstractTokenFilterFactory public KuromojiReadingFormFilterFactory(IndexSettings indexSettings, Environment environment, String name, Settings settings) { super(indexSettings, name, settings); - useRomaji = - settings.getAsBooleanLenientForPreEs6Indices(indexSettings.getIndexVersionCreated(), "use_romaji", false, deprecationLogger); + useRomaji = settings.getAsBoolean("use_romaji", false); } @Override diff --git a/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/KuromojiTokenizerFactory.java 
b/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/KuromojiTokenizerFactory.java index 69da3bd2bc4d4..2f00e68a75ebc 100644 --- a/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/KuromojiTokenizerFactory.java +++ b/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/KuromojiTokenizerFactory.java @@ -48,8 +48,7 @@ public KuromojiTokenizerFactory(IndexSettings indexSettings, Environment env, St super(indexSettings, name, settings); mode = getMode(settings); userDictionary = getUserDictionary(env, settings); - discartPunctuation = settings - .getAsBooleanLenientForPreEs6Indices(indexSettings.getIndexVersionCreated(), "discard_punctuation", true, deprecationLogger); + discartPunctuation = settings.getAsBoolean("discard_punctuation", true); nBestCost = settings.getAsInt(NBEST_COST, -1); nBestExamples = settings.get(NBEST_EXAMPLES); } diff --git a/plugins/analysis-kuromoji/src/test/java/org/elasticsearch/index/analysis/KuromojiAnalysisTests.java b/plugins/analysis-kuromoji/src/test/java/org/elasticsearch/index/analysis/KuromojiAnalysisTests.java index c0271c997849f..b1b23f7f1b6a4 100644 --- a/plugins/analysis-kuromoji/src/test/java/org/elasticsearch/index/analysis/KuromojiAnalysisTests.java +++ b/plugins/analysis-kuromoji/src/test/java/org/elasticsearch/index/analysis/KuromojiAnalysisTests.java @@ -93,6 +93,21 @@ public void testBaseFormFilterFactory() throws IOException { assertSimpleTSOutput(tokenFilter.create(tokenizer), expected); } + public void testPartOfSpeechFilter() throws IOException { + TestAnalysis analysis = createTestAnalysis(); + TokenFilterFactory tokenFilter = analysis.tokenFilter.get("kuromoji_part_of_speech"); + + assertThat(tokenFilter, instanceOf(KuromojiPartOfSpeechFilterFactory.class)); + + String source = "寿司がおいしいね"; + String[] expected_tokens = new String[]{"寿司", "おいしい"}; + + Tokenizer tokenizer = new JapaneseTokenizer(null, true, JapaneseTokenizer.Mode.SEARCH); + tokenizer.setReader(new StringReader(source)); + + assertSimpleTSOutput(tokenFilter.create(tokenizer), expected_tokens); + } + public void testReadingFormFilterFactory() throws IOException { TestAnalysis analysis = createTestAnalysis(); TokenFilterFactory tokenFilter = analysis.tokenFilter.get("kuromoji_rf"); @@ -193,7 +208,7 @@ private static TestAnalysis createTestAnalysis() throws IOException { String json = "/org/elasticsearch/index/analysis/kuromoji_analysis.json"; Settings settings = Settings.builder() - .loadFromStream(json, KuromojiAnalysisTests.class.getResourceAsStream(json)) + .loadFromStream(json, KuromojiAnalysisTests.class.getResourceAsStream(json), false) .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) .build(); Settings nodeSettings = Settings.builder().put(Environment.PATH_HOME_SETTING.getKey(), home).build(); @@ -208,7 +223,7 @@ public static void assertSimpleTSOutput(TokenStream stream, int i = 0; while (stream.incrementToken()) { assertThat(expected.length, greaterThan(i)); - assertThat( "expected different term at index " + i, expected[i++], equalTo(termAttr.toString())); + assertThat("expected different term at index " + i, termAttr.toString(), equalTo(expected[i++])); } assertThat("not all tokens produced", i, equalTo(expected.length)); } diff --git a/plugins/analysis-phonetic/licenses/lucene-NOTICE.txt b/plugins/analysis-phonetic/licenses/lucene-NOTICE.txt index ecf08201a5ee6..1a1d51572432a 100644 --- a/plugins/analysis-phonetic/licenses/lucene-NOTICE.txt +++ 
b/plugins/analysis-phonetic/licenses/lucene-NOTICE.txt @@ -54,13 +54,14 @@ The KStem stemmer in was developed by Bob Krovetz and Sergio Guzman-Lara (CIIR-UMass Amherst) under the BSD-license. -The Arabic,Persian,Romanian,Bulgarian, and Hindi analyzers (common) come with a default +The Arabic,Persian,Romanian,Bulgarian, Hindi and Bengali analyzers (common) come with a default stopword list that is BSD-licensed created by Jacques Savoy. These files reside in: analysis/common/src/resources/org/apache/lucene/analysis/ar/stopwords.txt, analysis/common/src/resources/org/apache/lucene/analysis/fa/stopwords.txt, analysis/common/src/resources/org/apache/lucene/analysis/ro/stopwords.txt, analysis/common/src/resources/org/apache/lucene/analysis/bg/stopwords.txt, -analysis/common/src/resources/org/apache/lucene/analysis/hi/stopwords.txt +analysis/common/src/resources/org/apache/lucene/analysis/hi/stopwords.txt, +analysis/common/src/resources/org/apache/lucene/analysis/bn/stopwords.txt See http://members.unine.ch/jacques.savoy/clef/index.html. The German,Spanish,Finnish,French,Hungarian,Italian,Portuguese,Russian and Swedish light stemmers diff --git a/plugins/analysis-phonetic/licenses/lucene-analyzers-phonetic-7.0.0-snapshot-a128fcb.jar.sha1 b/plugins/analysis-phonetic/licenses/lucene-analyzers-phonetic-7.0.0-snapshot-a128fcb.jar.sha1 deleted file mode 100644 index 7b9dab39ab9d1..0000000000000 --- a/plugins/analysis-phonetic/licenses/lucene-analyzers-phonetic-7.0.0-snapshot-a128fcb.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -ceaa949d3794393dc2ba918af22e356bc7694695 \ No newline at end of file diff --git a/plugins/analysis-phonetic/licenses/lucene-analyzers-phonetic-7.1.0.jar.sha1 b/plugins/analysis-phonetic/licenses/lucene-analyzers-phonetic-7.1.0.jar.sha1 new file mode 100644 index 0000000000000..512ecb59fc5bd --- /dev/null +++ b/plugins/analysis-phonetic/licenses/lucene-analyzers-phonetic-7.1.0.jar.sha1 @@ -0,0 +1 @@ +42058220ada77c4c5340e8383f62a4398e10a8ce \ No newline at end of file diff --git a/plugins/analysis-phonetic/src/main/java/org/elasticsearch/index/analysis/PhoneticTokenFilterFactory.java b/plugins/analysis-phonetic/src/main/java/org/elasticsearch/index/analysis/PhoneticTokenFilterFactory.java index 5c9a6a463d3b1..d02ac2ae2be70 100644 --- a/plugins/analysis-phonetic/src/main/java/org/elasticsearch/index/analysis/PhoneticTokenFilterFactory.java +++ b/plugins/analysis-phonetic/src/main/java/org/elasticsearch/index/analysis/PhoneticTokenFilterFactory.java @@ -19,9 +19,6 @@ package org.elasticsearch.index.analysis; -import java.util.Arrays; -import java.util.HashSet; - import org.apache.commons.codec.Encoder; import org.apache.commons.codec.language.Caverphone1; import org.apache.commons.codec.language.Caverphone2; @@ -45,12 +42,15 @@ import org.elasticsearch.index.analysis.phonetic.KoelnerPhonetik; import org.elasticsearch.index.analysis.phonetic.Nysiis; +import java.util.HashSet; +import java.util.List; + public class PhoneticTokenFilterFactory extends AbstractTokenFilterFactory { private final Encoder encoder; private final boolean replace; private int maxcodelength; - private String[] languageset; + private List languageset; private NameType nametype; private RuleType ruletype; @@ -60,7 +60,7 @@ public PhoneticTokenFilterFactory(IndexSettings indexSettings, Environment envir this.nametype = null; this.ruletype = null; this.maxcodelength = 0; - this.replace = settings.getAsBooleanLenientForPreEs6Indices(indexSettings.getIndexVersionCreated(), "replace", true, deprecationLogger); + 
this.replace = settings.getAsBoolean("replace", true); // weird, encoder is null at last step in SimplePhoneticAnalysisTests, so we set it to metaphone as default String encodername = settings.get("encoder", "metaphone"); if ("metaphone".equalsIgnoreCase(encodername)) { @@ -82,7 +82,7 @@ public PhoneticTokenFilterFactory(IndexSettings indexSettings, Environment envir this.maxcodelength = settings.getAsInt("max_code_len", 4); } else if ("bm".equalsIgnoreCase(encodername) || "beider_morse".equalsIgnoreCase(encodername) || "beidermorse".equalsIgnoreCase(encodername)) { this.encoder = null; - this.languageset = settings.getAsArray("languageset"); + this.languageset = settings.getAsList("languageset"); String ruleType = settings.get("rule_type", "approx"); if ("approx".equalsIgnoreCase(ruleType)) { ruletype = RuleType.APPROX; @@ -116,11 +116,11 @@ public PhoneticTokenFilterFactory(IndexSettings indexSettings, Environment envir public TokenStream create(TokenStream tokenStream) { if (encoder == null) { if (ruletype != null && nametype != null) { - if (languageset != null) { - final LanguageSet languages = LanguageSet.from(new HashSet<>(Arrays.asList(languageset))); - return new BeiderMorseFilter(tokenStream, new PhoneticEngine(nametype, ruletype, true), languages); + LanguageSet langset = null; + if (languageset != null && languageset.size() > 0) { + langset = LanguageSet.from(new HashSet<>(languageset)); } - return new BeiderMorseFilter(tokenStream, new PhoneticEngine(nametype, ruletype, true)); + return new BeiderMorseFilter(tokenStream, new PhoneticEngine(nametype, ruletype, true), langset); } if (maxcodelength > 0) { return new DoubleMetaphoneFilter(tokenStream, maxcodelength, !replace); diff --git a/plugins/analysis-phonetic/src/test/java/org/elasticsearch/index/analysis/SimplePhoneticAnalysisTests.java b/plugins/analysis-phonetic/src/test/java/org/elasticsearch/index/analysis/SimplePhoneticAnalysisTests.java index 127a258f75a22..e3877faee3146 100644 --- a/plugins/analysis-phonetic/src/test/java/org/elasticsearch/index/analysis/SimplePhoneticAnalysisTests.java +++ b/plugins/analysis-phonetic/src/test/java/org/elasticsearch/index/analysis/SimplePhoneticAnalysisTests.java @@ -19,6 +19,9 @@ package org.elasticsearch.index.analysis; +import org.apache.lucene.analysis.BaseTokenStreamTestCase; +import org.apache.lucene.analysis.Tokenizer; +import org.apache.lucene.analysis.core.WhitespaceTokenizer; import org.elasticsearch.Version; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.settings.Settings; @@ -26,19 +29,47 @@ import org.elasticsearch.plugin.analysis.AnalysisPhoneticPlugin; import org.elasticsearch.test.ESTestCase; import org.hamcrest.MatcherAssert; +import org.junit.Before; import java.io.IOException; +import java.io.StringReader; import static org.hamcrest.Matchers.instanceOf; public class SimplePhoneticAnalysisTests extends ESTestCase { - public void testPhoneticTokenFilterFactory() throws IOException { + + private TestAnalysis analysis; + + @Before + public void setup() throws IOException { String yaml = "/org/elasticsearch/index/analysis/phonetic-1.yml"; - Settings settings = Settings.builder().loadFromStream(yaml, getClass().getResourceAsStream(yaml)) + Settings settings = Settings.builder().loadFromStream(yaml, getClass().getResourceAsStream(yaml), false) .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) .build(); - TestAnalysis analysis = createTestAnalysis(new Index("test", "_na_"), settings, new AnalysisPhoneticPlugin()); + 
this.analysis = createTestAnalysis(new Index("test", "_na_"), settings, new AnalysisPhoneticPlugin()); + } + + public void testPhoneticTokenFilterFactory() throws IOException { TokenFilterFactory filterFactory = analysis.tokenFilter.get("phonetic"); MatcherAssert.assertThat(filterFactory, instanceOf(PhoneticTokenFilterFactory.class)); } + + public void testPhoneticTokenFilterBeiderMorseNoLanguage() throws IOException { + TokenFilterFactory filterFactory = analysis.tokenFilter.get("beidermorsefilter"); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader("ABADIAS")); + String[] expected = new String[] { "abYdias", "abYdios", "abadia", "abadiaS", "abadias", "abadio", "abadioS", "abadios", "abodia", + "abodiaS", "abodias", "abodio", "abodioS", "abodios", "avadias", "avadios", "avodias", "avodios", "obadia", "obadiaS", + "obadias", "obadio", "obadioS", "obadios", "obodia", "obodiaS", "obodias", "obodioS" }; + BaseTokenStreamTestCase.assertTokenStreamContents(filterFactory.create(tokenizer), expected); + } + + public void testPhoneticTokenFilterBeiderMorseWithLanguage() throws IOException { + TokenFilterFactory filterFactory = analysis.tokenFilter.get("beidermorsefilterfrench"); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader("Rimbault")); + String[] expected = new String[] { "rimbD", "rimbDlt", "rimba", "rimbalt", "rimbo", "rimbolt", "rimbu", "rimbult", "rmbD", "rmbDlt", + "rmba", "rmbalt", "rmbo", "rmbolt", "rmbu", "rmbult" }; + BaseTokenStreamTestCase.assertTokenStreamContents(filterFactory.create(tokenizer), expected); + } } diff --git a/plugins/analysis-phonetic/src/test/resources/org/elasticsearch/index/analysis/phonetic-1.yml b/plugins/analysis-phonetic/src/test/resources/org/elasticsearch/index/analysis/phonetic-1.yml index 6c0a076388110..1909c7ee06390 100644 --- a/plugins/analysis-phonetic/src/test/resources/org/elasticsearch/index/analysis/phonetic-1.yml +++ b/plugins/analysis-phonetic/src/test/resources/org/elasticsearch/index/analysis/phonetic-1.yml @@ -19,6 +19,10 @@ index: beidermorsefilter: type: phonetic encoder: beidermorse + beidermorsefilterfrench: + type: phonetic + encoder: beidermorse + languageset : [ "french" ] koelnerphonetikfilter: type: phonetic encoder: koelnerphonetik diff --git a/plugins/analysis-smartcn/licenses/lucene-NOTICE.txt b/plugins/analysis-smartcn/licenses/lucene-NOTICE.txt index ecf08201a5ee6..1a1d51572432a 100644 --- a/plugins/analysis-smartcn/licenses/lucene-NOTICE.txt +++ b/plugins/analysis-smartcn/licenses/lucene-NOTICE.txt @@ -54,13 +54,14 @@ The KStem stemmer in was developed by Bob Krovetz and Sergio Guzman-Lara (CIIR-UMass Amherst) under the BSD-license. -The Arabic,Persian,Romanian,Bulgarian, and Hindi analyzers (common) come with a default +The Arabic,Persian,Romanian,Bulgarian, Hindi and Bengali analyzers (common) come with a default stopword list that is BSD-licensed created by Jacques Savoy. 
These files reside in: analysis/common/src/resources/org/apache/lucene/analysis/ar/stopwords.txt, analysis/common/src/resources/org/apache/lucene/analysis/fa/stopwords.txt, analysis/common/src/resources/org/apache/lucene/analysis/ro/stopwords.txt, analysis/common/src/resources/org/apache/lucene/analysis/bg/stopwords.txt, -analysis/common/src/resources/org/apache/lucene/analysis/hi/stopwords.txt +analysis/common/src/resources/org/apache/lucene/analysis/hi/stopwords.txt, +analysis/common/src/resources/org/apache/lucene/analysis/bn/stopwords.txt See http://members.unine.ch/jacques.savoy/clef/index.html. The German,Spanish,Finnish,French,Hungarian,Italian,Portuguese,Russian and Swedish light stemmers diff --git a/plugins/analysis-smartcn/licenses/lucene-analyzers-smartcn-7.0.0-snapshot-a128fcb.jar.sha1 b/plugins/analysis-smartcn/licenses/lucene-analyzers-smartcn-7.0.0-snapshot-a128fcb.jar.sha1 deleted file mode 100644 index 914bebb82b989..0000000000000 --- a/plugins/analysis-smartcn/licenses/lucene-analyzers-smartcn-7.0.0-snapshot-a128fcb.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -864425808e61e06d13db9740d58d61a0f01b495e \ No newline at end of file diff --git a/plugins/analysis-smartcn/licenses/lucene-analyzers-smartcn-7.1.0.jar.sha1 b/plugins/analysis-smartcn/licenses/lucene-analyzers-smartcn-7.1.0.jar.sha1 new file mode 100644 index 0000000000000..7e68fa106c108 --- /dev/null +++ b/plugins/analysis-smartcn/licenses/lucene-analyzers-smartcn-7.1.0.jar.sha1 @@ -0,0 +1 @@ +2769d7f7330c78aea1edf4d8cd2eb111564c6800 \ No newline at end of file diff --git a/plugins/analysis-stempel/licenses/lucene-NOTICE.txt b/plugins/analysis-stempel/licenses/lucene-NOTICE.txt index ecf08201a5ee6..1a1d51572432a 100644 --- a/plugins/analysis-stempel/licenses/lucene-NOTICE.txt +++ b/plugins/analysis-stempel/licenses/lucene-NOTICE.txt @@ -54,13 +54,14 @@ The KStem stemmer in was developed by Bob Krovetz and Sergio Guzman-Lara (CIIR-UMass Amherst) under the BSD-license. -The Arabic,Persian,Romanian,Bulgarian, and Hindi analyzers (common) come with a default +The Arabic,Persian,Romanian,Bulgarian, Hindi and Bengali analyzers (common) come with a default stopword list that is BSD-licensed created by Jacques Savoy. These files reside in: analysis/common/src/resources/org/apache/lucene/analysis/ar/stopwords.txt, analysis/common/src/resources/org/apache/lucene/analysis/fa/stopwords.txt, analysis/common/src/resources/org/apache/lucene/analysis/ro/stopwords.txt, analysis/common/src/resources/org/apache/lucene/analysis/bg/stopwords.txt, -analysis/common/src/resources/org/apache/lucene/analysis/hi/stopwords.txt +analysis/common/src/resources/org/apache/lucene/analysis/hi/stopwords.txt, +analysis/common/src/resources/org/apache/lucene/analysis/bn/stopwords.txt See http://members.unine.ch/jacques.savoy/clef/index.html. 
The German,Spanish,Finnish,French,Hungarian,Italian,Portuguese,Russian and Swedish light stemmers diff --git a/plugins/analysis-stempel/licenses/lucene-analyzers-stempel-7.0.0-snapshot-a128fcb.jar.sha1 b/plugins/analysis-stempel/licenses/lucene-analyzers-stempel-7.0.0-snapshot-a128fcb.jar.sha1 deleted file mode 100644 index 71c51d45bf935..0000000000000 --- a/plugins/analysis-stempel/licenses/lucene-analyzers-stempel-7.0.0-snapshot-a128fcb.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -278e4eaaeab98bbadb569fa3fd0c363039f0a900 \ No newline at end of file diff --git a/plugins/analysis-stempel/licenses/lucene-analyzers-stempel-7.1.0.jar.sha1 b/plugins/analysis-stempel/licenses/lucene-analyzers-stempel-7.1.0.jar.sha1 new file mode 100644 index 0000000000000..3e59d9c24c4a1 --- /dev/null +++ b/plugins/analysis-stempel/licenses/lucene-analyzers-stempel-7.1.0.jar.sha1 @@ -0,0 +1 @@ +2bec616dc5bb33df9d0beddf6a9565ef14a227ff \ No newline at end of file diff --git a/plugins/analysis-stempel/src/test/java/org/elasticsearch/index/analysis/AnalysisPolishFactoryTests.java b/plugins/analysis-stempel/src/test/java/org/elasticsearch/index/analysis/AnalysisPolishFactoryTests.java index ae78b9c01b3f8..f13d7b01149b5 100644 --- a/plugins/analysis-stempel/src/test/java/org/elasticsearch/index/analysis/AnalysisPolishFactoryTests.java +++ b/plugins/analysis-stempel/src/test/java/org/elasticsearch/index/analysis/AnalysisPolishFactoryTests.java @@ -28,6 +28,7 @@ import org.elasticsearch.common.UUIDs; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; +import org.elasticsearch.env.TestEnvironment; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.analysis.pl.PolishStemTokenFilterFactory; import org.elasticsearch.indices.analysis.AnalysisFactoryTestCase; @@ -59,7 +60,7 @@ public void testThreadSafety() throws IOException { .put(IndexMetaData.SETTING_INDEX_UUID, UUIDs.randomBase64UUID()) .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) .build(); - Environment environment = new Environment(settings); + Environment environment = TestEnvironment.newEnvironment(settings); IndexMetaData metaData = IndexMetaData.builder(IndexMetaData.INDEX_UUID_NA_VALUE).settings(settings).build(); IndexSettings indexSettings = new IndexSettings(metaData, Settings.EMPTY); testThreadSafety(new PolishStemTokenFilterFactory(indexSettings, environment, "stempelpolishstem", settings)); diff --git a/plugins/analysis-ukrainian/licenses/lucene-NOTICE.txt b/plugins/analysis-ukrainian/licenses/lucene-NOTICE.txt index ecf08201a5ee6..1a1d51572432a 100644 --- a/plugins/analysis-ukrainian/licenses/lucene-NOTICE.txt +++ b/plugins/analysis-ukrainian/licenses/lucene-NOTICE.txt @@ -54,13 +54,14 @@ The KStem stemmer in was developed by Bob Krovetz and Sergio Guzman-Lara (CIIR-UMass Amherst) under the BSD-license. -The Arabic,Persian,Romanian,Bulgarian, and Hindi analyzers (common) come with a default +The Arabic,Persian,Romanian,Bulgarian, Hindi and Bengali analyzers (common) come with a default stopword list that is BSD-licensed created by Jacques Savoy. 
These files reside in: analysis/common/src/resources/org/apache/lucene/analysis/ar/stopwords.txt, analysis/common/src/resources/org/apache/lucene/analysis/fa/stopwords.txt, analysis/common/src/resources/org/apache/lucene/analysis/ro/stopwords.txt, analysis/common/src/resources/org/apache/lucene/analysis/bg/stopwords.txt, -analysis/common/src/resources/org/apache/lucene/analysis/hi/stopwords.txt +analysis/common/src/resources/org/apache/lucene/analysis/hi/stopwords.txt, +analysis/common/src/resources/org/apache/lucene/analysis/bn/stopwords.txt See http://members.unine.ch/jacques.savoy/clef/index.html. The German,Spanish,Finnish,French,Hungarian,Italian,Portuguese,Russian and Swedish light stemmers diff --git a/plugins/analysis-ukrainian/licenses/lucene-analyzers-morfologik-7.0.0-snapshot-a128fcb.jar.sha1 b/plugins/analysis-ukrainian/licenses/lucene-analyzers-morfologik-7.0.0-snapshot-a128fcb.jar.sha1 deleted file mode 100644 index e42131e957b6d..0000000000000 --- a/plugins/analysis-ukrainian/licenses/lucene-analyzers-morfologik-7.0.0-snapshot-a128fcb.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -d6178248b6a5cbce685b8733fc2ee99cc79aa131 \ No newline at end of file diff --git a/plugins/analysis-ukrainian/licenses/lucene-analyzers-morfologik-7.1.0.jar.sha1 b/plugins/analysis-ukrainian/licenses/lucene-analyzers-morfologik-7.1.0.jar.sha1 new file mode 100644 index 0000000000000..55f36cb5f8eee --- /dev/null +++ b/plugins/analysis-ukrainian/licenses/lucene-analyzers-morfologik-7.1.0.jar.sha1 @@ -0,0 +1 @@ +0e78e3e59b7bdf6e1aa24ff8289cc1246248f642 \ No newline at end of file diff --git a/plugins/analysis-ukrainian/src/main/java/org/elasticsearch/index/analysis/UkrainianAnalyzerProvider.java b/plugins/analysis-ukrainian/src/main/java/org/elasticsearch/index/analysis/UkrainianAnalyzerProvider.java index 0a00b752b9978..8a44e9e020f7f 100644 --- a/plugins/analysis-ukrainian/src/main/java/org/elasticsearch/index/analysis/UkrainianAnalyzerProvider.java +++ b/plugins/analysis-ukrainian/src/main/java/org/elasticsearch/index/analysis/UkrainianAnalyzerProvider.java @@ -32,7 +32,7 @@ public class UkrainianAnalyzerProvider extends AbstractIndexAnalyzerProvider TAG_SETTING = Setting.groupSetting("discovery.ec2.tag.", Property.NodeScope); + Setting.AffixSetting> TAG_SETTING = Setting.prefixKeySetting("discovery.ec2.tag.", + key -> Setting.listSetting(key, Collections.emptyList(), Function.identity(), Property.NodeScope)); AmazonEC2 client(); } diff --git a/plugins/discovery-ec2/src/main/java/org/elasticsearch/discovery/ec2/AwsEc2UnicastHostsProvider.java b/plugins/discovery-ec2/src/main/java/org/elasticsearch/discovery/ec2/AwsEc2UnicastHostsProvider.java index 91713ce2177fd..f291413d408ed 100644 --- a/plugins/discovery-ec2/src/main/java/org/elasticsearch/discovery/ec2/AwsEc2UnicastHostsProvider.java +++ b/plugins/discovery-ec2/src/main/java/org/elasticsearch/discovery/ec2/AwsEc2UnicastHostsProvider.java @@ -65,7 +65,7 @@ class AwsEc2UnicastHostsProvider extends AbstractComponent implements UnicastHos private final Set groups; - private final Map tags; + private final Map> tags; private final Set availabilityZones; @@ -85,7 +85,7 @@ class AwsEc2UnicastHostsProvider extends AbstractComponent implements UnicastHos this.groups = new HashSet<>(); this.groups.addAll(AwsEc2Service.GROUPS_SETTING.get(settings)); - this.tags = AwsEc2Service.TAG_SETTING.get(settings).getAsMap(); + this.tags = AwsEc2Service.TAG_SETTING.getAsMap(settings); this.availabilityZones = new HashSet<>(); 
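A note on the `discovery.ec2.tag.` hunks above and the filter loop just below: the rendering here has stripped the generic type parameters, but the gist of the change is that the old `groupSetting` exposed the tags as a flat string-to-string map, while the new `prefixKeySetting` plus `listSetting` combination yields a `Map<String, List<String>>`, so one tag key can carry several accepted values that are ORed inside a single EC2 filter. The following is a minimal sketch only, not the plugin code; it assumes the Elasticsearch `Setting`/`Settings` builder API of this era (`putArray`) and the AWS SDK `Filter` class the plugin already imports.

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

import com.amazonaws.services.ec2.model.DescribeInstancesRequest;
import com.amazonaws.services.ec2.model.Filter;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;

// Hypothetical sketch class, not from the plugin sources.
public class Ec2TagSettingSketch {
    // Each concrete key under the prefix (e.g. discovery.ec2.tag.role) is its own list setting.
    static final Setting.AffixSetting<List<String>> TAG_SETTING =
            Setting.prefixKeySetting("discovery.ec2.tag.",
                    key -> Setting.listSetting(key, Collections.emptyList(), Function.identity(),
                            Setting.Property.NodeScope));

    public static void main(String[] args) {
        // e.g. discovery.ec2.tag.role: ["master", "data"] in elasticsearch.yml
        // (putArray is an assumption about the Settings builder of this version)
        Settings settings = Settings.builder()
                .putArray("discovery.ec2.tag.role", "master", "data")
                .build();

        Map<String, List<String>> tags = TAG_SETTING.getAsMap(settings); // {"role" -> ["master", "data"]}

        DescribeInstancesRequest request = new DescribeInstancesRequest();
        for (Map.Entry<String, List<String>> tag : tags.entrySet()) {
            // For a given tag key, the values are ORed within one filter,
            // mirroring the loop in AwsEc2UnicastHostsProvider below.
            request.withFilters(new Filter("tag:" + tag.getKey()).withValues(tag.getValue()));
        }
    }
}
```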
availabilityZones.addAll(AwsEc2Service.AVAILABILITY_ZONES_SETTING.get(settings)); @@ -206,7 +206,7 @@ private DescribeInstancesRequest buildDescribeInstancesRequest() { new Filter("instance-state-name").withValues("running", "pending") ); - for (Map.Entry tagFilter : tags.entrySet()) { + for (Map.Entry> tagFilter : tags.entrySet()) { // for a given tag key, OR relationship for multiple different values describeInstancesRequest.withFilters( new Filter("tag:" + tagFilter.getKey()).withValues(tagFilter.getValue()) diff --git a/plugins/discovery-ec2/src/test/java/org/elasticsearch/discovery/ec2/AmazonEC2Mock.java b/plugins/discovery-ec2/src/test/java/org/elasticsearch/discovery/ec2/AmazonEC2Mock.java index 050a25bb18dc3..34ad449d06e8d 100644 --- a/plugins/discovery-ec2/src/test/java/org/elasticsearch/discovery/ec2/AmazonEC2Mock.java +++ b/plugins/discovery-ec2/src/test/java/org/elasticsearch/discovery/ec2/AmazonEC2Mock.java @@ -32,14 +32,27 @@ import com.amazonaws.services.ec2.model.AllocateHostsRequest; import com.amazonaws.services.ec2.model.AllocateHostsResult; import com.amazonaws.services.ec2.model.AssignPrivateIpAddressesRequest; +import com.amazonaws.services.ec2.model.AssignPrivateIpAddressesResult; +import com.amazonaws.services.ec2.model.AssignIpv6AddressesRequest; +import com.amazonaws.services.ec2.model.AssignIpv6AddressesResult; import com.amazonaws.services.ec2.model.AssociateAddressRequest; import com.amazonaws.services.ec2.model.AssociateAddressResult; +import com.amazonaws.services.ec2.model.AssociateVpcCidrBlockRequest; +import com.amazonaws.services.ec2.model.AssociateVpcCidrBlockResult; +import com.amazonaws.services.ec2.model.AssociateSubnetCidrBlockRequest; +import com.amazonaws.services.ec2.model.AssociateSubnetCidrBlockResult; +import com.amazonaws.services.ec2.model.AssociateIamInstanceProfileRequest; +import com.amazonaws.services.ec2.model.AssociateIamInstanceProfileResult; +import com.amazonaws.services.ec2.model.AcceptReservedInstancesExchangeQuoteRequest; +import com.amazonaws.services.ec2.model.AcceptReservedInstancesExchangeQuoteResult; import com.amazonaws.services.ec2.model.AssociateDhcpOptionsRequest; +import com.amazonaws.services.ec2.model.AssociateDhcpOptionsResult; import com.amazonaws.services.ec2.model.AssociateRouteTableRequest; import com.amazonaws.services.ec2.model.AssociateRouteTableResult; import com.amazonaws.services.ec2.model.AttachClassicLinkVpcRequest; import com.amazonaws.services.ec2.model.AttachClassicLinkVpcResult; import com.amazonaws.services.ec2.model.AttachInternetGatewayRequest; +import com.amazonaws.services.ec2.model.AttachInternetGatewayResult; import com.amazonaws.services.ec2.model.AttachNetworkInterfaceRequest; import com.amazonaws.services.ec2.model.AttachNetworkInterfaceResult; import com.amazonaws.services.ec2.model.AttachVolumeRequest; @@ -47,13 +60,17 @@ import com.amazonaws.services.ec2.model.AttachVpnGatewayRequest; import com.amazonaws.services.ec2.model.AttachVpnGatewayResult; import com.amazonaws.services.ec2.model.AuthorizeSecurityGroupEgressRequest; +import com.amazonaws.services.ec2.model.AuthorizeSecurityGroupEgressResult; import com.amazonaws.services.ec2.model.AuthorizeSecurityGroupIngressRequest; +import com.amazonaws.services.ec2.model.AuthorizeSecurityGroupIngressResult; import com.amazonaws.services.ec2.model.BundleInstanceRequest; import com.amazonaws.services.ec2.model.BundleInstanceResult; import com.amazonaws.services.ec2.model.CancelBundleTaskRequest; import 
com.amazonaws.services.ec2.model.CancelBundleTaskResult; import com.amazonaws.services.ec2.model.CancelConversionTaskRequest; +import com.amazonaws.services.ec2.model.CancelConversionTaskResult; import com.amazonaws.services.ec2.model.CancelExportTaskRequest; +import com.amazonaws.services.ec2.model.CancelExportTaskResult; import com.amazonaws.services.ec2.model.CancelImportTaskRequest; import com.amazonaws.services.ec2.model.CancelImportTaskResult; import com.amazonaws.services.ec2.model.CancelReservedInstancesListingRequest; @@ -69,9 +86,17 @@ import com.amazonaws.services.ec2.model.CopySnapshotRequest; import com.amazonaws.services.ec2.model.CopySnapshotResult; import com.amazonaws.services.ec2.model.CreateCustomerGatewayRequest; +import com.amazonaws.services.ec2.model.CreateDefaultVpcResult; +import com.amazonaws.services.ec2.model.CreateDefaultVpcRequest; import com.amazonaws.services.ec2.model.CreateCustomerGatewayResult; import com.amazonaws.services.ec2.model.CreateDhcpOptionsRequest; import com.amazonaws.services.ec2.model.CreateDhcpOptionsResult; +import com.amazonaws.services.ec2.model.CreateEgressOnlyInternetGatewayRequest; +import com.amazonaws.services.ec2.model.CreateEgressOnlyInternetGatewayResult; +import com.amazonaws.services.ec2.model.CreateFpgaImageRequest; +import com.amazonaws.services.ec2.model.CreateFpgaImageResult; +import com.amazonaws.services.ec2.model.CreateNetworkInterfacePermissionRequest; +import com.amazonaws.services.ec2.model.CreateNetworkInterfacePermissionResult; import com.amazonaws.services.ec2.model.CreateFlowLogsRequest; import com.amazonaws.services.ec2.model.CreateFlowLogsResult; import com.amazonaws.services.ec2.model.CreateImageRequest; @@ -85,11 +110,13 @@ import com.amazonaws.services.ec2.model.CreateNatGatewayRequest; import com.amazonaws.services.ec2.model.CreateNatGatewayResult; import com.amazonaws.services.ec2.model.CreateNetworkAclEntryRequest; +import com.amazonaws.services.ec2.model.CreateNetworkAclEntryResult; import com.amazonaws.services.ec2.model.CreateNetworkAclRequest; import com.amazonaws.services.ec2.model.CreateNetworkAclResult; import com.amazonaws.services.ec2.model.CreateNetworkInterfaceRequest; import com.amazonaws.services.ec2.model.CreateNetworkInterfaceResult; import com.amazonaws.services.ec2.model.CreatePlacementGroupRequest; +import com.amazonaws.services.ec2.model.CreatePlacementGroupResult; import com.amazonaws.services.ec2.model.CreateReservedInstancesListingRequest; import com.amazonaws.services.ec2.model.CreateReservedInstancesListingResult; import com.amazonaws.services.ec2.model.CreateRouteRequest; @@ -105,6 +132,7 @@ import com.amazonaws.services.ec2.model.CreateSubnetRequest; import com.amazonaws.services.ec2.model.CreateSubnetResult; import com.amazonaws.services.ec2.model.CreateTagsRequest; +import com.amazonaws.services.ec2.model.CreateTagsResult; import com.amazonaws.services.ec2.model.CreateVolumeRequest; import com.amazonaws.services.ec2.model.CreateVolumeResult; import com.amazonaws.services.ec2.model.CreateVpcEndpointRequest; @@ -116,37 +144,63 @@ import com.amazonaws.services.ec2.model.CreateVpnConnectionRequest; import com.amazonaws.services.ec2.model.CreateVpnConnectionResult; import com.amazonaws.services.ec2.model.CreateVpnConnectionRouteRequest; +import com.amazonaws.services.ec2.model.CreateVpnConnectionRouteResult; import com.amazonaws.services.ec2.model.CreateVpnGatewayRequest; import com.amazonaws.services.ec2.model.CreateVpnGatewayResult; import 
com.amazonaws.services.ec2.model.DeleteCustomerGatewayRequest; +import com.amazonaws.services.ec2.model.DeleteCustomerGatewayResult; import com.amazonaws.services.ec2.model.DeleteDhcpOptionsRequest; +import com.amazonaws.services.ec2.model.DeleteDhcpOptionsResult; +import com.amazonaws.services.ec2.model.DeleteEgressOnlyInternetGatewayRequest; +import com.amazonaws.services.ec2.model.DeleteEgressOnlyInternetGatewayResult; +import com.amazonaws.services.ec2.model.DeleteNetworkInterfacePermissionRequest; +import com.amazonaws.services.ec2.model.DeleteNetworkInterfacePermissionResult; import com.amazonaws.services.ec2.model.DeleteFlowLogsRequest; import com.amazonaws.services.ec2.model.DeleteFlowLogsResult; import com.amazonaws.services.ec2.model.DeleteInternetGatewayRequest; +import com.amazonaws.services.ec2.model.DeleteInternetGatewayResult; import com.amazonaws.services.ec2.model.DeleteKeyPairRequest; +import com.amazonaws.services.ec2.model.DeleteKeyPairResult; import com.amazonaws.services.ec2.model.DeleteNatGatewayRequest; import com.amazonaws.services.ec2.model.DeleteNatGatewayResult; import com.amazonaws.services.ec2.model.DeleteNetworkAclEntryRequest; +import com.amazonaws.services.ec2.model.DeleteNetworkAclEntryResult; import com.amazonaws.services.ec2.model.DeleteNetworkAclRequest; +import com.amazonaws.services.ec2.model.DeleteNetworkAclResult; import com.amazonaws.services.ec2.model.DeleteNetworkInterfaceRequest; +import com.amazonaws.services.ec2.model.DeleteNetworkInterfaceResult; import com.amazonaws.services.ec2.model.DeletePlacementGroupRequest; +import com.amazonaws.services.ec2.model.DeletePlacementGroupResult; import com.amazonaws.services.ec2.model.DeleteRouteRequest; +import com.amazonaws.services.ec2.model.DeleteRouteResult; import com.amazonaws.services.ec2.model.DeleteRouteTableRequest; +import com.amazonaws.services.ec2.model.DeleteRouteTableResult; import com.amazonaws.services.ec2.model.DeleteSecurityGroupRequest; +import com.amazonaws.services.ec2.model.DeleteSecurityGroupResult; import com.amazonaws.services.ec2.model.DeleteSnapshotRequest; +import com.amazonaws.services.ec2.model.DeleteSnapshotResult; import com.amazonaws.services.ec2.model.DeleteSpotDatafeedSubscriptionRequest; +import com.amazonaws.services.ec2.model.DeleteSpotDatafeedSubscriptionResult; import com.amazonaws.services.ec2.model.DeleteSubnetRequest; +import com.amazonaws.services.ec2.model.DeleteSubnetResult; import com.amazonaws.services.ec2.model.DeleteTagsRequest; +import com.amazonaws.services.ec2.model.DeleteTagsResult; import com.amazonaws.services.ec2.model.DeleteVolumeRequest; +import com.amazonaws.services.ec2.model.DeleteVolumeResult; import com.amazonaws.services.ec2.model.DeleteVpcEndpointsRequest; import com.amazonaws.services.ec2.model.DeleteVpcEndpointsResult; import com.amazonaws.services.ec2.model.DeleteVpcPeeringConnectionRequest; import com.amazonaws.services.ec2.model.DeleteVpcPeeringConnectionResult; import com.amazonaws.services.ec2.model.DeleteVpcRequest; +import com.amazonaws.services.ec2.model.DeleteVpcResult; import com.amazonaws.services.ec2.model.DeleteVpnConnectionRequest; +import com.amazonaws.services.ec2.model.DeleteVpnConnectionResult; import com.amazonaws.services.ec2.model.DeleteVpnConnectionRouteRequest; +import com.amazonaws.services.ec2.model.DeleteVpnConnectionRouteResult; import com.amazonaws.services.ec2.model.DeleteVpnGatewayRequest; +import com.amazonaws.services.ec2.model.DeleteVpnGatewayResult; import 
com.amazonaws.services.ec2.model.DeregisterImageRequest; +import com.amazonaws.services.ec2.model.DeregisterImageResult; import com.amazonaws.services.ec2.model.DescribeAccountAttributesRequest; import com.amazonaws.services.ec2.model.DescribeAccountAttributesResult; import com.amazonaws.services.ec2.model.DescribeAddressesRequest; @@ -163,12 +217,26 @@ import com.amazonaws.services.ec2.model.DescribeCustomerGatewaysResult; import com.amazonaws.services.ec2.model.DescribeDhcpOptionsRequest; import com.amazonaws.services.ec2.model.DescribeDhcpOptionsResult; +import com.amazonaws.services.ec2.model.DescribeEgressOnlyInternetGatewaysRequest; +import com.amazonaws.services.ec2.model.DescribeEgressOnlyInternetGatewaysResult; import com.amazonaws.services.ec2.model.DescribeExportTasksRequest; import com.amazonaws.services.ec2.model.DescribeExportTasksResult; +import com.amazonaws.services.ec2.model.DescribeElasticGpusRequest; +import com.amazonaws.services.ec2.model.DescribeElasticGpusResult; +import com.amazonaws.services.ec2.model.DescribeFpgaImagesRequest; +import com.amazonaws.services.ec2.model.DescribeFpgaImagesResult; +import com.amazonaws.services.ec2.model.DescribeHostReservationOfferingsRequest; +import com.amazonaws.services.ec2.model.DescribeHostReservationOfferingsResult; +import com.amazonaws.services.ec2.model.DescribeHostReservationsRequest; +import com.amazonaws.services.ec2.model.DescribeHostReservationsResult; +import com.amazonaws.services.ec2.model.DescribeIdentityIdFormatRequest; +import com.amazonaws.services.ec2.model.DescribeIdentityIdFormatResult; import com.amazonaws.services.ec2.model.DescribeFlowLogsRequest; import com.amazonaws.services.ec2.model.DescribeFlowLogsResult; import com.amazonaws.services.ec2.model.DescribeHostsRequest; import com.amazonaws.services.ec2.model.DescribeHostsResult; +import com.amazonaws.services.ec2.model.DescribeIamInstanceProfileAssociationsRequest; +import com.amazonaws.services.ec2.model.DescribeIamInstanceProfileAssociationsResult; import com.amazonaws.services.ec2.model.DescribeIdFormatRequest; import com.amazonaws.services.ec2.model.DescribeIdFormatResult; import com.amazonaws.services.ec2.model.DescribeImageAttributeRequest; @@ -199,6 +267,8 @@ import com.amazonaws.services.ec2.model.DescribeNetworkInterfaceAttributeResult; import com.amazonaws.services.ec2.model.DescribeNetworkInterfacesRequest; import com.amazonaws.services.ec2.model.DescribeNetworkInterfacesResult; +import com.amazonaws.services.ec2.model.DescribeNetworkInterfacePermissionsRequest; +import com.amazonaws.services.ec2.model.DescribeNetworkInterfacePermissionsResult; import com.amazonaws.services.ec2.model.DescribePlacementGroupsRequest; import com.amazonaws.services.ec2.model.DescribePlacementGroupsResult; import com.amazonaws.services.ec2.model.DescribePrefixListsRequest; @@ -221,6 +291,10 @@ import com.amazonaws.services.ec2.model.DescribeScheduledInstancesResult; import com.amazonaws.services.ec2.model.DescribeSecurityGroupsRequest; import com.amazonaws.services.ec2.model.DescribeSecurityGroupsResult; +import com.amazonaws.services.ec2.model.DescribeStaleSecurityGroupsRequest; +import com.amazonaws.services.ec2.model.DescribeStaleSecurityGroupsResult; +import com.amazonaws.services.ec2.model.DescribeSecurityGroupReferencesRequest; +import com.amazonaws.services.ec2.model.DescribeSecurityGroupReferencesResult; import com.amazonaws.services.ec2.model.DescribeSnapshotAttributeRequest; import com.amazonaws.services.ec2.model.DescribeSnapshotAttributeResult; import 
com.amazonaws.services.ec2.model.DescribeSnapshotsRequest; @@ -245,6 +319,8 @@ import com.amazonaws.services.ec2.model.DescribeVolumeAttributeResult; import com.amazonaws.services.ec2.model.DescribeVolumeStatusRequest; import com.amazonaws.services.ec2.model.DescribeVolumeStatusResult; +import com.amazonaws.services.ec2.model.DescribeVolumesModificationsRequest; +import com.amazonaws.services.ec2.model.DescribeVolumesModificationsResult; import com.amazonaws.services.ec2.model.DescribeVolumesRequest; import com.amazonaws.services.ec2.model.DescribeVolumesResult; import com.amazonaws.services.ec2.model.DescribeVpcAttributeRequest; @@ -268,21 +344,35 @@ import com.amazonaws.services.ec2.model.DetachClassicLinkVpcRequest; import com.amazonaws.services.ec2.model.DetachClassicLinkVpcResult; import com.amazonaws.services.ec2.model.DetachInternetGatewayRequest; +import com.amazonaws.services.ec2.model.DetachInternetGatewayResult; import com.amazonaws.services.ec2.model.DetachNetworkInterfaceRequest; +import com.amazonaws.services.ec2.model.DetachNetworkInterfaceResult; import com.amazonaws.services.ec2.model.DetachVolumeRequest; import com.amazonaws.services.ec2.model.DetachVolumeResult; import com.amazonaws.services.ec2.model.DetachVpnGatewayRequest; +import com.amazonaws.services.ec2.model.DetachVpnGatewayResult; import com.amazonaws.services.ec2.model.DisableVgwRoutePropagationRequest; +import com.amazonaws.services.ec2.model.DisableVgwRoutePropagationResult; import com.amazonaws.services.ec2.model.DisableVpcClassicLinkDnsSupportRequest; import com.amazonaws.services.ec2.model.DisableVpcClassicLinkDnsSupportResult; import com.amazonaws.services.ec2.model.DisableVpcClassicLinkRequest; import com.amazonaws.services.ec2.model.DisableVpcClassicLinkResult; import com.amazonaws.services.ec2.model.DisassociateAddressRequest; +import com.amazonaws.services.ec2.model.DisassociateAddressResult; import com.amazonaws.services.ec2.model.DisassociateRouteTableRequest; +import com.amazonaws.services.ec2.model.DisassociateRouteTableResult; +import com.amazonaws.services.ec2.model.DisassociateIamInstanceProfileRequest; +import com.amazonaws.services.ec2.model.DisassociateIamInstanceProfileResult; +import com.amazonaws.services.ec2.model.DisassociateVpcCidrBlockRequest; +import com.amazonaws.services.ec2.model.DisassociateVpcCidrBlockResult; +import com.amazonaws.services.ec2.model.DisassociateSubnetCidrBlockRequest; +import com.amazonaws.services.ec2.model.DisassociateSubnetCidrBlockResult; import com.amazonaws.services.ec2.model.DryRunResult; import com.amazonaws.services.ec2.model.DryRunSupportedRequest; import com.amazonaws.services.ec2.model.EnableVgwRoutePropagationRequest; +import com.amazonaws.services.ec2.model.EnableVgwRoutePropagationResult; import com.amazonaws.services.ec2.model.EnableVolumeIORequest; +import com.amazonaws.services.ec2.model.EnableVolumeIOResult; import com.amazonaws.services.ec2.model.EnableVpcClassicLinkDnsSupportRequest; import com.amazonaws.services.ec2.model.EnableVpcClassicLinkDnsSupportResult; import com.amazonaws.services.ec2.model.EnableVpcClassicLinkRequest; @@ -290,8 +380,14 @@ import com.amazonaws.services.ec2.model.Filter; import com.amazonaws.services.ec2.model.GetConsoleOutputRequest; import com.amazonaws.services.ec2.model.GetConsoleOutputResult; +import com.amazonaws.services.ec2.model.GetConsoleScreenshotRequest; +import com.amazonaws.services.ec2.model.GetConsoleScreenshotResult; +import com.amazonaws.services.ec2.model.GetHostReservationPurchasePreviewRequest; 
+import com.amazonaws.services.ec2.model.GetHostReservationPurchasePreviewResult; import com.amazonaws.services.ec2.model.GetPasswordDataRequest; import com.amazonaws.services.ec2.model.GetPasswordDataResult; +import com.amazonaws.services.ec2.model.GetReservedInstancesExchangeQuoteRequest; +import com.amazonaws.services.ec2.model.GetReservedInstancesExchangeQuoteResult; import com.amazonaws.services.ec2.model.ImportImageRequest; import com.amazonaws.services.ec2.model.ImportImageResult; import com.amazonaws.services.ec2.model.ImportInstanceRequest; @@ -308,19 +404,31 @@ import com.amazonaws.services.ec2.model.ModifyHostsRequest; import com.amazonaws.services.ec2.model.ModifyHostsResult; import com.amazonaws.services.ec2.model.ModifyIdFormatRequest; +import com.amazonaws.services.ec2.model.ModifyIdFormatResult; import com.amazonaws.services.ec2.model.ModifyImageAttributeRequest; +import com.amazonaws.services.ec2.model.ModifyImageAttributeResult; import com.amazonaws.services.ec2.model.ModifyInstanceAttributeRequest; +import com.amazonaws.services.ec2.model.ModifyInstanceAttributeResult; import com.amazonaws.services.ec2.model.ModifyInstancePlacementRequest; import com.amazonaws.services.ec2.model.ModifyInstancePlacementResult; +import com.amazonaws.services.ec2.model.ModifyIdentityIdFormatRequest; +import com.amazonaws.services.ec2.model.ModifyIdentityIdFormatResult; import com.amazonaws.services.ec2.model.ModifyNetworkInterfaceAttributeRequest; +import com.amazonaws.services.ec2.model.ModifyNetworkInterfaceAttributeResult; import com.amazonaws.services.ec2.model.ModifyReservedInstancesRequest; import com.amazonaws.services.ec2.model.ModifyReservedInstancesResult; import com.amazonaws.services.ec2.model.ModifySnapshotAttributeRequest; +import com.amazonaws.services.ec2.model.ModifySnapshotAttributeResult; import com.amazonaws.services.ec2.model.ModifySpotFleetRequestRequest; import com.amazonaws.services.ec2.model.ModifySpotFleetRequestResult; import com.amazonaws.services.ec2.model.ModifySubnetAttributeRequest; +import com.amazonaws.services.ec2.model.ModifySubnetAttributeResult; import com.amazonaws.services.ec2.model.ModifyVolumeAttributeRequest; +import com.amazonaws.services.ec2.model.ModifyVolumeAttributeResult; +import com.amazonaws.services.ec2.model.ModifyVolumeRequest; +import com.amazonaws.services.ec2.model.ModifyVolumeResult; import com.amazonaws.services.ec2.model.ModifyVpcAttributeRequest; +import com.amazonaws.services.ec2.model.ModifyVpcAttributeResult; import com.amazonaws.services.ec2.model.ModifyVpcEndpointRequest; import com.amazonaws.services.ec2.model.ModifyVpcEndpointResult; import com.amazonaws.services.ec2.model.MonitorInstancesRequest; @@ -331,34 +439,51 @@ import com.amazonaws.services.ec2.model.PurchaseReservedInstancesOfferingResult; import com.amazonaws.services.ec2.model.PurchaseScheduledInstancesRequest; import com.amazonaws.services.ec2.model.PurchaseScheduledInstancesResult; +import com.amazonaws.services.ec2.model.PurchaseHostReservationRequest; +import com.amazonaws.services.ec2.model.PurchaseHostReservationResult; import com.amazonaws.services.ec2.model.RebootInstancesRequest; +import com.amazonaws.services.ec2.model.RebootInstancesResult; import com.amazonaws.services.ec2.model.RegisterImageRequest; import com.amazonaws.services.ec2.model.RegisterImageResult; import com.amazonaws.services.ec2.model.RejectVpcPeeringConnectionRequest; import com.amazonaws.services.ec2.model.RejectVpcPeeringConnectionResult; +import 
com.amazonaws.services.ec2.model.ModifyVpcPeeringConnectionOptionsRequest; +import com.amazonaws.services.ec2.model.ModifyVpcPeeringConnectionOptionsResult; import com.amazonaws.services.ec2.model.ReleaseAddressRequest; +import com.amazonaws.services.ec2.model.ReleaseAddressResult; import com.amazonaws.services.ec2.model.ReleaseHostsRequest; import com.amazonaws.services.ec2.model.ReleaseHostsResult; +import com.amazonaws.services.ec2.model.ReplaceIamInstanceProfileAssociationRequest; +import com.amazonaws.services.ec2.model.ReplaceIamInstanceProfileAssociationResult; import com.amazonaws.services.ec2.model.ReplaceNetworkAclAssociationRequest; import com.amazonaws.services.ec2.model.ReplaceNetworkAclAssociationResult; import com.amazonaws.services.ec2.model.ReplaceNetworkAclEntryRequest; +import com.amazonaws.services.ec2.model.ReplaceNetworkAclEntryResult; import com.amazonaws.services.ec2.model.ReplaceRouteRequest; +import com.amazonaws.services.ec2.model.ReplaceRouteResult; import com.amazonaws.services.ec2.model.ReplaceRouteTableAssociationRequest; import com.amazonaws.services.ec2.model.ReplaceRouteTableAssociationResult; import com.amazonaws.services.ec2.model.ReportInstanceStatusRequest; +import com.amazonaws.services.ec2.model.ReportInstanceStatusResult; import com.amazonaws.services.ec2.model.RequestSpotFleetRequest; import com.amazonaws.services.ec2.model.RequestSpotFleetResult; import com.amazonaws.services.ec2.model.RequestSpotInstancesRequest; import com.amazonaws.services.ec2.model.RequestSpotInstancesResult; import com.amazonaws.services.ec2.model.Reservation; import com.amazonaws.services.ec2.model.ResetImageAttributeRequest; +import com.amazonaws.services.ec2.model.ResetImageAttributeResult; import com.amazonaws.services.ec2.model.ResetInstanceAttributeRequest; +import com.amazonaws.services.ec2.model.ResetInstanceAttributeResult; import com.amazonaws.services.ec2.model.ResetNetworkInterfaceAttributeRequest; +import com.amazonaws.services.ec2.model.ResetNetworkInterfaceAttributeResult; import com.amazonaws.services.ec2.model.ResetSnapshotAttributeRequest; +import com.amazonaws.services.ec2.model.ResetSnapshotAttributeResult; import com.amazonaws.services.ec2.model.RestoreAddressToClassicRequest; import com.amazonaws.services.ec2.model.RestoreAddressToClassicResult; import com.amazonaws.services.ec2.model.RevokeSecurityGroupEgressRequest; +import com.amazonaws.services.ec2.model.RevokeSecurityGroupEgressResult; import com.amazonaws.services.ec2.model.RevokeSecurityGroupIngressRequest; +import com.amazonaws.services.ec2.model.RevokeSecurityGroupIngressResult; import com.amazonaws.services.ec2.model.RunInstancesRequest; import com.amazonaws.services.ec2.model.RunInstancesResult; import com.amazonaws.services.ec2.model.RunScheduledInstancesRequest; @@ -370,9 +495,17 @@ import com.amazonaws.services.ec2.model.Tag; import com.amazonaws.services.ec2.model.TerminateInstancesRequest; import com.amazonaws.services.ec2.model.TerminateInstancesResult; +import com.amazonaws.services.ec2.model.UnassignIpv6AddressesRequest; +import com.amazonaws.services.ec2.model.UnassignIpv6AddressesResult; import com.amazonaws.services.ec2.model.UnassignPrivateIpAddressesRequest; +import com.amazonaws.services.ec2.model.UnassignPrivateIpAddressesResult; import com.amazonaws.services.ec2.model.UnmonitorInstancesRequest; import com.amazonaws.services.ec2.model.UnmonitorInstancesResult; +import com.amazonaws.services.ec2.model.UpdateSecurityGroupRuleDescriptionsEgressRequest; +import 
com.amazonaws.services.ec2.model.UpdateSecurityGroupRuleDescriptionsEgressResult; +import com.amazonaws.services.ec2.model.UpdateSecurityGroupRuleDescriptionsIngressRequest; +import com.amazonaws.services.ec2.model.UpdateSecurityGroupRuleDescriptionsIngressResult; +import com.amazonaws.services.ec2.waiters.AmazonEC2Waiters; import org.apache.logging.log4j.Logger; import org.elasticsearch.common.logging.ESLoggerFactory; @@ -518,7 +651,13 @@ public void setRegion(Region region) throws IllegalArgumentException { } @Override - public void rebootInstances(RebootInstancesRequest rebootInstancesRequest) throws AmazonServiceException, AmazonClientException { + public AcceptReservedInstancesExchangeQuoteResult acceptReservedInstancesExchangeQuote( + AcceptReservedInstancesExchangeQuoteRequest acceptReservedInstancesExchangeQuoteRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + + @Override + public RebootInstancesResult rebootInstances(RebootInstancesRequest rebootInstancesRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -548,7 +687,7 @@ public DetachVolumeResult detachVolume(DetachVolumeRequest detachVolumeRequest) } @Override - public void deleteKeyPair(DeleteKeyPairRequest deleteKeyPairRequest) throws AmazonServiceException, AmazonClientException { + public DeleteKeyPairResult deleteKeyPair(DeleteKeyPairRequest deleteKeyPairRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -562,6 +701,16 @@ public UnmonitorInstancesResult unmonitorInstances(UnmonitorInstancesRequest unm throw new UnsupportedOperationException("Not supported in mock"); } + @Override + public UpdateSecurityGroupRuleDescriptionsIngressResult updateSecurityGroupRuleDescriptionsIngress(UpdateSecurityGroupRuleDescriptionsIngressRequest updateSecurityGroupRuleDescriptionsIngressRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + + @Override + public UpdateSecurityGroupRuleDescriptionsEgressResult updateSecurityGroupRuleDescriptionsEgress(UpdateSecurityGroupRuleDescriptionsEgressRequest updateSecurityGroupRuleDescriptionsEgressRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + @Override public AttachVpnGatewayResult attachVpnGateway(AttachVpnGatewayRequest attachVpnGatewayRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); @@ -573,7 +722,7 @@ public CreateImageResult createImage(CreateImageRequest createImageRequest) thro } @Override - public void deleteSecurityGroup(DeleteSecurityGroupRequest deleteSecurityGroupRequest) throws AmazonServiceException, AmazonClientException { + public DeleteSecurityGroupResult deleteSecurityGroup(DeleteSecurityGroupRequest deleteSecurityGroupRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -583,12 +732,12 @@ public CreateInstanceExportTaskResult createInstanceExportTask(CreateInstanceExp } @Override - public void authorizeSecurityGroupEgress(AuthorizeSecurityGroupEgressRequest authorizeSecurityGroupEgressRequest) throws AmazonServiceException, AmazonClientException { + public AuthorizeSecurityGroupEgressResult 
authorizeSecurityGroupEgress(AuthorizeSecurityGroupEgressRequest authorizeSecurityGroupEgressRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @Override - public void associateDhcpOptions(AssociateDhcpOptionsRequest associateDhcpOptionsRequest) throws AmazonServiceException, AmazonClientException { + public AssociateDhcpOptionsResult associateDhcpOptions(AssociateDhcpOptionsRequest associateDhcpOptionsRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -597,6 +746,11 @@ public GetPasswordDataResult getPasswordData(GetPasswordDataRequest getPasswordD throw new UnsupportedOperationException("Not supported in mock"); } + @Override + public GetReservedInstancesExchangeQuoteResult getReservedInstancesExchangeQuote(GetReservedInstancesExchangeQuoteRequest getReservedInstancesExchangeQuoteRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + @Override public StopInstancesResult stopInstances(StopInstancesRequest stopInstancesRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); @@ -608,12 +762,12 @@ public ImportKeyPairResult importKeyPair(ImportKeyPairRequest importKeyPairReque } @Override - public void deleteNetworkInterface(DeleteNetworkInterfaceRequest deleteNetworkInterfaceRequest) throws AmazonServiceException, AmazonClientException { + public DeleteNetworkInterfaceResult deleteNetworkInterface(DeleteNetworkInterfaceRequest deleteNetworkInterfaceRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @Override - public void modifyVpcAttribute(ModifyVpcAttributeRequest modifyVpcAttributeRequest) throws AmazonServiceException, AmazonClientException { + public ModifyVpcAttributeResult modifyVpcAttribute(ModifyVpcAttributeRequest modifyVpcAttributeRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -637,6 +791,11 @@ public DescribeNetworkInterfacesResult describeNetworkInterfaces(DescribeNetwork throw new UnsupportedOperationException("Not supported in mock"); } + @Override + public DescribeNetworkInterfacePermissionsResult describeNetworkInterfacePermissions(DescribeNetworkInterfacePermissionsRequest describeNetworkInterfacePermissionsRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + @Override public DescribeRegionsResult describeRegions(DescribeRegionsRequest describeRegionsRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); @@ -658,12 +817,12 @@ public DeleteVpcEndpointsResult deleteVpcEndpoints(DeleteVpcEndpointsRequest del } @Override - public void resetSnapshotAttribute(ResetSnapshotAttributeRequest resetSnapshotAttributeRequest) throws AmazonServiceException, AmazonClientException { + public ResetSnapshotAttributeResult resetSnapshotAttribute(ResetSnapshotAttributeRequest resetSnapshotAttributeRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @Override - public void deleteRoute(DeleteRouteRequest deleteRouteRequest) throws AmazonServiceException, AmazonClientException { 
+ public DeleteRouteResult deleteRoute(DeleteRouteRequest deleteRouteRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -683,7 +842,7 @@ public ModifyHostsResult modifyHosts(ModifyHostsRequest modifyHostsRequest) { } @Override - public void modifyIdFormat(ModifyIdFormatRequest modifyIdFormatRequest) { + public ModifyIdFormatResult modifyIdFormat(ModifyIdFormatRequest modifyIdFormatRequest) { throw new UnsupportedOperationException("Not supported in mock"); } @@ -692,23 +851,38 @@ public DescribeSecurityGroupsResult describeSecurityGroups(DescribeSecurityGroup throw new UnsupportedOperationException("Not supported in mock"); } + @Override + public DescribeStaleSecurityGroupsResult describeStaleSecurityGroups(DescribeStaleSecurityGroupsRequest describeStaleSecurityGroupsRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + + @Override + public DescribeSecurityGroupReferencesResult describeSecurityGroupReferences(DescribeSecurityGroupReferencesRequest describeSecurityGroupReferencesRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + @Override public RejectVpcPeeringConnectionResult rejectVpcPeeringConnection(RejectVpcPeeringConnectionRequest rejectVpcPeeringConnectionRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } + @Override + public ModifyVpcPeeringConnectionOptionsResult modifyVpcPeeringConnectionOptions(ModifyVpcPeeringConnectionOptionsRequest modifyVpcPeeringConnectionOptionsRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + @Override public DeleteFlowLogsResult deleteFlowLogs(DeleteFlowLogsRequest deleteFlowLogsRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @Override - public void detachVpnGateway(DetachVpnGatewayRequest detachVpnGatewayRequest) throws AmazonServiceException, AmazonClientException { + public DetachVpnGatewayResult detachVpnGateway(DetachVpnGatewayRequest detachVpnGatewayRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @Override - public void deregisterImage(DeregisterImageRequest deregisterImageRequest) throws AmazonServiceException, AmazonClientException { + public DeregisterImageResult deregisterImage(DeregisterImageRequest deregisterImageRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -718,12 +892,12 @@ public DescribeSpotDatafeedSubscriptionResult describeSpotDatafeedSubscription(D } @Override - public void deleteTags(DeleteTagsRequest deleteTagsRequest) throws AmazonServiceException, AmazonClientException { + public DeleteTagsResult deleteTags(DeleteTagsRequest deleteTagsRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @Override - public void deleteSubnet(DeleteSubnetRequest deleteSubnetRequest) throws AmazonServiceException, AmazonClientException { + public DeleteSubnetResult deleteSubnet(DeleteSubnetRequest deleteSubnetRequest) throws AmazonServiceException, AmazonClientException { throw new 
UnsupportedOperationException("Not supported in mock"); } @@ -743,7 +917,7 @@ public CreateVpnGatewayResult createVpnGateway(CreateVpnGatewayRequest createVpn } @Override - public void enableVolumeIO(EnableVolumeIORequest enableVolumeIORequest) throws AmazonServiceException, AmazonClientException { + public EnableVolumeIOResult enableVolumeIO(EnableVolumeIORequest enableVolumeIORequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -753,7 +927,7 @@ public MoveAddressToVpcResult moveAddressToVpc(MoveAddressToVpcRequest moveAddre } @Override - public void deleteVpnGateway(DeleteVpnGatewayRequest deleteVpnGatewayRequest) throws AmazonServiceException, AmazonClientException { + public DeleteVpnGatewayResult deleteVpnGateway(DeleteVpnGatewayRequest deleteVpnGatewayRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -767,6 +941,11 @@ public DescribeVolumeStatusResult describeVolumeStatus(DescribeVolumeStatusReque throw new UnsupportedOperationException("Not supported in mock"); } + @Override + public DescribeVolumesModificationsResult describeVolumesModifications(DescribeVolumesModificationsRequest describeVolumesModificationsRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + @Override public DescribeImportSnapshotTasksResult describeImportSnapshotTasks(DescribeImportSnapshotTasksRequest describeImportSnapshotTasksRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); @@ -778,12 +957,12 @@ public DescribeVpnConnectionsResult describeVpnConnections(DescribeVpnConnection } @Override - public void resetImageAttribute(ResetImageAttributeRequest resetImageAttributeRequest) throws AmazonServiceException, AmazonClientException { + public ResetImageAttributeResult resetImageAttribute(ResetImageAttributeRequest resetImageAttributeRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @Override - public void enableVgwRoutePropagation(EnableVgwRoutePropagationRequest enableVgwRoutePropagationRequest) throws AmazonServiceException, AmazonClientException { + public EnableVgwRoutePropagationResult enableVgwRoutePropagation(EnableVgwRoutePropagationRequest enableVgwRoutePropagationRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -793,7 +972,7 @@ public CreateSnapshotResult createSnapshot(CreateSnapshotRequest createSnapshotR } @Override - public void deleteVolume(DeleteVolumeRequest deleteVolumeRequest) throws AmazonServiceException, AmazonClientException { + public DeleteVolumeResult deleteVolume(DeleteVolumeRequest deleteVolumeRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -813,7 +992,12 @@ public CancelSpotFleetRequestsResult cancelSpotFleetRequests(CancelSpotFleetRequ } @Override - public void unassignPrivateIpAddresses(UnassignPrivateIpAddressesRequest unassignPrivateIpAddressesRequest) throws AmazonServiceException, AmazonClientException { + public UnassignPrivateIpAddressesResult unassignPrivateIpAddresses(UnassignPrivateIpAddressesRequest unassignPrivateIpAddressesRequest) throws AmazonServiceException, AmazonClientException { + throw 
new UnsupportedOperationException("Not supported in mock"); + } + + @Override + public UnassignIpv6AddressesResult unassignIpv6Addresses(UnassignIpv6AddressesRequest unassignIpv6AddressesRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -823,7 +1007,7 @@ public DescribeVpcsResult describeVpcs(DescribeVpcsRequest describeVpcsRequest) } @Override - public void cancelConversionTask(CancelConversionTaskRequest cancelConversionTaskRequest) throws AmazonServiceException, AmazonClientException { + public CancelConversionTaskResult cancelConversionTask(CancelConversionTaskRequest cancelConversionTaskRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -833,12 +1017,27 @@ public AssociateAddressResult associateAddress(AssociateAddressRequest associate } @Override - public void deleteCustomerGateway(DeleteCustomerGatewayRequest deleteCustomerGatewayRequest) throws AmazonServiceException, AmazonClientException { + public AssociateIamInstanceProfileResult associateIamInstanceProfile(AssociateIamInstanceProfileRequest associateIamInstanceRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + + @Override + public AssociateVpcCidrBlockResult associateVpcCidrBlock(AssociateVpcCidrBlockRequest associateVpcCidrBlockRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + + @Override + public AssociateSubnetCidrBlockResult associateSubnetCidrBlock(AssociateSubnetCidrBlockRequest associateSubnetCidrBlockRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @Override - public void createNetworkAclEntry(CreateNetworkAclEntryRequest createNetworkAclEntryRequest) throws AmazonServiceException, AmazonClientException { + public DeleteCustomerGatewayResult deleteCustomerGateway(DeleteCustomerGatewayRequest deleteCustomerGatewayRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + + @Override + public CreateNetworkAclEntryResult createNetworkAclEntry(CreateNetworkAclEntryRequest createNetworkAclEntryRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -853,7 +1052,32 @@ public DescribeExportTasksResult describeExportTasks(DescribeExportTasksRequest } @Override - public void detachInternetGateway(DetachInternetGatewayRequest detachInternetGatewayRequest) throws AmazonServiceException, AmazonClientException { + public DescribeElasticGpusResult describeElasticGpus(DescribeElasticGpusRequest describeElasticGpusRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + + @Override + public DescribeFpgaImagesResult describeFpgaImages(DescribeFpgaImagesRequest describeFpgaImagesRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + + @Override + public DescribeHostReservationOfferingsResult describeHostReservationOfferings(DescribeHostReservationOfferingsRequest describeHostReservationOfferingsRequest) throws AmazonServiceException, AmazonClientException { + throw new 
UnsupportedOperationException("Not supported in mock"); + } + + @Override + public DescribeHostReservationsResult describeHostReservations(DescribeHostReservationsRequest describeHostReservationsRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + + @Override + public DescribeIdentityIdFormatResult describeIdentityIdFormat(DescribeIdentityIdFormatRequest describeIdentityIdFormatRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + + @Override + public DetachInternetGatewayResult detachInternetGateway(DetachInternetGatewayRequest detachInternetGatewayRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -883,7 +1107,7 @@ public DescribeReservedInstancesListingsResult describeReservedInstancesListings } @Override - public void reportInstanceStatus(ReportInstanceStatusRequest reportInstanceStatusRequest) throws AmazonServiceException, AmazonClientException { + public ReportInstanceStatusResult reportInstanceStatus(ReportInstanceStatusRequest reportInstanceStatusRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -896,6 +1120,12 @@ public DescribeRouteTablesResult describeRouteTables(DescribeRouteTablesRequest public DescribeDhcpOptionsResult describeDhcpOptions(DescribeDhcpOptionsRequest describeDhcpOptionsRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } + + @Override + public DescribeEgressOnlyInternetGatewaysResult describeEgressOnlyInternetGateways( + DescribeEgressOnlyInternetGatewaysRequest describeEgressOnlyInternetGatewaysRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } @Override public MonitorInstancesResult monitorInstances(MonitorInstancesRequest monitorInstancesRequest) throws AmazonServiceException, AmazonClientException { @@ -933,17 +1163,22 @@ public ImportInstanceResult importInstance(ImportInstanceRequest importInstanceR } @Override - public void revokeSecurityGroupIngress(RevokeSecurityGroupIngressRequest revokeSecurityGroupIngressRequest) throws AmazonServiceException, AmazonClientException { + public DeleteVpcPeeringConnectionResult deleteVpcPeeringConnection(DeleteVpcPeeringConnectionRequest deleteVpcPeeringConnectionRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @Override - public DeleteVpcPeeringConnectionResult deleteVpcPeeringConnection(DeleteVpcPeeringConnectionRequest deleteVpcPeeringConnectionRequest) throws AmazonServiceException, AmazonClientException { + public GetConsoleOutputResult getConsoleOutput(GetConsoleOutputRequest getConsoleOutputRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @Override - public GetConsoleOutputResult getConsoleOutput(GetConsoleOutputRequest getConsoleOutputRequest) throws AmazonServiceException, AmazonClientException { + public GetConsoleScreenshotResult getConsoleScreenshot(GetConsoleScreenshotRequest getConsoleScreenshotRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + + @Override + 
public GetHostReservationPurchasePreviewResult getHostReservationPurchasePreview(GetHostReservationPurchasePreviewRequest getHostReservationPurchasePreviewRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -953,17 +1188,17 @@ public CreateInternetGatewayResult createInternetGateway(CreateInternetGatewayRe } @Override - public void deleteVpnConnectionRoute(DeleteVpnConnectionRouteRequest deleteVpnConnectionRouteRequest) throws AmazonServiceException, AmazonClientException { + public DeleteVpnConnectionRouteResult deleteVpnConnectionRoute(DeleteVpnConnectionRouteRequest deleteVpnConnectionRouteRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @Override - public void detachNetworkInterface(DetachNetworkInterfaceRequest detachNetworkInterfaceRequest) throws AmazonServiceException, AmazonClientException { + public DetachNetworkInterfaceResult detachNetworkInterface(DetachNetworkInterfaceRequest detachNetworkInterfaceRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @Override - public void modifyImageAttribute(ModifyImageAttributeRequest modifyImageAttributeRequest) throws AmazonServiceException, AmazonClientException { + public ModifyImageAttributeResult modifyImageAttribute(ModifyImageAttributeRequest modifyImageAttributeRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -972,18 +1207,38 @@ public CreateCustomerGatewayResult createCustomerGateway(CreateCustomerGatewayRe throw new UnsupportedOperationException("Not supported in mock"); } + @Override + public CreateEgressOnlyInternetGatewayResult createEgressOnlyInternetGateway(CreateEgressOnlyInternetGatewayRequest createEgressOnlyInternetGatewayRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + + @Override + public CreateFpgaImageResult createFpgaImage(CreateFpgaImageRequest createFpgaImageRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + + @Override + public CreateNetworkInterfacePermissionResult createNetworkInterfacePermission(CreateNetworkInterfacePermissionRequest createNetworkInterfacePermissionRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + + @Override + public CreateDefaultVpcResult createDefaultVpc(CreateDefaultVpcRequest createDefaultVpcRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + @Override public CreateSpotDatafeedSubscriptionResult createSpotDatafeedSubscription(CreateSpotDatafeedSubscriptionRequest createSpotDatafeedSubscriptionRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @Override - public void attachInternetGateway(AttachInternetGatewayRequest attachInternetGatewayRequest) throws AmazonServiceException, AmazonClientException { + public AttachInternetGatewayResult attachInternetGateway(AttachInternetGatewayRequest attachInternetGatewayRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } 
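The long run of `AmazonEC2Mock` changes above and below follows one mechanical pattern from the AWS SDK upgrade: operations that previously returned `void` on the `AmazonEC2` interface now return a `*Result` object, so every stubbed override changes only its return type while continuing to throw `UnsupportedOperationException`, and the newly added SDK operations receive stubs of the same shape. A before/after fragment of that pattern, taken from this file's context (`deleteTags` chosen arbitrarily for illustration):

```java
// Before the SDK upgrade: the interface method returned void.
@Override
public void deleteTags(DeleteTagsRequest deleteTagsRequest) throws AmazonServiceException, AmazonClientException {
    throw new UnsupportedOperationException("Not supported in mock");
}

// After: the same operation returns a Result object, so the stub only changes its signature.
@Override
public DeleteTagsResult deleteTags(DeleteTagsRequest deleteTagsRequest) throws AmazonServiceException, AmazonClientException {
    throw new UnsupportedOperationException("Not supported in mock");
}
```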
@Override - public void deleteVpnConnection(DeleteVpnConnectionRequest deleteVpnConnectionRequest) throws AmazonServiceException, AmazonClientException { + public DeleteVpnConnectionResult deleteVpnConnection(DeleteVpnConnectionRequest deleteVpnConnectionRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -1058,12 +1313,12 @@ public AssociateRouteTableResult associateRouteTable(AssociateRouteTableRequest } @Override - public void modifyVolumeAttribute(ModifyVolumeAttributeRequest modifyVolumeAttributeRequest) throws AmazonServiceException, AmazonClientException { + public ModifyVolumeAttributeResult modifyVolumeAttribute(ModifyVolumeAttributeRequest modifyVolumeAttributeRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @Override - public void deleteNetworkAcl(DeleteNetworkAclRequest deleteNetworkAclRequest) throws AmazonServiceException, AmazonClientException { + public DeleteNetworkAclResult deleteNetworkAcl(DeleteNetworkAclRequest deleteNetworkAclRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -1078,7 +1333,7 @@ public StartInstancesResult startInstances(StartInstancesRequest startInstancesR } @Override - public void modifyInstanceAttribute(ModifyInstanceAttributeRequest modifyInstanceAttributeRequest) throws AmazonServiceException, AmazonClientException { + public ModifyInstanceAttributeResult modifyInstanceAttribute(ModifyInstanceAttributeRequest modifyInstanceAttributeRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -1087,18 +1342,33 @@ public ModifyInstancePlacementResult modifyInstancePlacement(ModifyInstancePlace throw new UnsupportedOperationException("Not supported in mock"); } + @Override + public ModifyIdentityIdFormatResult modifyIdentityIdFormat(ModifyIdentityIdFormatRequest modifyIdentityIdFormatRequest) { + throw new UnsupportedOperationException("Not supported in mock"); + } + @Override public CancelReservedInstancesListingResult cancelReservedInstancesListing(CancelReservedInstancesListingRequest cancelReservedInstancesListingRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @Override - public void deleteDhcpOptions(DeleteDhcpOptionsRequest deleteDhcpOptionsRequest) throws AmazonServiceException, AmazonClientException { + public DeleteDhcpOptionsResult deleteDhcpOptions(DeleteDhcpOptionsRequest deleteDhcpOptionsRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @Override - public void authorizeSecurityGroupIngress(AuthorizeSecurityGroupIngressRequest authorizeSecurityGroupIngressRequest) throws AmazonServiceException, AmazonClientException { + public DeleteEgressOnlyInternetGatewayResult deleteEgressOnlyInternetGateway(DeleteEgressOnlyInternetGatewayRequest deleteEgressOnlyInternetGatewayRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + + @Override + public DeleteNetworkInterfacePermissionResult deleteNetworkInterfacePermission(DeleteNetworkInterfacePermissionRequest deleteNetworkInterfacePermissionRequest) throws AmazonServiceException, AmazonClientException { + throw new 
UnsupportedOperationException("Not supported in mock"); + } + + @Override + public AuthorizeSecurityGroupIngressResult authorizeSecurityGroupIngress(AuthorizeSecurityGroupIngressRequest authorizeSecurityGroupIngressRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -1118,7 +1388,7 @@ public DescribeCustomerGatewaysResult describeCustomerGateways(DescribeCustomerG } @Override - public void cancelExportTask(CancelExportTaskRequest cancelExportTaskRequest) throws AmazonServiceException, AmazonClientException { + public CancelExportTaskResult cancelExportTask(CancelExportTaskRequest cancelExportTaskRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -1143,12 +1413,12 @@ public DescribeVpcClassicLinkResult describeVpcClassicLink(DescribeVpcClassicLin } @Override - public void modifyNetworkInterfaceAttribute(ModifyNetworkInterfaceAttributeRequest modifyNetworkInterfaceAttributeRequest) throws AmazonServiceException, AmazonClientException { + public ModifyNetworkInterfaceAttributeResult modifyNetworkInterfaceAttribute(ModifyNetworkInterfaceAttributeRequest modifyNetworkInterfaceAttributeRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @Override - public void deleteRouteTable(DeleteRouteTableRequest deleteRouteTableRequest) throws AmazonServiceException, AmazonClientException { + public DeleteRouteTableResult deleteRouteTable(DeleteRouteTableRequest deleteRouteTableRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -1168,7 +1438,7 @@ public RequestSpotInstancesResult requestSpotInstances(RequestSpotInstancesReque } @Override - public void createTags(CreateTagsRequest createTagsRequest) throws AmazonServiceException, AmazonClientException { + public CreateTagsResult createTags(CreateTagsRequest createTagsRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -1183,7 +1453,7 @@ public AttachNetworkInterfaceResult attachNetworkInterface(AttachNetworkInterfac } @Override - public void replaceRoute(ReplaceRouteRequest replaceRouteRequest) throws AmazonServiceException, AmazonClientException { + public ReplaceRouteResult replaceRoute(ReplaceRouteRequest replaceRouteRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -1198,7 +1468,7 @@ public CancelBundleTaskResult cancelBundleTask(CancelBundleTaskRequest cancelBun } @Override - public void disableVgwRoutePropagation(DisableVgwRoutePropagationRequest disableVgwRoutePropagationRequest) throws AmazonServiceException, AmazonClientException { + public DisableVgwRoutePropagationResult disableVgwRoutePropagation(DisableVgwRoutePropagationRequest disableVgwRoutePropagationRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -1228,7 +1498,12 @@ public PurchaseScheduledInstancesResult purchaseScheduledInstances(PurchaseSched } @Override - public void modifySnapshotAttribute(ModifySnapshotAttributeRequest modifySnapshotAttributeRequest) throws AmazonServiceException, AmazonClientException { + public PurchaseHostReservationResult purchaseHostReservation(PurchaseHostReservationRequest 
purchaseHostReservationRequest) { + throw new UnsupportedOperationException("Not supported in mock"); + } + + @Override + public ModifySnapshotAttributeResult modifySnapshotAttribute(ModifySnapshotAttributeRequest modifySnapshotAttributeRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -1248,12 +1523,12 @@ public ModifyVpcEndpointResult modifyVpcEndpoint(ModifyVpcEndpointRequest modify } @Override - public void deleteSpotDatafeedSubscription(DeleteSpotDatafeedSubscriptionRequest deleteSpotDatafeedSubscriptionRequest) throws AmazonServiceException, AmazonClientException { + public DeleteSpotDatafeedSubscriptionResult deleteSpotDatafeedSubscription(DeleteSpotDatafeedSubscriptionRequest deleteSpotDatafeedSubscriptionRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @Override - public void deleteInternetGateway(DeleteInternetGatewayRequest deleteInternetGatewayRequest) throws AmazonServiceException, AmazonClientException { + public DeleteInternetGatewayResult deleteInternetGateway(DeleteInternetGatewayRequest deleteInternetGatewayRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -1288,7 +1563,22 @@ public ConfirmProductInstanceResult confirmProductInstance(ConfirmProductInstanc } @Override - public void disassociateRouteTable(DisassociateRouteTableRequest disassociateRouteTableRequest) throws AmazonServiceException, AmazonClientException { + public DisassociateRouteTableResult disassociateRouteTable(DisassociateRouteTableRequest disassociateRouteTableRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + + @Override + public DisassociateIamInstanceProfileResult disassociateIamInstanceProfile(DisassociateIamInstanceProfileRequest disassociateIamInstanceProfileRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + + @Override + public DisassociateVpcCidrBlockResult disassociateVpcCidrBlock(DisassociateVpcCidrBlockRequest disassociateVpcCidrBlockRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + + @Override + public DisassociateSubnetCidrBlockResult disassociateSubnetCidrBlock(DisassociateSubnetCidrBlockRequest disassociateSubnetCidrBlockRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -1298,12 +1588,12 @@ public DescribeVpcAttributeResult describeVpcAttribute(DescribeVpcAttributeReque } @Override - public void revokeSecurityGroupEgress(RevokeSecurityGroupEgressRequest revokeSecurityGroupEgressRequest) throws AmazonServiceException, AmazonClientException { + public RevokeSecurityGroupEgressResult revokeSecurityGroupEgress(RevokeSecurityGroupEgressRequest revokeSecurityGroupEgressRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @Override - public void deleteNetworkAclEntry(DeleteNetworkAclEntryRequest deleteNetworkAclEntryRequest) throws AmazonServiceException, AmazonClientException { + public DeleteNetworkAclEntryResult deleteNetworkAclEntry(DeleteNetworkAclEntryRequest deleteNetworkAclEntryRequest) throws AmazonServiceException, 
AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -1312,6 +1602,11 @@ public CreateVolumeResult createVolume(CreateVolumeRequest createVolumeRequest) throw new UnsupportedOperationException("Not supported in mock"); } + @Override + public ModifyVolumeResult modifyVolume(ModifyVolumeRequest modifyVolumeRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + @Override public DescribeInstanceStatusResult describeInstanceStatus(DescribeInstanceStatusRequest describeInstanceStatusRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); @@ -1333,7 +1628,12 @@ public DescribeReservedInstancesOfferingsResult describeReservedInstancesOfferin } @Override - public void assignPrivateIpAddresses(AssignPrivateIpAddressesRequest assignPrivateIpAddressesRequest) throws AmazonServiceException, AmazonClientException { + public AssignPrivateIpAddressesResult assignPrivateIpAddresses(AssignPrivateIpAddressesRequest assignPrivateIpAddressesRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + + @Override + public AssignIpv6AddressesResult assignIpv6Addresses(AssignIpv6AddressesRequest assignIpv6AddressesRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -1343,7 +1643,7 @@ public DescribeSpotFleetRequestHistoryResult describeSpotFleetRequestHistory(Des } @Override - public void deleteSnapshot(DeleteSnapshotRequest deleteSnapshotRequest) throws AmazonServiceException, AmazonClientException { + public DeleteSnapshotResult deleteSnapshot(DeleteSnapshotRequest deleteSnapshotRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -1353,12 +1653,12 @@ public ReplaceNetworkAclAssociationResult replaceNetworkAclAssociation(ReplaceNe } @Override - public void disassociateAddress(DisassociateAddressRequest disassociateAddressRequest) throws AmazonServiceException, AmazonClientException { + public DisassociateAddressResult disassociateAddress(DisassociateAddressRequest disassociateAddressRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @Override - public void createPlacementGroup(CreatePlacementGroupRequest createPlacementGroupRequest) throws AmazonServiceException, AmazonClientException { + public CreatePlacementGroupResult createPlacementGroup(CreatePlacementGroupRequest createPlacementGroupRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -1368,17 +1668,17 @@ public BundleInstanceResult bundleInstance(BundleInstanceRequest bundleInstanceR } @Override - public void deletePlacementGroup(DeletePlacementGroupRequest deletePlacementGroupRequest) throws AmazonServiceException, AmazonClientException { + public DeletePlacementGroupResult deletePlacementGroup(DeletePlacementGroupRequest deletePlacementGroupRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @Override - public void modifySubnetAttribute(ModifySubnetAttributeRequest modifySubnetAttributeRequest) throws AmazonServiceException, AmazonClientException { + public 
ModifySubnetAttributeResult modifySubnetAttribute(ModifySubnetAttributeRequest modifySubnetAttributeRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @Override - public void deleteVpc(DeleteVpcRequest deleteVpcRequest) throws AmazonServiceException, AmazonClientException { + public DeleteVpcResult deleteVpc(DeleteVpcRequest deleteVpcRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -1398,7 +1698,7 @@ public AllocateAddressResult allocateAddress(AllocateAddressRequest allocateAddr } @Override - public void releaseAddress(ReleaseAddressRequest releaseAddressRequest) throws AmazonServiceException, AmazonClientException { + public ReleaseAddressResult releaseAddress(ReleaseAddressRequest releaseAddressRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -1408,7 +1708,12 @@ public ReleaseHostsResult releaseHosts(ReleaseHostsRequest releaseHostsRequest) } @Override - public void resetInstanceAttribute(ResetInstanceAttributeRequest resetInstanceAttributeRequest) throws AmazonServiceException, AmazonClientException { + public ReplaceIamInstanceProfileAssociationResult replaceIamInstanceProfileAssociation(ReplaceIamInstanceProfileAssociationRequest replaceIamInstanceProfileAssociationRequest) { + throw new UnsupportedOperationException("Not supported in mock"); + } + + @Override + public ResetInstanceAttributeResult resetInstanceAttribute(ResetInstanceAttributeRequest resetInstanceAttributeRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -1423,7 +1728,7 @@ public CreateNatGatewayResult createNatGateway(CreateNatGatewayRequest createNat } @Override - public void replaceNetworkAclEntry(ReplaceNetworkAclEntryRequest replaceNetworkAclEntryRequest) throws AmazonServiceException, AmazonClientException { + public ReplaceNetworkAclEntryResult replaceNetworkAclEntry(ReplaceNetworkAclEntryRequest replaceNetworkAclEntryRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -1443,7 +1748,7 @@ public RegisterImageResult registerImage(RegisterImageRequest registerImageReque } @Override - public void resetNetworkInterfaceAttribute(ResetNetworkInterfaceAttributeRequest resetNetworkInterfaceAttributeRequest) throws AmazonServiceException, AmazonClientException { + public ResetNetworkInterfaceAttributeResult resetNetworkInterfaceAttribute(ResetNetworkInterfaceAttributeRequest resetNetworkInterfaceAttributeRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -1458,7 +1763,7 @@ public EnableVpcClassicLinkDnsSupportResult enableVpcClassicLinkDnsSupport(Enabl } @Override - public void createVpnConnectionRoute(CreateVpnConnectionRouteRequest createVpnConnectionRouteRequest) throws AmazonServiceException, AmazonClientException { + public CreateVpnConnectionRouteResult createVpnConnectionRoute(CreateVpnConnectionRouteRequest createVpnConnectionRouteRequest) throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -1608,7 +1913,12 @@ public DescribeBundleTasksResult describeBundleTasks() throws AmazonServiceExcep } @Override - public void revokeSecurityGroupIngress() 
throws AmazonServiceException, AmazonClientException { + public RevokeSecurityGroupIngressResult revokeSecurityGroupIngress(RevokeSecurityGroupIngressRequest revokeSecurityGroupIngressRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + + @Override + public RevokeSecurityGroupIngressResult revokeSecurityGroupIngress() throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -1652,6 +1962,12 @@ public DescribeHostsResult describeHosts() { throw new UnsupportedOperationException("Not supported in mock"); } + @Override + public DescribeIamInstanceProfileAssociationsResult describeIamInstanceProfileAssociations( + DescribeIamInstanceProfileAssociationsRequest describeIamInstanceProfileAssociationsRequest) throws AmazonServiceException, AmazonClientException { + throw new UnsupportedOperationException("Not supported in mock"); + } + @Override public DescribeIdFormatResult describeIdFormat(DescribeIdFormatRequest describeIdFormatRequest) { throw new UnsupportedOperationException("Not supported in mock"); @@ -1733,7 +2049,7 @@ public DescribeReservedInstancesModificationsResult describeReservedInstancesMod } @Override - public void deleteSpotDatafeedSubscription() throws AmazonServiceException, AmazonClientException { + public DeleteSpotDatafeedSubscriptionResult deleteSpotDatafeedSubscription() throws AmazonServiceException, AmazonClientException { throw new UnsupportedOperationException("Not supported in mock"); } @@ -1797,6 +2113,11 @@ public void shutdown() { throw new UnsupportedOperationException("Not supported in mock"); } + @Override + public AmazonEC2Waiters waiters() { + throw new UnsupportedOperationException("Not supported in mock"); + } + @Override public ResponseMetadata getCachedResponseMetadata(AmazonWebServiceRequest request) { throw new UnsupportedOperationException("Not supported in mock"); diff --git a/plugins/discovery-ec2/src/test/java/org/elasticsearch/discovery/ec2/Ec2DiscoveryTests.java b/plugins/discovery-ec2/src/test/java/org/elasticsearch/discovery/ec2/Ec2DiscoveryTests.java index f3685278dc6b9..e7986cb878e41 100644 --- a/plugins/discovery-ec2/src/test/java/org/elasticsearch/discovery/ec2/Ec2DiscoveryTests.java +++ b/plugins/discovery-ec2/src/test/java/org/elasticsearch/discovery/ec2/Ec2DiscoveryTests.java @@ -229,7 +229,7 @@ public void testFilterByTags() throws InterruptedException { public void testFilterByMultipleTags() throws InterruptedException { int nodes = randomIntBetween(5, 10); Settings nodeSettings = Settings.builder() - .putArray(AwsEc2Service.TAG_SETTING.getKey() + "stage", "prod", "preprod") + .putList(AwsEc2Service.TAG_SETTING.getKey() + "stage", "prod", "preprod") .build(); int prodInstances = 0; diff --git a/plugins/discovery-file/build.gradle b/plugins/discovery-file/build.gradle index 91457924351bf..145d959fa4100 100644 --- a/plugins/discovery-file/build.gradle +++ b/plugins/discovery-file/build.gradle @@ -52,6 +52,7 @@ setupSeedNodeAndUnicastHostsFile.doLast { integTestCluster { dependsOn setupSeedNodeAndUnicastHostsFile clusterName = 'discovery-file-test-cluster' + setting 'discovery.zen.hosts_provider', 'file' extraConfigFile 'discovery-file/unicast_hosts.txt', srcUnicastHostsFile } diff --git a/plugins/discovery-file/src/main/java/org/elasticsearch/discovery/file/FileBasedDiscoveryPlugin.java b/plugins/discovery-file/src/main/java/org/elasticsearch/discovery/file/FileBasedDiscoveryPlugin.java 
index 0cd8176df83f5..b5d16a547d514 100644 --- a/plugins/discovery-file/src/main/java/org/elasticsearch/discovery/file/FileBasedDiscoveryPlugin.java +++ b/plugins/discovery-file/src/main/java/org/elasticsearch/discovery/file/FileBasedDiscoveryPlugin.java @@ -42,6 +42,7 @@ import org.elasticsearch.watcher.ResourceWatcherService; import java.io.IOException; +import java.nio.file.Path; import java.util.Collection; import java.util.Collections; import java.util.Map; @@ -61,10 +62,12 @@ public class FileBasedDiscoveryPlugin extends Plugin implements DiscoveryPlugin private static final DeprecationLogger deprecationLogger = new DeprecationLogger(logger); private final Settings settings; + private final Path configPath; private ExecutorService fileBasedDiscoveryExecutorService; - public FileBasedDiscoveryPlugin(Settings settings) { + public FileBasedDiscoveryPlugin(Settings settings, Path configPath) { this.settings = settings; + this.configPath = configPath; } @Override @@ -96,22 +99,7 @@ public Map> getZenHostsProviders(Transpor NetworkService networkService) { return Collections.singletonMap( "file", - () -> new FileBasedUnicastHostsProvider(settings, transportService, fileBasedDiscoveryExecutorService)); - } - - @Override - public Settings additionalSettings() { - // For 5.0, the hosts provider was "zen", but this was before the discovery.zen.hosts_provider - // setting existed. This check looks for the legacy zen, and sets the file hosts provider if not set - String discoveryType = DiscoveryModule.DISCOVERY_TYPE_SETTING.get(settings); - if (discoveryType.equals("zen")) { - deprecationLogger.deprecated("Using " + DiscoveryModule.DISCOVERY_TYPE_SETTING.getKey() + - " setting to set hosts provider is deprecated. " + - "Set \"" + DiscoveryModule.DISCOVERY_HOSTS_PROVIDER_SETTING.getKey() + ": file\" instead"); - if (DiscoveryModule.DISCOVERY_HOSTS_PROVIDER_SETTING.exists(settings) == false) { - return Settings.builder().put(DiscoveryModule.DISCOVERY_HOSTS_PROVIDER_SETTING.getKey(), "file").build(); - } - } - return Settings.EMPTY; + () -> new FileBasedUnicastHostsProvider( + new Environment(settings, configPath), transportService, fileBasedDiscoveryExecutorService)); } } diff --git a/plugins/discovery-file/src/main/java/org/elasticsearch/discovery/file/FileBasedUnicastHostsProvider.java b/plugins/discovery-file/src/main/java/org/elasticsearch/discovery/file/FileBasedUnicastHostsProvider.java index 196e98d658217..ee5f6c08b91ce 100644 --- a/plugins/discovery-file/src/main/java/org/elasticsearch/discovery/file/FileBasedUnicastHostsProvider.java +++ b/plugins/discovery-file/src/main/java/org/elasticsearch/discovery/file/FileBasedUnicastHostsProvider.java @@ -71,11 +71,11 @@ class FileBasedUnicastHostsProvider extends AbstractComponent implements Unicast private final TimeValue resolveTimeout; - FileBasedUnicastHostsProvider(Settings settings, TransportService transportService, ExecutorService executorService) { - super(settings); + FileBasedUnicastHostsProvider(Environment environment, TransportService transportService, ExecutorService executorService) { + super(environment.settings()); this.transportService = transportService; this.executorService = executorService; - this.unicastHostsFilePath = new Environment(settings).configFile().resolve("discovery-file").resolve(UNICAST_HOSTS_FILE); + this.unicastHostsFilePath = environment.configFile().resolve("discovery-file").resolve(UNICAST_HOSTS_FILE); this.resolveTimeout = DISCOVERY_ZEN_PING_UNICAST_HOSTS_RESOLVE_TIMEOUT.get(settings); } diff --git 
a/plugins/discovery-file/src/test/java/org/elasticsearch/discovery/file/FileBasedDiscoveryPluginTests.java b/plugins/discovery-file/src/test/java/org/elasticsearch/discovery/file/FileBasedDiscoveryPluginTests.java deleted file mode 100644 index 7a7ee9dbd037e..0000000000000 --- a/plugins/discovery-file/src/test/java/org/elasticsearch/discovery/file/FileBasedDiscoveryPluginTests.java +++ /dev/null @@ -1,45 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.discovery.file; - -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.discovery.DiscoveryModule; -import org.elasticsearch.test.ESTestCase; - -import java.io.IOException; - -public class FileBasedDiscoveryPluginTests extends ESTestCase { - - public void testHostsProviderBwc() { - FileBasedDiscoveryPlugin plugin = new FileBasedDiscoveryPlugin(Settings.EMPTY); - Settings additionalSettings = plugin.additionalSettings(); - assertEquals("file", additionalSettings.get(DiscoveryModule.DISCOVERY_HOSTS_PROVIDER_SETTING.getKey())); - assertWarnings("Using discovery.type setting to set hosts provider is deprecated. " + - "Set \"discovery.zen.hosts_provider: file\" instead"); - } - - public void testHostsProviderExplicit() { - Settings settings = Settings.builder().put(DiscoveryModule.DISCOVERY_HOSTS_PROVIDER_SETTING.getKey(), "foo").build(); - FileBasedDiscoveryPlugin plugin = new FileBasedDiscoveryPlugin(settings); - assertEquals(Settings.EMPTY, plugin.additionalSettings()); - assertWarnings("Using discovery.type setting to set hosts provider is deprecated. 
" + - "Set \"discovery.zen.hosts_provider: file\" instead"); - } -} diff --git a/plugins/discovery-file/src/test/java/org/elasticsearch/discovery/file/FileBasedUnicastHostsProviderTests.java b/plugins/discovery-file/src/test/java/org/elasticsearch/discovery/file/FileBasedUnicastHostsProviderTests.java index 4395d16db377a..3ddd15a7b4cf3 100644 --- a/plugins/discovery-file/src/test/java/org/elasticsearch/discovery/file/FileBasedUnicastHostsProviderTests.java +++ b/plugins/discovery-file/src/test/java/org/elasticsearch/discovery/file/FileBasedUnicastHostsProviderTests.java @@ -27,6 +27,7 @@ import org.elasticsearch.common.transport.TransportAddress; import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.env.Environment; +import org.elasticsearch.env.TestEnvironment; import org.elasticsearch.indices.breaker.NoneCircuitBreakerService; import org.elasticsearch.test.ESTestCase; import org.elasticsearch.test.transport.MockTransportService; @@ -126,7 +127,8 @@ public void testUnicastHostsDoesNotExist() throws Exception { final Settings settings = Settings.builder() .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir()) .build(); - final FileBasedUnicastHostsProvider provider = new FileBasedUnicastHostsProvider(settings, transportService, executorService); + final Environment environment = TestEnvironment.newEnvironment(settings); + final FileBasedUnicastHostsProvider provider = new FileBasedUnicastHostsProvider(environment, transportService, executorService); final List nodes = provider.buildDynamicNodes(); assertEquals(0, nodes.size()); } @@ -152,13 +154,20 @@ private List setupAndRunHostProvider(final List hostEntri final Settings settings = Settings.builder() .put(Environment.PATH_HOME_SETTING.getKey(), homeDir) .build(); - final Path configDir = homeDir.resolve("config").resolve("discovery-file"); - Files.createDirectories(configDir); - final Path unicastHostsPath = configDir.resolve(UNICAST_HOSTS_FILE); + final Path configPath; + if (randomBoolean()) { + configPath = homeDir.resolve("config"); + } else { + configPath = createTempDir(); + } + final Path discoveryFilePath = configPath.resolve("discovery-file"); + Files.createDirectories(discoveryFilePath); + final Path unicastHostsPath = discoveryFilePath.resolve(UNICAST_HOSTS_FILE); try (BufferedWriter writer = Files.newBufferedWriter(unicastHostsPath)) { writer.write(String.join("\n", hostEntries)); } - return new FileBasedUnicastHostsProvider(settings, transportService, executorService).buildDynamicNodes(); + return new FileBasedUnicastHostsProvider( + new Environment(settings, configPath), transportService, executorService).buildDynamicNodes(); } } diff --git a/plugins/discovery-gce/src/main/java/org/elasticsearch/cloud/gce/GceInstancesService.java b/plugins/discovery-gce/src/main/java/org/elasticsearch/cloud/gce/GceInstancesService.java index 6f7313051fea2..4c90fd5c3731b 100644 --- a/plugins/discovery-gce/src/main/java/org/elasticsearch/cloud/gce/GceInstancesService.java +++ b/plugins/discovery-gce/src/main/java/org/elasticsearch/cloud/gce/GceInstancesService.java @@ -65,7 +65,7 @@ public interface GceInstancesService { /** * cloud.gce.max_wait: How long exponential backoff should retry before definitely failing. - * It's a total time since the the initial call is made. + * It's a total time since the initial call is made. * A negative value will retry indefinitely. Defaults to `-1s` (retry indefinitely). 
*/ Setting MAX_WAIT_SETTING = diff --git a/plugins/discovery-gce/src/test/java/org/elasticsearch/discovery/gce/GceDiscoveryTests.java b/plugins/discovery-gce/src/test/java/org/elasticsearch/discovery/gce/GceDiscoveryTests.java index 5ae30c74a3226..31ea9bdb1c21e 100644 --- a/plugins/discovery-gce/src/test/java/org/elasticsearch/discovery/gce/GceDiscoveryTests.java +++ b/plugins/discovery-gce/src/test/java/org/elasticsearch/discovery/gce/GceDiscoveryTests.java @@ -128,7 +128,7 @@ public void testNodesWithDifferentTagsAndOneTagSet() { Settings nodeSettings = Settings.builder() .put(GceInstancesServiceImpl.PROJECT_SETTING.getKey(), projectName) .put(GceInstancesServiceImpl.ZONE_SETTING.getKey(), "europe-west1-b") - .putArray(GceUnicastHostsProvider.TAGS_SETTING.getKey(), "elasticsearch") + .putList(GceUnicastHostsProvider.TAGS_SETTING.getKey(), "elasticsearch") .build(); mock = new GceInstancesServiceMock(nodeSettings); List discoveryNodes = buildDynamicNodes(mock, nodeSettings); @@ -140,7 +140,7 @@ public void testNodesWithDifferentTagsAndTwoTagSet() { Settings nodeSettings = Settings.builder() .put(GceInstancesServiceImpl.PROJECT_SETTING.getKey(), projectName) .put(GceInstancesServiceImpl.ZONE_SETTING.getKey(), "europe-west1-b") - .putArray(GceUnicastHostsProvider.TAGS_SETTING.getKey(), "elasticsearch", "dev") + .putList(GceUnicastHostsProvider.TAGS_SETTING.getKey(), "elasticsearch", "dev") .build(); mock = new GceInstancesServiceMock(nodeSettings); List discoveryNodes = buildDynamicNodes(mock, nodeSettings); @@ -162,7 +162,7 @@ public void testNodesWithSameTagsAndOneTagSet() { Settings nodeSettings = Settings.builder() .put(GceInstancesServiceImpl.PROJECT_SETTING.getKey(), projectName) .put(GceInstancesServiceImpl.ZONE_SETTING.getKey(), "europe-west1-b") - .putArray(GceUnicastHostsProvider.TAGS_SETTING.getKey(), "elasticsearch") + .putList(GceUnicastHostsProvider.TAGS_SETTING.getKey(), "elasticsearch") .build(); mock = new GceInstancesServiceMock(nodeSettings); List discoveryNodes = buildDynamicNodes(mock, nodeSettings); @@ -173,7 +173,7 @@ public void testNodesWithSameTagsAndTwoTagsSet() { Settings nodeSettings = Settings.builder() .put(GceInstancesServiceImpl.PROJECT_SETTING.getKey(), projectName) .put(GceInstancesServiceImpl.ZONE_SETTING.getKey(), "europe-west1-b") - .putArray(GceUnicastHostsProvider.TAGS_SETTING.getKey(), "elasticsearch", "dev") + .putList(GceUnicastHostsProvider.TAGS_SETTING.getKey(), "elasticsearch", "dev") .build(); mock = new GceInstancesServiceMock(nodeSettings); List discoveryNodes = buildDynamicNodes(mock, nodeSettings); @@ -183,7 +183,7 @@ public void testNodesWithSameTagsAndTwoTagsSet() { public void testMultipleZonesAndTwoNodesInSameZone() { Settings nodeSettings = Settings.builder() .put(GceInstancesServiceImpl.PROJECT_SETTING.getKey(), projectName) - .putArray(GceInstancesServiceImpl.ZONE_SETTING.getKey(), "us-central1-a", "europe-west1-b") + .putList(GceInstancesServiceImpl.ZONE_SETTING.getKey(), "us-central1-a", "europe-west1-b") .build(); mock = new GceInstancesServiceMock(nodeSettings); List discoveryNodes = buildDynamicNodes(mock, nodeSettings); @@ -193,7 +193,7 @@ public void testMultipleZonesAndTwoNodesInSameZone() { public void testMultipleZonesAndTwoNodesInDifferentZones() { Settings nodeSettings = Settings.builder() .put(GceInstancesServiceImpl.PROJECT_SETTING.getKey(), projectName) - .putArray(GceInstancesServiceImpl.ZONE_SETTING.getKey(), "us-central1-a", "europe-west1-b") + .putList(GceInstancesServiceImpl.ZONE_SETTING.getKey(), "us-central1-a", 
"europe-west1-b") .build(); mock = new GceInstancesServiceMock(nodeSettings); List discoveryNodes = buildDynamicNodes(mock, nodeSettings); @@ -206,7 +206,7 @@ public void testMultipleZonesAndTwoNodesInDifferentZones() { public void testZeroNode43() { Settings nodeSettings = Settings.builder() .put(GceInstancesServiceImpl.PROJECT_SETTING.getKey(), projectName) - .putArray(GceInstancesServiceImpl.ZONE_SETTING.getKey(), "us-central1-a", "us-central1-b") + .putList(GceInstancesServiceImpl.ZONE_SETTING.getKey(), "us-central1-a", "us-central1-b") .build(); mock = new GceInstancesServiceMock(nodeSettings); List discoveryNodes = buildDynamicNodes(mock, nodeSettings); @@ -226,7 +226,7 @@ public void testIllegalSettingsMissingAllRequired() { public void testIllegalSettingsMissingProject() { Settings nodeSettings = Settings.builder() - .putArray(GceInstancesServiceImpl.ZONE_SETTING.getKey(), "us-central1-a", "us-central1-b") + .putList(GceInstancesServiceImpl.ZONE_SETTING.getKey(), "us-central1-a", "us-central1-b") .build(); mock = new GceInstancesServiceMock(nodeSettings); try { @@ -258,7 +258,7 @@ public void testIllegalSettingsMissingZone() { public void testNoRegionReturnsEmptyList() { Settings nodeSettings = Settings.builder() .put(GceInstancesServiceImpl.PROJECT_SETTING.getKey(), projectName) - .putArray(GceInstancesServiceImpl.ZONE_SETTING.getKey(), "europe-west1-b", "us-central1-a") + .putList(GceInstancesServiceImpl.ZONE_SETTING.getKey(), "europe-west1-b", "us-central1-a") .build(); mock = new GceInstancesServiceMock(nodeSettings); List discoveryNodes = buildDynamicNodes(mock, nodeSettings); diff --git a/plugins/examples/rescore/build.gradle b/plugins/examples/rescore/build.gradle new file mode 100644 index 0000000000000..4adeb0c721baf --- /dev/null +++ b/plugins/examples/rescore/build.gradle @@ -0,0 +1,26 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +apply plugin: 'elasticsearch.esplugin' + +esplugin { + name 'example-rescore' + description 'An example plugin implementing rescore and verifying that plugins *can* implement rescore' + classname 'org.elasticsearch.example.rescore.ExampleRescorePlugin' +} diff --git a/plugins/examples/rescore/src/main/java/org/elasticsearch/example/rescore/ExampleRescoreBuilder.java b/plugins/examples/rescore/src/main/java/org/elasticsearch/example/rescore/ExampleRescoreBuilder.java new file mode 100644 index 0000000000000..358d2cb00ab14 --- /dev/null +++ b/plugins/examples/rescore/src/main/java/org/elasticsearch/example/rescore/ExampleRescoreBuilder.java @@ -0,0 +1,232 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. 
Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.example.rescore; + +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.index.Term; +import org.apache.lucene.search.Explanation; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.ScoreDoc; +import org.apache.lucene.search.TopDocs; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ConstructingObjectParser; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.index.fielddata.AtomicFieldData; +import org.elasticsearch.index.fielddata.AtomicNumericFieldData; +import org.elasticsearch.index.fielddata.IndexFieldData; +import org.elasticsearch.index.fielddata.SortedNumericDoubleValues; +import org.elasticsearch.index.query.QueryRewriteContext; +import org.elasticsearch.index.query.QueryShardContext; +import org.elasticsearch.search.rescore.RescoreContext; +import org.elasticsearch.search.rescore.Rescorer; +import org.elasticsearch.search.rescore.RescorerBuilder; + +import java.io.IOException; +import java.util.Arrays; +import java.util.Iterator; +import java.util.Objects; +import java.util.Set; + +import static java.util.Collections.singletonList; +import static org.elasticsearch.common.xcontent.ConstructingObjectParser.constructorArg; +import static org.elasticsearch.common.xcontent.ConstructingObjectParser.optionalConstructorArg; + +/** + * Example rescorer that multiplies the score of the hit by some factor and doesn't resort them. 
+ */ +public class ExampleRescoreBuilder extends RescorerBuilder { + public static final String NAME = "example"; + + private final float factor; + private final String factorField; + + public ExampleRescoreBuilder(float factor, @Nullable String factorField) { + this.factor = factor; + this.factorField = factorField; + } + + ExampleRescoreBuilder(StreamInput in) throws IOException { + super(in); + factor = in.readFloat(); + factorField = in.readOptionalString(); + } + + @Override + protected void doWriteTo(StreamOutput out) throws IOException { + out.writeFloat(factor); + out.writeOptionalString(factorField); + } + + @Override + public String getWriteableName() { + return NAME; + } + + @Override + public RescorerBuilder rewrite(QueryRewriteContext ctx) throws IOException { + return this; + } + + private static final ParseField FACTOR = new ParseField("factor"); + private static final ParseField FACTOR_FIELD = new ParseField("factor_field"); + @Override + protected void doXContent(XContentBuilder builder, Params params) throws IOException { + builder.field(FACTOR.getPreferredName(), factor); + if (factorField != null) { + builder.field(FACTOR_FIELD.getPreferredName(), factorField); + } + } + + private static final ConstructingObjectParser PARSER = new ConstructingObjectParser<>(NAME, + args -> new ExampleRescoreBuilder((float) args[0], (String) args[1])); + static { + PARSER.declareFloat(constructorArg(), FACTOR); + PARSER.declareString(optionalConstructorArg(), FACTOR_FIELD); + } + public static ExampleRescoreBuilder fromXContent(XContentParser parser) { + return PARSER.apply(parser, null); + } + + @Override + public RescoreContext innerBuildContext(int windowSize, QueryShardContext context) throws IOException { + IndexFieldData factorField = + this.factorField == null ? 
null : context.getForField(context.fieldMapper(this.factorField)); + return new ExampleRescoreContext(windowSize, factor, factorField); + } + + @Override + public boolean equals(Object obj) { + if (false == super.equals(obj)) { + return false; + } + ExampleRescoreBuilder other = (ExampleRescoreBuilder) obj; + return factor == other.factor + && Objects.equals(factorField, other.factorField); + } + + @Override + public int hashCode() { + return Objects.hash(super.hashCode(), factor, factorField); + } + + float factor() { + return factor; + } + + @Nullable + String factorField() { + return factorField; + } + + private static class ExampleRescoreContext extends RescoreContext { + private final float factor; + @Nullable + private final IndexFieldData factorField; + + ExampleRescoreContext(int windowSize, float factor, @Nullable IndexFieldData factorField) { + super(windowSize, ExampleRescorer.INSTANCE); + this.factor = factor; + this.factorField = factorField; + } + } + + private static class ExampleRescorer implements Rescorer { + private static final ExampleRescorer INSTANCE = new ExampleRescorer(); + + @Override + public TopDocs rescore(TopDocs topDocs, IndexSearcher searcher, RescoreContext rescoreContext) throws IOException { + ExampleRescoreContext context = (ExampleRescoreContext) rescoreContext; + int end = Math.min(topDocs.scoreDocs.length, rescoreContext.getWindowSize()); + for (int i = 0; i < end; i++) { + topDocs.scoreDocs[i].score *= context.factor; + } + if (context.factorField != null) { + /* + * Since this example looks up a single field value it should + * access them in docId order because that is the order in + * which they are stored on disk and we want reads to be + * forwards and close together if possible. + * + * If accessing multiple fields we'd be better off accessing + * them in (reader, field, docId) order because that is the + * order they are on disk. 
+ */ + ScoreDoc[] sortedByDocId = new ScoreDoc[topDocs.scoreDocs.length]; + System.arraycopy(topDocs.scoreDocs, 0, sortedByDocId, 0, topDocs.scoreDocs.length); + Arrays.sort(sortedByDocId, (a, b) -> a.doc - b.doc); // Safe because doc ids >= 0 + Iterator leaves = searcher.getIndexReader().leaves().iterator(); + LeafReaderContext leaf = null; + SortedNumericDoubleValues data = null; + int endDoc = 0; + for (int i = 0; i < end; i++) { + if (topDocs.scoreDocs[i].doc >= endDoc) { + do { + leaf = leaves.next(); + endDoc = leaf.docBase + leaf.reader().maxDoc(); + } while (topDocs.scoreDocs[i].doc >= endDoc); + AtomicFieldData fd = context.factorField.load(leaf); + if (false == (fd instanceof AtomicNumericFieldData)) { + throw new IllegalArgumentException("[" + context.factorField.getFieldName() + "] is not a number"); + } + data = ((AtomicNumericFieldData) fd).getDoubleValues(); + } + if (false == data.advanceExact(topDocs.scoreDocs[i].doc)) { + throw new IllegalArgumentException("document [" + topDocs.scoreDocs[i].doc + + "] does not have the field [" + context.factorField.getFieldName() + "]"); + } + if (data.docValueCount() > 1) { + throw new IllegalArgumentException("document [" + topDocs.scoreDocs[i].doc + + "] has more than one value for [" + context.factorField.getFieldName() + "]"); + } + topDocs.scoreDocs[i].score *= data.nextValue(); + } + } + // Sort by score descending, then docID ascending, just like lucene's QueryRescorer + Arrays.sort(topDocs.scoreDocs, (a, b) -> { + if (a.score > b.score) { + return -1; + } + if (a.score < b.score) { + return 1; + } + // Safe because doc ids >= 0 + return a.doc - b.doc; + }); + return topDocs; + } + + @Override + public Explanation explain(int topLevelDocId, IndexSearcher searcher, RescoreContext rescoreContext, + Explanation sourceExplanation) throws IOException { + ExampleRescoreContext context = (ExampleRescoreContext) rescoreContext; + // Note that this is inaccurate because it ignores factor field + return Explanation.match(context.factor, "test", singletonList(sourceExplanation)); + } + + @Override + public void extractTerms(IndexSearcher searcher, RescoreContext rescoreContext, Set termsSet) { + // Since we don't use queries there are no terms to extract. + } + } +} diff --git a/plugins/repository-azure/src/test/java/org/elasticsearch/cloud/azure/AbstractAzureIntegTestCase.java b/plugins/examples/rescore/src/main/java/org/elasticsearch/example/rescore/ExampleRescorePlugin.java similarity index 65% rename from plugins/repository-azure/src/test/java/org/elasticsearch/cloud/azure/AbstractAzureIntegTestCase.java rename to plugins/examples/rescore/src/main/java/org/elasticsearch/example/rescore/ExampleRescorePlugin.java index 82c5e6c188b9a..9df2e55c1081b 100644 --- a/plugins/repository-azure/src/test/java/org/elasticsearch/cloud/azure/AbstractAzureIntegTestCase.java +++ b/plugins/examples/rescore/src/main/java/org/elasticsearch/example/rescore/ExampleRescorePlugin.java @@ -17,22 +17,19 @@ * under the License. */ -package org.elasticsearch.cloud.azure; +package org.elasticsearch.example.rescore; -import org.elasticsearch.plugin.repository.azure.AzureRepositoryPlugin; import org.elasticsearch.plugins.Plugin; -import org.elasticsearch.test.ESIntegTestCase; +import org.elasticsearch.plugins.SearchPlugin; -import java.util.Arrays; -import java.util.Collection; +import java.util.List; -/** - * Base class for Azure tests. 
- */ -public abstract class AbstractAzureIntegTestCase extends ESIntegTestCase { +import static java.util.Collections.singletonList; +public class ExampleRescorePlugin extends Plugin implements SearchPlugin { @Override - protected Collection> nodePlugins() { - return Arrays.asList(AzureRepositoryPlugin.class); + public List> getRescorers() { + return singletonList( + new RescorerSpec<>(ExampleRescoreBuilder.NAME, ExampleRescoreBuilder::new, ExampleRescoreBuilder::fromXContent)); } } diff --git a/plugins/examples/rescore/src/test/java/org/elasticsearch/example/rescore/ExampleRescoreBuilderTests.java b/plugins/examples/rescore/src/test/java/org/elasticsearch/example/rescore/ExampleRescoreBuilderTests.java new file mode 100644 index 0000000000000..d9fc4521a3593 --- /dev/null +++ b/plugins/examples/rescore/src/test/java/org/elasticsearch/example/rescore/ExampleRescoreBuilderTests.java @@ -0,0 +1,80 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.example.rescore; + +import org.apache.lucene.search.ScoreDoc; +import org.apache.lucene.search.TopDocs; +import org.elasticsearch.common.io.stream.Writeable.Reader; +import org.elasticsearch.search.rescore.RescoreContext; +import org.elasticsearch.test.AbstractWireSerializingTestCase; +import org.elasticsearch.test.ESTestCase; + +import java.io.IOException; +import java.util.function.Supplier; + +public class ExampleRescoreBuilderTests extends AbstractWireSerializingTestCase { + @Override + protected ExampleRescoreBuilder createTestInstance() { + String factorField = randomBoolean() ? 
null : randomAlphaOfLength(5); + return new ExampleRescoreBuilder(randomFloat(), factorField).windowSize(between(0, Integer.MAX_VALUE)); + } + + @Override + protected Reader instanceReader() { + return ExampleRescoreBuilder::new; + } + + @Override + protected ExampleRescoreBuilder mutateInstance(ExampleRescoreBuilder instance) throws IOException { + @SuppressWarnings("unchecked") + Supplier supplier = randomFrom( + () -> new ExampleRescoreBuilder(instance.factor(), instance.factorField()) + .windowSize(randomValueOtherThan(instance.windowSize(), () -> between(0, Integer.MAX_VALUE))), + () -> new ExampleRescoreBuilder(randomValueOtherThan(instance.factor(), ESTestCase::randomFloat), instance.factorField()) + .windowSize(instance.windowSize()), + () -> new ExampleRescoreBuilder( + instance.factor(), randomValueOtherThan(instance.factorField(), () -> randomAlphaOfLength(5))) + .windowSize(instance.windowSize())); + + return supplier.get(); + } + + public void testRewrite() throws IOException { + ExampleRescoreBuilder builder = createTestInstance(); + assertSame(builder, builder.rewrite(null)); + } + + public void testRescore() throws IOException { + // Always use a factor > 1 so rescored fields are sorted in front of the unrescored fields. + float factor = (float) randomDoubleBetween(1.0d, Float.MAX_VALUE, false); + // Skipping factorField because it is much harder to mock. We'll catch it in an integration test. + String fieldFactor = null; + ExampleRescoreBuilder builder = new ExampleRescoreBuilder(factor, fieldFactor).windowSize(2); + RescoreContext context = builder.buildContext(null); + TopDocs docs = new TopDocs(10, new ScoreDoc[3], 0); + docs.scoreDocs[0] = new ScoreDoc(0, 1.0f); + docs.scoreDocs[1] = new ScoreDoc(1, 1.0f); + docs.scoreDocs[2] = new ScoreDoc(2, 1.0f); + context.rescorer().rescore(docs, null, context); + assertEquals(factor, docs.scoreDocs[0].score, 0.0f); + assertEquals(factor, docs.scoreDocs[1].score, 0.0f); + assertEquals(1.0f, docs.scoreDocs[2].score, 0.0f); + } +} diff --git a/plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/blobstore/AzureOutputStream.java b/plugins/examples/rescore/src/test/java/org/elasticsearch/example/rescore/ExampleRescoreClientYamlTestSuiteIT.java similarity index 54% rename from plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/blobstore/AzureOutputStream.java rename to plugins/examples/rescore/src/test/java/org/elasticsearch/example/rescore/ExampleRescoreClientYamlTestSuiteIT.java index 6a95eeba7789c..b9b12b6a42d0f 100644 --- a/plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/blobstore/AzureOutputStream.java +++ b/plugins/examples/rescore/src/test/java/org/elasticsearch/example/rescore/ExampleRescoreClientYamlTestSuiteIT.java @@ -17,30 +17,22 @@ * under the License. 
*/ -package org.elasticsearch.cloud.azure.blobstore; +package org.elasticsearch.example.rescore; -import java.io.IOException; -import java.io.OutputStream; +import com.carrotsearch.randomizedtesting.annotations.Name; +import com.carrotsearch.randomizedtesting.annotations.ParametersFactory; +import org.elasticsearch.test.rest.yaml.ClientYamlTestCandidate; +import org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase; -public class AzureOutputStream extends OutputStream { +public class ExampleRescoreClientYamlTestSuiteIT extends ESClientYamlSuiteTestCase { - private final OutputStream blobOutputStream; - - public AzureOutputStream(OutputStream blobOutputStream) { - this.blobOutputStream = blobOutputStream; - } - - @Override - public void write(int b) throws IOException { - blobOutputStream.write(b); + public ExampleRescoreClientYamlTestSuiteIT(@Name("yaml") ClientYamlTestCandidate testCandidate) { + super(testCandidate); } - @Override - public void close() throws IOException { - try { - blobOutputStream.close(); - } catch (IOException e) { - // Azure is sending a "java.io.IOException: Stream is already closed." - } + @ParametersFactory + public static Iterable parameters() throws Exception { + return ESClientYamlSuiteTestCase.createParameters(); } } + diff --git a/plugins/examples/rescore/src/test/resources/rest-api-spec/test/example-rescore/10_basic.yml b/plugins/examples/rescore/src/test/resources/rest-api-spec/test/example-rescore/10_basic.yml new file mode 100644 index 0000000000000..75c22d6b578bd --- /dev/null +++ b/plugins/examples/rescore/src/test/resources/rest-api-spec/test/example-rescore/10_basic.yml @@ -0,0 +1,13 @@ +# Integration tests for the expert scoring script example plugin +# +"Plugin loaded": + - do: + cluster.state: {} + + # Get master node id + - set: { master_node: master } + + - do: + nodes.info: {} + + - match: { nodes.$master.plugins.0.name: example-rescore } diff --git a/plugins/examples/rescore/src/test/resources/rest-api-spec/test/example-rescore/20_score.yml b/plugins/examples/rescore/src/test/resources/rest-api-spec/test/example-rescore/20_score.yml new file mode 100644 index 0000000000000..ab660b395d05a --- /dev/null +++ b/plugins/examples/rescore/src/test/resources/rest-api-spec/test/example-rescore/20_score.yml @@ -0,0 +1,63 @@ +--- +setup: + - do: + indices.create: + index: test + body: + settings: + number_of_shards: 1 + number_of_replicas: 1 + + - do: + index: + index: test + type: test + id: 1 + body: { "test": 1 } + - do: + index: + index: test + type: test + id: 2 + body: { "test": 2 } + - do: + indices.refresh: {} + +--- +"just factor": + - do: + search: + index: test + body: + rescore: + example: + factor: 0 + - length: { hits.hits: 2 } + - match: { hits.hits.0._score: 0 } + - match: { hits.hits.1._score: 0 } + + - do: + search: + index: test + body: + rescore: + window_size: 1 + example: + factor: 0 + - length: { hits.hits: 2 } + - match: { hits.hits.0._score: 1 } + - match: { hits.hits.1._score: 0 } + +--- +"with factor field": + - do: + search: + index: test + body: + rescore: + example: + factor: 1 + factor_field: test + - length: { hits.hits: 2 } + - match: { hits.hits.0._score: 2 } + - match: { hits.hits.1._score: 1 } diff --git a/plugins/ingest-attachment/src/main/java/org/elasticsearch/ingest/attachment/TikaImpl.java b/plugins/ingest-attachment/src/main/java/org/elasticsearch/ingest/attachment/TikaImpl.java index 9fee4df5cf60d..9c3366d160eb9 100644 --- 
a/plugins/ingest-attachment/src/main/java/org/elasticsearch/ingest/attachment/TikaImpl.java +++ b/plugins/ingest-attachment/src/main/java/org/elasticsearch/ingest/attachment/TikaImpl.java @@ -27,17 +27,19 @@ import org.apache.tika.parser.Parser; import org.apache.tika.parser.ParserDecorator; import org.elasticsearch.SpecialPermission; +import org.elasticsearch.bootstrap.FilePermissionUtils; import org.elasticsearch.bootstrap.JarHell; import org.elasticsearch.common.SuppressForbidden; import org.elasticsearch.common.io.PathUtils; import java.io.ByteArrayInputStream; -import java.io.FilePermission; import java.io.IOException; +import java.io.UncheckedIOException; import java.lang.reflect.ReflectPermission; import java.net.URISyntaxException; import java.net.URL; import java.net.URLClassLoader; +import java.nio.file.Files; import java.nio.file.Path; import java.security.AccessControlContext; import java.security.AccessController; @@ -127,27 +129,32 @@ static String parse(final byte content[], final Metadata metadata, final int lim // compute some minimal permissions for parsers. they only get r/w access to the java temp directory, // the ability to load some resources from JARs, and read sysprops + @SuppressForbidden(reason = "adds access to tmp directory") static PermissionCollection getRestrictedPermissions() { Permissions perms = new Permissions(); // property/env access needed for parsing perms.add(new PropertyPermission("*", "read")); perms.add(new RuntimePermission("getenv.TIKA_CONFIG")); - // add permissions for resource access: - // classpath - addReadPermissions(perms, JarHell.parseClassPath()); - // plugin jars - if (TikaImpl.class.getClassLoader() instanceof URLClassLoader) { - URL[] urls = ((URLClassLoader)TikaImpl.class.getClassLoader()).getURLs(); - Set set = new LinkedHashSet<>(Arrays.asList(urls)); - if (set.size() != urls.length) { - throw new AssertionError("duplicate jars: " + Arrays.toString(urls)); + try { + // add permissions for resource access: + // classpath + addReadPermissions(perms, JarHell.parseClassPath()); + // plugin jars + if (TikaImpl.class.getClassLoader() instanceof URLClassLoader) { + URL[] urls = ((URLClassLoader)TikaImpl.class.getClassLoader()).getURLs(); + Set set = new LinkedHashSet<>(Arrays.asList(urls)); + if (set.size() != urls.length) { + throw new AssertionError("duplicate jars: " + Arrays.toString(urls)); + } + addReadPermissions(perms, set); } - addReadPermissions(perms, set); + // jvm's java.io.tmpdir (needs read/write) + FilePermissionUtils.addDirectoryPath(perms, "java.io.tmpdir", + PathUtils.get(System.getProperty("java.io.tmpdir")), "read,readlink,write,delete"); + } catch (IOException e) { + throw new UncheckedIOException(e); } - // jvm's java.io.tmpdir (needs read/write) - perms.add(new FilePermission(System.getProperty("java.io.tmpdir") + System.getProperty("file.separator") + "-", - "read,readlink,write,delete")); // current hacks needed for POI/PDFbox issues: perms.add(new SecurityPermission("putProviderProperty.BC")); perms.add(new SecurityPermission("insertProvider")); @@ -160,14 +167,15 @@ static PermissionCollection getRestrictedPermissions() { // add resources to (what is typically) a jar, but might not be (e.g. 
in tests/IDE) @SuppressForbidden(reason = "adds access to jar resources") - static void addReadPermissions(Permissions perms, Set resources) { + static void addReadPermissions(Permissions perms, Set resources) throws IOException { try { for (URL url : resources) { Path path = PathUtils.get(url.toURI()); - // resource itself - perms.add(new FilePermission(path.toString(), "read,readlink")); - // classes underneath - perms.add(new FilePermission(path.toString() + System.getProperty("file.separator") + "-", "read,readlink")); + if (Files.isDirectory(path)) { + FilePermissionUtils.addDirectoryPath(perms, "class.path", path, "read,readlink"); + } else { + FilePermissionUtils.addSingleFilePath(perms, path, "read,readlink"); + } } } catch (URISyntaxException bogus) { throw new RuntimeException(bogus); diff --git a/plugins/ingest-geoip/src/test/java/org/elasticsearch/ingest/geoip/GeoIpProcessorFactoryTests.java b/plugins/ingest-geoip/src/test/java/org/elasticsearch/ingest/geoip/GeoIpProcessorFactoryTests.java index 3904b043a5255..d76056cac3563 100644 --- a/plugins/ingest-geoip/src/test/java/org/elasticsearch/ingest/geoip/GeoIpProcessorFactoryTests.java +++ b/plugins/ingest-geoip/src/test/java/org/elasticsearch/ingest/geoip/GeoIpProcessorFactoryTests.java @@ -231,7 +231,7 @@ public void testLazyLoading() throws Exception { Files.copy(new ByteArrayInputStream(StreamsUtils.copyToBytesFromClasspath("/GeoLite2-Country.mmdb.gz")), geoIpConfigDir.resolve("GeoLite2-Country.mmdb.gz")); - // Loading another database reader instances, because otherwise we can't test lazy loading as the the + // Loading another database reader instances, because otherwise we can't test lazy loading as the // database readers used at class level are reused between tests. (we want to keep that otherwise running this // test will take roughly 4 times more time) Map databaseReaders = diff --git a/plugins/mapper-murmur3/src/main/java/org/elasticsearch/index/mapper/murmur3/Murmur3FieldMapper.java b/plugins/mapper-murmur3/src/main/java/org/elasticsearch/index/mapper/murmur3/Murmur3FieldMapper.java index 6824c8bf20244..a6dc27b1f8a1c 100644 --- a/plugins/mapper-murmur3/src/main/java/org/elasticsearch/index/mapper/murmur3/Murmur3FieldMapper.java +++ b/plugins/mapper-murmur3/src/main/java/org/elasticsearch/index/mapper/murmur3/Murmur3FieldMapper.java @@ -19,14 +19,11 @@ package org.elasticsearch.index.mapper.murmur3; -import java.io.IOException; -import java.util.List; -import java.util.Map; - import org.apache.lucene.document.SortedNumericDocValuesField; import org.apache.lucene.document.StoredField; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; +import org.apache.lucene.search.DocValuesFieldExistsQuery; import org.apache.lucene.search.Query; import org.apache.lucene.util.BytesRef; import org.elasticsearch.Version; @@ -44,6 +41,10 @@ import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.index.query.QueryShardException; +import java.io.IOException; +import java.util.List; +import java.util.Map; + public class Murmur3FieldMapper extends FieldMapper { public static final String CONTENT_TYPE = "murmur3"; @@ -127,6 +128,11 @@ public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) { return new DocValuesIndexFieldData.Builder().numericType(NumericType.LONG); } + @Override + public Query existsQuery(QueryShardContext context) { + return new DocValuesFieldExistsQuery(name()); + } + @Override public Query termQuery(Object value, QueryShardContext 
context) { throw new QueryShardException(context, "Murmur3 fields are not searchable: [" + name() + "]"); diff --git a/plugins/repository-azure/build.gradle b/plugins/repository-azure/build.gradle index 3264b512b2de9..bb5e1e757812f 100644 --- a/plugins/repository-azure/build.gradle +++ b/plugins/repository-azure/build.gradle @@ -19,7 +19,7 @@ esplugin { description 'The Azure Repository plugin adds support for Azure storage repositories.' - classname 'org.elasticsearch.plugin.repository.azure.AzureRepositoryPlugin' + classname 'org.elasticsearch.repositories.azure.AzureRepositoryPlugin' } dependencies { @@ -43,8 +43,6 @@ thirdPartyAudit.excludes = [ ] integTestCluster { - setting 'cloud.azure.storage.my_account_test.account', 'cloudazureresource' - setting 'cloud.azure.storage.my_account_test.key', 'abcdefgh' keystoreSetting 'azure.client.default.account', 'cloudazureresource' keystoreSetting 'azure.client.default.key', 'abcdefgh' keystoreSetting 'azure.client.secondary.account', 'cloudazureresource' diff --git a/plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/storage/AzureStorageSettings.java b/plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/storage/AzureStorageSettings.java deleted file mode 100644 index 5478ba60e0ea5..0000000000000 --- a/plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/storage/AzureStorageSettings.java +++ /dev/null @@ -1,266 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.cloud.azure.storage; - -import com.microsoft.azure.storage.RetryPolicy; -import org.elasticsearch.cloud.azure.storage.AzureStorageService.Storage; -import org.elasticsearch.common.collect.Tuple; -import org.elasticsearch.common.settings.SecureSetting; -import org.elasticsearch.common.settings.SecureString; -import org.elasticsearch.common.settings.Setting; -import org.elasticsearch.common.settings.Setting.AffixSetting; -import org.elasticsearch.common.settings.Setting.Property; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.settings.SettingsException; -import org.elasticsearch.common.unit.TimeValue; - -import java.util.ArrayList; -import java.util.Collections; -import java.util.HashMap; -import java.util.List; -import java.util.Map; -import java.util.Set; - -import static org.elasticsearch.cloud.azure.storage.AzureStorageService.Storage.STORAGE_ACCOUNTS; - -public final class AzureStorageSettings { - // prefix for azure client settings - private static final String PREFIX = "azure.client."; - - /** - * Azure account name - */ - public static final AffixSetting ACCOUNT_SETTING = Setting.affixKeySetting(PREFIX, "account", - key -> SecureSetting.secureString(key, null)); - - /** - * max_retries: Number of retries in case of Azure errors. Defaults to 3 (RetryPolicy.DEFAULT_CLIENT_RETRY_COUNT). - */ - private static final Setting MAX_RETRIES_SETTING = - Setting.affixKeySetting(PREFIX, "max_retries", - (key) -> Setting.intSetting(key, RetryPolicy.DEFAULT_CLIENT_RETRY_COUNT, Setting.Property.NodeScope)); - - /** - * Azure key - */ - public static final AffixSetting KEY_SETTING = Setting.affixKeySetting(PREFIX, "key", - key -> SecureSetting.secureString(key, null)); - - public static final AffixSetting TIMEOUT_SETTING = Setting.affixKeySetting(PREFIX, "timeout", - (key) -> Setting.timeSetting(key, Storage.TIMEOUT_SETTING, Property.NodeScope)); - - - @Deprecated - public static final Setting DEPRECATED_TIMEOUT_SETTING = Setting.affixKeySetting(Storage.PREFIX, "timeout", - (key) -> Setting.timeSetting(key, Storage.TIMEOUT_SETTING, Property.NodeScope, Property.Deprecated)); - @Deprecated - public static final Setting DEPRECATED_ACCOUNT_SETTING = Setting.affixKeySetting(Storage.PREFIX, "account", - (key) -> Setting.simpleString(key, Property.NodeScope, Property.Deprecated)); - @Deprecated - public static final Setting DEPRECATED_KEY_SETTING = Setting.affixKeySetting(Storage.PREFIX, "key", - (key) -> Setting.simpleString(key, Property.NodeScope, Property.Deprecated)); - @Deprecated - public static final Setting DEPRECATED_DEFAULT_SETTING = Setting.affixKeySetting(Storage.PREFIX, "default", - (key) -> Setting.boolSetting(key, false, Property.NodeScope, Property.Deprecated)); - - - @Deprecated - private final String name; - private final String account; - private final String key; - private final TimeValue timeout; - @Deprecated - private final boolean activeByDefault; - private final int maxRetries; - - public AzureStorageSettings(String account, String key, TimeValue timeout, int maxRetries) { - this.name = null; - this.account = account; - this.key = key; - this.timeout = timeout; - this.activeByDefault = false; - this.maxRetries = maxRetries; - } - - @Deprecated - public AzureStorageSettings(String name, String account, String key, TimeValue timeout, boolean activeByDefault, int maxRetries) { - this.name = name; - this.account = account; - this.key = key; - this.timeout = timeout; - this.activeByDefault = activeByDefault; - 
this.maxRetries = maxRetries; - } - - @Deprecated - public String getName() { - return name; - } - - public String getKey() { - return key; - } - - public String getAccount() { - return account; - } - - public TimeValue getTimeout() { - return timeout; - } - - @Deprecated - public Boolean isActiveByDefault() { - return activeByDefault; - } - - public int getMaxRetries() { - return maxRetries; - } - - @Override - public String toString() { - final StringBuilder sb = new StringBuilder("AzureStorageSettings{"); - sb.append("name='").append(name).append('\''); - sb.append(", account='").append(account).append('\''); - sb.append(", key='").append(key).append('\''); - sb.append(", activeByDefault='").append(activeByDefault).append('\''); - sb.append(", timeout=").append(timeout); - sb.append(", maxRetries=").append(maxRetries); - sb.append('}'); - return sb.toString(); - } - - /** - * Parses settings and read all legacy settings available under cloud.azure.storage.* - * @param settings settings to parse - * @return A tuple with v1 = primary storage and v2 = secondary storage - */ - @Deprecated - public static Tuple> loadLegacy(Settings settings) { - List storageSettings = createStorageSettingsDeprecated(settings); - return Tuple.tuple(getPrimary(storageSettings), getSecondaries(storageSettings)); - } - - /** - * Parses settings and read all settings available under azure.client.* - * @param settings settings to parse - * @return All the named configurations - */ - public static Map load(Settings settings) { - // Get the list of existing named configurations - Set clientNames = settings.getGroups(PREFIX).keySet(); - Map storageSettings = new HashMap<>(); - for (String clientName : clientNames) { - storageSettings.put(clientName, getClientSettings(settings, clientName)); - } - - if (storageSettings.containsKey("default") == false && storageSettings.isEmpty() == false) { - // in case no setting named "default" has been set, let's define our "default" - // as the first named config we get - AzureStorageSettings defaultSettings = storageSettings.values().iterator().next(); - storageSettings.put("default", defaultSettings); - } - return Collections.unmodifiableMap(storageSettings); - } - - // pkg private for tests - /** Parse settings for a single client. 
*/ - static AzureStorageSettings getClientSettings(Settings settings, String clientName) { - try (SecureString account = getConfigValue(settings, clientName, ACCOUNT_SETTING); - SecureString key = getConfigValue(settings, clientName, KEY_SETTING)) { - return new AzureStorageSettings(account.toString(), key.toString(), - getValue(settings, clientName, TIMEOUT_SETTING), - getValue(settings, clientName, MAX_RETRIES_SETTING)); - } - } - - @Deprecated - private static List createStorageSettingsDeprecated(Settings settings) { - // ignore global timeout which has the same prefix but does not belong to any group - Settings groups = STORAGE_ACCOUNTS.get(settings.filter((k) -> k.equals(Storage.TIMEOUT_SETTING.getKey()) == false)); - List storageSettings = new ArrayList<>(); - for (String groupName : groups.getAsGroups().keySet()) { - storageSettings.add( - new AzureStorageSettings( - groupName, - getValue(settings, groupName, DEPRECATED_ACCOUNT_SETTING), - getValue(settings, groupName, DEPRECATED_KEY_SETTING), - getValue(settings, groupName, DEPRECATED_TIMEOUT_SETTING), - getValue(settings, groupName, DEPRECATED_DEFAULT_SETTING), - getValue(settings, groupName, MAX_RETRIES_SETTING)) - ); - } - return storageSettings; - } - - private static T getConfigValue(Settings settings, String clientName, - Setting.AffixSetting clientSetting) { - Setting concreteSetting = clientSetting.getConcreteSettingForNamespace(clientName); - return concreteSetting.get(settings); - } - - public static T getValue(Settings settings, String groupName, Setting setting) { - Setting.AffixKey k = (Setting.AffixKey) setting.getRawKey(); - String fullKey = k.toConcreteKey(groupName).toString(); - return setting.getConcreteSetting(fullKey).get(settings); - } - - @Deprecated - private static AzureStorageSettings getPrimary(List settings) { - if (settings.isEmpty()) { - return null; - } else if (settings.size() == 1) { - // the only storage settings belong (implicitly) to the default primary storage - AzureStorageSettings storage = settings.get(0); - return new AzureStorageSettings(storage.getName(), storage.getAccount(), storage.getKey(), storage.getTimeout(), true, - storage.getMaxRetries()); - } else { - AzureStorageSettings primary = null; - for (AzureStorageSettings setting : settings) { - if (setting.isActiveByDefault()) { - if (primary == null) { - primary = setting; - } else { - throw new SettingsException("Multiple default Azure data stores configured: [" + primary.getName() + "] and [" + setting.getName() + "]"); - } - } - } - if (primary == null) { - throw new SettingsException("No default Azure data store configured"); - } - return primary; - } - } - - @Deprecated - private static Map getSecondaries(List settings) { - Map secondaries = new HashMap<>(); - // when only one setting is defined, we don't have secondaries - if (settings.size() > 1) { - for (AzureStorageSettings setting : settings) { - if (setting.isActiveByDefault() == false) { - secondaries.put(setting.getName(), setting); - } - } - } - return secondaries; - } -} diff --git a/plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/blobstore/AzureBlobContainer.java b/plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureBlobContainer.java similarity index 71% rename from plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/blobstore/AzureBlobContainer.java rename to plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureBlobContainer.java index b1b359956b6da..8f7671697db56 100644 
--- a/plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/blobstore/AzureBlobContainer.java +++ b/plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureBlobContainer.java @@ -17,7 +17,7 @@ * under the License. */ -package org.elasticsearch.cloud.azure.blobstore; +package org.elasticsearch.repositories.azure; import com.microsoft.azure.storage.LocationMode; import com.microsoft.azure.storage.StorageException; @@ -26,13 +26,10 @@ import org.elasticsearch.common.blobstore.BlobMetaData; import org.elasticsearch.common.blobstore.BlobPath; import org.elasticsearch.common.blobstore.support.AbstractBlobContainer; -import org.elasticsearch.common.io.Streams; import org.elasticsearch.common.logging.Loggers; -import org.elasticsearch.repositories.RepositoryException; import java.io.IOException; import java.io.InputStream; -import java.io.OutputStream; import java.net.HttpURLConnection; import java.net.URISyntaxException; import java.nio.file.FileAlreadyExistsException; @@ -41,26 +38,24 @@ public class AzureBlobContainer extends AbstractBlobContainer { - protected final Logger logger = Loggers.getLogger(AzureBlobContainer.class); - protected final AzureBlobStore blobStore; + private final Logger logger = Loggers.getLogger(AzureBlobContainer.class); + private final AzureBlobStore blobStore; - protected final String keyPath; - protected final String repositoryName; + private final String keyPath; - public AzureBlobContainer(String repositoryName, BlobPath path, AzureBlobStore blobStore) { + public AzureBlobContainer(BlobPath path, AzureBlobStore blobStore) { super(path); this.blobStore = blobStore; this.keyPath = path.buildAsString(); - this.repositoryName = repositoryName; } @Override public boolean blobExists(String blobName) { logger.trace("blobExists({})", blobName); try { - return blobStore.blobExists(blobStore.container(), buildKey(blobName)); + return blobStore.blobExists(buildKey(blobName)); } catch (URISyntaxException | StorageException e) { - logger.warn("can not access [{}] in container {{}}: {}", blobName, blobStore.container(), e.getMessage()); + logger.warn("can not access [{}] in container {{}}: {}", blobName, blobStore, e.getMessage()); } return false; } @@ -80,7 +75,7 @@ public InputStream readBlob(String blobName) throws IOException { } try { - return blobStore.getInputStream(blobStore.container(), buildKey(blobName)); + return blobStore.getInputStream(buildKey(blobName)); } catch (StorageException e) { if (e.getHttpStatusCode() == HttpURLConnection.HTTP_NOT_FOUND) { throw new NoSuchFileException(e.getMessage()); @@ -96,24 +91,11 @@ public void writeBlob(String blobName, InputStream inputStream, long blobSize) t if (blobExists(blobName)) { throw new FileAlreadyExistsException("blob [" + blobName + "] already exists, cannot overwrite"); } - logger.trace("writeBlob({}, stream, {})", blobName, blobSize); - try (OutputStream stream = createOutput(blobName)) { - Streams.copy(inputStream, stream); - } - } - - private OutputStream createOutput(String blobName) throws IOException { + logger.trace("writeBlob({}, stream, {})", buildKey(blobName), blobSize); try { - return new AzureOutputStream(blobStore.getOutputStream(blobStore.container(), buildKey(blobName))); - } catch (StorageException e) { - if (e.getHttpStatusCode() == HttpURLConnection.HTTP_NOT_FOUND) { - throw new NoSuchFileException(e.getMessage()); - } - throw new IOException(e); - } catch (URISyntaxException e) { - throw new IOException(e); - } catch (IllegalArgumentException e) { - throw 
new RepositoryException(repositoryName, e.getMessage()); + blobStore.writeBlob(buildKey(blobName), inputStream, blobSize); + } catch (URISyntaxException|StorageException e) { + throw new IOException("Can not write blob " + blobName, e); } } @@ -126,9 +108,9 @@ public void deleteBlob(String blobName) throws IOException { } try { - blobStore.deleteBlob(blobStore.container(), buildKey(blobName)); + blobStore.deleteBlob(buildKey(blobName)); } catch (URISyntaxException | StorageException e) { - logger.warn("can not access [{}] in container {{}}: {}", blobName, blobStore.container(), e.getMessage()); + logger.warn("can not access [{}] in container {{}}: {}", blobName, blobStore, e.getMessage()); throw new IOException(e); } } @@ -138,9 +120,9 @@ public Map listBlobsByPrefix(@Nullable String prefix) thro logger.trace("listBlobsByPrefix({})", prefix); try { - return blobStore.listBlobsByPrefix(blobStore.container(), keyPath, prefix); + return blobStore.listBlobsByPrefix(keyPath, prefix); } catch (URISyntaxException | StorageException e) { - logger.warn("can not access [{}] in container {{}}: {}", prefix, blobStore.container(), e.getMessage()); + logger.warn("can not access [{}] in container {{}}: {}", prefix, blobStore, e.getMessage()); throw new IOException(e); } } @@ -152,11 +134,11 @@ public void move(String sourceBlobName, String targetBlobName) throws IOExceptio String source = keyPath + sourceBlobName; String target = keyPath + targetBlobName; - logger.debug("moving blob [{}] to [{}] in container {{}}", source, target, blobStore.container()); + logger.debug("moving blob [{}] to [{}] in container {{}}", source, target, blobStore); - blobStore.moveBlob(blobStore.container(), source, target); + blobStore.moveBlob(source, target); } catch (URISyntaxException | StorageException e) { - logger.warn("can not move blob [{}] to [{}] in container {{}}: {}", sourceBlobName, targetBlobName, blobStore.container(), e.getMessage()); + logger.warn("can not move blob [{}] to [{}] in container {{}}: {}", sourceBlobName, targetBlobName, blobStore, e.getMessage()); throw new IOException(e); } } diff --git a/plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/blobstore/AzureBlobStore.java b/plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureBlobStore.java similarity index 72% rename from plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/blobstore/AzureBlobStore.java rename to plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureBlobStore.java index 868b661b50899..7e8987ae94576 100644 --- a/plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/blobstore/AzureBlobStore.java +++ b/plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureBlobStore.java @@ -17,11 +17,10 @@ * under the License. 
*/ -package org.elasticsearch.cloud.azure.blobstore; +package org.elasticsearch.repositories.azure; import com.microsoft.azure.storage.LocationMode; import com.microsoft.azure.storage.StorageException; -import org.elasticsearch.cloud.azure.storage.AzureStorageService; import org.elasticsearch.cluster.metadata.RepositoryMetaData; import org.elasticsearch.common.Strings; import org.elasticsearch.common.blobstore.BlobContainer; @@ -33,7 +32,6 @@ import java.io.IOException; import java.io.InputStream; -import java.io.OutputStream; import java.net.URISyntaxException; import java.util.Locale; import java.util.Map; @@ -47,14 +45,12 @@ public class AzureBlobStore extends AbstractComponent implements BlobStore { private final String clientName; private final LocationMode locMode; private final String container; - private final String repositoryName; public AzureBlobStore(RepositoryMetaData metadata, Settings settings, AzureStorageService client) throws URISyntaxException, StorageException { super(settings); this.client = client; this.container = Repository.CONTAINER_SETTING.get(metadata.settings()); - this.repositoryName = metadata.name(); this.clientName = Repository.CLIENT_NAME.get(metadata.settings()); String modeStr = Repository.LOCATION_MODE_SETTING.get(metadata.settings()); @@ -70,10 +66,6 @@ public String toString() { return container; } - public String container() { - return container; - } - /** * Gets the configured {@link LocationMode} for the Azure storage requests. */ @@ -83,7 +75,7 @@ public LocationMode getLocationMode() { @Override public BlobContainer blobContainer(BlobPath path) { - return new AzureBlobContainer(repositoryName, path, this); + return new AzureBlobContainer(path, this); } @Override @@ -100,43 +92,37 @@ public void delete(BlobPath path) { public void close() { } - public boolean doesContainerExist(String container) + public boolean doesContainerExist() { return this.client.doesContainerExist(this.clientName, this.locMode, container); } - public void deleteFiles(String container, String path) throws URISyntaxException, StorageException - { - this.client.deleteFiles(this.clientName, this.locMode, container, path); - } - - public boolean blobExists(String container, String blob) throws URISyntaxException, StorageException + public boolean blobExists(String blob) throws URISyntaxException, StorageException { return this.client.blobExists(this.clientName, this.locMode, container, blob); } - public void deleteBlob(String container, String blob) throws URISyntaxException, StorageException + public void deleteBlob(String blob) throws URISyntaxException, StorageException { this.client.deleteBlob(this.clientName, this.locMode, container, blob); } - public InputStream getInputStream(String container, String blob) throws URISyntaxException, StorageException, IOException + public InputStream getInputStream(String blob) throws URISyntaxException, StorageException, IOException { return this.client.getInputStream(this.clientName, this.locMode, container, blob); } - public OutputStream getOutputStream(String container, String blob) throws URISyntaxException, StorageException - { - return this.client.getOutputStream(this.clientName, this.locMode, container, blob); - } - - public Map listBlobsByPrefix(String container, String keyPath, String prefix) throws URISyntaxException, StorageException - { + public Map listBlobsByPrefix(String keyPath, String prefix) + throws URISyntaxException, StorageException { return this.client.listBlobsByPrefix(this.clientName, this.locMode, container, 
keyPath, prefix); } - public void moveBlob(String container, String sourceBlob, String targetBlob) throws URISyntaxException, StorageException + public void moveBlob(String sourceBlob, String targetBlob) throws URISyntaxException, StorageException { this.client.moveBlob(this.clientName, this.locMode, container, sourceBlob, targetBlob); } + + public void writeBlob(String blobName, InputStream inputStream, long blobSize) throws URISyntaxException, StorageException { + this.client.writeBlob(this.clientName, this.locMode, container, blobName, inputStream, blobSize); + } } diff --git a/plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureRepository.java b/plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureRepository.java index 2a02ec5f8d8c8..06bf10fb2e292 100644 --- a/plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureRepository.java +++ b/plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureRepository.java @@ -21,8 +21,6 @@ import com.microsoft.azure.storage.LocationMode; import com.microsoft.azure.storage.StorageException; -import org.elasticsearch.cloud.azure.blobstore.AzureBlobStore; -import org.elasticsearch.cloud.azure.storage.AzureStorageService; import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.metadata.RepositoryMetaData; import org.elasticsearch.common.Strings; @@ -43,8 +41,8 @@ import java.util.Locale; import java.util.function.Function; -import static org.elasticsearch.cloud.azure.storage.AzureStorageService.MAX_CHUNK_SIZE; -import static org.elasticsearch.cloud.azure.storage.AzureStorageService.MIN_CHUNK_SIZE; +import static org.elasticsearch.repositories.azure.AzureStorageService.MAX_CHUNK_SIZE; +import static org.elasticsearch.repositories.azure.AzureStorageService.MIN_CHUNK_SIZE; /** * Azure file system implementation of the BlobStoreRepository @@ -155,10 +153,11 @@ protected ByteSizeValue chunkSize() { @Override public void initializeSnapshot(SnapshotId snapshotId, List indices, MetaData clusterMetadata) { - if (blobStore.doesContainerExist(blobStore.container()) == false) { - throw new IllegalArgumentException("The bucket [" + blobStore.container() + "] does not exist. Please create it before " + + if (blobStore.doesContainerExist() == false) { + throw new IllegalArgumentException("The bucket [" + blobStore + "] does not exist. Please create it before " + " creating an azure snapshot repository backed by it."); } + super.initializeSnapshot(snapshotId, indices, clusterMetadata); } @Override diff --git a/plugins/repository-azure/src/main/java/org/elasticsearch/plugin/repository/azure/AzureRepositoryPlugin.java b/plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureRepositoryPlugin.java similarity index 77% rename from plugins/repository-azure/src/main/java/org/elasticsearch/plugin/repository/azure/AzureRepositoryPlugin.java rename to plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureRepositoryPlugin.java index b90d44264ecf7..c0126cb8df065 100644 --- a/plugins/repository-azure/src/main/java/org/elasticsearch/plugin/repository/azure/AzureRepositoryPlugin.java +++ b/plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureRepositoryPlugin.java @@ -17,11 +17,8 @@ * under the License. 
*/ -package org.elasticsearch.plugin.repository.azure; +package org.elasticsearch.repositories.azure; -import org.elasticsearch.cloud.azure.storage.AzureStorageService; -import org.elasticsearch.cloud.azure.storage.AzureStorageServiceImpl; -import org.elasticsearch.cloud.azure.storage.AzureStorageSettings; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.NamedXContentRegistry; @@ -29,7 +26,6 @@ import org.elasticsearch.plugins.Plugin; import org.elasticsearch.plugins.RepositoryPlugin; import org.elasticsearch.repositories.Repository; -import org.elasticsearch.repositories.azure.AzureRepository; import java.util.Arrays; import java.util.Collections; @@ -62,16 +58,13 @@ public Map getRepositories(Environment env, NamedXCo @Override public List> getSettings() { return Arrays.asList( - AzureStorageService.Storage.STORAGE_ACCOUNTS, AzureStorageSettings.ACCOUNT_SETTING, AzureStorageSettings.KEY_SETTING, - AzureStorageSettings.TIMEOUT_SETTING + AzureStorageSettings.ENDPOINT_SUFFIX_SETTING, + AzureStorageSettings.TIMEOUT_SETTING, + AzureStorageSettings.PROXY_TYPE_SETTING, + AzureStorageSettings.PROXY_HOST_SETTING, + AzureStorageSettings.PROXY_PORT_SETTING ); } - - @Override - public List getSettingsFilter() { - // Cloud storage API settings using a pattern needed to be hidden - return Arrays.asList(AzureStorageService.Storage.PREFIX + "*.account", AzureStorageService.Storage.PREFIX + "*.key"); - } } diff --git a/plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/AzureServiceDisableException.java b/plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureServiceDisableException.java similarity index 95% rename from plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/AzureServiceDisableException.java rename to plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureServiceDisableException.java index 487997d71b63f..a100079668b54 100644 --- a/plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/AzureServiceDisableException.java +++ b/plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureServiceDisableException.java @@ -17,7 +17,7 @@ * under the License. */ -package org.elasticsearch.cloud.azure; +package org.elasticsearch.repositories.azure; public class AzureServiceDisableException extends IllegalStateException { public AzureServiceDisableException(String msg) { diff --git a/plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/AzureServiceRemoteException.java b/plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureServiceRemoteException.java similarity index 95% rename from plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/AzureServiceRemoteException.java rename to plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureServiceRemoteException.java index 4bd4f1d67f197..3f20e29505751 100644 --- a/plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/AzureServiceRemoteException.java +++ b/plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureServiceRemoteException.java @@ -17,7 +17,7 @@ * under the License. 
*/ -package org.elasticsearch.cloud.azure; +package org.elasticsearch.repositories.azure; public class AzureServiceRemoteException extends IllegalStateException { public AzureServiceRemoteException(String msg) { diff --git a/plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/storage/AzureStorageService.java b/plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureStorageService.java similarity index 71% rename from plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/storage/AzureStorageService.java rename to plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureStorageService.java index 79455a78c005c..78eee24a34de5 100644 --- a/plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/storage/AzureStorageService.java +++ b/plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureStorageService.java @@ -17,21 +17,16 @@ * under the License. */ -package org.elasticsearch.cloud.azure.storage; +package org.elasticsearch.repositories.azure; import com.microsoft.azure.storage.LocationMode; import com.microsoft.azure.storage.StorageException; import org.elasticsearch.common.blobstore.BlobMetaData; -import org.elasticsearch.common.settings.Setting; -import org.elasticsearch.common.settings.Setting.Property; -import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.ByteSizeUnit; import org.elasticsearch.common.unit.ByteSizeValue; -import org.elasticsearch.common.unit.TimeValue; import java.io.IOException; import java.io.InputStream; -import java.io.OutputStream; import java.net.URISyntaxException; import java.util.Map; @@ -44,22 +39,6 @@ public interface AzureStorageService { ByteSizeValue MIN_CHUNK_SIZE = new ByteSizeValue(1, ByteSizeUnit.BYTES); ByteSizeValue MAX_CHUNK_SIZE = new ByteSizeValue(64, ByteSizeUnit.MB); - final class Storage { - @Deprecated - public static final String PREFIX = "cloud.azure.storage."; - - @Deprecated - public static final Setting STORAGE_ACCOUNTS = Setting.groupSetting(Storage.PREFIX, Setting.Property.NodeScope); - - /** - * Azure timeout (defaults to -1 minute) - * @deprecated We don't want to support global timeout settings anymore - */ - @Deprecated - static final Setting TIMEOUT_SETTING = - Setting.timeSetting("cloud.azure.storage.timeout", TimeValue.timeValueMinutes(-1), Property.NodeScope, Property.Deprecated); - } - boolean doesContainerExist(String account, LocationMode mode, String container); void removeContainer(String account, LocationMode mode, String container) throws URISyntaxException, StorageException; @@ -75,12 +54,12 @@ final class Storage { InputStream getInputStream(String account, LocationMode mode, String container, String blob) throws URISyntaxException, StorageException, IOException; - OutputStream getOutputStream(String account, LocationMode mode, String container, String blob) - throws URISyntaxException, StorageException; - Map listBlobsByPrefix(String account, LocationMode mode, String container, String keyPath, String prefix) throws URISyntaxException, StorageException; void moveBlob(String account, LocationMode mode, String container, String sourceBlob, String targetBlob) throws URISyntaxException, StorageException; + + void writeBlob(String account, LocationMode mode, String container, String blobName, InputStream inputStream, long blobSize) throws + URISyntaxException, StorageException; } diff --git 
a/plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/storage/AzureStorageServiceImpl.java b/plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureStorageServiceImpl.java similarity index 69% rename from plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/storage/AzureStorageServiceImpl.java rename to plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureStorageServiceImpl.java index c928d79c0c242..2b8992386eb2d 100644 --- a/plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/storage/AzureStorageServiceImpl.java +++ b/plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureStorageServiceImpl.java @@ -17,74 +17,53 @@ * under the License. */ -package org.elasticsearch.cloud.azure.storage; +package org.elasticsearch.repositories.azure; import com.microsoft.azure.storage.CloudStorageAccount; import com.microsoft.azure.storage.LocationMode; +import com.microsoft.azure.storage.OperationContext; import com.microsoft.azure.storage.RetryExponentialRetry; import com.microsoft.azure.storage.RetryPolicy; import com.microsoft.azure.storage.StorageException; +import com.microsoft.azure.storage.blob.BlobListingDetails; import com.microsoft.azure.storage.blob.BlobProperties; import com.microsoft.azure.storage.blob.CloudBlobClient; import com.microsoft.azure.storage.blob.CloudBlobContainer; import com.microsoft.azure.storage.blob.CloudBlockBlob; +import com.microsoft.azure.storage.blob.DeleteSnapshotsOption; import com.microsoft.azure.storage.blob.ListBlobItem; import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; -import org.elasticsearch.cloud.azure.blobstore.util.SocketAccess; -import org.elasticsearch.common.Strings; import org.elasticsearch.common.blobstore.BlobMetaData; import org.elasticsearch.common.blobstore.support.PlainBlobMetaData; import org.elasticsearch.common.collect.MapBuilder; -import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.repositories.RepositoryException; import java.io.InputStream; -import java.io.OutputStream; import java.net.URI; import java.net.URISyntaxException; +import java.util.EnumSet; import java.util.HashMap; import java.util.Map; public class AzureStorageServiceImpl extends AbstractComponent implements AzureStorageService { final Map storageSettings; - final Map deprecatedStorageSettings; - final Map clients; + final Map clients = new HashMap<>(); - public AzureStorageServiceImpl(Settings settings, Map regularStorageSettings) { + public AzureStorageServiceImpl(Settings settings, Map storageSettings) { super(settings); - if (regularStorageSettings.isEmpty()) { - this.storageSettings = new HashMap<>(); - // We have deprecated settings so we need to migrate them to the new implementation - Tuple> storageSettingsMapTuple = AzureStorageSettings.loadLegacy(settings); - deprecatedStorageSettings = storageSettingsMapTuple.v2(); - if (storageSettingsMapTuple.v1() != null) { - if (storageSettingsMapTuple.v1().getName().equals("default") == false) { - // We add the primary configuration to the list of all settings with its deprecated name in case someone is - // forcing a specific configuration name when creating the repository instance - deprecatedStorageSettings.put(storageSettingsMapTuple.v1().getName(), storageSettingsMapTuple.v1()); - } - // We add the primary 
configuration to the list of all settings as the "default" one - deprecatedStorageSettings.put("default", storageSettingsMapTuple.v1()); - } else { - // If someone did not register any settings or deprecated settings, they - // basically can't use the plugin - throw new IllegalArgumentException("If you want to use an azure repository, you need to define a client configuration."); - } + this.storageSettings = storageSettings; - - } else { - this.storageSettings = regularStorageSettings; - this.deprecatedStorageSettings = new HashMap<>(); + if (storageSettings.isEmpty()) { + // If someone did not register any settings, they basically can't use the plugin + throw new IllegalArgumentException("If you want to use an azure repository, you need to define a client configuration."); } - this.clients = new HashMap<>(); - logger.debug("starting azure storage client instance"); // We register all regular azure clients @@ -92,24 +71,22 @@ public AzureStorageServiceImpl(Settings settings, Map azureStorageSettingsEntry : this.deprecatedStorageSettings.entrySet()) { - logger.debug("registering deprecated client for account [{}]", azureStorageSettingsEntry.getKey()); - createClient(azureStorageSettingsEntry.getValue()); - } } void createClient(AzureStorageSettings azureStorageSettings) { try { - logger.trace("creating new Azure storage client using account [{}], key [{}]", - azureStorageSettings.getAccount(), azureStorageSettings.getKey()); + logger.trace("creating new Azure storage client using account [{}], key [{}], endpoint suffix [{}]", + azureStorageSettings.getAccount(), azureStorageSettings.getKey(), azureStorageSettings.getEndpointSuffix()); String storageConnectionString = "DefaultEndpointsProtocol=https;" + "AccountName=" + azureStorageSettings.getAccount() + ";" + "AccountKey=" + azureStorageSettings.getKey(); + String endpointSuffix = azureStorageSettings.getEndpointSuffix(); + if (endpointSuffix != null && !endpointSuffix.isEmpty()) { + storageConnectionString += ";EndpointSuffix=" + endpointSuffix; + } // Retrieve storage account from connection-string. CloudStorageAccount storageAccount = CloudStorageAccount.parse(storageConnectionString); @@ -123,31 +100,21 @@ void createClient(AzureStorageSettings azureStorageSettings) { } } - CloudBlobClient getSelectedClient(String account, LocationMode mode) { - logger.trace("selecting a client for account [{}], mode [{}]", account, mode.name()); - AzureStorageSettings azureStorageSettings = this.storageSettings.get(account); + CloudBlobClient getSelectedClient(String clientName, LocationMode mode) { + logger.trace("selecting a client named [{}], mode [{}]", clientName, mode.name()); + AzureStorageSettings azureStorageSettings = this.storageSettings.get(clientName); if (azureStorageSettings == null) { - // We can't find a client that has been registered using regular settings so we try deprecated client - azureStorageSettings = this.deprecatedStorageSettings.get(account); - if (azureStorageSettings == null) { - // We did not get an account. That's bad. - if (Strings.hasLength(account)) { - throw new IllegalArgumentException("Can not find named azure client [" + account + - "]. Check your elasticsearch.yml."); - } - throw new IllegalArgumentException("Can not find primary/secondary client using deprecated settings. " + - "Check your elasticsearch.yml."); - } + throw new IllegalArgumentException("Can not find named azure client [" + clientName + "]. 
Check your settings."); } CloudBlobClient client = this.clients.get(azureStorageSettings.getAccount()); if (client == null) { - throw new IllegalArgumentException("Can not find an azure client for account [" + azureStorageSettings.getAccount() + "]"); + throw new IllegalArgumentException("Can not find an azure client named [" + azureStorageSettings.getAccount() + "]"); } // NOTE: for now, just set the location mode in case it is different; - // only one mode per storage account can be active at a time + // only one mode per storage clientName can be active at a time client.getDefaultRequestOptions().setLocationMode(mode); // Set timeout option if the user sets cloud.azure.storage.timeout or cloud.azure.storage.xxx.timeout (it's negative by default) @@ -168,12 +135,23 @@ CloudBlobClient getSelectedClient(String account, LocationMode mode) { return client; } + private OperationContext generateOperationContext(String clientName) { + OperationContext context = new OperationContext(); + AzureStorageSettings azureStorageSettings = this.storageSettings.get(clientName); + + if (azureStorageSettings.getProxy() != null) { + context.setProxy(azureStorageSettings.getProxy()); + } + + return context; + } + @Override public boolean doesContainerExist(String account, LocationMode mode, String container) { try { CloudBlobClient client = this.getSelectedClient(account, mode); CloudBlobContainer blobContainer = client.getContainerReference(container); - return SocketAccess.doPrivilegedException(blobContainer::exists); + return SocketAccess.doPrivilegedException(() -> blobContainer.exists(null, null, generateOperationContext(account))); } catch (Exception e) { logger.error("can not access container [{}]", container); } @@ -185,7 +163,7 @@ public void removeContainer(String account, LocationMode mode, String container) CloudBlobClient client = this.getSelectedClient(account, mode); CloudBlobContainer blobContainer = client.getContainerReference(container); logger.trace("removing container [{}]", container); - SocketAccess.doPrivilegedException(blobContainer::deleteIfExists); + SocketAccess.doPrivilegedException(() -> blobContainer.deleteIfExists(null, null, generateOperationContext(account))); } @Override @@ -194,7 +172,7 @@ public void createContainer(String account, LocationMode mode, String container) CloudBlobClient client = this.getSelectedClient(account, mode); CloudBlobContainer blobContainer = client.getContainerReference(container); logger.trace("creating container [{}]", container); - SocketAccess.doPrivilegedException(blobContainer::createIfNotExists); + SocketAccess.doPrivilegedException(() -> blobContainer.createIfNotExists(null, null, generateOperationContext(account))); } catch (IllegalArgumentException e) { logger.trace((Supplier) () -> new ParameterizedMessage("fails creating container [{}]", container), e); throw new RepositoryException(container, e.getMessage(), e); @@ -211,7 +189,8 @@ public void deleteFiles(String account, LocationMode mode, String container, Str SocketAccess.doPrivilegedVoidException(() -> { if (blobContainer.exists()) { // We list the blobs using a flat blob listing mode - for (ListBlobItem blobItem : blobContainer.listBlobs(path, true)) { + for (ListBlobItem blobItem : blobContainer.listBlobs(path, true, EnumSet.noneOf(BlobListingDetails.class), null, + generateOperationContext(account))) { String blobName = blobNameFromUri(blobItem.getUri()); logger.trace("removing blob [{}] full URI was [{}]", blobName, blobItem.getUri()); deleteBlob(account, mode, container, 
blobName); @@ -241,13 +220,14 @@ public static String blobNameFromUri(URI uri) { } @Override - public boolean blobExists(String account, LocationMode mode, String container, String blob) throws URISyntaxException, StorageException { + public boolean blobExists(String account, LocationMode mode, String container, String blob) + throws URISyntaxException, StorageException { // Container name must be lower case. CloudBlobClient client = this.getSelectedClient(account, mode); CloudBlobContainer blobContainer = client.getContainerReference(container); - if (SocketAccess.doPrivilegedException(blobContainer::exists)) { + if (SocketAccess.doPrivilegedException(() -> blobContainer.exists(null, null, generateOperationContext(account)))) { CloudBlockBlob azureBlob = blobContainer.getBlockBlobReference(blob); - return SocketAccess.doPrivilegedException(azureBlob::exists); + return SocketAccess.doPrivilegedException(() -> azureBlob.exists(null, null, generateOperationContext(account))); } return false; @@ -260,81 +240,81 @@ public void deleteBlob(String account, LocationMode mode, String container, Stri // Container name must be lower case. CloudBlobClient client = this.getSelectedClient(account, mode); CloudBlobContainer blobContainer = client.getContainerReference(container); - if (SocketAccess.doPrivilegedException(blobContainer::exists)) { + if (SocketAccess.doPrivilegedException(() -> blobContainer.exists(null, null, generateOperationContext(account)))) { logger.trace("container [{}]: blob [{}] found. removing.", container, blob); CloudBlockBlob azureBlob = blobContainer.getBlockBlobReference(blob); - SocketAccess.doPrivilegedVoidException(azureBlob::delete); + SocketAccess.doPrivilegedVoidException(() -> azureBlob.delete(DeleteSnapshotsOption.NONE, null, null, + generateOperationContext(account))); } } @Override - public InputStream getInputStream(String account, LocationMode mode, String container, String blob) throws URISyntaxException, StorageException { + public InputStream getInputStream(String account, LocationMode mode, String container, String blob) + throws URISyntaxException, StorageException { logger.trace("reading container [{}], blob [{}]", container, blob); CloudBlobClient client = this.getSelectedClient(account, mode); CloudBlockBlob blockBlobReference = client.getContainerReference(container).getBlockBlobReference(blob); - return SocketAccess.doPrivilegedException(blockBlobReference::openInputStream); + return SocketAccess.doPrivilegedException(() -> blockBlobReference.openInputStream(null, null, generateOperationContext(account))); } @Override - public OutputStream getOutputStream(String account, LocationMode mode, String container, String blob) throws URISyntaxException, StorageException { - logger.trace("writing container [{}], blob [{}]", container, blob); - CloudBlobClient client = this.getSelectedClient(account, mode); - CloudBlockBlob blockBlobReference = client.getContainerReference(container).getBlockBlobReference(blob); - return SocketAccess.doPrivilegedException(blockBlobReference::openOutputStream); - } - - @Override - public Map listBlobsByPrefix(String account, LocationMode mode, String container, String keyPath, String prefix) throws URISyntaxException, StorageException { + public Map listBlobsByPrefix(String account, LocationMode mode, String container, String keyPath, String prefix) + throws URISyntaxException, StorageException { // NOTE: this should be here: if (prefix == null) prefix = ""; // however, this is really inefficient since deleteBlobsByPrefix 
enumerates everything and // then does a prefix match on the result; it should just call listBlobsByPrefix with the prefix! logger.debug("listing container [{}], keyPath [{}], prefix [{}]", container, keyPath, prefix); MapBuilder blobsBuilder = MapBuilder.newMapBuilder(); + EnumSet enumBlobListingDetails = EnumSet.of(BlobListingDetails.METADATA); CloudBlobClient client = this.getSelectedClient(account, mode); CloudBlobContainer blobContainer = client.getContainerReference(container); - SocketAccess.doPrivilegedVoidException(() -> { if (blobContainer.exists()) { - for (ListBlobItem blobItem : blobContainer.listBlobs(keyPath + (prefix == null ? "" : prefix))) { + for (ListBlobItem blobItem : blobContainer.listBlobs(keyPath + (prefix == null ? "" : prefix), false, + enumBlobListingDetails, null, generateOperationContext(account))) { URI uri = blobItem.getUri(); logger.trace("blob url [{}]", uri); // uri.getPath is of the form /container/keyPath.* and we want to strip off the /container/ // this requires 1 + container.length() + 1, with each 1 corresponding to one of the / String blobPath = uri.getPath().substring(1 + container.length() + 1); - - CloudBlockBlob blob = blobContainer.getBlockBlobReference(blobPath); - - // fetch the blob attributes from Azure (getBlockBlobReference does not do this) - // this is needed to retrieve the blob length (among other metadata) from Azure Storage - blob.downloadAttributes(); - - BlobProperties properties = blob.getProperties(); + BlobProperties properties = ((CloudBlockBlob) blobItem).getProperties(); String name = blobPath.substring(keyPath.length()); logger.trace("blob url [{}], name [{}], size [{}]", uri, name, properties.getLength()); blobsBuilder.put(name, new PlainBlobMetaData(name, properties.getLength())); } } }); - return blobsBuilder.immutableMap(); } @Override - public void moveBlob(String account, LocationMode mode, String container, String sourceBlob, String targetBlob) throws URISyntaxException, StorageException { + public void moveBlob(String account, LocationMode mode, String container, String sourceBlob, String targetBlob) + throws URISyntaxException, StorageException { logger.debug("moveBlob container [{}], sourceBlob [{}], targetBlob [{}]", container, sourceBlob, targetBlob); CloudBlobClient client = this.getSelectedClient(account, mode); CloudBlobContainer blobContainer = client.getContainerReference(container); CloudBlockBlob blobSource = blobContainer.getBlockBlobReference(sourceBlob); - if (SocketAccess.doPrivilegedException(blobSource::exists)) { + if (SocketAccess.doPrivilegedException(() -> blobSource.exists(null, null, generateOperationContext(account)))) { CloudBlockBlob blobTarget = blobContainer.getBlockBlobReference(targetBlob); SocketAccess.doPrivilegedVoidException(() -> { - blobTarget.startCopy(blobSource); - blobSource.delete(); + blobTarget.startCopy(blobSource, null, null, null, generateOperationContext(account)); + blobSource.delete(DeleteSnapshotsOption.NONE, null, null, generateOperationContext(account)); }); logger.debug("moveBlob container [{}], sourceBlob [{}], targetBlob [{}] -> done", container, sourceBlob, targetBlob); } } + + @Override + public void writeBlob(String account, LocationMode mode, String container, String blobName, InputStream inputStream, long blobSize) + throws URISyntaxException, StorageException { + logger.trace("writeBlob({}, stream, {})", blobName, blobSize); + CloudBlobClient client = this.getSelectedClient(account, mode); + CloudBlobContainer blobContainer = 
client.getContainerReference(container); + CloudBlockBlob blob = blobContainer.getBlockBlobReference(blobName); + SocketAccess.doPrivilegedVoidException(() -> blob.upload(inputStream, blobSize, null, null, generateOperationContext(account))); + logger.trace("writeBlob({}, stream, {}) - done", blobName, blobSize); + } } diff --git a/plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureStorageSettings.java b/plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureStorageSettings.java new file mode 100644 index 0000000000000..472ab121e8365 --- /dev/null +++ b/plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureStorageSettings.java @@ -0,0 +1,201 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.repositories.azure; + +import com.microsoft.azure.storage.RetryPolicy; +import org.elasticsearch.common.Strings; +import org.elasticsearch.common.settings.SecureSetting; +import org.elasticsearch.common.settings.SecureString; +import org.elasticsearch.common.settings.Setting; +import org.elasticsearch.common.settings.Setting.AffixSetting; +import org.elasticsearch.common.settings.Setting.Property; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.settings.SettingsException; +import org.elasticsearch.common.unit.TimeValue; + +import java.net.InetAddress; +import java.net.InetSocketAddress; +import java.net.Proxy; +import java.net.UnknownHostException; +import java.util.Collections; +import java.util.HashMap; +import java.util.Locale; +import java.util.Map; +import java.util.Set; + +public final class AzureStorageSettings { + // prefix for azure client settings + private static final String PREFIX = "azure.client."; + + /** Azure account name */ + public static final AffixSetting ACCOUNT_SETTING = + Setting.affixKeySetting(PREFIX, "account", key -> SecureSetting.secureString(key, null)); + + /** max_retries: Number of retries in case of Azure errors. Defaults to 3 (RetryPolicy.DEFAULT_CLIENT_RETRY_COUNT). */ + private static final Setting MAX_RETRIES_SETTING = + Setting.affixKeySetting(PREFIX, "max_retries", + (key) -> Setting.intSetting(key, RetryPolicy.DEFAULT_CLIENT_RETRY_COUNT, Setting.Property.NodeScope)); + /** + * Azure endpoint suffix. Default to core.windows.net (CloudStorageAccount.DEFAULT_DNS). 
+ */ + public static final Setting ENDPOINT_SUFFIX_SETTING = Setting.affixKeySetting(PREFIX, "endpoint_suffix", + key -> Setting.simpleString(key, Property.NodeScope)); + + /** Azure key */ + public static final AffixSetting KEY_SETTING = Setting.affixKeySetting(PREFIX, "key", + key -> SecureSetting.secureString(key, null)); + + public static final AffixSetting TIMEOUT_SETTING = Setting.affixKeySetting(PREFIX, "timeout", + (key) -> Setting.timeSetting(key, TimeValue.timeValueMinutes(-1), Property.NodeScope)); + + /** The type of the proxy to connect to azure through. Can be direct (no proxy, default), http or socks */ + public static final AffixSetting PROXY_TYPE_SETTING = Setting.affixKeySetting(PREFIX, "proxy.type", + (key) -> new Setting<>(key, "direct", s -> Proxy.Type.valueOf(s.toUpperCase(Locale.ROOT)), Property.NodeScope)); + + /** The host name of a proxy to connect to azure through. */ + public static final Setting PROXY_HOST_SETTING = Setting.affixKeySetting(PREFIX, "proxy.host", + (key) -> Setting.simpleString(key, Property.NodeScope)); + + /** The port of a proxy to connect to azure through. */ + public static final Setting PROXY_PORT_SETTING = Setting.affixKeySetting(PREFIX, "proxy.port", + (key) -> Setting.intSetting(key, 0, 0, 65535, Setting.Property.NodeScope)); + + private final String account; + private final String key; + private final String endpointSuffix; + private final TimeValue timeout; + private final int maxRetries; + private final Proxy proxy; + + + public AzureStorageSettings(String account, String key, String endpointSuffix, TimeValue timeout, int maxRetries, + Proxy.Type proxyType, String proxyHost, Integer proxyPort) { + this.account = account; + this.key = key; + this.endpointSuffix = endpointSuffix; + this.timeout = timeout; + this.maxRetries = maxRetries; + + // Register the proxy if we have any + // Validate proxy settings + if (proxyType.equals(Proxy.Type.DIRECT) && (proxyPort != 0 || Strings.hasText(proxyHost))) { + throw new SettingsException("Azure Proxy port or host have been set but proxy type is not defined."); + } + if (proxyType.equals(Proxy.Type.DIRECT) == false && (proxyPort == 0 || Strings.isEmpty(proxyHost))) { + throw new SettingsException("Azure Proxy type has been set but proxy host or port is not defined."); + } + + if (proxyType.equals(Proxy.Type.DIRECT)) { + proxy = null; + } else { + try { + proxy = new Proxy(proxyType, new InetSocketAddress(InetAddress.getByName(proxyHost), proxyPort)); + } catch (UnknownHostException e) { + throw new SettingsException("Azure proxy host is unknown.", e); + } + } + } + + public String getKey() { + return key; + } + + public String getAccount() { + return account; + } + + public String getEndpointSuffix() { + return endpointSuffix; + } + + public TimeValue getTimeout() { + return timeout; + } + + public int getMaxRetries() { + return maxRetries; + } + + public Proxy getProxy() { + return proxy; + } + + @Override + public String toString() { + final StringBuilder sb = new StringBuilder("AzureStorageSettings{"); + sb.append(", account='").append(account).append('\''); + sb.append(", key='").append(key).append('\''); + sb.append(", timeout=").append(timeout); + sb.append(", endpointSuffix='").append(endpointSuffix).append('\''); + sb.append(", maxRetries=").append(maxRetries); + sb.append(", proxy=").append(proxy); + sb.append('}'); + return sb.toString(); + } + + /** + * Parses settings and read all settings available under azure.client.* + * @param settings settings to parse + * @return All the named 
configurations + */ + public static Map load(Settings settings) { + // Get the list of existing named configurations + Set clientNames = settings.getGroups(PREFIX).keySet(); + Map storageSettings = new HashMap<>(); + for (String clientName : clientNames) { + storageSettings.put(clientName, getClientSettings(settings, clientName)); + } + + if (storageSettings.containsKey("default") == false && storageSettings.isEmpty() == false) { + // in case no setting named "default" has been set, let's define our "default" + // as the first named config we get + AzureStorageSettings defaultSettings = storageSettings.values().iterator().next(); + storageSettings.put("default", defaultSettings); + } + return Collections.unmodifiableMap(storageSettings); + } + + // pkg private for tests + /** Parse settings for a single client. */ + static AzureStorageSettings getClientSettings(Settings settings, String clientName) { + try (SecureString account = getConfigValue(settings, clientName, ACCOUNT_SETTING); + SecureString key = getConfigValue(settings, clientName, KEY_SETTING)) { + return new AzureStorageSettings(account.toString(), key.toString(), + getValue(settings, clientName, ENDPOINT_SUFFIX_SETTING), + getValue(settings, clientName, TIMEOUT_SETTING), + getValue(settings, clientName, MAX_RETRIES_SETTING), + getValue(settings, clientName, PROXY_TYPE_SETTING), + getValue(settings, clientName, PROXY_HOST_SETTING), + getValue(settings, clientName, PROXY_PORT_SETTING)); + } + } + + private static T getConfigValue(Settings settings, String clientName, + Setting.AffixSetting clientSetting) { + Setting concreteSetting = clientSetting.getConcreteSettingForNamespace(clientName); + return concreteSetting.get(settings); + } + + public static T getValue(Settings settings, String groupName, Setting setting) { + Setting.AffixKey k = (Setting.AffixKey) setting.getRawKey(); + String fullKey = k.toConcreteKey(groupName).toString(); + return setting.getConcreteSetting(fullKey).get(settings); + } +} diff --git a/plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/blobstore/util/SocketAccess.java b/plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/SocketAccess.java similarity index 96% rename from plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/blobstore/util/SocketAccess.java rename to plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/SocketAccess.java index 6202a0a46f8e6..c4db24a97e958 100644 --- a/plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/blobstore/util/SocketAccess.java +++ b/plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/SocketAccess.java @@ -17,11 +17,12 @@ * under the License. 
*/ -package org.elasticsearch.cloud.azure.blobstore.util; +package org.elasticsearch.repositories.azure; import com.microsoft.azure.storage.StorageException; import org.elasticsearch.SpecialPermission; +import java.io.IOException; import java.net.SocketPermission; import java.net.URISyntaxException; import java.security.AccessController; @@ -66,7 +67,7 @@ public static void doPrivilegedVoidException(StorageRunnable action) throws Stor @FunctionalInterface public interface StorageRunnable { - void executeCouldThrow() throws StorageException, URISyntaxException; + void executeCouldThrow() throws StorageException, URISyntaxException, IOException; } } diff --git a/plugins/repository-azure/src/test/java/org/elasticsearch/cloud/azure/AbstractAzureWithThirdPartyIntegTestCase.java b/plugins/repository-azure/src/test/java/org/elasticsearch/cloud/azure/AbstractAzureWithThirdPartyIntegTestCase.java deleted file mode 100644 index 8f6cdce113e7a..0000000000000 --- a/plugins/repository-azure/src/test/java/org/elasticsearch/cloud/azure/AbstractAzureWithThirdPartyIntegTestCase.java +++ /dev/null @@ -1,54 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.cloud.azure; - -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.plugin.repository.azure.AzureRepositoryPlugin; -import org.elasticsearch.plugins.Plugin; -import org.elasticsearch.test.ESIntegTestCase.ThirdParty; - -import java.util.Arrays; -import java.util.Collection; - -import static org.elasticsearch.cloud.azure.AzureTestUtils.readSettingsFromFile; - -/** - * Base class for Azure tests that require credentials. - *
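// A minimal usage sketch for the SocketAccess helper above (the blob, stream and length
// variables are hypothetical and not part of this change). Azure SDK calls that open sockets
// are wrapped in AccessController.doPrivileged() so they run with the plugin's own
// permissions, and adding IOException to StorageRunnable lets callers wrap I/O-throwing SDK
// methods directly:
//
//   SocketAccess.doPrivilegedVoidException(() -> blob.upload(stream, length));
//
// where upload() may throw StorageException, URISyntaxException or IOException.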
    - * You must specify {@code -Dtests.thirdparty=true -Dtests.config=/path/to/config} - * in order to run these tests. - */ -@ThirdParty -public abstract class AbstractAzureWithThirdPartyIntegTestCase extends AbstractAzureIntegTestCase { - - @Override - protected Settings nodeSettings(int nodeOrdinal) { - return Settings.builder() - .put(super.nodeSettings(nodeOrdinal)) - .put(readSettingsFromFile()) - .build(); - } - - @Override - protected Collection> nodePlugins() { - return Arrays.asList(AzureRepositoryPlugin.class); - } - -} diff --git a/plugins/repository-azure/src/test/java/org/elasticsearch/cloud/azure/AzureTestUtils.java b/plugins/repository-azure/src/test/java/org/elasticsearch/cloud/azure/AzureTestUtils.java deleted file mode 100644 index 097f519db0363..0000000000000 --- a/plugins/repository-azure/src/test/java/org/elasticsearch/cloud/azure/AzureTestUtils.java +++ /dev/null @@ -1,55 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.cloud.azure; - -import org.elasticsearch.common.Strings; -import org.elasticsearch.common.io.PathUtils; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.settings.SettingsException; - -import java.io.IOException; - -public class AzureTestUtils { - /** - * Read settings from file when running integration tests with ThirdParty annotation. - * elasticsearch.yml file path has to be set with -Dtests.config=/path/to/elasticsearch.yml. 
- * @return Settings from elasticsearch.yml integration test file (for 3rd party tests) - */ - public static Settings readSettingsFromFile() { - Settings.Builder settings = Settings.builder(); - - // if explicit, just load it and don't load from env - try { - if (Strings.hasText(System.getProperty("tests.config"))) { - try { - settings.loadFromPath(PathUtils.get((System.getProperty("tests.config")))); - } catch (IOException e) { - throw new IllegalArgumentException("could not load azure tests config", e); - } - } else { - throw new IllegalStateException("to run integration tests, you need to set -Dtests.thirdparty=true and " + - "-Dtests.config=/path/to/elasticsearch.yml"); - } - } catch (SettingsException exception) { - throw new IllegalStateException("your test configuration file is incorrect: " + System.getProperty("tests.config"), exception); - } - return settings.build(); - } -} diff --git a/plugins/repository-azure/src/test/java/org/elasticsearch/cloud/azure/storage/AzureStorageServiceTests.java b/plugins/repository-azure/src/test/java/org/elasticsearch/cloud/azure/storage/AzureStorageServiceTests.java deleted file mode 100644 index b232ee12e05c4..0000000000000 --- a/plugins/repository-azure/src/test/java/org/elasticsearch/cloud/azure/storage/AzureStorageServiceTests.java +++ /dev/null @@ -1,270 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.cloud.azure.storage; - -import com.microsoft.azure.storage.LocationMode; -import com.microsoft.azure.storage.RetryExponentialRetry; -import com.microsoft.azure.storage.blob.CloudBlobClient; -import org.elasticsearch.common.settings.MockSecureSettings; -import org.elasticsearch.common.settings.Setting; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.test.ESTestCase; - -import java.net.URI; -import java.net.URISyntaxException; -import java.util.Map; - -import static org.elasticsearch.cloud.azure.storage.AzureStorageServiceImpl.blobNameFromUri; -import static org.elasticsearch.cloud.azure.storage.AzureStorageSettings.DEPRECATED_ACCOUNT_SETTING; -import static org.elasticsearch.cloud.azure.storage.AzureStorageSettings.DEPRECATED_DEFAULT_SETTING; -import static org.elasticsearch.cloud.azure.storage.AzureStorageSettings.DEPRECATED_KEY_SETTING; -import static org.elasticsearch.cloud.azure.storage.AzureStorageSettings.DEPRECATED_TIMEOUT_SETTING; -import static org.elasticsearch.repositories.azure.AzureSettingsParserTests.getConcreteSetting; -import static org.hamcrest.Matchers.containsInAnyOrder; -import static org.hamcrest.Matchers.instanceOf; -import static org.hamcrest.Matchers.is; -import static org.hamcrest.Matchers.notNullValue; -import static org.hamcrest.Matchers.nullValue; - -public class AzureStorageServiceTests extends ESTestCase { - - @Deprecated - static final Settings deprecatedSettings = Settings.builder() - .put("cloud.azure.storage.azure1.account", "myaccount1") - .put("cloud.azure.storage.azure1.key", "mykey1") - .put("cloud.azure.storage.azure1.default", true) - .put("cloud.azure.storage.azure2.account", "myaccount2") - .put("cloud.azure.storage.azure2.key", "mykey2") - .put("cloud.azure.storage.azure3.account", "myaccount3") - .put("cloud.azure.storage.azure3.key", "mykey3") - .put("cloud.azure.storage.azure3.timeout", "30s") - .build(); - - private MockSecureSettings buildSecureSettings() { - MockSecureSettings secureSettings = new MockSecureSettings(); - secureSettings.setString("azure.client.azure1.account", "myaccount1"); - secureSettings.setString("azure.client.azure1.key", "mykey1"); - secureSettings.setString("azure.client.azure2.account", "myaccount2"); - secureSettings.setString("azure.client.azure2.key", "mykey2"); - secureSettings.setString("azure.client.azure3.account", "myaccount3"); - secureSettings.setString("azure.client.azure3.key", "mykey3"); - return secureSettings; - } - private Settings buildSettings() { - Settings settings = Settings.builder() - .setSecureSettings(buildSecureSettings()) - .build(); - return settings; - } - - public void testReadSecuredSettings() { - MockSecureSettings secureSettings = new MockSecureSettings(); - secureSettings.setString("azure.client.azure1.account", "myaccount1"); - secureSettings.setString("azure.client.azure1.key", "mykey1"); - secureSettings.setString("azure.client.azure2.account", "myaccount2"); - secureSettings.setString("azure.client.azure2.key", "mykey2"); - secureSettings.setString("azure.client.azure3.account", "myaccount3"); - secureSettings.setString("azure.client.azure3.key", "mykey3"); - Settings settings = Settings.builder().setSecureSettings(secureSettings).build(); - - Map loadedSettings = AzureStorageSettings.load(settings); - assertThat(loadedSettings.keySet(), containsInAnyOrder("azure1","azure2","azure3","default")); - } - - public void testGetSelectedClientWithNoPrimaryAndSecondary() { - try { - new 
AzureStorageServiceMock(Settings.EMPTY); - fail("we should have raised an IllegalArgumentException"); - } catch (IllegalArgumentException e) { - assertThat(e.getMessage(), is("If you want to use an azure repository, you need to define a client configuration.")); - } - } - - public void testGetSelectedClientNonExisting() { - AzureStorageServiceImpl azureStorageService = new AzureStorageServiceMock(buildSettings()); - IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> { - azureStorageService.getSelectedClient("azure4", LocationMode.PRIMARY_ONLY); - }); - assertThat(e.getMessage(), is("Can not find named azure client [azure4]. Check your elasticsearch.yml.")); - } - - public void testGetSelectedClientGlobalTimeout() { - Settings timeoutSettings = Settings.builder() - .setSecureSettings(buildSecureSettings()) - .put(AzureStorageService.Storage.TIMEOUT_SETTING.getKey(), "10s") - .put("azure.client.azure3.timeout", "30s") - .build(); - - AzureStorageServiceImpl azureStorageService = new AzureStorageServiceMock(timeoutSettings); - CloudBlobClient client1 = azureStorageService.getSelectedClient("azure1", LocationMode.PRIMARY_ONLY); - assertThat(client1.getDefaultRequestOptions().getTimeoutIntervalInMs(), is(10 * 1000)); - CloudBlobClient client3 = azureStorageService.getSelectedClient("azure3", LocationMode.PRIMARY_ONLY); - assertThat(client3.getDefaultRequestOptions().getTimeoutIntervalInMs(), is(30 * 1000)); - - assertSettingDeprecationsAndWarnings(new Setting[]{AzureStorageService.Storage.TIMEOUT_SETTING}); - } - - public void testGetSelectedClientDefaultTimeout() { - Settings timeoutSettings = Settings.builder() - .setSecureSettings(buildSecureSettings()) - .put("azure.client.azure3.timeout", "30s") - .build(); - AzureStorageServiceImpl azureStorageService = new AzureStorageServiceMock(timeoutSettings); - CloudBlobClient client1 = azureStorageService.getSelectedClient("azure1", LocationMode.PRIMARY_ONLY); - assertThat(client1.getDefaultRequestOptions().getTimeoutIntervalInMs(), nullValue()); - CloudBlobClient client3 = azureStorageService.getSelectedClient("azure3", LocationMode.PRIMARY_ONLY); - assertThat(client3.getDefaultRequestOptions().getTimeoutIntervalInMs(), is(30 * 1000)); - } - - public void testGetSelectedClientNoTimeout() { - AzureStorageServiceImpl azureStorageService = new AzureStorageServiceMock(buildSettings()); - CloudBlobClient client1 = azureStorageService.getSelectedClient("azure1", LocationMode.PRIMARY_ONLY); - assertThat(client1.getDefaultRequestOptions().getTimeoutIntervalInMs(), is(nullValue())); - } - - public void testGetSelectedClientBackoffPolicy() { - AzureStorageServiceImpl azureStorageService = new AzureStorageServiceMock(buildSettings()); - CloudBlobClient client1 = azureStorageService.getSelectedClient("azure1", LocationMode.PRIMARY_ONLY); - assertThat(client1.getDefaultRequestOptions().getRetryPolicyFactory(), is(notNullValue())); - assertThat(client1.getDefaultRequestOptions().getRetryPolicyFactory(), instanceOf(RetryExponentialRetry.class)); - } - - public void testGetSelectedClientBackoffPolicyNbRetries() { - Settings timeoutSettings = Settings.builder() - .setSecureSettings(buildSecureSettings()) - .put("cloud.azure.storage.azure.max_retries", 7) - .build(); - - AzureStorageServiceImpl azureStorageService = new AzureStorageServiceMock(timeoutSettings); - CloudBlobClient client1 = azureStorageService.getSelectedClient("azure1", LocationMode.PRIMARY_ONLY); - assertThat(client1.getDefaultRequestOptions().getRetryPolicyFactory(), 
is(notNullValue())); - assertThat(client1.getDefaultRequestOptions().getRetryPolicyFactory(), instanceOf(RetryExponentialRetry.class)); - } - - /** - * This internal class just overload createClient method which is called by AzureStorageServiceImpl.doStart() - */ - class AzureStorageServiceMock extends AzureStorageServiceImpl { - AzureStorageServiceMock(Settings settings) { - super(settings, AzureStorageSettings.load(settings)); - } - - // We fake the client here - @Override - void createClient(AzureStorageSettings azureStorageSettings) { - this.clients.put(azureStorageSettings.getAccount(), - new CloudBlobClient(URI.create("https://" + azureStorageSettings.getName()))); - } - } - - public void testBlobNameFromUri() throws URISyntaxException { - String name = blobNameFromUri(new URI("https://myservice.azure.net/container/path/to/myfile")); - assertThat(name, is("path/to/myfile")); - name = blobNameFromUri(new URI("http://myservice.azure.net/container/path/to/myfile")); - assertThat(name, is("path/to/myfile")); - name = blobNameFromUri(new URI("http://127.0.0.1/container/path/to/myfile")); - assertThat(name, is("path/to/myfile")); - name = blobNameFromUri(new URI("https://127.0.0.1/container/path/to/myfile")); - assertThat(name, is("path/to/myfile")); - } - - // Deprecated settings. We still test them until we remove definitely the deprecated settings - - @Deprecated - public void testGetSelectedClientWithNoSecondary() { - AzureStorageServiceImpl azureStorageService = new AzureStorageServiceMock(Settings.builder() - .put("cloud.azure.storage.azure1.account", "myaccount1") - .put("cloud.azure.storage.azure1.key", "mykey1") - .build()); - CloudBlobClient client = azureStorageService.getSelectedClient("azure1", LocationMode.PRIMARY_ONLY); - assertThat(client.getEndpoint(), is(URI.create("https://azure1"))); - assertSettingDeprecationsAndWarnings(new Setting[]{ - getConcreteSetting(DEPRECATED_ACCOUNT_SETTING, "azure1"), - getConcreteSetting(DEPRECATED_KEY_SETTING, "azure1") - }); - } - - @Deprecated - public void testGetDefaultClientWithNoSecondary() { - AzureStorageServiceImpl azureStorageService = new AzureStorageServiceMock(Settings.builder() - .put("cloud.azure.storage.azure1.account", "myaccount1") - .put("cloud.azure.storage.azure1.key", "mykey1") - .build()); - CloudBlobClient client = azureStorageService.getSelectedClient("default", LocationMode.PRIMARY_ONLY); - assertThat(client.getEndpoint(), is(URI.create("https://azure1"))); - assertSettingDeprecationsAndWarnings(new Setting[]{ - getConcreteSetting(DEPRECATED_ACCOUNT_SETTING, "azure1"), - getConcreteSetting(DEPRECATED_KEY_SETTING, "azure1") - }); - } - - @Deprecated - public void testGetSelectedClientPrimary() { - AzureStorageServiceImpl azureStorageService = new AzureStorageServiceMock(deprecatedSettings); - CloudBlobClient client = azureStorageService.getSelectedClient("azure1", LocationMode.PRIMARY_ONLY); - assertThat(client.getEndpoint(), is(URI.create("https://azure1"))); - assertDeprecatedWarnings(); - } - - @Deprecated - public void testGetSelectedClientSecondary1() { - AzureStorageServiceImpl azureStorageService = new AzureStorageServiceMock(deprecatedSettings); - CloudBlobClient client = azureStorageService.getSelectedClient("azure2", LocationMode.PRIMARY_ONLY); - assertThat(client.getEndpoint(), is(URI.create("https://azure2"))); - assertDeprecatedWarnings(); - } - - @Deprecated - public void testGetSelectedClientSecondary2() { - AzureStorageServiceImpl azureStorageService = new AzureStorageServiceMock(deprecatedSettings); 
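// Note on the AzureStorageServiceMock above: it never opens a connection. createClient() is
// overridden to build a plain CloudBlobClient whose endpoint is simply "https://" followed by
// the configuration name, which is why the deprecated-settings tests in this file can assert,
// for a client configured as "azure1", that
//
//   getSelectedClient("azure1", LocationMode.PRIMARY_ONLY).getEndpoint()
//
// equals URI.create("https://azure1") without a real Azure account being involved.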
- CloudBlobClient client = azureStorageService.getSelectedClient("azure3", LocationMode.PRIMARY_ONLY); - assertThat(client.getEndpoint(), is(URI.create("https://azure3"))); - assertDeprecatedWarnings(); - } - - @Deprecated - public void testGetDefaultClientWithPrimaryAndSecondaries() { - AzureStorageServiceImpl azureStorageService = new AzureStorageServiceMock(deprecatedSettings); - CloudBlobClient client = azureStorageService.getSelectedClient("default", LocationMode.PRIMARY_ONLY); - assertThat(client.getEndpoint(), is(URI.create("https://azure1"))); - assertDeprecatedWarnings(); - } - - @Deprecated - public void testGetSelectedClientDefault() { - AzureStorageServiceImpl azureStorageService = new AzureStorageServiceMock(deprecatedSettings); - CloudBlobClient client = azureStorageService.getSelectedClient("default", LocationMode.PRIMARY_ONLY); - assertThat(client.getEndpoint(), is(URI.create("https://azure1"))); - assertDeprecatedWarnings(); - } - - private void assertDeprecatedWarnings() { - assertSettingDeprecationsAndWarnings(new Setting[]{ - getConcreteSetting(DEPRECATED_ACCOUNT_SETTING, "azure1"), - getConcreteSetting(DEPRECATED_KEY_SETTING, "azure1"), - getConcreteSetting(DEPRECATED_DEFAULT_SETTING, "azure1"), - getConcreteSetting(DEPRECATED_ACCOUNT_SETTING, "azure2"), - getConcreteSetting(DEPRECATED_KEY_SETTING, "azure2"), - getConcreteSetting(DEPRECATED_ACCOUNT_SETTING, "azure3"), - getConcreteSetting(DEPRECATED_KEY_SETTING, "azure3"), - getConcreteSetting(DEPRECATED_TIMEOUT_SETTING, "azure3") - }); - } -} diff --git a/plugins/repository-azure/src/test/java/org/elasticsearch/cloud/azure/storage/AzureStorageSettingsFilterTests.java b/plugins/repository-azure/src/test/java/org/elasticsearch/cloud/azure/storage/AzureStorageSettingsFilterTests.java deleted file mode 100644 index 17b43715253c8..0000000000000 --- a/plugins/repository-azure/src/test/java/org/elasticsearch/cloud/azure/storage/AzureStorageSettingsFilterTests.java +++ /dev/null @@ -1,72 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.cloud.azure.storage; - -import org.elasticsearch.common.inject.ModuleTestCase; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.settings.SettingsFilter; -import org.elasticsearch.common.settings.SettingsModule; -import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.json.JsonXContent; -import org.elasticsearch.plugin.repository.azure.AzureRepositoryPlugin; -import org.elasticsearch.rest.RestRequest; -import org.elasticsearch.test.ESTestCase; -import org.elasticsearch.test.rest.FakeRestRequest; - -import java.io.IOException; - -import static org.hamcrest.Matchers.contains; - -/** - * TODO as we moved credentials to secure settings, we don't need anymore to keep this test in 7.x - */ -public class AzureStorageSettingsFilterTests extends ESTestCase { - static final Settings settings = Settings.builder() - .put("cloud.azure.storage.azure1.account", "myaccount1") - .put("cloud.azure.storage.azure1.key", "mykey1") - .put("cloud.azure.storage.azure1.default", true) - .put("cloud.azure.storage.azure2.account", "myaccount2") - .put("cloud.azure.storage.azure2.key", "mykey2") - .put("cloud.azure.storage.azure3.account", "myaccount3") - .put("cloud.azure.storage.azure3.key", "mykey3") - .build(); - - public void testSettingsFiltering() throws IOException { - AzureRepositoryPlugin p = new AzureRepositoryPlugin(settings); - SettingsModule module = new SettingsModule(Settings.EMPTY, p.getSettings(), p.getSettingsFilter()); - SettingsFilter settingsFilter = ModuleTestCase.bindAndGetInstance(module, SettingsFilter.class); - - // Test using direct filtering - Settings filteredSettings = settingsFilter.filter(settings); - assertThat(filteredSettings.getAsMap().keySet(), contains("cloud.azure.storage.azure1.default")); - - // Test using toXContent filtering - RestRequest request = new FakeRestRequest(); - settingsFilter.addFilterSettingParams(request); - XContentBuilder xContentBuilder = XContentBuilder.builder(JsonXContent.jsonXContent); - xContentBuilder.startObject(); - settings.toXContent(xContentBuilder, request); - xContentBuilder.endObject(); - String filteredSettingsString = xContentBuilder.string(); - filteredSettings = Settings.builder().loadFromSource(filteredSettingsString, xContentBuilder.contentType()).build(); - assertThat(filteredSettings.getAsMap().keySet(), contains("cloud.azure.storage.azure1.default")); - } - -} diff --git a/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureBlobStoreContainerTests.java b/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureBlobStoreContainerTests.java index 85ca44205aa94..10deeb4676fd3 100644 --- a/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureBlobStoreContainerTests.java +++ b/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureBlobStoreContainerTests.java @@ -20,8 +20,6 @@ package org.elasticsearch.repositories.azure; import com.microsoft.azure.storage.StorageException; -import org.elasticsearch.cloud.azure.blobstore.AzureBlobStore; -import org.elasticsearch.cloud.azure.storage.AzureStorageServiceMock; import org.elasticsearch.cluster.metadata.RepositoryMetaData; import org.elasticsearch.common.blobstore.BlobStore; import org.elasticsearch.common.settings.Settings; diff --git a/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureRepositoryF.java 
b/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureRepositoryF.java index bfa0621912724..bb93792aa6e61 100644 --- a/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureRepositoryF.java +++ b/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureRepositoryF.java @@ -24,7 +24,6 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.node.MockNode; import org.elasticsearch.node.Node; -import org.elasticsearch.plugin.repository.azure.AzureRepositoryPlugin; import java.io.IOException; import java.util.Collections; diff --git a/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureRepositorySettingsTests.java b/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureRepositorySettingsTests.java index 75ef13d7d8745..75025332889a7 100644 --- a/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureRepositorySettingsTests.java +++ b/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureRepositorySettingsTests.java @@ -21,13 +21,13 @@ import com.microsoft.azure.storage.LocationMode; import com.microsoft.azure.storage.StorageException; -import org.elasticsearch.cloud.azure.storage.AzureStorageService; import org.elasticsearch.cluster.metadata.RepositoryMetaData; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.ByteSizeUnit; import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.env.Environment; +import org.elasticsearch.env.TestEnvironment; import org.elasticsearch.test.ESTestCase; import java.io.IOException; @@ -40,11 +40,11 @@ public class AzureRepositorySettingsTests extends ESTestCase { private AzureRepository azureRepository(Settings settings) throws StorageException, IOException, URISyntaxException { Settings internalSettings = Settings.builder() .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toAbsolutePath()) - .putArray(Environment.PATH_DATA_SETTING.getKey(), tmpPaths()) + .putList(Environment.PATH_DATA_SETTING.getKey(), tmpPaths()) .put(settings) .build(); - return new AzureRepository(new RepositoryMetaData("foo", "azure", internalSettings), new Environment(internalSettings), - NamedXContentRegistry.EMPTY, null); + return new AzureRepository(new RepositoryMetaData("foo", "azure", internalSettings), + TestEnvironment.newEnvironment(internalSettings), NamedXContentRegistry.EMPTY, null); } diff --git a/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureSettingsParserTests.java b/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureSettingsParserTests.java deleted file mode 100644 index d0fbdb98e0315..0000000000000 --- a/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureSettingsParserTests.java +++ /dev/null @@ -1,143 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.repositories.azure; - -import org.elasticsearch.cloud.azure.storage.AzureStorageSettings; -import org.elasticsearch.common.collect.Tuple; -import org.elasticsearch.common.settings.Setting; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.settings.SettingsException; -import org.elasticsearch.test.ESTestCase; - -import java.util.Map; - -import static org.elasticsearch.cloud.azure.storage.AzureStorageSettings.DEPRECATED_ACCOUNT_SETTING; -import static org.elasticsearch.cloud.azure.storage.AzureStorageSettings.DEPRECATED_DEFAULT_SETTING; -import static org.elasticsearch.cloud.azure.storage.AzureStorageSettings.DEPRECATED_KEY_SETTING; -import static org.hamcrest.Matchers.hasSize; -import static org.hamcrest.Matchers.is; -import static org.hamcrest.Matchers.notNullValue; -import static org.hamcrest.Matchers.nullValue; - -public class AzureSettingsParserTests extends ESTestCase { - - public void testParseTwoSettingsExplicitDefault() { - Settings settings = Settings.builder() - .put("cloud.azure.storage.azure1.account", "myaccount1") - .put("cloud.azure.storage.azure1.key", "mykey1") - .put("cloud.azure.storage.azure1.default", true) - .put("cloud.azure.storage.azure2.account", "myaccount2") - .put("cloud.azure.storage.azure2.key", "mykey2") - .build(); - - Tuple> tuple = AzureStorageSettings.loadLegacy(settings); - assertThat(tuple.v1(), notNullValue()); - assertThat(tuple.v1().getAccount(), is("myaccount1")); - assertThat(tuple.v1().getKey(), is("mykey1")); - assertThat(tuple.v2().keySet(), hasSize(1)); - assertThat(tuple.v2().get("azure2"), notNullValue()); - assertThat(tuple.v2().get("azure2").getAccount(), is("myaccount2")); - assertThat(tuple.v2().get("azure2").getKey(), is("mykey2")); - assertSettingDeprecationsAndWarnings(new Setting[]{ - getConcreteSetting(DEPRECATED_ACCOUNT_SETTING, "azure1"), - getConcreteSetting(DEPRECATED_KEY_SETTING, "azure1"), - getConcreteSetting(DEPRECATED_DEFAULT_SETTING, "azure1"), - getConcreteSetting(DEPRECATED_ACCOUNT_SETTING, "azure2"), - getConcreteSetting(DEPRECATED_KEY_SETTING, "azure2") - }); - } - - public void testParseUniqueSettings() { - Settings settings = Settings.builder() - .put("cloud.azure.storage.azure1.account", "myaccount1") - .put("cloud.azure.storage.azure1.key", "mykey1") - .build(); - - Tuple> tuple = AzureStorageSettings.loadLegacy(settings); - assertThat(tuple.v1(), notNullValue()); - assertThat(tuple.v1().getAccount(), is("myaccount1")); - assertThat(tuple.v1().getKey(), is("mykey1")); - assertThat(tuple.v2().keySet(), hasSize(0)); - assertSettingDeprecationsAndWarnings(new Setting[]{ - getConcreteSetting(DEPRECATED_ACCOUNT_SETTING, "azure1"), - getConcreteSetting(DEPRECATED_KEY_SETTING, "azure1") - }); - } - - public void testParseTwoSettingsNoDefault() { - Settings settings = Settings.builder() - .put("cloud.azure.storage.azure1.account", "myaccount1") - .put("cloud.azure.storage.azure1.key", "mykey1") - .put("cloud.azure.storage.azure2.account", "myaccount2") - .put("cloud.azure.storage.azure2.key", "mykey2") - .build(); - - try { - 
AzureStorageSettings.loadLegacy(settings); - fail("Should have failed with a SettingsException (no default data store)"); - } catch (SettingsException ex) { - assertEquals(ex.getMessage(), "No default Azure data store configured"); - } - assertSettingDeprecationsAndWarnings(new Setting[]{ - getConcreteSetting(DEPRECATED_ACCOUNT_SETTING, "azure1"), - getConcreteSetting(DEPRECATED_KEY_SETTING, "azure1"), - getConcreteSetting(DEPRECATED_ACCOUNT_SETTING, "azure2"), - getConcreteSetting(DEPRECATED_KEY_SETTING, "azure2"), - }); - } - - public void testParseTwoSettingsTooManyDefaultSet() { - Settings settings = Settings.builder() - .put("cloud.azure.storage.azure1.account", "myaccount1") - .put("cloud.azure.storage.azure1.key", "mykey1") - .put("cloud.azure.storage.azure1.default", true) - .put("cloud.azure.storage.azure2.account", "myaccount2") - .put("cloud.azure.storage.azure2.key", "mykey2") - .put("cloud.azure.storage.azure2.default", true) - .build(); - - try { - AzureStorageSettings.loadLegacy(settings); - fail("Should have failed with a SettingsException (multiple default data stores)"); - } catch (SettingsException ex) { - assertEquals(ex.getMessage(), "Multiple default Azure data stores configured: [azure1] and [azure2]"); - } - assertSettingDeprecationsAndWarnings(new Setting[]{ - getConcreteSetting(DEPRECATED_ACCOUNT_SETTING, "azure1"), - getConcreteSetting(DEPRECATED_KEY_SETTING, "azure1"), - getConcreteSetting(DEPRECATED_DEFAULT_SETTING, "azure1"), - getConcreteSetting(DEPRECATED_ACCOUNT_SETTING, "azure2"), - getConcreteSetting(DEPRECATED_KEY_SETTING, "azure2"), - getConcreteSetting(DEPRECATED_DEFAULT_SETTING, "azure2") - }); - } - - public void testParseEmptySettings() { - Tuple> tuple = AzureStorageSettings.loadLegacy(Settings.EMPTY); - assertThat(tuple.v1(), nullValue()); - assertThat(tuple.v2().keySet(), hasSize(0)); - } - - public static Setting getConcreteSetting(Setting setting, String groupName) { - Setting.AffixKey k = (Setting.AffixKey) setting.getRawKey(); - String concreteKey = k.toConcreteKey(groupName).toString(); - return setting.getConcreteSetting(concreteKey); - } -} diff --git a/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureSnapshotRestoreListSnapshotsTests.java b/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureSnapshotRestoreListSnapshotsTests.java deleted file mode 100644 index 6760b418ed50e..0000000000000 --- a/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureSnapshotRestoreListSnapshotsTests.java +++ /dev/null @@ -1,119 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.repositories.azure; - -import com.microsoft.azure.storage.LocationMode; -import com.microsoft.azure.storage.StorageException; -import org.elasticsearch.action.admin.cluster.repositories.put.PutRepositoryResponse; -import org.elasticsearch.client.Client; -import org.elasticsearch.cloud.azure.AbstractAzureWithThirdPartyIntegTestCase; -import org.elasticsearch.cloud.azure.storage.AzureStorageService; -import org.elasticsearch.cloud.azure.storage.AzureStorageServiceImpl; -import org.elasticsearch.cloud.azure.storage.AzureStorageSettings; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.repositories.azure.AzureRepository.Repository; -import org.elasticsearch.test.ESIntegTestCase; -import org.elasticsearch.test.ESIntegTestCase.ClusterScope; -import org.junit.After; -import org.junit.Before; - -import java.net.URISyntaxException; -import java.util.concurrent.TimeUnit; - -import static org.elasticsearch.cloud.azure.AzureTestUtils.readSettingsFromFile; -import static org.elasticsearch.repositories.azure.AzureSnapshotRestoreTests.getContainerName; -import static org.hamcrest.Matchers.equalTo; -import static org.hamcrest.Matchers.lessThanOrEqualTo; - -/** - * This test needs Azure to run and -Dtests.thirdparty=true to be set - * and -Dtests.config=/path/to/elasticsearch.yml - * - * Note that this test requires an Azure storage account, with the account - * and credentials set in the elasticsearch.yml config file passed in to the - * test. The Azure storage account type must be a Read-access geo-redundant - * storage (RA-GRS) account. - * - * @see AbstractAzureWithThirdPartyIntegTestCase - */ -@ClusterScope( - scope = ESIntegTestCase.Scope.SUITE, - supportsDedicatedMasters = false, numDataNodes = 1, - transportClientRatio = 0.0) -public class AzureSnapshotRestoreListSnapshotsTests extends AbstractAzureWithThirdPartyIntegTestCase { - - private final AzureStorageService azureStorageService = new AzureStorageServiceImpl(readSettingsFromFile(), - AzureStorageSettings.load(readSettingsFromFile())); - private final String containerName = getContainerName(); - - public void testList() throws Exception { - Client client = client(); - logger.info("--> creating azure primary repository"); - PutRepositoryResponse putRepositoryResponsePrimary = client.admin().cluster().preparePutRepository("primary") - .setType("azure").setSettings(Settings.builder() - .put(Repository.CONTAINER_SETTING.getKey(), containerName) - ).get(); - assertThat(putRepositoryResponsePrimary.isAcknowledged(), equalTo(true)); - - logger.info("--> start get snapshots on primary"); - long startWait = System.currentTimeMillis(); - client.admin().cluster().prepareGetSnapshots("primary").get(); - long endWait = System.currentTimeMillis(); - // definitely should be done in 30s, and if its not working as expected, it takes over 1m - assertThat(endWait - startWait, lessThanOrEqualTo(30000L)); - - logger.info("--> creating azure secondary repository"); - PutRepositoryResponse putRepositoryResponseSecondary = client.admin().cluster().preparePutRepository("secondary") - .setType("azure").setSettings(Settings.builder() - .put(Repository.CONTAINER_SETTING.getKey(), containerName) - .put(Repository.LOCATION_MODE_SETTING.getKey(), "secondary_only") - ).get(); - assertThat(putRepositoryResponseSecondary.isAcknowledged(), equalTo(true)); - - logger.info("--> start get snapshots on secondary"); - startWait = System.currentTimeMillis(); - 
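// Note on the secondary repository created above: "secondary_only" points the repository at
// the storage account's read-only secondary endpoint, which only exists for read-access
// geo-redundant (RA-GRS) accounts -- hence the account-type requirement in the class javadoc.
// The test only lists snapshots through it; snapshots are still written via the primary.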
client.admin().cluster().prepareGetSnapshots("secondary").get(); - endWait = System.currentTimeMillis(); - logger.info("--> end of get snapshots on secondary. Took {} ms", endWait - startWait); - assertThat(endWait - startWait, lessThanOrEqualTo(30000L)); - } - - @Before - public void createContainer() throws Exception { - // It could happen that we run this test really close to a previous one - // so we might need some time to be able to create the container - assertBusy(() -> { - try { - azureStorageService.createContainer(null, LocationMode.PRIMARY_ONLY, containerName); - } catch (URISyntaxException e) { - // Incorrect URL. This should never happen. - fail(); - } catch (StorageException e) { - // It could happen. Let's wait for a while. - fail(); - } - }, 30, TimeUnit.SECONDS); - } - - @After - public void removeContainer() throws Exception { - azureStorageService.removeContainer(null, LocationMode.PRIMARY_ONLY, containerName); - } -} diff --git a/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureSnapshotRestoreTests.java b/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureSnapshotRestoreTests.java index aea47f38ef3ef..439a9d567f1a4 100644 --- a/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureSnapshotRestoreTests.java +++ b/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureSnapshotRestoreTests.java @@ -28,213 +28,139 @@ import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse; import org.elasticsearch.client.Client; import org.elasticsearch.client.ClusterAdminClient; -import org.elasticsearch.cloud.azure.AbstractAzureWithThirdPartyIntegTestCase; -import org.elasticsearch.cloud.azure.storage.AzureStorageService; -import org.elasticsearch.cloud.azure.storage.AzureStorageServiceImpl; -import org.elasticsearch.cloud.azure.storage.AzureStorageSettings; import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.common.Strings; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.ByteSizeUnit; +import org.elasticsearch.plugins.Plugin; import org.elasticsearch.repositories.RepositoryMissingException; import org.elasticsearch.repositories.RepositoryVerificationException; import org.elasticsearch.repositories.azure.AzureRepository.Repository; +import org.elasticsearch.repositories.blobstore.ESBlobStoreRepositoryIntegTestCase; import org.elasticsearch.snapshots.SnapshotMissingException; +import org.elasticsearch.snapshots.SnapshotRestoreException; import org.elasticsearch.snapshots.SnapshotState; import org.elasticsearch.test.ESIntegTestCase; import org.elasticsearch.test.ESIntegTestCase.ClusterScope; +import org.elasticsearch.test.ESIntegTestCase.ThirdParty; import org.elasticsearch.test.store.MockFSDirectoryService; +import org.elasticsearch.test.store.MockFSIndexStore; import org.junit.After; -import org.junit.Before; +import org.junit.AfterClass; +import org.junit.BeforeClass; import java.net.URISyntaxException; +import java.util.Arrays; +import java.util.Collection; import java.util.Locale; import java.util.concurrent.TimeUnit; -import static org.elasticsearch.cloud.azure.AzureTestUtils.readSettingsFromFile; +import static org.elasticsearch.repositories.azure.AzureTestUtils.generateMockSecureSettings; +import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.greaterThan; +import static 
org.hamcrest.Matchers.lessThanOrEqualTo; /** - * This test needs Azure to run and -Dtests.thirdparty=true to be set - * and -Dtests.config=/path/to/elasticsearch.yml - * @see AbstractAzureWithThirdPartyIntegTestCase + * Those integration tests need an Azure access and must be run with + * {@code -Dtests.thirdparty=true -Dtests.azure.account=AzureStorageAccount -Dtests.azure.key=AzureStorageKey} + * options */ @ClusterScope( scope = ESIntegTestCase.Scope.SUITE, supportsDedicatedMasters = false, numDataNodes = 1, transportClientRatio = 0.0) -public class AzureSnapshotRestoreTests extends AbstractAzureWithThirdPartyIntegTestCase { - private String getRepositoryPath() { - String testName = "it-" + getTestName(); - return testName.contains(" ") ? Strings.split(testName, " ")[0] : testName; +@ThirdParty +public class AzureSnapshotRestoreTests extends ESBlobStoreRepositoryIntegTestCase { + + private static Settings.Builder generateMockSettings() { + return Settings.builder().setSecureSettings(generateMockSecureSettings()); } - public static String getContainerName() { - String testName = "snapshot-itest-".concat(RandomizedTest.getContext().getRunnerSeedAsString().toLowerCase(Locale.ROOT)); - return testName.contains(" ") ? Strings.split(testName, " ")[0] : testName; + private static AzureStorageService getAzureStorageService() { + return new AzureStorageServiceImpl(generateMockSettings().build(), + AzureStorageSettings.load(generateMockSettings().build())); } @Override - public Settings indexSettings() { - // During restore we frequently restore index to exactly the same state it was before, that might cause the same - // checksum file to be written twice during restore operation - return Settings.builder().put(super.indexSettings()) - .put(MockFSDirectoryService.RANDOM_PREVENT_DOUBLE_WRITE_SETTING.getKey(), false) - .put(MockFSDirectoryService.RANDOM_NO_DELETE_OPEN_FILE_SETTING.getKey(), false) - .build(); + protected Settings nodeSettings(int nodeOrdinal) { + return generateMockSettings() + .put(super.nodeSettings(nodeOrdinal)) + .build(); } - @Before @After - public final void wipeAzureRepositories() throws StorageException, URISyntaxException { - wipeRepositories(); - cleanRepositoryFiles( - getContainerName(), - getContainerName().concat("-1"), - getContainerName().concat("-2")); + private static String getContainerName() { + /* Have a different name per test so that there is no possible race condition. As the long can be negative, + * there mustn't be a hyphen between the 2 concatenated numbers + * (can't have 2 consecutives hyphens on Azure containers) + */ + String testName = "snapshot-itest-" + .concat(RandomizedTest.getContext().getRunnerSeedAsString().toLowerCase(Locale.ROOT)); + return testName.contains(" ") ? 
Strings.split(testName, " ")[0] : testName; } - public void testSimpleWorkflow() { - Client client = client(); - logger.info("--> creating azure repository with path [{}]", getRepositoryPath()); - PutRepositoryResponse putRepositoryResponse = client.admin().cluster().preparePutRepository("test-repo") - .setType("azure").setSettings(Settings.builder() - .put(Repository.CONTAINER_SETTING.getKey(), getContainerName()) - .put(Repository.BASE_PATH_SETTING.getKey(), getRepositoryPath()) - .put(Repository.CHUNK_SIZE_SETTING.getKey(), randomIntBetween(1000, 10000), ByteSizeUnit.BYTES) - ).get(); - assertThat(putRepositoryResponse.isAcknowledged(), equalTo(true)); - - createIndex("test-idx-1", "test-idx-2", "test-idx-3"); - ensureGreen(); - - logger.info("--> indexing some data"); - for (int i = 0; i < 100; i++) { - index("test-idx-1", "doc", Integer.toString(i), "foo", "bar" + i); - index("test-idx-2", "doc", Integer.toString(i), "foo", "baz" + i); - index("test-idx-3", "doc", Integer.toString(i), "foo", "baz" + i); - } - refresh(); - assertThat(client.prepareSearch("test-idx-1").setSize(0).get().getHits().getTotalHits(), equalTo(100L)); - assertThat(client.prepareSearch("test-idx-2").setSize(0).get().getHits().getTotalHits(), equalTo(100L)); - assertThat(client.prepareSearch("test-idx-3").setSize(0).get().getHits().getTotalHits(), equalTo(100L)); - - logger.info("--> snapshot"); - CreateSnapshotResponse createSnapshotResponse = client.admin().cluster().prepareCreateSnapshot("test-repo", "test-snap") - .setWaitForCompletion(true).setIndices("test-idx-*", "-test-idx-3").get(); - assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0)); - assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), - equalTo(createSnapshotResponse.getSnapshotInfo().totalShards())); - - assertThat(client.admin().cluster().prepareGetSnapshots("test-repo").setSnapshots("test-snap").get().getSnapshots() - .get(0).state(), equalTo(SnapshotState.SUCCESS)); - - logger.info("--> delete some data"); - for (int i = 0; i < 50; i++) { - client.prepareDelete("test-idx-1", "doc", Integer.toString(i)).get(); - } - for (int i = 50; i < 100; i++) { - client.prepareDelete("test-idx-2", "doc", Integer.toString(i)).get(); - } - for (int i = 0; i < 100; i += 2) { - client.prepareDelete("test-idx-3", "doc", Integer.toString(i)).get(); - } - refresh(); - assertThat(client.prepareSearch("test-idx-1").setSize(0).get().getHits().getTotalHits(), equalTo(50L)); - assertThat(client.prepareSearch("test-idx-2").setSize(0).get().getHits().getTotalHits(), equalTo(50L)); - assertThat(client.prepareSearch("test-idx-3").setSize(0).get().getHits().getTotalHits(), equalTo(50L)); - - logger.info("--> close indices"); - client.admin().indices().prepareClose("test-idx-1", "test-idx-2").get(); - - logger.info("--> restore all indices from the snapshot"); - RestoreSnapshotResponse restoreSnapshotResponse = client.admin().cluster().prepareRestoreSnapshot("test-repo", "test-snap") - .setWaitForCompletion(true).get(); - assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), greaterThan(0)); - - ensureGreen(); - assertThat(client.prepareSearch("test-idx-1").setSize(0).get().getHits().getTotalHits(), equalTo(100L)); - assertThat(client.prepareSearch("test-idx-2").setSize(0).get().getHits().getTotalHits(), equalTo(100L)); - assertThat(client.prepareSearch("test-idx-3").setSize(0).get().getHits().getTotalHits(), equalTo(50L)); + @BeforeClass + public static void createTestContainers() throws Exception { + 
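// Two notes on the rewritten test setup above:
//
// 1) As the class javadoc states, these tests now take their credentials from system
//    properties instead of a config file; a typical invocation (the exact gradle task may
//    vary by branch) looks something like:
//
//      gradle check -Dtests.thirdparty=true \
//        -Dtests.azure.account=<storage account> -Dtests.azure.key=<storage key>
//
// 2) getContainerName() appends the lower-cased randomized runner seed without inserting a
//    hyphen between its two numbers because a negative value would otherwise produce
//    consecutive hyphens. Azure container names must be 3-63 characters of lowercase letters,
//    digits and hyphens with no consecutive hyphens -- the same rules the removed
//    testForbiddenContainerName() cases further below exercised.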
createTestContainer(getContainerName()); + // This is needed for testMultipleRepositories() test case + createTestContainer(getContainerName() + "-1"); + createTestContainer(getContainerName() + "-2"); + } - // Test restore after index deletion - logger.info("--> delete indices"); - cluster().wipeIndices("test-idx-1", "test-idx-2"); - logger.info("--> restore one index after deletion"); - restoreSnapshotResponse = client.admin().cluster().prepareRestoreSnapshot("test-repo", "test-snap").setWaitForCompletion(true) - .setIndices("test-idx-*", "-test-idx-2").get(); - assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), greaterThan(0)); - ensureGreen(); - assertThat(client.prepareSearch("test-idx-1").setSize(0).get().getHits().getTotalHits(), equalTo(100L)); - ClusterState clusterState = client.admin().cluster().prepareState().get().getState(); - assertThat(clusterState.getMetaData().hasIndex("test-idx-1"), equalTo(true)); - assertThat(clusterState.getMetaData().hasIndex("test-idx-2"), equalTo(false)); + @AfterClass + public static void removeContainer() throws Exception { + removeTestContainer(getContainerName()); + // This is needed for testMultipleRepositories() test case + removeTestContainer(getContainerName() + "-1"); + removeTestContainer(getContainerName() + "-2"); } /** - * For issue #51: https://github.com/elastic/elasticsearch-cloud-azure/issues/51 + * Create a test container in Azure + * @param containerName container name to use */ - public void testMultipleSnapshots() throws URISyntaxException, StorageException { - final String indexName = "test-idx-1"; - final String typeName = "doc"; - final String repositoryName = "test-repo"; - final String snapshot1Name = "test-snap-1"; - final String snapshot2Name = "test-snap-2"; - - Client client = client(); - - logger.info("creating index [{}]", indexName); - createIndex(indexName); - ensureGreen(); - - logger.info("indexing first document"); - index(indexName, typeName, Integer.toString(1), "foo", "bar " + Integer.toString(1)); - refresh(); - assertThat(client.prepareSearch(indexName).setSize(0).get().getHits().getTotalHits(), equalTo(1L)); - - logger.info("creating Azure repository with path [{}]", getRepositoryPath()); - PutRepositoryResponse putRepositoryResponse = client.admin().cluster().preparePutRepository(repositoryName) - .setType("azure").setSettings(Settings.builder() - .put(Repository.CONTAINER_SETTING.getKey(), getContainerName()) - .put(Repository.BASE_PATH_SETTING.getKey(), getRepositoryPath()) - .put(Repository.BASE_PATH_SETTING.getKey(), randomIntBetween(1000, 10000), ByteSizeUnit.BYTES) - ).get(); - assertThat(putRepositoryResponse.isAcknowledged(), equalTo(true)); - - logger.info("creating snapshot [{}]", snapshot1Name); - CreateSnapshotResponse createSnapshotResponse1 = client.admin().cluster().prepareCreateSnapshot(repositoryName, snapshot1Name) - .setWaitForCompletion(true).setIndices(indexName).get(); - assertThat(createSnapshotResponse1.getSnapshotInfo().successfulShards(), greaterThan(0)); - assertThat(createSnapshotResponse1.getSnapshotInfo().successfulShards(), - equalTo(createSnapshotResponse1.getSnapshotInfo().totalShards())); - - assertThat(client.admin().cluster().prepareGetSnapshots(repositoryName).setSnapshots(snapshot1Name).get().getSnapshots() - .get(0).state(), equalTo(SnapshotState.SUCCESS)); + private static void createTestContainer(String containerName) throws Exception { + // It could happen that we run this test really close to a previous one + // so we might need some time to 
be able to create the container + assertBusy(() -> { + getAzureStorageService().createContainer("default", LocationMode.PRIMARY_ONLY, containerName); + }, 30, TimeUnit.SECONDS); + } - logger.info("indexing second document"); - index(indexName, typeName, Integer.toString(2), "foo", "bar " + Integer.toString(2)); - refresh(); - assertThat(client.prepareSearch(indexName).setSize(0).get().getHits().getTotalHits(), equalTo(2L)); + /** + * Remove a test container in Azure + * @param containerName container name to use + */ + private static void removeTestContainer(String containerName) throws URISyntaxException, StorageException { + getAzureStorageService().removeContainer("default", LocationMode.PRIMARY_ONLY, containerName); + } - logger.info("creating snapshot [{}]", snapshot2Name); - CreateSnapshotResponse createSnapshotResponse2 = client.admin().cluster().prepareCreateSnapshot(repositoryName, snapshot2Name) - .setWaitForCompletion(true).setIndices(indexName).get(); - assertThat(createSnapshotResponse2.getSnapshotInfo().successfulShards(), greaterThan(0)); - assertThat(createSnapshotResponse2.getSnapshotInfo().successfulShards(), - equalTo(createSnapshotResponse2.getSnapshotInfo().totalShards())); + @Override + protected Collection> nodePlugins() { + return Arrays.asList(AzureRepositoryPlugin.class, MockFSIndexStore.TestPlugin.class); + } - assertThat(client.admin().cluster().prepareGetSnapshots(repositoryName).setSnapshots(snapshot2Name).get().getSnapshots() - .get(0).state(), equalTo(SnapshotState.SUCCESS)); + private String getRepositoryPath() { + String testName = "it-" + getTestName(); + return testName.contains(" ") ? Strings.split(testName, " ")[0] : testName; + } - logger.info("closing index [{}]", indexName); - client.admin().indices().prepareClose(indexName).get(); + @Override + public Settings indexSettings() { + // During restore we frequently restore index to exactly the same state it was before, that might cause the same + // checksum file to be written twice during restore operation + return Settings.builder().put(super.indexSettings()) + .put(MockFSDirectoryService.RANDOM_PREVENT_DOUBLE_WRITE_SETTING.getKey(), false) + .put(MockFSDirectoryService.RANDOM_NO_DELETE_OPEN_FILE_SETTING.getKey(), false) + .build(); + } - logger.info("attempting restore from snapshot [{}]", snapshot1Name); - RestoreSnapshotResponse restoreSnapshotResponse = client.admin().cluster().prepareRestoreSnapshot(repositoryName, snapshot1Name) - .setWaitForCompletion(true).get(); - assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), greaterThan(0)); - ensureGreen(); - assertThat(client.prepareSearch(indexName).setSize(0).get().getHits().getTotalHits(), equalTo(1L)); + @After + public final void wipeAzureRepositories() { + try { + client().admin().cluster().prepareDeleteRepository("*").get(); + } catch (RepositoryMissingException ignored) { + } } public void testMultipleRepositories() { @@ -314,6 +240,7 @@ public void testMultipleRepositories() { * For issue #26: https://github.com/elastic/elasticsearch-cloud-azure/issues/26 */ public void testListBlobs_26() throws StorageException, URISyntaxException { + final String repositoryName="test-repo-26"; createIndex("test-idx-1", "test-idx-2", "test-idx-3"); ensureGreen(); @@ -327,29 +254,29 @@ public void testListBlobs_26() throws StorageException, URISyntaxException { ClusterAdminClient client = client().admin().cluster(); logger.info("--> creating azure repository without any path"); - PutRepositoryResponse putRepositoryResponse = 
client.preparePutRepository("test-repo").setType("azure") + PutRepositoryResponse putRepositoryResponse = client.preparePutRepository(repositoryName).setType("azure") .setSettings(Settings.builder() .put(Repository.CONTAINER_SETTING.getKey(), getContainerName()) ).get(); assertThat(putRepositoryResponse.isAcknowledged(), equalTo(true)); // Get all snapshots - should be empty - assertThat(client.prepareGetSnapshots("test-repo").get().getSnapshots().size(), equalTo(0)); + assertThat(client.prepareGetSnapshots(repositoryName).get().getSnapshots().size(), equalTo(0)); logger.info("--> snapshot"); - CreateSnapshotResponse createSnapshotResponse = client.prepareCreateSnapshot("test-repo", "test-snap-26") + CreateSnapshotResponse createSnapshotResponse = client.prepareCreateSnapshot(repositoryName, "test-snap-26") .setWaitForCompletion(true).setIndices("test-idx-*").get(); assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0)); // Get all snapshots - should have one - assertThat(client.prepareGetSnapshots("test-repo").get().getSnapshots().size(), equalTo(1)); + assertThat(client.prepareGetSnapshots(repositoryName).get().getSnapshots().size(), equalTo(1)); // Clean the snapshot - client.prepareDeleteSnapshot("test-repo", "test-snap-26").get(); - client.prepareDeleteRepository("test-repo").get(); + client.prepareDeleteSnapshot(repositoryName, "test-snap-26").get(); + client.prepareDeleteRepository(repositoryName).get(); logger.info("--> creating azure repository path [{}]", getRepositoryPath()); - putRepositoryResponse = client.preparePutRepository("test-repo").setType("azure") + putRepositoryResponse = client.preparePutRepository(repositoryName).setType("azure") .setSettings(Settings.builder() .put(Repository.CONTAINER_SETTING.getKey(), getContainerName()) .put(Repository.BASE_PATH_SETTING.getKey(), getRepositoryPath()) @@ -357,103 +284,53 @@ public void testListBlobs_26() throws StorageException, URISyntaxException { assertThat(putRepositoryResponse.isAcknowledged(), equalTo(true)); // Get all snapshots - should be empty - assertThat(client.prepareGetSnapshots("test-repo").get().getSnapshots().size(), equalTo(0)); + assertThat(client.prepareGetSnapshots(repositoryName).get().getSnapshots().size(), equalTo(0)); logger.info("--> snapshot"); - createSnapshotResponse = client.prepareCreateSnapshot("test-repo", "test-snap-26").setWaitForCompletion(true) + createSnapshotResponse = client.prepareCreateSnapshot(repositoryName, "test-snap-26").setWaitForCompletion(true) .setIndices("test-idx-*").get(); assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0)); // Get all snapshots - should have one - assertThat(client.prepareGetSnapshots("test-repo").get().getSnapshots().size(), equalTo(1)); - - + assertThat(client.prepareGetSnapshots(repositoryName).get().getSnapshots().size(), equalTo(1)); } /** * For issue #28: https://github.com/elastic/elasticsearch-cloud-azure/issues/28 */ public void testGetDeleteNonExistingSnapshot_28() throws StorageException, URISyntaxException { + final String repositoryName="test-repo-28"; ClusterAdminClient client = client().admin().cluster(); logger.info("--> creating azure repository without any path"); - PutRepositoryResponse putRepositoryResponse = client.preparePutRepository("test-repo").setType("azure") + PutRepositoryResponse putRepositoryResponse = client.preparePutRepository(repositoryName).setType("azure") .setSettings(Settings.builder() .put(Repository.CONTAINER_SETTING.getKey(), getContainerName()) 
).get(); assertThat(putRepositoryResponse.isAcknowledged(), equalTo(true)); try { - client.prepareGetSnapshots("test-repo").addSnapshots("nonexistingsnapshotname").get(); + client.prepareGetSnapshots(repositoryName).addSnapshots("nonexistingsnapshotname").get(); fail("Shouldn't be here"); } catch (SnapshotMissingException ex) { // Expected } try { - client.prepareDeleteSnapshot("test-repo", "nonexistingsnapshotname").get(); + client.prepareDeleteSnapshot(repositoryName, "nonexistingsnapshotname").get(); fail("Shouldn't be here"); } catch (SnapshotMissingException ex) { // Expected } } - /** - * For issue #21: https://github.com/elastic/elasticsearch-cloud-azure/issues/21 - */ - public void testForbiddenContainerName() throws Exception { - checkContainerName("", false); - checkContainerName("es", false); - checkContainerName("-elasticsearch", false); - checkContainerName("elasticsearch--integration", false); - checkContainerName("elasticsearch_integration", false); - checkContainerName("ElAsTicsearch_integration", false); - checkContainerName("123456789-123456789-123456789-123456789-123456789-123456789-1234", false); - checkContainerName("123456789-123456789-123456789-123456789-123456789-123456789-123", true); - checkContainerName("elasticsearch-integration", true); - checkContainerName("elasticsearch-integration-007", true); - } - - /** - * Create repository with wrong or correct container name - * @param container Container name we want to create - * @param correct Is this container name correct - */ - private void checkContainerName(final String container, final boolean correct) throws Exception { - logger.info("--> creating azure repository with container name [{}]", container); - // It could happen that we just removed from a previous test the same container so - // we can not create it yet. - assertBusy(() -> { - try { - PutRepositoryResponse putRepositoryResponse = client().admin().cluster().preparePutRepository("test-repo") - .setType("azure").setSettings(Settings.builder() - .put(Repository.CONTAINER_SETTING.getKey(), container) - .put(Repository.BASE_PATH_SETTING.getKey(), getRepositoryPath()) - .put(Repository.CHUNK_SIZE_SETTING.getKey(), randomIntBetween(1000, 10000), ByteSizeUnit.BYTES) - ).get(); - client().admin().cluster().prepareDeleteRepository("test-repo").get(); - try { - logger.info("--> remove container [{}]", container); - cleanRepositoryFiles(container); - } catch (StorageException | URISyntaxException e) { - // We can ignore that as we just try to clean after the test - } - assertTrue(putRepositoryResponse.isAcknowledged() == correct); - } catch (RepositoryVerificationException e) { - if (correct) { - logger.debug(" -> container is being removed. 
Let's wait a bit..."); - fail(); - } - } - }, 5, TimeUnit.MINUTES); - } - /** * Test case for issue #23: https://github.com/elastic/elasticsearch-cloud-azure/issues/23 */ public void testNonExistingRepo_23() { + final String repositoryName = "test-repo-test23"; Client client = client(); logger.info("--> creating azure repository with path [{}]", getRepositoryPath()); - PutRepositoryResponse putRepositoryResponse = client.admin().cluster().preparePutRepository("test-repo") + PutRepositoryResponse putRepositoryResponse = client.admin().cluster().preparePutRepository(repositoryName) .setType("azure").setSettings(Settings.builder() .put(Repository.CONTAINER_SETTING.getKey(), getContainerName()) .put(Repository.BASE_PATH_SETTING.getKey(), getRepositoryPath()) @@ -463,9 +340,9 @@ public void testNonExistingRepo_23() { logger.info("--> restore non existing snapshot"); try { - client.admin().cluster().prepareRestoreSnapshot("test-repo", "no-existing-snapshot").setWaitForCompletion(true).get(); + client.admin().cluster().prepareRestoreSnapshot(repositoryName, "no-existing-snapshot").setWaitForCompletion(true).get(); fail("Shouldn't be here"); - } catch (SnapshotMissingException ex) { + } catch (SnapshotRestoreException ex) { // Expected } } @@ -475,25 +352,9 @@ public void testNonExistingRepo_23() { */ public void testRemoveAndCreateContainer() throws Exception { final String container = getContainerName().concat("-testremove"); - final AzureStorageService storageService = new AzureStorageServiceImpl(internalCluster().getDefaultSettings(), - AzureStorageSettings.load(internalCluster().getDefaultSettings())); - // It could happen that we run this test really close to a previous one - // so we might need some time to be able to create the container - assertBusy(() -> { - try { - storageService.createContainer(null, LocationMode.PRIMARY_ONLY, container); - logger.debug(" -> container created..."); - } catch (URISyntaxException e) { - // Incorrect URL. This should never happen. - fail(); - } catch (StorageException e) { - // It could happen. Let's wait for a while. - logger.debug(" -> container is being removed. Let's wait a bit..."); - fail(); - } - }, 30, TimeUnit.SECONDS); - storageService.removeContainer(null, LocationMode.PRIMARY_ONLY, container); + createTestContainer(container); + removeTestContainer(container); ClusterAdminClient client = client().admin().cluster(); logger.info("--> creating azure repository while container is being removed"); @@ -509,30 +370,52 @@ public void testRemoveAndCreateContainer() throws Exception { } /** - * Deletes repositories, supports wildcard notation. + * Test that you can snapshot on the primary repository and list the available snapshots + * from the secondary repository. + * + * Note that this test requires an Azure storage account which must be a Read-access geo-redundant + * storage (RA-GRS) account type. + * @throws Exception If anything goes wrong */ - public static void wipeRepositories(String... 
repositories) { - // if nothing is provided, delete all - if (repositories.length == 0) { - repositories = new String[]{"*"}; - } - for (String repository : repositories) { - try { - client().admin().cluster().prepareDeleteRepository(repository).get(); - } catch (RepositoryMissingException ex) { - // ignore - } - } + public void testGeoRedundantStorage() throws Exception { + Client client = client(); + logger.info("--> creating azure primary repository"); + PutRepositoryResponse putRepositoryResponsePrimary = client.admin().cluster().preparePutRepository("primary") + .setType("azure").setSettings(Settings.builder() + .put(Repository.CONTAINER_SETTING.getKey(), getContainerName()) + ).get(); + assertThat(putRepositoryResponsePrimary.isAcknowledged(), equalTo(true)); + + logger.info("--> start get snapshots on primary"); + long startWait = System.currentTimeMillis(); + client.admin().cluster().prepareGetSnapshots("primary").get(); + long endWait = System.currentTimeMillis(); + // definitely should be done in 30s, and if its not working as expected, it takes over 1m + assertThat(endWait - startWait, lessThanOrEqualTo(30000L)); + + logger.info("--> creating azure secondary repository"); + PutRepositoryResponse putRepositoryResponseSecondary = client.admin().cluster().preparePutRepository("secondary") + .setType("azure").setSettings(Settings.builder() + .put(Repository.CONTAINER_SETTING.getKey(), getContainerName()) + .put(Repository.LOCATION_MODE_SETTING.getKey(), "secondary_only") + ).get(); + assertThat(putRepositoryResponseSecondary.isAcknowledged(), equalTo(true)); + + logger.info("--> start get snapshots on secondary"); + startWait = System.currentTimeMillis(); + client.admin().cluster().prepareGetSnapshots("secondary").get(); + endWait = System.currentTimeMillis(); + logger.info("--> end of get snapshots on secondary. Took {} ms", endWait - startWait); + assertThat(endWait - startWait, lessThanOrEqualTo(30000L)); } - /** - * Purge the test containers - */ - public void cleanRepositoryFiles(String... 
containers) throws StorageException, URISyntaxException { - Settings settings = readSettingsFromFile(); - AzureStorageService client = new AzureStorageServiceImpl(settings, AzureStorageSettings.load(settings)); - for (String container : containers) { - client.removeContainer(null, LocationMode.PRIMARY_ONLY, container); - } + @Override + protected void createTestRepository(String name) { + assertAcked(client().admin().cluster().preparePutRepository(name) + .setType(AzureRepository.TYPE) + .setSettings(Settings.builder() + .put(Repository.CONTAINER_SETTING.getKey(), getContainerName()) + .put(Repository.BASE_PATH_SETTING.getKey(), getRepositoryPath()) + .put(Repository.CHUNK_SIZE_SETTING.getKey(), randomIntBetween(100, 1000), ByteSizeUnit.BYTES))); } } diff --git a/plugins/repository-azure/src/test/java/org/elasticsearch/cloud/azure/storage/AzureStorageServiceMock.java b/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureStorageServiceMock.java similarity index 89% rename from plugins/repository-azure/src/test/java/org/elasticsearch/cloud/azure/storage/AzureStorageServiceMock.java rename to plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureStorageServiceMock.java index ba2011c276e0b..6dfe2db628013 100644 --- a/plugins/repository-azure/src/test/java/org/elasticsearch/cloud/azure/storage/AzureStorageServiceMock.java +++ b/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureStorageServiceMock.java @@ -17,7 +17,7 @@ * under the License. */ -package org.elasticsearch.cloud.azure.storage; +package org.elasticsearch.repositories.azure; import com.microsoft.azure.storage.LocationMode; import com.microsoft.azure.storage.StorageException; @@ -25,13 +25,13 @@ import org.elasticsearch.common.blobstore.support.PlainBlobMetaData; import org.elasticsearch.common.collect.MapBuilder; import org.elasticsearch.common.component.AbstractComponent; +import org.elasticsearch.common.io.Streams; import org.elasticsearch.common.settings.Settings; import java.io.ByteArrayInputStream; import java.io.ByteArrayOutputStream; import java.io.IOException; import java.io.InputStream; -import java.io.OutputStream; import java.net.URISyntaxException; import java.nio.file.NoSuchFileException; import java.util.Locale; @@ -84,13 +84,6 @@ public InputStream getInputStream(String account, LocationMode mode, String cont return new ByteArrayInputStream(blobs.get(blob).toByteArray()); } - @Override - public OutputStream getOutputStream(String account, LocationMode mode, String container, String blob) throws URISyntaxException, StorageException { - ByteArrayOutputStream outputStream = new ByteArrayOutputStream(); - blobs.put(blob, outputStream); - return outputStream; - } - @Override public Map listBlobsByPrefix(String account, LocationMode mode, String container, String keyPath, String prefix) { MapBuilder blobsBuilder = MapBuilder.newMapBuilder(); @@ -110,7 +103,8 @@ public Map listBlobsByPrefix(String account, LocationMode } @Override - public void moveBlob(String account, LocationMode mode, String container, String sourceBlob, String targetBlob) throws URISyntaxException, StorageException { + public void moveBlob(String account, LocationMode mode, String container, String sourceBlob, String targetBlob) + throws URISyntaxException, StorageException { for (String blobName : blobs.keySet()) { if (endsWithIgnoreCase(blobName, sourceBlob)) { ByteArrayOutputStream outputStream = blobs.get(blobName); @@ -120,6 +114,17 @@ public void moveBlob(String 
account, LocationMode mode, String container, String } } + @Override + public void writeBlob(String account, LocationMode mode, String container, String blobName, InputStream inputStream, long blobSize) + throws URISyntaxException, StorageException { + try (ByteArrayOutputStream outputStream = new ByteArrayOutputStream()) { + blobs.put(blobName, outputStream); + Streams.copy(inputStream, outputStream); + } catch (IOException e) { + throw new StorageException("MOCK", "Error while writing mock stream", e); + } + } + /** * Test if the given String starts with the specified prefix, * ignoring upper/lower case. diff --git a/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureStorageServiceTests.java b/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureStorageServiceTests.java new file mode 100644 index 0000000000000..72cd015f14847 --- /dev/null +++ b/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureStorageServiceTests.java @@ -0,0 +1,294 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.repositories.azure; + +import com.microsoft.azure.storage.LocationMode; +import com.microsoft.azure.storage.RetryExponentialRetry; +import com.microsoft.azure.storage.blob.CloudBlobClient; +import com.microsoft.azure.storage.core.Base64; + +import org.elasticsearch.common.settings.MockSecureSettings; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.settings.SettingsException; +import org.elasticsearch.test.ESTestCase; + +import java.net.InetAddress; +import java.net.InetSocketAddress; +import java.net.Proxy; +import java.net.URI; +import java.net.URISyntaxException; +import java.net.UnknownHostException; +import java.nio.charset.StandardCharsets; +import java.util.Map; + +import static org.elasticsearch.repositories.azure.AzureStorageServiceImpl.blobNameFromUri; +import static org.hamcrest.Matchers.containsInAnyOrder; +import static org.hamcrest.Matchers.equalTo; +import static org.hamcrest.Matchers.instanceOf; +import static org.hamcrest.Matchers.is; +import static org.hamcrest.Matchers.isEmptyString; +import static org.hamcrest.Matchers.notNullValue; +import static org.hamcrest.Matchers.nullValue; + +public class AzureStorageServiceTests extends ESTestCase { + + private MockSecureSettings buildSecureSettings() { + MockSecureSettings secureSettings = new MockSecureSettings(); + secureSettings.setString("azure.client.azure1.account", "myaccount1"); + secureSettings.setString("azure.client.azure1.key", "mykey1"); + secureSettings.setString("azure.client.azure2.account", "myaccount2"); + secureSettings.setString("azure.client.azure2.key", "mykey2"); + secureSettings.setString("azure.client.azure3.account", "myaccount3"); + secureSettings.setString("azure.client.azure3.key", "mykey3"); + return secureSettings; + } + private Settings buildSettings() { + Settings settings = Settings.builder() + .setSecureSettings(buildSecureSettings()) + .build(); + return settings; + } + + public void testReadSecuredSettings() { + MockSecureSettings secureSettings = new MockSecureSettings(); + secureSettings.setString("azure.client.azure1.account", "myaccount1"); + secureSettings.setString("azure.client.azure1.key", "mykey1"); + secureSettings.setString("azure.client.azure2.account", "myaccount2"); + secureSettings.setString("azure.client.azure2.key", "mykey2"); + secureSettings.setString("azure.client.azure3.account", "myaccount3"); + secureSettings.setString("azure.client.azure3.key", "mykey3"); + Settings settings = Settings.builder().setSecureSettings(secureSettings) + .put("azure.client.azure3.endpoint_suffix", "my_endpoint_suffix").build(); + + Map loadedSettings = AzureStorageSettings.load(settings); + assertThat(loadedSettings.keySet(), containsInAnyOrder("azure1","azure2","azure3","default")); + + assertThat(loadedSettings.get("azure1").getEndpointSuffix(), isEmptyString()); + assertThat(loadedSettings.get("azure2").getEndpointSuffix(), isEmptyString()); + assertThat(loadedSettings.get("azure3").getEndpointSuffix(), equalTo("my_endpoint_suffix")); + } + + public void testCreateClientWithEndpointSuffix() { + MockSecureSettings secureSettings = new MockSecureSettings(); + secureSettings.setString("azure.client.azure1.account", "myaccount1"); + secureSettings.setString("azure.client.azure1.key", Base64.encode("mykey1".getBytes(StandardCharsets.UTF_8))); + secureSettings.setString("azure.client.azure2.account", "myaccount2"); + secureSettings.setString("azure.client.azure2.key", Base64.encode("mykey2".getBytes(StandardCharsets.UTF_8))); 
+ Settings settings = Settings.builder().setSecureSettings(secureSettings) + .put("azure.client.azure1.endpoint_suffix", "my_endpoint_suffix").build(); + AzureStorageServiceImpl azureStorageService = new AzureStorageServiceImpl(settings, AzureStorageSettings.load(settings)); + CloudBlobClient client1 = azureStorageService.getSelectedClient("azure1", LocationMode.PRIMARY_ONLY); + assertThat(client1.getEndpoint().toString(), equalTo("https://myaccount1.blob.my_endpoint_suffix")); + + CloudBlobClient client2 = azureStorageService.getSelectedClient("azure2", LocationMode.PRIMARY_ONLY); + assertThat(client2.getEndpoint().toString(), equalTo("https://myaccount2.blob.core.windows.net")); + } + + public void testGetSelectedClientWithNoPrimaryAndSecondary() { + try { + new AzureStorageServiceMockForSettings(Settings.EMPTY); + fail("we should have raised an IllegalArgumentException"); + } catch (IllegalArgumentException e) { + assertThat(e.getMessage(), is("If you want to use an azure repository, you need to define a client configuration.")); + } + } + + public void testGetSelectedClientNonExisting() { + AzureStorageServiceImpl azureStorageService = new AzureStorageServiceMockForSettings(buildSettings()); + IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> { + azureStorageService.getSelectedClient("azure4", LocationMode.PRIMARY_ONLY); + }); + assertThat(e.getMessage(), is("Can not find named azure client [azure4]. Check your settings.")); + } + + public void testGetSelectedClientDefaultTimeout() { + Settings timeoutSettings = Settings.builder() + .setSecureSettings(buildSecureSettings()) + .put("azure.client.azure3.timeout", "30s") + .build(); + AzureStorageServiceImpl azureStorageService = new AzureStorageServiceMockForSettings(timeoutSettings); + CloudBlobClient client1 = azureStorageService.getSelectedClient("azure1", LocationMode.PRIMARY_ONLY); + assertThat(client1.getDefaultRequestOptions().getTimeoutIntervalInMs(), nullValue()); + CloudBlobClient client3 = azureStorageService.getSelectedClient("azure3", LocationMode.PRIMARY_ONLY); + assertThat(client3.getDefaultRequestOptions().getTimeoutIntervalInMs(), is(30 * 1000)); + } + + public void testGetSelectedClientNoTimeout() { + AzureStorageServiceImpl azureStorageService = new AzureStorageServiceMockForSettings(buildSettings()); + CloudBlobClient client1 = azureStorageService.getSelectedClient("azure1", LocationMode.PRIMARY_ONLY); + assertThat(client1.getDefaultRequestOptions().getTimeoutIntervalInMs(), is(nullValue())); + } + + public void testGetSelectedClientBackoffPolicy() { + AzureStorageServiceImpl azureStorageService = new AzureStorageServiceMockForSettings(buildSettings()); + CloudBlobClient client1 = azureStorageService.getSelectedClient("azure1", LocationMode.PRIMARY_ONLY); + assertThat(client1.getDefaultRequestOptions().getRetryPolicyFactory(), is(notNullValue())); + assertThat(client1.getDefaultRequestOptions().getRetryPolicyFactory(), instanceOf(RetryExponentialRetry.class)); + } + + public void testGetSelectedClientBackoffPolicyNbRetries() { + Settings timeoutSettings = Settings.builder() + .setSecureSettings(buildSecureSettings()) + .put("azure.client.azure1.max_retries", 7) + .build(); + + AzureStorageServiceImpl azureStorageService = new AzureStorageServiceMockForSettings(timeoutSettings); + CloudBlobClient client1 = azureStorageService.getSelectedClient("azure1", LocationMode.PRIMARY_ONLY); + assertThat(client1.getDefaultRequestOptions().getRetryPolicyFactory(), is(notNullValue())); + 
assertThat(client1.getDefaultRequestOptions().getRetryPolicyFactory(), instanceOf(RetryExponentialRetry.class)); + } + + public void testNoProxy() { + Settings settings = Settings.builder() + .setSecureSettings(buildSecureSettings()) + .build(); + AzureStorageServiceMockForSettings mock = new AzureStorageServiceMockForSettings(settings); + assertThat(mock.storageSettings.get("azure1").getProxy(), nullValue()); + assertThat(mock.storageSettings.get("azure2").getProxy(), nullValue()); + assertThat(mock.storageSettings.get("azure3").getProxy(), nullValue()); + } + + public void testProxyHttp() throws UnknownHostException { + Settings settings = Settings.builder() + .setSecureSettings(buildSecureSettings()) + .put("azure.client.azure1.proxy.host", "127.0.0.1") + .put("azure.client.azure1.proxy.port", 8080) + .put("azure.client.azure1.proxy.type", "http") + .build(); + AzureStorageServiceMockForSettings mock = new AzureStorageServiceMockForSettings(settings); + Proxy azure1Proxy = mock.storageSettings.get("azure1").getProxy(); + + assertThat(azure1Proxy, notNullValue()); + assertThat(azure1Proxy.type(), is(Proxy.Type.HTTP)); + assertThat(azure1Proxy.address(), is(new InetSocketAddress(InetAddress.getByName("127.0.0.1"), 8080))); + assertThat(mock.storageSettings.get("azure2").getProxy(), nullValue()); + assertThat(mock.storageSettings.get("azure3").getProxy(), nullValue()); + } + + public void testMultipleProxies() throws UnknownHostException { + Settings settings = Settings.builder() + .setSecureSettings(buildSecureSettings()) + .put("azure.client.azure1.proxy.host", "127.0.0.1") + .put("azure.client.azure1.proxy.port", 8080) + .put("azure.client.azure1.proxy.type", "http") + .put("azure.client.azure2.proxy.host", "127.0.0.1") + .put("azure.client.azure2.proxy.port", 8081) + .put("azure.client.azure2.proxy.type", "http") + .build(); + AzureStorageServiceMockForSettings mock = new AzureStorageServiceMockForSettings(settings); + Proxy azure1Proxy = mock.storageSettings.get("azure1").getProxy(); + assertThat(azure1Proxy, notNullValue()); + assertThat(azure1Proxy.type(), is(Proxy.Type.HTTP)); + assertThat(azure1Proxy.address(), is(new InetSocketAddress(InetAddress.getByName("127.0.0.1"), 8080))); + Proxy azure2Proxy = mock.storageSettings.get("azure2").getProxy(); + assertThat(azure2Proxy, notNullValue()); + assertThat(azure2Proxy.type(), is(Proxy.Type.HTTP)); + assertThat(azure2Proxy.address(), is(new InetSocketAddress(InetAddress.getByName("127.0.0.1"), 8081))); + assertThat(mock.storageSettings.get("azure3").getProxy(), nullValue()); + } + + public void testProxySocks() throws UnknownHostException { + Settings settings = Settings.builder() + .setSecureSettings(buildSecureSettings()) + .put("azure.client.azure1.proxy.host", "127.0.0.1") + .put("azure.client.azure1.proxy.port", 8080) + .put("azure.client.azure1.proxy.type", "socks") + .build(); + AzureStorageServiceMockForSettings mock = new AzureStorageServiceMockForSettings(settings); + Proxy azure1Proxy = mock.storageSettings.get("azure1").getProxy(); + assertThat(azure1Proxy, notNullValue()); + assertThat(azure1Proxy.type(), is(Proxy.Type.SOCKS)); + assertThat(azure1Proxy.address(), is(new InetSocketAddress(InetAddress.getByName("127.0.0.1"), 8080))); + assertThat(mock.storageSettings.get("azure2").getProxy(), nullValue()); + assertThat(mock.storageSettings.get("azure3").getProxy(), nullValue()); + } + + public void testProxyNoHost() { + Settings settings = Settings.builder() + .setSecureSettings(buildSecureSettings()) + 
.put("azure.client.azure1.proxy.port", 8080) + .put("azure.client.azure1.proxy.type", randomFrom("socks", "http")) + .build(); + + SettingsException e = expectThrows(SettingsException.class, () -> new AzureStorageServiceMockForSettings(settings)); + assertEquals("Azure Proxy type has been set but proxy host or port is not defined.", e.getMessage()); + } + + public void testProxyNoPort() { + Settings settings = Settings.builder() + .setSecureSettings(buildSecureSettings()) + .put("azure.client.azure1.proxy.host", "127.0.0.1") + .put("azure.client.azure1.proxy.type", randomFrom("socks", "http")) + .build(); + + SettingsException e = expectThrows(SettingsException.class, () -> new AzureStorageServiceMockForSettings(settings)); + assertEquals("Azure Proxy type has been set but proxy host or port is not defined.", e.getMessage()); + } + + public void testProxyNoType() { + Settings settings = Settings.builder() + .setSecureSettings(buildSecureSettings()) + .put("azure.client.azure1.proxy.host", "127.0.0.1") + .put("azure.client.azure1.proxy.port", 8080) + .build(); + + SettingsException e = expectThrows(SettingsException.class, () -> new AzureStorageServiceMockForSettings(settings)); + assertEquals("Azure Proxy port or host have been set but proxy type is not defined.", e.getMessage()); + } + + public void testProxyWrongHost() { + Settings settings = Settings.builder() + .setSecureSettings(buildSecureSettings()) + .put("azure.client.azure1.proxy.type", randomFrom("socks", "http")) + .put("azure.client.azure1.proxy.host", "thisisnotavalidhostorwehavebeensuperunlucky") + .put("azure.client.azure1.proxy.port", 8080) + .build(); + + SettingsException e = expectThrows(SettingsException.class, () -> new AzureStorageServiceMockForSettings(settings)); + assertEquals("Azure proxy host is unknown.", e.getMessage()); + } + + /** + * This internal class just overload createClient method which is called by AzureStorageServiceImpl.doStart() + */ + class AzureStorageServiceMockForSettings extends AzureStorageServiceImpl { + AzureStorageServiceMockForSettings(Settings settings) { + super(settings, AzureStorageSettings.load(settings)); + } + + // We fake the client here + @Override + void createClient(AzureStorageSettings azureStorageSettings) { + this.clients.put(azureStorageSettings.getAccount(), + new CloudBlobClient(URI.create("https://" + azureStorageSettings.getAccount()))); + } + } + + public void testBlobNameFromUri() throws URISyntaxException { + String name = blobNameFromUri(new URI("https://myservice.azure.net/container/path/to/myfile")); + assertThat(name, is("path/to/myfile")); + name = blobNameFromUri(new URI("http://myservice.azure.net/container/path/to/myfile")); + assertThat(name, is("path/to/myfile")); + name = blobNameFromUri(new URI("http://127.0.0.1/container/path/to/myfile")); + assertThat(name, is("path/to/myfile")); + name = blobNameFromUri(new URI("https://127.0.0.1/container/path/to/myfile")); + assertThat(name, is("path/to/myfile")); + } +} diff --git a/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureTestUtils.java b/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureTestUtils.java new file mode 100644 index 0000000000000..52ff8a7faa49a --- /dev/null +++ b/plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureTestUtils.java @@ -0,0 +1,46 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.repositories.azure; + +import org.elasticsearch.common.Strings; +import org.elasticsearch.common.settings.MockSecureSettings; +import org.elasticsearch.common.settings.SecureSettings; + +public class AzureTestUtils { + /** + * Mock secure settings from sysprops when running integration tests with ThirdParty annotation. + * Start the tests with {@code -Dtests.azure.account=AzureStorageAccount and -Dtests.azure.key=AzureStorageKey} + * @return Mock Settings from sysprops + */ + public static SecureSettings generateMockSecureSettings() { + MockSecureSettings secureSettings = new MockSecureSettings(); + + if (Strings.isEmpty(System.getProperty("tests.azure.account")) || + Strings.isEmpty(System.getProperty("tests.azure.key"))) { + throw new IllegalStateException("to run integration tests, you need to set -Dtests.thirdparty=true and " + + "-Dtests.azure.account=azure-account -Dtests.azure.key=azure-key"); + } + + secureSettings.setString("azure.client.default.account", System.getProperty("tests.azure.account")); + secureSettings.setString("azure.client.default.key", System.getProperty("tests.azure.key")); + + return secureSettings; + } +} diff --git a/plugins/repository-gcs/src/test/java/org/elasticsearch/repositories/gcs/GoogleCloudStorageServiceTests.java b/plugins/repository-gcs/src/test/java/org/elasticsearch/repositories/gcs/GoogleCloudStorageServiceTests.java index 5353f1c28e649..a12cd4fdb5c94 100644 --- a/plugins/repository-gcs/src/test/java/org/elasticsearch/repositories/gcs/GoogleCloudStorageServiceTests.java +++ b/plugins/repository-gcs/src/test/java/org/elasticsearch/repositories/gcs/GoogleCloudStorageServiceTests.java @@ -21,19 +21,16 @@ import java.io.IOException; import java.io.InputStream; -import java.nio.file.Files; -import java.nio.file.Path; import java.util.Collections; import java.util.Map; import com.google.api.client.googleapis.auth.oauth2.GoogleCredential; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; +import org.elasticsearch.env.TestEnvironment; import org.elasticsearch.repositories.gcs.GoogleCloudStorageService.InternalGoogleCloudStorageService; import org.elasticsearch.test.ESTestCase; -import static org.hamcrest.Matchers.containsString; - public class GoogleCloudStorageServiceTests extends ESTestCase { private InputStream getDummyCredentialStream() throws IOException { @@ -41,7 +38,7 @@ private InputStream getDummyCredentialStream() throws IOException { } public void testDefaultCredential() throws Exception { - Environment env = new Environment(Settings.builder().put("path.home", createTempDir()).build()); + Environment env = TestEnvironment.newEnvironment(Settings.builder().put("path.home", createTempDir()).build()); GoogleCredential cred = 
GoogleCredential.fromStream(getDummyCredentialStream()); InternalGoogleCloudStorageService service = new InternalGoogleCloudStorageService(env, Collections.emptyMap()) { @Override @@ -55,7 +52,7 @@ GoogleCredential getDefaultCredential() throws IOException { public void testClientCredential() throws Exception { GoogleCredential cred = GoogleCredential.fromStream(getDummyCredentialStream()); Map credentials = Collections.singletonMap("clientname", cred); - Environment env = new Environment(Settings.builder().put("path.home", createTempDir()).build()); + Environment env = TestEnvironment.newEnvironment(Settings.builder().put("path.home", createTempDir()).build()); InternalGoogleCloudStorageService service = new InternalGoogleCloudStorageService(env, credentials); assertSame(cred, service.getCredential("clientname")); } diff --git a/plugins/repository-hdfs/build.gradle b/plugins/repository-hdfs/build.gradle index 3a93146c0865b..6426f588352b7 100644 --- a/plugins/repository-hdfs/build.gradle +++ b/plugins/repository-hdfs/build.gradle @@ -57,6 +57,7 @@ dependencies { compile 'commons-lang:commons-lang:2.6' compile 'javax.servlet:servlet-api:2.5' compile "org.slf4j:slf4j-api:${versions.slf4j}" + compile "org.apache.logging.log4j:log4j-slf4j-impl:${versions.log4j}" hdfsFixture project(':test:fixtures:hdfs-fixture') } @@ -105,7 +106,7 @@ List principals = [ "elasticsearch", "hdfs/hdfs.build.elastic.co" ] String realm = "BUILD.ELASTIC.CO" for (String principal : principals) { - Task create = project.tasks.create("addPrincipal#${principal}", org.elasticsearch.gradle.vagrant.VagrantCommandTask) { + Task create = project.tasks.create("addPrincipal#${principal}".replace('/', '_'), org.elasticsearch.gradle.vagrant.VagrantCommandTask) { command 'ssh' args '--command', "sudo bash /vagrant/src/main/resources/provision/addprinc.sh $principal" boxName box @@ -470,9 +471,8 @@ thirdPartyAudit.excludes = [ // internal java api: sun.misc.SignalHandler 'org.apache.hadoop.util.SignalLogger$Handler', - // optional dependencies of slf4j-api - 'org.slf4j.impl.StaticMDCBinder', - 'org.slf4j.impl.StaticMarkerBinder', + // we are not pulling in slf4j-ext, this is okay, Log4j will fallback gracefully + 'org.slf4j.ext.EventData', 'org.apache.log4j.AppenderSkeleton', 'org.apache.log4j.AsyncAppender', @@ -493,12 +493,6 @@ thirdPartyAudit.excludes = [ 'com.squareup.okhttp.ResponseBody' ] -// Gradle 2.13 bundles org.slf4j.impl.StaticLoggerBinder in its core.jar which leaks into the forbidden APIs ant task -// Gradle 2.14+ does not bundle this class anymore so we need to properly exclude it here. 
-if (GradleVersion.current() > GradleVersion.version("2.13")) { - thirdPartyAudit.excludes += ['org.slf4j.impl.StaticLoggerBinder'] -} - if (JavaVersion.current() > JavaVersion.VERSION_1_8) { thirdPartyAudit.excludes += ['javax.xml.bind.annotation.adapters.HexBinaryAdapter'] } diff --git a/plugins/repository-hdfs/licenses/log4j-slf4j-impl-2.9.1.jar.sha1 b/plugins/repository-hdfs/licenses/log4j-slf4j-impl-2.9.1.jar.sha1 new file mode 100644 index 0000000000000..66119e87e211f --- /dev/null +++ b/plugins/repository-hdfs/licenses/log4j-slf4j-impl-2.9.1.jar.sha1 @@ -0,0 +1 @@ +0a97a849b18b3798c4af1a2ca5b10c66cef17e3a \ No newline at end of file diff --git a/plugins/repository-hdfs/licenses/log4j-slf4j-impl-LICENSE.txt b/plugins/repository-hdfs/licenses/log4j-slf4j-impl-LICENSE.txt new file mode 100644 index 0000000000000..6279e5206de13 --- /dev/null +++ b/plugins/repository-hdfs/licenses/log4j-slf4j-impl-LICENSE.txt @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. 
The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright 1999-2005 The Apache Software Foundation + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/plugins/repository-hdfs/licenses/log4j-slf4j-impl-NOTICE.txt b/plugins/repository-hdfs/licenses/log4j-slf4j-impl-NOTICE.txt new file mode 100644 index 0000000000000..0375732360047 --- /dev/null +++ b/plugins/repository-hdfs/licenses/log4j-slf4j-impl-NOTICE.txt @@ -0,0 +1,5 @@ +Apache log4j +Copyright 2007 The Apache Software Foundation + +This product includes software developed at +The Apache Software Foundation (http://www.apache.org/). \ No newline at end of file diff --git a/plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsBlobContainer.java b/plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsBlobContainer.java index 4649cf858d254..f160f4c4ead8e 100644 --- a/plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsBlobContainer.java +++ b/plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsBlobContainer.java @@ -23,6 +23,7 @@ import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.Options.CreateOpts; import org.apache.hadoop.fs.Path; +import org.elasticsearch.SpecialPermission; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.blobstore.BlobMetaData; import org.elasticsearch.common.blobstore.BlobPath; @@ -45,12 +46,14 @@ final class HdfsBlobContainer extends AbstractBlobContainer { private final HdfsBlobStore store; + private final HdfsSecurityContext securityContext; private final Path path; private final int bufferSize; - HdfsBlobContainer(BlobPath blobPath, HdfsBlobStore store, Path path, int bufferSize) { + HdfsBlobContainer(BlobPath blobPath, HdfsBlobStore store, Path path, int bufferSize, HdfsSecurityContext hdfsSecurityContext) { super(blobPath); this.store = store; + this.securityContext = hdfsSecurityContext; this.path = path; this.bufferSize = bufferSize; } @@ -90,7 +93,9 @@ public InputStream readBlob(String blobName) throws IOException { // FSDataInputStream can open connections on read() or skip() so we wrap in // HDFSPrivilegedInputSteam which will ensure that underlying methods will // be called with the proper privileges. 
- return store.execute(fileContext -> new HDFSPrivilegedInputSteam(fileContext.open(new Path(path, blobName), bufferSize))); + return store.execute(fileContext -> + new HDFSPrivilegedInputSteam(fileContext.open(new Path(path, blobName), bufferSize), securityContext) + ); } @Override @@ -144,8 +149,11 @@ public Map listBlobs() throws IOException { */ private static class HDFSPrivilegedInputSteam extends FilterInputStream { - HDFSPrivilegedInputSteam(InputStream in) { + private final HdfsSecurityContext securityContext; + + HDFSPrivilegedInputSteam(InputStream in, HdfsSecurityContext hdfsSecurityContext) { super(in); + this.securityContext = hdfsSecurityContext; } public int read() throws IOException { @@ -175,9 +183,10 @@ public synchronized void reset() throws IOException { }); } - private static T doPrivilegedOrThrow(PrivilegedExceptionAction action) throws IOException { + private T doPrivilegedOrThrow(PrivilegedExceptionAction action) throws IOException { + SpecialPermission.check(); try { - return AccessController.doPrivileged(action); + return AccessController.doPrivileged(action, null, securityContext.getRestrictedExecutionPermissions()); } catch (PrivilegedActionException e) { throw (IOException) e.getCause(); } diff --git a/plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsBlobStore.java b/plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsBlobStore.java index 8d88b7fd07422..fc6922d81f441 100644 --- a/plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsBlobStore.java +++ b/plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsBlobStore.java @@ -39,17 +39,21 @@ final class HdfsBlobStore implements BlobStore { private final FileContext fileContext; private final HdfsSecurityContext securityContext; private final int bufferSize; + private final boolean readOnly; private volatile boolean closed; - HdfsBlobStore(FileContext fileContext, String path, int bufferSize) throws IOException { + HdfsBlobStore(FileContext fileContext, String path, int bufferSize, boolean readOnly) throws IOException { this.fileContext = fileContext; this.securityContext = new HdfsSecurityContext(fileContext.getUgi()); this.bufferSize = bufferSize; this.root = execute(fileContext1 -> fileContext1.makeQualified(new Path(path))); - try { - mkdirs(root); - } catch (FileAlreadyExistsException ok) { - // behaves like Files.createDirectories + this.readOnly = readOnly; + if (!readOnly) { + try { + mkdirs(root); + } catch (FileAlreadyExistsException ok) { + // behaves like Files.createDirectories + } } } @@ -75,17 +79,19 @@ public String toString() { @Override public BlobContainer blobContainer(BlobPath path) { - return new HdfsBlobContainer(path, this, buildHdfsPath(path), bufferSize); + return new HdfsBlobContainer(path, this, buildHdfsPath(path), bufferSize, securityContext); } private Path buildHdfsPath(BlobPath blobPath) { final Path path = translateToHdfsPath(blobPath); - try { - mkdirs(path); - } catch (FileAlreadyExistsException ok) { - // behaves like Files.createDirectories - } catch (IOException ex) { - throw new ElasticsearchException("failed to create blob container", ex); + if (!readOnly) { + try { + mkdirs(path); + } catch (FileAlreadyExistsException ok) { + // behaves like Files.createDirectories + } catch (IOException ex) { + throw new ElasticsearchException("failed to create blob container", ex); + } } return path; } diff --git 
a/plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsRepository.java b/plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsRepository.java index 16ed9d06a5e8a..1bf2e47e9650b 100644 --- a/plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsRepository.java +++ b/plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsRepository.java @@ -106,7 +106,7 @@ protected void doStart() { SpecialPermission.check(); FileContext fileContext = AccessController.doPrivileged((PrivilegedAction) () -> createContext(uri, getMetadata().settings())); - blobStore = new HdfsBlobStore(fileContext, pathSetting, bufferSize); + blobStore = new HdfsBlobStore(fileContext, pathSetting, bufferSize, isReadOnly()); logger.debug("Using file-system [{}] for URI [{}], path [{}]", fileContext.getDefaultFileSystem(), fileContext.getDefaultFileSystem().getUri(), pathSetting); } catch (IOException e) { throw new UncheckedIOException(String.format(Locale.ROOT, "Cannot create HDFS repository for uri [%s]", uri), e); @@ -120,9 +120,9 @@ private FileContext createContext(URI uri, Settings repositorySettings) { hadoopConfiguration.setClassLoader(HdfsRepository.class.getClassLoader()); hadoopConfiguration.reloadConfiguration(); - Map map = repositorySettings.getByPrefix("conf.").getAsMap(); - for (Entry entry : map.entrySet()) { - hadoopConfiguration.set(entry.getKey(), entry.getValue()); + final Settings confSettings = repositorySettings.getByPrefix("conf."); + for (String key : confSettings.keySet()) { + hadoopConfiguration.set(key, confSettings.get(key)); } // Create a hadoop user @@ -132,7 +132,7 @@ private FileContext createContext(URI uri, Settings repositorySettings) { hadoopConfiguration.setBoolean("fs.hdfs.impl.disable.cache", true); // Create the filecontext with our user information - // This will correctly configure the filecontext to have our UGI as it's internal user. + // This will correctly configure the filecontext to have our UGI as its internal user. 
return ugi.doAs((PrivilegedAction) () -> { try { AbstractFileSystem fs = AbstractFileSystem.get(uri, hadoopConfiguration); diff --git a/plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsSecurityContext.java b/plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsSecurityContext.java index 3cd1a5a40fdc0..bd16d87d87923 100644 --- a/plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsSecurityContext.java +++ b/plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsSecurityContext.java @@ -56,7 +56,9 @@ class HdfsSecurityContext { // 1) hadoop dynamic proxy is messy with access rules new ReflectPermission("suppressAccessChecks"), // 2) allow hadoop to add credentials to our Subject - new AuthPermission("modifyPrivateCredentials") + new AuthPermission("modifyPrivateCredentials"), + // 3) RPC Engine requires this for re-establishing pooled connections over the lifetime of the client + new PrivateCredentialPermission("org.apache.hadoop.security.Credentials * \"*\"", "read") }; // If Security is enabled, we need all the following elevated permissions: diff --git a/plugins/repository-hdfs/src/test/java/org/elasticsearch/repositories/hdfs/HdfsBlobStoreContainerTests.java b/plugins/repository-hdfs/src/test/java/org/elasticsearch/repositories/hdfs/HdfsBlobStoreContainerTests.java index 195dea9810224..a5d68331db78e 100644 --- a/plugins/repository-hdfs/src/test/java/org/elasticsearch/repositories/hdfs/HdfsBlobStoreContainerTests.java +++ b/plugins/repository-hdfs/src/test/java/org/elasticsearch/repositories/hdfs/HdfsBlobStoreContainerTests.java @@ -19,6 +19,20 @@ package org.elasticsearch.repositories.hdfs; +import com.carrotsearch.randomizedtesting.annotations.ThreadLeakFilters; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.AbstractFileSystem; +import org.apache.hadoop.fs.FileContext; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.fs.UnsupportedFileSystemException; +import org.elasticsearch.common.SuppressForbidden; +import org.elasticsearch.common.blobstore.BlobContainer; +import org.elasticsearch.common.blobstore.BlobPath; +import org.elasticsearch.common.blobstore.BlobStore; +import org.elasticsearch.common.bytes.BytesArray; +import org.elasticsearch.repositories.ESBlobStoreContainerTestCase; + +import javax.security.auth.Subject; import java.io.IOException; import java.lang.reflect.Constructor; import java.lang.reflect.InvocationTargetException; @@ -29,22 +43,20 @@ import java.security.PrivilegedActionException; import java.security.PrivilegedExceptionAction; import java.util.Collections; -import javax.security.auth.Subject; -import com.carrotsearch.randomizedtesting.annotations.ThreadLeakFilters; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.AbstractFileSystem; -import org.apache.hadoop.fs.FileContext; -import org.apache.hadoop.fs.UnsupportedFileSystemException; -import org.elasticsearch.common.SuppressForbidden; -import org.elasticsearch.common.blobstore.BlobStore; -import org.elasticsearch.repositories.ESBlobStoreContainerTestCase; +import static org.elasticsearch.repositories.ESBlobStoreTestCase.randomBytes; +import static org.elasticsearch.repositories.ESBlobStoreTestCase.readBlobFully; + @ThreadLeakFilters(filters = {HdfsClientThreadLeakFilter.class}) public class HdfsBlobStoreContainerTests extends ESBlobStoreContainerTestCase { @Override protected BlobStore newBlobStore() throws IOException { + return new 
HdfsBlobStore(createTestContext(), "temp", 1024, false); + } + + private FileContext createTestContext() { FileContext fileContext; try { fileContext = AccessController.doPrivileged((PrivilegedExceptionAction) @@ -52,7 +64,7 @@ protected BlobStore newBlobStore() throws IOException { } catch (PrivilegedActionException e) { throw new RuntimeException(e.getCause()); } - return new HdfsBlobStore(fileContext, "temp", 1024); + return fileContext; } @SuppressForbidden(reason = "lesser of two evils (the other being a bunch of JNI/classloader nightmares)") @@ -69,7 +81,7 @@ private FileContext createContext(URI uri) { Class clazz = Class.forName("org.apache.hadoop.security.User"); ctor = clazz.getConstructor(String.class); ctor.setAccessible(true); - } catch (ClassNotFoundException | NoSuchMethodException e) { + } catch (ClassNotFoundException | NoSuchMethodException e) { throw new RuntimeException(e); } @@ -98,4 +110,33 @@ private FileContext createContext(URI uri) { } }); } + + public void testReadOnly() throws Exception { + FileContext fileContext = createTestContext(); + // Constructor will not create dir if read only + HdfsBlobStore hdfsBlobStore = new HdfsBlobStore(fileContext, "dir", 1024, true); + FileContext.Util util = fileContext.util(); + Path root = fileContext.makeQualified(new Path("dir")); + assertFalse(util.exists(root)); + BlobPath blobPath = BlobPath.cleanPath().add("path"); + + // blobContainer() will not create path if read only + hdfsBlobStore.blobContainer(blobPath); + Path hdfsPath = root; + for (String p : blobPath) { + hdfsPath = new Path(hdfsPath, p); + } + assertFalse(util.exists(hdfsPath)); + + // if not read only, directory will be created + hdfsBlobStore = new HdfsBlobStore(fileContext, "dir", 1024, false); + assertTrue(util.exists(root)); + BlobContainer container = hdfsBlobStore.blobContainer(blobPath); + assertTrue(util.exists(hdfsPath)); + + byte[] data = randomBytes(randomIntBetween(10, scaledRandomIntBetween(1024, 1 << 16))); + writeBlob(container, "foo", new BytesArray(data)); + assertArrayEquals(readBlobFully(container, "foo", data.length), data); + assertTrue(container.blobExists("foo")); + } } diff --git a/plugins/repository-hdfs/src/test/resources/rest-api-spec/test/hdfs_repository/30_snapshot_readonly.yml b/plugins/repository-hdfs/src/test/resources/rest-api-spec/test/hdfs_repository/30_snapshot_readonly.yml new file mode 100644 index 0000000000000..c2a37964e70a7 --- /dev/null +++ b/plugins/repository-hdfs/src/test/resources/rest-api-spec/test/hdfs_repository/30_snapshot_readonly.yml @@ -0,0 +1,29 @@ +# Integration tests for HDFS Repository plugin +# +# Tests retrieving information about snapshot +# +--- +"Get a snapshot - readonly": + # Create repository + - do: + snapshot.create_repository: + repository: test_snapshot_repository_ro + body: + type: hdfs + settings: + uri: "hdfs://localhost:9999" + path: "/user/elasticsearch/existing/readonly-repository" + readonly: true + + # List snapshot info + - do: + snapshot.get: + repository: test_snapshot_repository_ro + snapshot: "_all" + + - length: { snapshots: 1 } + + # Remove our repository + - do: + snapshot.delete_repository: + repository: test_snapshot_repository_ro diff --git a/plugins/repository-hdfs/src/test/resources/rest-api-spec/test/secure_hdfs_repository/30_snapshot_readonly.yml b/plugins/repository-hdfs/src/test/resources/rest-api-spec/test/secure_hdfs_repository/30_snapshot_readonly.yml new file mode 100644 index 0000000000000..8c4c0347a156a --- /dev/null +++ 
b/plugins/repository-hdfs/src/test/resources/rest-api-spec/test/secure_hdfs_repository/30_snapshot_readonly.yml @@ -0,0 +1,31 @@ +# Integration tests for HDFS Repository plugin +# +# Tests retrieving information about snapshot +# +--- +"Get a snapshot - readonly": + # Create repository + - do: + snapshot.create_repository: + repository: test_snapshot_repository_ro + body: + type: hdfs + settings: + uri: "hdfs://localhost:9998" + path: "/user/elasticsearch/existing/readonly-repository" + security: + principal: "elasticsearch@BUILD.ELASTIC.CO" + readonly: true + + # List snapshot info + - do: + snapshot.get: + repository: test_snapshot_repository_ro + snapshot: "_all" + + - length: { snapshots: 1 } + + # Remove our repository + - do: + snapshot.delete_repository: + repository: test_snapshot_repository_ro diff --git a/plugins/store-smb/src/test/resources/rest-api-spec/test/store_smb/15_index_creation.yml b/plugins/store-smb/src/test/resources/rest-api-spec/test/store_smb/15_index_creation.yml index d036176e32083..53b036b6682b5 100644 --- a/plugins/store-smb/src/test/resources/rest-api-spec/test/store_smb/15_index_creation.yml +++ b/plugins/store-smb/src/test/resources/rest-api-spec/test/store_smb/15_index_creation.yml @@ -3,8 +3,9 @@ indices.create: index: smb-test body: - index: - store.type: smb_mmap_fs + settings: + index: + store.type: smb_mmap_fs - do: index: diff --git a/qa/evil-tests/src/test/java/org/elasticsearch/bootstrap/ESPolicyUnitTests.java b/qa/evil-tests/src/test/java/org/elasticsearch/bootstrap/ESPolicyUnitTests.java index ed71934c72ac4..f9931e9a182db 100644 --- a/qa/evil-tests/src/test/java/org/elasticsearch/bootstrap/ESPolicyUnitTests.java +++ b/qa/evil-tests/src/test/java/org/elasticsearch/bootstrap/ESPolicyUnitTests.java @@ -19,6 +19,7 @@ package org.elasticsearch.bootstrap; +import org.elasticsearch.common.SuppressForbidden; import org.elasticsearch.test.ESTestCase; import java.io.FilePermission; @@ -44,6 +45,7 @@ public class ESPolicyUnitTests extends ESTestCase { * even though ProtectionDomain's ctor javadocs might make you think * that the policy won't be consulted. */ + @SuppressForbidden(reason = "to create FilePermission object") public void testNullCodeSource() throws Exception { assumeTrue("test cannot run with security manager", System.getSecurityManager() == null); // create a policy with AllPermission @@ -61,6 +63,7 @@ public void testNullCodeSource() throws Exception { *

    * its unclear when/if this happens, see https://bugs.openjdk.java.net/browse/JDK-8129972 */ + @SuppressForbidden(reason = "to create FilePermission object") public void testNullLocation() throws Exception { assumeTrue("test cannot run with security manager", System.getSecurityManager() == null); PermissionCollection noPermissions = new Permissions(); diff --git a/qa/evil-tests/src/test/java/org/elasticsearch/bootstrap/EvilBootstrapChecksTests.java b/qa/evil-tests/src/test/java/org/elasticsearch/bootstrap/EvilBootstrapChecksTests.java index 8e346bf7d9cb9..0dc9ea0a170ba 100644 --- a/qa/evil-tests/src/test/java/org/elasticsearch/bootstrap/EvilBootstrapChecksTests.java +++ b/qa/evil-tests/src/test/java/org/elasticsearch/bootstrap/EvilBootstrapChecksTests.java @@ -21,6 +21,7 @@ import org.apache.logging.log4j.Logger; import org.elasticsearch.common.SuppressForbidden; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.node.NodeValidationException; import org.elasticsearch.test.ESTestCase; import org.hamcrest.Matcher; @@ -58,24 +59,13 @@ public void tearDown() throws Exception { public void testEnforceBootstrapChecks() throws NodeValidationException { setEsEnforceBootstrapChecks("true"); - final List checks = Collections.singletonList( - new BootstrapCheck() { - @Override - public boolean check() { - return true; - } + final List checks = Collections.singletonList(context -> BootstrapCheck.BootstrapCheckResult.failure("error")); - @Override - public String errorMessage() { - return "error"; - } - } - ); final Logger logger = mock(Logger.class); final NodeValidationException e = expectThrows( NodeValidationException.class, - () -> BootstrapChecks.check(false, checks, logger)); + () -> BootstrapChecks.check(new BootstrapContext(Settings.EMPTY, null), false, checks, logger)); final Matcher allOf = allOf(containsString("bootstrap checks failed"), containsString("error")); assertThat(e, hasToString(allOf)); @@ -87,7 +77,7 @@ public void testNonEnforcedBootstrapChecks() throws NodeValidationException { setEsEnforceBootstrapChecks(null); final Logger logger = mock(Logger.class); // nothing should happen - BootstrapChecks.check(false, emptyList(), logger); + BootstrapChecks.check(new BootstrapContext(Settings.EMPTY, null), false, emptyList(), logger); verifyNoMoreInteractions(logger); } @@ -97,7 +87,7 @@ public void testInvalidValue() { final boolean enforceLimits = randomBoolean(); final IllegalArgumentException e = expectThrows( IllegalArgumentException.class, - () -> BootstrapChecks.check(enforceLimits, emptyList(), "testInvalidValue")); + () -> BootstrapChecks.check(new BootstrapContext(Settings.EMPTY, null), enforceLimits, emptyList(), "testInvalidValue")); final Matcher matcher = containsString( "[es.enforce.bootstrap.checks] must be [true] but was [" + value + "]"); assertThat(e, hasToString(matcher)); diff --git a/qa/evil-tests/src/test/java/org/elasticsearch/bootstrap/EvilElasticsearchCliTests.java b/qa/evil-tests/src/test/java/org/elasticsearch/bootstrap/EvilElasticsearchCliTests.java index 02caeca808944..2f74a5180e195 100644 --- a/qa/evil-tests/src/test/java/org/elasticsearch/bootstrap/EvilElasticsearchCliTests.java +++ b/qa/evil-tests/src/test/java/org/elasticsearch/bootstrap/EvilElasticsearchCliTests.java @@ -23,6 +23,7 @@ import org.elasticsearch.cli.ExitCodes; import org.elasticsearch.common.SuppressForbidden; +import org.elasticsearch.common.settings.Settings; import static org.hamcrest.CoreMatchers.equalTo; import static org.hamcrest.Matchers.hasEntry; @@ 
-41,10 +42,10 @@ public void testPathHome() throws Exception { true, output -> {}, (foreground, pidFile, quiet, esSettings) -> { - Map settings = esSettings.settings().getAsMap(); + Settings settings = esSettings.settings(); assertThat(settings.size(), equalTo(2)); - assertThat(settings, hasEntry("path.home", value)); - assertThat(settings, hasKey("path.logs")); // added by env initialization + assertEquals(value, settings.get("path.home")); + assertTrue(settings.keySet().contains("path.logs")); // added by env initialization }); System.clearProperty("es.path.home"); @@ -54,10 +55,10 @@ public void testPathHome() throws Exception { true, output -> {}, (foreground, pidFile, quiet, esSettings) -> { - Map settings = esSettings.settings().getAsMap(); + Settings settings = esSettings.settings(); assertThat(settings.size(), equalTo(2)); - assertThat(settings, hasEntry("path.home", commandLineValue)); - assertThat(settings, hasKey("path.logs")); // added by env initialization + assertEquals(commandLineValue, settings.get("path.home")); + assertTrue(settings.keySet().contains("path.logs")); // added by env initialization }, "-Epath.home=" + commandLineValue); diff --git a/qa/evil-tests/src/test/java/org/elasticsearch/bootstrap/EvilSecurityTests.java b/qa/evil-tests/src/test/java/org/elasticsearch/bootstrap/EvilSecurityTests.java index c54cab44a019f..aa753f6d4509a 100644 --- a/qa/evil-tests/src/test/java/org/elasticsearch/bootstrap/EvilSecurityTests.java +++ b/qa/evil-tests/src/test/java/org/elasticsearch/bootstrap/EvilSecurityTests.java @@ -22,9 +22,9 @@ import org.apache.lucene.util.Constants; import org.elasticsearch.common.SuppressForbidden; import org.elasticsearch.common.io.PathUtils; -import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; +import org.elasticsearch.env.TestEnvironment; import org.elasticsearch.test.ESTestCase; import java.io.FilePermission; @@ -55,7 +55,7 @@ public void testGeneratedPermissions() throws Exception { Permissions permissions; try { System.setProperty("java.io.tmpdir", fakeTmpDir.toString()); - Environment environment = new Environment(settings); + Environment environment = TestEnvironment.newEnvironment(settings); permissions = Security.createPermissions(environment); } finally { System.setProperty("java.io.tmpdir", realTmpDir); @@ -73,6 +73,7 @@ public void testGeneratedPermissions() throws Exception { /** test generated permissions for all configured paths */ @SuppressWarnings("deprecation") // needs to check settings for deprecated path + @SuppressForbidden(reason = "to create FilePermission object") public void testEnvironmentPaths() throws Exception { Path path = createTempDir(); // make a fake ES home and ensure we only grant permissions to that. 
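The CLI test changes above move off the flat getAsMap() view and onto typed Settings accessors (get, keySet, size). The following minimal, self-contained sketch shows that access pattern on its own; the keys and values are illustrative only and are not taken from the change itself.

import org.elasticsearch.common.settings.Settings;

// Illustrative sketch of typed Settings access (hypothetical keys/values).
public final class SettingsAccessExample {
    public static void main(String[] args) {
        Settings settings = Settings.builder()
                .put("path.home", "/tmp/es-home")       // hypothetical value
                .put("path.logs", "/tmp/es-home/logs")  // hypothetical value
                .build();

        // typed lookups instead of asserting against a Map of strings
        assert "/tmp/es-home".equals(settings.get("path.home"));
        assert settings.keySet().contains("path.logs");
        assert settings.size() == 2;
    }
}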
@@ -80,7 +81,7 @@ public void testEnvironmentPaths() throws Exception { Settings.Builder settingsBuilder = Settings.builder(); settingsBuilder.put(Environment.PATH_HOME_SETTING.getKey(), esHome.resolve("home").toString()); - settingsBuilder.putArray(Environment.PATH_DATA_SETTING.getKey(), esHome.resolve("data1").toString(), + settingsBuilder.putList(Environment.PATH_DATA_SETTING.getKey(), esHome.resolve("data1").toString(), esHome.resolve("data2").toString()); settingsBuilder.put(Environment.PATH_SHARED_DATA_SETTING.getKey(), esHome.resolve("custom").toString()); settingsBuilder.put(Environment.PATH_LOGS_SETTING.getKey(), esHome.resolve("logs").toString()); @@ -153,10 +154,10 @@ public void testDuplicateDataPaths() throws IOException { Settings .builder() .put(Environment.PATH_HOME_SETTING.getKey(), home.toString()) - .putArray(Environment.PATH_DATA_SETTING.getKey(), data.toString(), duplicate.toString()) + .putList(Environment.PATH_DATA_SETTING.getKey(), data.toString(), duplicate.toString()) .build(); - final Environment environment = new Environment(settings); + final Environment environment = TestEnvironment.newEnvironment(settings); final IllegalStateException e = expectThrows(IllegalStateException.class, () -> Security.createPermissions(environment)); assertThat(e, hasToString(containsString("path [" + duplicate.toRealPath() + "] is duplicated by [" + duplicate + "]"))); } @@ -217,7 +218,7 @@ public void testSymlinkPermissions() throws IOException { assumeNoException("test cannot create symbolic links with security manager enabled", e); } Permissions permissions = new Permissions(); - Security.addPath(permissions, "testing", link, "read"); + FilePermissionUtils.addDirectoryPath(permissions, "testing", link, "read"); assertExactPermissions(new FilePermission(link.toString(), "read"), permissions); assertExactPermissions(new FilePermission(link.resolve("foo").toString(), "read"), permissions); assertExactPermissions(new FilePermission(target.toString(), "read"), permissions); @@ -227,6 +228,7 @@ public void testSymlinkPermissions() throws IOException { /** * checks exact file permissions, meaning those and only those for that path. 
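The "exact file permissions" check described above rests on plain JDK FilePermission implication: a grant on "<dir>/-" covers everything beneath the directory, while a grant on the bare path covers only that path. A minimal JDK-only sketch with illustrative paths, not tied to the helper in the change:

import java.io.FilePermission;
import java.security.Permissions;

// Sketch of FilePermission implication semantics (hypothetical paths).
public final class FilePermissionExample {
    public static void main(String[] args) {
        Permissions perms = new Permissions();
        perms.add(new FilePermission("/var/data", "read"));    // the directory itself
        perms.add(new FilePermission("/var/data/-", "read"));  // everything beneath it

        assert perms.implies(new FilePermission("/var/data", "read"));
        assert perms.implies(new FilePermission("/var/data/shard0/segments_1", "read"));
        assert !perms.implies(new FilePermission("/var/data", "write"));
        assert !perms.implies(new FilePermission("/var/other", "read"));
    }
}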
*/ + @SuppressForbidden(reason = "to create FilePermission object") static void assertExactPermissions(FilePermission expected, PermissionCollection actual) { String target = expected.getName(); // see javadocs Set permissionSet = asSet(expected.getActions().split(",")); @@ -246,6 +248,7 @@ static void assertExactPermissions(FilePermission expected, PermissionCollection /** * checks that this path has no permissions */ + @SuppressForbidden(reason = "to create FilePermission object") static void assertNoPermissions(Path path, PermissionCollection actual) { String target = path.toString(); assertFalse(actual.implies(new FilePermission(target, "read"))); diff --git a/qa/evil-tests/src/test/java/org/elasticsearch/common/logging/EvilLoggerTests.java b/qa/evil-tests/src/test/java/org/elasticsearch/common/logging/EvilLoggerTests.java index ce6c7d74cfc5e..d4bc754689e68 100644 --- a/qa/evil-tests/src/test/java/org/elasticsearch/common/logging/EvilLoggerTests.java +++ b/qa/evil-tests/src/test/java/org/elasticsearch/common/logging/EvilLoggerTests.java @@ -164,7 +164,9 @@ public void testConcurrentDeprecationLogger() throws IOException, UserException, final Set actualWarningValues = warnings.stream().map(DeprecationLogger::extractWarningValueFromWarningHeader).collect(Collectors.toSet()); for (int j = 0; j < 128; j++) { - assertThat(actualWarningValues, hasItem(DeprecationLogger.escape("This is a maybe logged deprecation message" + j))); + assertThat( + actualWarningValues, + hasItem(DeprecationLogger.escapeAndEncode("This is a maybe logged deprecation message" + j))); } try { diff --git a/qa/evil-tests/src/test/java/org/elasticsearch/env/NodeEnvironmentEvilTests.java b/qa/evil-tests/src/test/java/org/elasticsearch/env/NodeEnvironmentEvilTests.java index 3eebf4a2f6481..57d4a363cc8c7 100644 --- a/qa/evil-tests/src/test/java/org/elasticsearch/env/NodeEnvironmentEvilTests.java +++ b/qa/evil-tests/src/test/java/org/elasticsearch/env/NodeEnvironmentEvilTests.java @@ -50,9 +50,9 @@ public void testMissingWritePermission() throws IOException { PosixFilePermission.OWNER_READ))); Settings build = Settings.builder() .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toAbsolutePath().toString()) - .putArray(Environment.PATH_DATA_SETTING.getKey(), tempPaths).build(); + .putList(Environment.PATH_DATA_SETTING.getKey(), tempPaths).build(); IOException ioException = expectThrows(IOException.class, () -> { - new NodeEnvironment(build, new Environment(build)); + new NodeEnvironment(build, TestEnvironment.newEnvironment(build)); }); assertTrue(ioException.getMessage(), ioException.getMessage().startsWith(path.toString())); } @@ -70,9 +70,9 @@ public void testMissingWritePermissionOnIndex() throws IOException { PosixFilePermission.OWNER_READ))); Settings build = Settings.builder() .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toAbsolutePath().toString()) - .putArray(Environment.PATH_DATA_SETTING.getKey(), tempPaths).build(); + .putList(Environment.PATH_DATA_SETTING.getKey(), tempPaths).build(); IOException ioException = expectThrows(IOException.class, () -> { - new NodeEnvironment(build, new Environment(build)); + new NodeEnvironment(build, TestEnvironment.newEnvironment(build)); }); assertTrue(ioException.getMessage(), ioException.getMessage().startsWith("failed to test writes in data directory")); } @@ -95,9 +95,9 @@ public void testMissingWritePermissionOnShard() throws IOException { PosixFilePermission.OWNER_READ))); Settings build = Settings.builder() 
.put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toAbsolutePath().toString()) - .putArray(Environment.PATH_DATA_SETTING.getKey(), tempPaths).build(); + .putList(Environment.PATH_DATA_SETTING.getKey(), tempPaths).build(); IOException ioException = expectThrows(IOException.class, () -> { - new NodeEnvironment(build, new Environment(build)); + new NodeEnvironment(build, TestEnvironment.newEnvironment(build)); }); assertTrue(ioException.getMessage(), ioException.getMessage().startsWith("failed to test writes in data directory")); } diff --git a/qa/evil-tests/src/test/java/org/elasticsearch/index/engine/EvilInternalEngineTests.java b/qa/evil-tests/src/test/java/org/elasticsearch/index/engine/EvilInternalEngineTests.java new file mode 100644 index 0000000000000..c32b3ab202080 --- /dev/null +++ b/qa/evil-tests/src/test/java/org/elasticsearch/index/engine/EvilInternalEngineTests.java @@ -0,0 +1,107 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.index.engine; + +import org.apache.lucene.index.IndexWriter; +import org.apache.lucene.index.MergePolicy; +import org.apache.lucene.index.SegmentCommitInfo; +import org.elasticsearch.index.mapper.ParsedDocument; + +import java.io.IOException; +import java.util.List; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.atomic.AtomicReference; +import java.util.stream.Collectors; +import java.util.stream.StreamSupport; + +import static org.hamcrest.Matchers.containsString; +import static org.hamcrest.Matchers.hasToString; +import static org.hamcrest.Matchers.instanceOf; + +public class EvilInternalEngineTests extends EngineTestCase { + + public void testOutOfMemoryErrorWhileMergingIsRethrownAndIsUncaught() throws IOException, InterruptedException { + engine.close(); + final AtomicReference maybeFatal = new AtomicReference<>(); + final CountDownLatch latch = new CountDownLatch(1); + final Thread.UncaughtExceptionHandler uncaughtExceptionHandler = Thread.getDefaultUncaughtExceptionHandler(); + try { + /* + * We want to test that the out of memory error thrown from the merge goes uncaught; this gives us confidence that an out of + * memory error thrown while merging will lead to the node being torn down. 
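The comment above explains why the test wants the OutOfMemoryError thrown during a merge to go uncaught. The observation pattern it uses is a default uncaught-exception handler paired with a latch; here is a minimal, self-contained sketch of just that pattern, where a plain Thread stands in for the merge thread and the error is merely constructed, not a real memory exhaustion.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicReference;

// Sketch: observe an error that escapes a background thread.
public final class UncaughtErrorExample {
    public static void main(String[] args) throws InterruptedException {
        AtomicReference<Throwable> observed = new AtomicReference<>();
        CountDownLatch latch = new CountDownLatch(1);
        Thread.UncaughtExceptionHandler previous = Thread.getDefaultUncaughtExceptionHandler();
        try {
            Thread.setDefaultUncaughtExceptionHandler((thread, error) -> {
                observed.set(error);
                latch.countDown();
            });
            new Thread(() -> {
                throw new OutOfMemoryError("simulated failure on a background thread");
            }).start();
            latch.await(); // the error escapes the thread and reaches the handler
            assert observed.get() instanceof OutOfMemoryError;
        } finally {
            Thread.setDefaultUncaughtExceptionHandler(previous); // restore, as the test does
        }
    }
}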
+ */ + Thread.setDefaultUncaughtExceptionHandler((t, e) -> { + maybeFatal.set(e); + latch.countDown(); + }); + final AtomicReference> segmentsReference = new AtomicReference<>(); + + try (Engine e = createEngine( + defaultSettings, + store, + primaryTranslogDir, + newMergePolicy(), + (directory, iwc) -> new IndexWriter(directory, iwc) { + @Override + public void merge(final MergePolicy.OneMerge merge) throws IOException { + throw new OutOfMemoryError("640K ought to be enough for anybody"); + } + + @Override + public synchronized MergePolicy.OneMerge getNextMerge() { + /* + * This will be called when we flush when we will not be ready to return the segments. After the segments are on + * disk, we can only return them from here once or the merge scheduler will be stuck in a loop repeatedly + * peeling off the same segments to schedule for merging. + */ + if (segmentsReference.get() == null) { + return super.getNextMerge(); + } else { + final List segments = segmentsReference.getAndSet(null); + return new MergePolicy.OneMerge(segments); + } + } + }, + null)) { + // force segments to exist on disk + final ParsedDocument doc1 = testParsedDocument("1", null, testDocumentWithTextField(), B_1, null); + e.index(indexForDoc(doc1)); + e.flush(); + final List segments = + StreamSupport.stream(e.getLastCommittedSegmentInfos().spliterator(), false).collect(Collectors.toList()); + segmentsReference.set(segments); + // trigger a background merge that will be managed by the concurrent merge scheduler + e.forceMerge(randomBoolean(), 0, false, false, false); + /* + * Merging happens in the background on a merge thread, and the maybeDie handler is invoked on yet another thread; we have + * to wait for these events to finish. + */ + latch.await(); + assertNotNull(maybeFatal.get()); + assertThat(maybeFatal.get(), instanceOf(OutOfMemoryError.class)); + assertThat(maybeFatal.get(), hasToString(containsString("640K ought to be enough for anybody"))); + } + } finally { + Thread.setDefaultUncaughtExceptionHandler(uncaughtExceptionHandler); + } + } + + +} diff --git a/qa/evil-tests/src/test/java/org/elasticsearch/plugins/PluginSecurityTests.java b/qa/evil-tests/src/test/java/org/elasticsearch/plugins/PluginSecurityTests.java index 77ecd12f786fb..1c60e3264c72e 100644 --- a/qa/evil-tests/src/test/java/org/elasticsearch/plugins/PluginSecurityTests.java +++ b/qa/evil-tests/src/test/java/org/elasticsearch/plugins/PluginSecurityTests.java @@ -48,7 +48,7 @@ public void testHasNativeController() throws IOException { "test cannot run with security manager enabled", System.getSecurityManager() == null); final PluginInfo info = - new PluginInfo("fake", "fake", Version.CURRENT.toString(), "Fake", true); + new PluginInfo("fake", "fake", Version.CURRENT.toString(), "Fake", true, false); final MockTerminal terminal = new MockTerminal(); terminal.addTextInput("y"); terminal.addTextInput("y"); @@ -63,7 +63,7 @@ public void testDeclineNativeController() throws IOException { "test cannot run with security manager enabled", System.getSecurityManager() == null); final PluginInfo info = - new PluginInfo("fake", "fake", Version.CURRENT.toString(), "Fake", true); + new PluginInfo("fake", "fake", Version.CURRENT.toString(), "Fake", true, false); final MockTerminal terminal = new MockTerminal(); terminal.addTextInput("y"); terminal.addTextInput("n"); @@ -79,7 +79,7 @@ public void testDoesNotHaveNativeController() throws IOException { "test cannot run with security manager enabled", System.getSecurityManager() == null); final PluginInfo info = 
- new PluginInfo("fake", "fake", Version.CURRENT.toString(), "Fake", false); + new PluginInfo("fake", "fake", Version.CURRENT.toString(), "Fake", false, false); final MockTerminal terminal = new MockTerminal(); terminal.addTextInput("y"); final Path policyFile = this.getDataPath("security/simple-plugin-security.policy"); diff --git a/qa/full-cluster-restart/build.gradle b/qa/full-cluster-restart/build.gradle index 8759cac4157bd..f271dae5cfda1 100644 --- a/qa/full-cluster-restart/build.gradle +++ b/qa/full-cluster-restart/build.gradle @@ -48,6 +48,13 @@ for (Version version : indexCompatVersions) { numBwcNodes = 2 numNodes = 2 clusterName = 'full-cluster-restart' + + // some tests rely on the translog not being flushed + setting 'indices.memory.shard_inactive_time', '20m' + + // debug logging for testRecovery + setting 'logger.level', 'DEBUG' + if (version.onOrAfter('5.3.0')) { setting 'http.content_type.required', 'true' } @@ -64,6 +71,13 @@ for (Version version : indexCompatVersions) { "${baseName}#oldClusterTestCluster#node1.stop" distribution = 'zip' clusterName = 'full-cluster-restart' + + // some tests rely on the translog not being flushed + setting 'indices.memory.shard_inactive_time', '20m' + + // debug logging for testRecovery + setting 'logger.level', 'DEBUG' + numNodes = 2 dataDir = { nodeNum -> oldClusterTest.nodes[nodeNum].dataDir } cleanShared = false // We want to keep snapshots made by the old cluster! @@ -73,6 +87,7 @@ for (Version version : indexCompatVersions) { systemProperty 'tests.is_old_cluster', 'false' systemProperty 'tests.old_cluster_version', version.toString().minus("-SNAPSHOT") systemProperty 'tests.path.repo', new File(buildDir, "cluster/shared/repo") + } Task versionBwcTest = tasks.create(name: "${baseName}#bwcTest") { @@ -89,7 +104,10 @@ test.enabled = false // no unit tests for rolling upgrades, only the rest integr // basic integ tests includes testing bwc against the most recent version task integTest { if (project.bwc_tests_enabled) { - dependsOn = ["v${indexCompatVersions[-1]}#bwcTest"] + dependsOn "v${indexCompatVersions[-1]}#bwcTest" + if (indexCompatVersions[-1].bugfix == 0) { + dependsOn "v${indexCompatVersions[-2]}#bwcTest" + } } } diff --git a/qa/full-cluster-restart/src/test/java/org/elasticsearch/upgrades/FullClusterRestartIT.java b/qa/full-cluster-restart/src/test/java/org/elasticsearch/upgrades/FullClusterRestartIT.java index 5f1e6bbdf302f..22859859f2521 100644 --- a/qa/full-cluster-restart/src/test/java/org/elasticsearch/upgrades/FullClusterRestartIT.java +++ b/qa/full-cluster-restart/src/test/java/org/elasticsearch/upgrades/FullClusterRestartIT.java @@ -19,12 +19,13 @@ package org.elasticsearch.upgrades; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.entity.StringEntity; -import org.elasticsearch.client.http.util.EntityUtils; +import org.apache.http.HttpEntity; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; +import org.apache.http.util.EntityUtils; import org.elasticsearch.Version; import org.elasticsearch.client.Response; +import org.elasticsearch.client.RestClient; import org.elasticsearch.common.Booleans; import org.elasticsearch.common.CheckedFunction; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -33,15 +34,19 @@ import org.elasticsearch.common.xcontent.support.XContentMapValues; import org.elasticsearch.test.NotEqualMessageBuilder; import org.elasticsearch.test.rest.ESRestTestCase; 
+import org.elasticsearch.test.rest.yaml.ObjectPath; import org.junit.Before; import java.io.IOException; +import java.util.ArrayList; import java.util.Base64; import java.util.Collections; import java.util.HashMap; +import java.util.HashSet; import java.util.List; import java.util.Locale; import java.util.Map; +import java.util.Set; import java.util.regex.Matcher; import java.util.regex.Pattern; @@ -52,6 +57,7 @@ import static org.hamcrest.Matchers.containsString; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.greaterThan; +import static org.hamcrest.Matchers.notNullValue; /** * Tests to run before and after a full cluster restart. This is run twice, @@ -225,17 +231,15 @@ public void testNewReplicasWork() throws Exception { Map recoverRsp = toMap(client().performRequest("GET", "/" + index + "/_recovery")); logger.debug("--> recovery status:\n{}", recoverRsp); - Map responseBody = toMap(client().performRequest("GET", "/" + index + "/_search", - Collections.singletonMap("preference", "_primary"))); - assertNoFailures(responseBody); - int foundHits1 = (int) XContentMapValues.extractValue("hits.total", responseBody); - - responseBody = toMap(client().performRequest("GET", "/" + index + "/_search", - Collections.singletonMap("preference", "_replica"))); - assertNoFailures(responseBody); - int foundHits2 = (int) XContentMapValues.extractValue("hits.total", responseBody); - assertEquals(foundHits1, foundHits2); - // TODO: do something more with the replicas! index? + Set counts = new HashSet<>(); + for (String node : dataNodes(index, client())) { + Map responseBody = toMap(client().performRequest("GET", "/" + index + "/_search", + Collections.singletonMap("preference", "_only_nodes:" + node))); + assertNoFailures(responseBody); + int hits = (int) XContentMapValues.extractValue("hits.total", responseBody); + counts.add(hits); + } + assertEquals("All nodes should have a consistent number of documents", 1, counts.size()); } } @@ -761,6 +765,39 @@ public void testSnapshotRestore() throws IOException { } } + public void testHistoryUUIDIsAdded() throws Exception { + if (runningAgainstOldCluster) { + XContentBuilder mappingsAndSettings = jsonBuilder(); + mappingsAndSettings.startObject(); + { + mappingsAndSettings.startObject("settings"); + mappingsAndSettings.field("number_of_shards", 1); + mappingsAndSettings.field("number_of_replicas", 1); + mappingsAndSettings.endObject(); + } + mappingsAndSettings.endObject(); + client().performRequest("PUT", "/" + index, Collections.emptyMap(), + new StringEntity(mappingsAndSettings.string(), ContentType.APPLICATION_JSON)); + } else { + Response response = client().performRequest("GET", index + "/_stats", singletonMap("level", "shards")); + List shardStats = ObjectPath.createFromResponse(response).evaluate("indices." 
+ index + ".shards.0"); + String globalHistoryUUID = null; + for (Object shard : shardStats) { + final String nodeId = ObjectPath.evaluate(shard, "routing.node"); + final Boolean primary = ObjectPath.evaluate(shard, "routing.primary"); + logger.info("evaluating: {} , {}", ObjectPath.evaluate(shard, "routing"), ObjectPath.evaluate(shard, "commit")); + String historyUUID = ObjectPath.evaluate(shard, "commit.user_data.history_uuid"); + assertThat("no history uuid found on " + nodeId + " (primary: " + primary + ")", historyUUID, notNullValue()); + if (globalHistoryUUID == null) { + globalHistoryUUID = historyUUID; + } else { + assertThat("history uuid mismatch on " + nodeId + " (primary: " + primary + ")", historyUUID, + equalTo(globalHistoryUUID)); + } + } + } + } + private void checkSnapshot(String snapshotName, int count, Version tookOnVersion) throws IOException { // Check the snapshot metadata, especially the version String response = toStr(client().performRequest("GET", "/_snapshot/repo/" + snapshotName, listSnapshotVerboseParams())); @@ -905,4 +942,15 @@ private void refresh() throws IOException { logger.debug("Refreshing [{}]", index); client().performRequest("POST", "/" + index + "/_refresh"); } + + private List dataNodes(String index, RestClient client) throws IOException { + Response response = client.performRequest("GET", index + "/_stats", singletonMap("level", "shards")); + List nodes = new ArrayList<>(); + List shardStats = ObjectPath.createFromResponse(response).evaluate("indices." + index + ".shards.0"); + for (Object shard : shardStats) { + final String nodeId = ObjectPath.evaluate(shard, "routing.node"); + nodes.add(nodeId); + } + return nodes; + } } diff --git a/qa/mixed-cluster/build.gradle b/qa/mixed-cluster/build.gradle index 66185325931d8..781a69684e5d4 100644 --- a/qa/mixed-cluster/build.gradle +++ b/qa/mixed-cluster/build.gradle @@ -37,14 +37,14 @@ for (Version version : wireCompatVersions) { includePackaged = true } - /* This project runs the core REST tests against a 2 node cluster where one of + /* This project runs the core REST tests against a 4 node cluster where two of the nodes has a different minor. 
*/ Object extension = extensions.findByName("${baseName}#mixedClusterTestCluster") - configure(extensions.findByName("${baseName}#mixedClusterTestCluster")) { + configure(extension) { distribution = 'zip' numNodes = 4 numBwcNodes = 2 - bwcVersion = project.wireCompatVersions[-1] + bwcVersion = version } Task versionBwcTest = tasks.create(name: "${baseName}#bwcTest") { diff --git a/qa/mixed-cluster/src/test/java/org/elasticsearch/backwards/IndexingIT.java b/qa/mixed-cluster/src/test/java/org/elasticsearch/backwards/IndexingIT.java index 8b9c322cddb05..f744b3029b125 100644 --- a/qa/mixed-cluster/src/test/java/org/elasticsearch/backwards/IndexingIT.java +++ b/qa/mixed-cluster/src/test/java/org/elasticsearch/backwards/IndexingIT.java @@ -18,9 +18,9 @@ */ package org.elasticsearch.backwards; -import org.elasticsearch.client.http.HttpHost; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.entity.StringEntity; +import org.apache.http.HttpHost; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; import org.elasticsearch.Version; import org.elasticsearch.client.Response; import org.elasticsearch.client.RestClient; @@ -28,7 +28,6 @@ import org.elasticsearch.common.Strings; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.seqno.SeqNoStats; -import org.elasticsearch.index.seqno.SequenceNumbersService; import org.elasticsearch.test.rest.ESRestTestCase; import org.elasticsearch.test.rest.yaml.ObjectPath; @@ -43,27 +42,10 @@ import static com.carrotsearch.randomizedtesting.RandomizedTest.randomAsciiOfLength; import static java.util.Collections.emptyMap; import static java.util.Collections.singletonMap; -import static org.hamcrest.Matchers.anyOf; import static org.hamcrest.Matchers.equalTo; public class IndexingIT extends ESRestTestCase { - private void assertOK(Response response) { - assertThat(response.getStatusLine().getStatusCode(), anyOf(equalTo(200), equalTo(201))); - } - - private void ensureGreen() throws IOException { - Map params = new HashMap<>(); - params.put("wait_for_status", "green"); - params.put("wait_for_no_relocating_shards", "true"); - assertOK(client().performRequest("GET", "_cluster/health", params)); - } - - private void createIndex(String name, Settings settings) throws IOException { - assertOK(client().performRequest("PUT", name, Collections.emptyMap(), - new StringEntity("{ \"settings\": " + Strings.toString(settings) + " }", ContentType.APPLICATION_JSON))); - } - private void updateIndexSetting(String name, Settings.Builder settings) throws IOException { updateIndexSetting(name, settings.build()); } @@ -189,9 +171,6 @@ public void testIndexVersionPropagation() throws Exception { assertVersion(index, 5, "_only_nodes:" + shard.getNode().getNodeName(), finalVersionForDoc5); assertCount(index, "_only_nodes:" + shard.getNode().getNodeName(), 5); } - // the number of documents on the primary and on the recovered replica should match the number of indexed documents - assertCount(index, "_primary", 5); - assertCount(index, "_replica", 5); } } @@ -213,20 +192,19 @@ public void testSeqNoCheckpoints() throws Exception { int numDocs = 0; final int numberOfInitialDocs = 1 + randomInt(5); logger.info("indexing [{}] docs initially", numberOfInitialDocs); - numDocs += indexDocs(index, numDocs, numberOfInitialDocs); - assertOK(client().performRequest("POST", index + "/_refresh")); // this forces a global checkpoint sync - assertSeqNoOnShards(index, nodes, numDocs, newNodeClient); + 
numDocs += indexDocs(index, 0, numberOfInitialDocs); + assertSeqNoOnShards(index, nodes, nodes.getBWCVersion().major >= 6 ? numDocs : 0, newNodeClient); logger.info("allowing shards on all nodes"); updateIndexSetting(index, Settings.builder().putNull("index.routing.allocation.include._name")); ensureGreen(); + assertOK(client().performRequest("POST", index + "/_refresh")); for (final String bwcName : bwcNamesList) { assertCount(index, "_only_nodes:" + bwcName, numDocs); } final int numberOfDocsAfterAllowingShardsOnAllNodes = 1 + randomInt(5); logger.info("indexing [{}] docs after allowing shards on all nodes", numberOfDocsAfterAllowingShardsOnAllNodes); numDocs += indexDocs(index, numDocs, numberOfDocsAfterAllowingShardsOnAllNodes); - assertOK(client().performRequest("POST", index + "/_refresh")); // this forces a global checkpoint sync - assertSeqNoOnShards(index, nodes, numDocs, newNodeClient); + assertSeqNoOnShards(index, nodes, nodes.getBWCVersion().major >= 6 ? numDocs : 0, newNodeClient); Shard primary = buildShards(index, nodes, newNodeClient).stream().filter(Shard::isPrimary).findFirst().get(); logger.info("moving primary to new node by excluding {}", primary.getNode().getNodeName()); updateIndexSetting(index, Settings.builder().put("index.routing.allocation.exclude._name", primary.getNode().getNodeName())); @@ -236,8 +214,7 @@ public void testSeqNoCheckpoints() throws Exception { logger.info("indexing [{}] docs after moving primary", numberOfDocsAfterMovingPrimary); numDocsOnNewPrimary += indexDocs(index, numDocs, numberOfDocsAfterMovingPrimary); numDocs += numberOfDocsAfterMovingPrimary; - assertOK(client().performRequest("POST", index + "/_refresh")); // this forces a global checkpoint sync - assertSeqNoOnShards(index, nodes, numDocs, newNodeClient); + assertSeqNoOnShards(index, nodes, nodes.getBWCVersion().major >= 6 ? numDocs : numDocsOnNewPrimary, newNodeClient); /* * Dropping the number of replicas to zero, and then increasing it to one triggers a recovery thus exercising any BWC-logic in * the recovery code. @@ -252,10 +229,11 @@ public void testSeqNoCheckpoints() throws Exception { updateIndexSetting(index, Settings.builder().put("index.number_of_replicas", 1)); ensureGreen(); assertOK(client().performRequest("POST", index + "/_refresh")); - // the number of documents on the primary and on the recovered replica should match the number of indexed documents - assertCount(index, "_primary", numDocs); - assertCount(index, "_replica", numDocs); - assertSeqNoOnShards(index, nodes, numDocs, newNodeClient); + + for (Shard shard : buildShards(index, nodes, newNodeClient)) { + assertCount(index, "_only_nodes:" + shard.node.nodeName, numDocs); + } + assertSeqNoOnShards(index, nodes, nodes.getBWCVersion().major >= 6 ? 
numDocs : numDocsOnNewPrimary, newNodeClient); } } diff --git a/qa/multi-cluster-search/src/test/resources/rest-api-spec/test/multi_cluster/10_basic.yml b/qa/multi-cluster-search/src/test/resources/rest-api-spec/test/multi_cluster/10_basic.yml index e6b0e9d13c0fc..7726a1df0b10d 100644 --- a/qa/multi-cluster-search/src/test/resources/rest-api-spec/test/multi_cluster/10_basic.yml +++ b/qa/multi-cluster-search/src/test/resources/rest-api-spec/test/multi_cluster/10_basic.yml @@ -165,3 +165,14 @@ - match: { hits.total: 2 } - match: { hits.hits.0._source.filter_field: 1 } - match: { hits.hits.0._index: "my_remote_cluster:test_index" } + +--- +"Single shard search gets properly proxied": + + - do: + search: + index: "my_remote_cluster:single_doc_index" + + - match: { _shards.total: 1 } + - match: { hits.total: 1 } + - match: { hits.hits.0._index: "my_remote_cluster:single_doc_index"} diff --git a/qa/no-bootstrap-tests/src/test/java/org/elasticsearch/bootstrap/SpawnerNoBootstrapTests.java b/qa/no-bootstrap-tests/src/test/java/org/elasticsearch/bootstrap/SpawnerNoBootstrapTests.java index 8da6d8d3afe23..d9d4ab5c3aca9 100644 --- a/qa/no-bootstrap-tests/src/test/java/org/elasticsearch/bootstrap/SpawnerNoBootstrapTests.java +++ b/qa/no-bootstrap-tests/src/test/java/org/elasticsearch/bootstrap/SpawnerNoBootstrapTests.java @@ -24,6 +24,7 @@ import org.elasticsearch.Version; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; +import org.elasticsearch.env.TestEnvironment; import org.elasticsearch.plugins.PluginTestUtil; import org.elasticsearch.plugins.Platforms; @@ -31,7 +32,9 @@ import java.io.IOException; import java.io.InputStreamReader; import java.nio.charset.StandardCharsets; +import java.nio.file.FileSystemException; import java.nio.file.Files; +import java.nio.file.NoSuchFileException; import java.nio.file.Path; import java.nio.file.attribute.PosixFileAttributeView; import java.nio.file.attribute.PosixFilePermission; @@ -40,8 +43,11 @@ import java.util.Set; import java.util.concurrent.TimeUnit; +import static org.hamcrest.Matchers.containsString; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.hasSize; +import static org.hamcrest.Matchers.hasToString; +import static org.hamcrest.Matchers.instanceOf; /** * Create a simple "daemon controller", put it in the right place and check that it runs. 
@@ -67,7 +73,7 @@ public void testNoControllerSpawn() throws IOException, InterruptedException { settingsBuilder.put(Environment.PATH_HOME_SETTING.getKey(), esHome.toString()); Settings settings = settingsBuilder.build(); - Environment environment = new Environment(settings); + Environment environment = TestEnvironment.newEnvironment(settings); // This plugin will NOT have a controller daemon Path plugin = environment.pluginsFile().resolve("a_plugin"); @@ -103,7 +109,7 @@ public void testControllerSpawn() throws IOException, InterruptedException { settingsBuilder.put(Environment.PATH_HOME_SETTING.getKey(), esHome.toString()); Settings settings = settingsBuilder.build(); - Environment environment = new Environment(settings); + Environment environment = TestEnvironment.newEnvironment(settings); // this plugin will have a controller daemon Path plugin = environment.pluginsFile().resolve("test_plugin"); @@ -164,7 +170,7 @@ public void testControllerSpawnWithIncorrectDescriptor() throws IOException { settingsBuilder.put(Environment.PATH_HOME_SETTING.getKey(), esHome.toString()); Settings settings = settingsBuilder.build(); - Environment environment = new Environment(settings); + Environment environment = TestEnvironment.newEnvironment(settings); Path plugin = environment.pluginsFile().resolve("test_plugin"); Files.createDirectories(plugin); @@ -189,6 +195,33 @@ public void testControllerSpawnWithIncorrectDescriptor() throws IOException { equalTo("plugin [test_plugin] does not have permission to fork native controller")); } + public void testSpawnerHandlingOfDesktopServicesStoreFiles() throws IOException { + final Path esHome = createTempDir().resolve("home"); + final Settings settings = Settings.builder().put(Environment.PATH_HOME_SETTING.getKey(), esHome.toString()).build(); + + final Environment environment = TestEnvironment.newEnvironment(settings); + + Files.createDirectories(environment.pluginsFile()); + + final Path desktopServicesStore = environment.pluginsFile().resolve(".DS_Store"); + Files.createFile(desktopServicesStore); + + final Spawner spawner = new Spawner(); + if (Constants.MAC_OS_X) { + // if the spawner were not skipping the Desktop Services Store files on macOS this would explode + spawner.spawnNativePluginControllers(environment); + } else { + // we do not ignore these files on non-macOS systems + final FileSystemException e = + expectThrows(FileSystemException.class, () -> spawner.spawnNativePluginControllers(environment)); + if (Constants.WINDOWS) { + assertThat(e, instanceOf(NoSuchFileException.class)); + } else { + assertThat(e, hasToString(containsString("Not a directory"))); + } + } + } + private void createControllerProgram(final Path outputFile) throws IOException { final Path outputDir = outputFile.getParent(); Files.createDirectories(outputDir); diff --git a/qa/query-builder-bwc/build.gradle b/qa/query-builder-bwc/build.gradle index dbc438f673875..f1e7ad6f640f0 100644 --- a/qa/query-builder-bwc/build.gradle +++ b/qa/query-builder-bwc/build.gradle @@ -30,12 +30,7 @@ task bwcTest { group = 'verification' } -// For now test against the current version: -Version currentVersion = Version.fromString(VersionProperties.elasticsearch.minus('-SNAPSHOT')) -Version[] versions = [currentVersion] -// TODO: uncomment when there is a released version with: https://github.com/elastic/elasticsearch/pull/25456 -// versions = indexCompatVersions -for (Version version : versions) { +for (Version version : indexCompatVersions) { String baseName = "v${version}" Task 
oldQueryBuilderTest = tasks.create(name: "${baseName}#oldQueryBuilderTest", type: RestIntegTestTask) { @@ -48,9 +43,8 @@ for (Version version : versions) { configure(extensions.findByName("${baseName}#oldQueryBuilderTestCluster")) { distribution = 'zip' - // TODO: uncomment when there is a released version with: https://github.com/elastic/elasticsearch/pull/25456 - // bwcVersion = version - // numBwcNodes = 1 + bwcVersion = version + numBwcNodes = 1 numNodes = 1 clusterName = 'query_builder_bwc' setting 'http.content_type.required', 'true' @@ -89,7 +83,7 @@ test.enabled = false // no unit tests for rolling upgrades, only the rest integr // basic integ tests includes testing bwc against the most recent version task integTest { if (project.bwc_tests_enabled) { - dependsOn = ["v${versions[-1]}#bwcTest"] + dependsOn = ["v${indexCompatVersions[-1]}#bwcTest"] } } diff --git a/qa/query-builder-bwc/src/test/java/org/elasticsearch/bwc/QueryBuilderBWCIT.java b/qa/query-builder-bwc/src/test/java/org/elasticsearch/bwc/QueryBuilderBWCIT.java index ed88ff98c02e0..bff28d6f375bf 100644 --- a/qa/query-builder-bwc/src/test/java/org/elasticsearch/bwc/QueryBuilderBWCIT.java +++ b/qa/query-builder-bwc/src/test/java/org/elasticsearch/bwc/QueryBuilderBWCIT.java @@ -19,11 +19,11 @@ package org.elasticsearch.bwc; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; +import org.apache.http.util.EntityUtils; import org.elasticsearch.Version; import org.elasticsearch.client.Response; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.entity.StringEntity; -import org.elasticsearch.client.http.util.EntityUtils; import org.elasticsearch.common.Booleans; import org.elasticsearch.common.io.stream.InputStreamStreamInput; import org.elasticsearch.common.io.stream.NamedWriteableAwareStreamInput; @@ -61,6 +61,16 @@ import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder; +/** + * An integration test that tests whether percolator queries stored in older supported ES version can still be read by the + * current ES version. Percolator queries are stored in the binary format in a dedicated doc values field (see + * PercolatorFieldMapper#createQueryBuilderField(...) method). Using the query builders writable contract. This test + * does best effort verifying that we don't break bwc for query builders between the first previous major version and + * the latest current major release. + * + * The queries to test are specified in json format, which turns out to work because we tend break here rarely. If the + * json format of a query being tested here then feel free to change this. 
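The javadoc above leans on the query builders' writeable contract. For readers unfamiliar with that contract, the sketch below shows a same-version round trip through a registry-aware stream; the real test instead reads bytes written by an older node, and the SearchModule-based registry setup here is a simplifying assumption rather than what the test does.

import org.elasticsearch.common.io.stream.BytesStreamOutput;
import org.elasticsearch.common.io.stream.NamedWriteableAwareStreamInput;
import org.elasticsearch.common.io.stream.NamedWriteableRegistry;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.index.query.MatchAllQueryBuilder;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.search.SearchModule;

import java.io.IOException;
import java.util.Collections;

// Sketch: serialize a query builder and read it back via NamedWriteableRegistry.
public final class QueryBuilderRoundTripExample {
    public static void main(String[] args) throws IOException {
        SearchModule searchModule = new SearchModule(Settings.EMPTY, false, Collections.emptyList());
        NamedWriteableRegistry registry = new NamedWriteableRegistry(searchModule.getNamedWriteables());

        QueryBuilder original = new MatchAllQueryBuilder().boost(0.1f);
        BytesStreamOutput out = new BytesStreamOutput();
        out.writeNamedWriteable(original);

        try (StreamInput in = new NamedWriteableAwareStreamInput(out.bytes().streamInput(), registry)) {
            QueryBuilder roundTripped = in.readNamedWriteable(QueryBuilder.class);
            assert original.equals(roundTripped);
        }
    }
}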
+ */ public class QueryBuilderBWCIT extends ESRestTestCase { private static final List CANDIDATES = new ArrayList<>(); @@ -101,7 +111,7 @@ public class QueryBuilderBWCIT extends ESRestTestCase { .tieBreaker(0.01f) ); addCandidate( - "\"constant_score\": {\"query\": {\"match_all\": {}}, \"boost\": 0.1}", + "\"constant_score\": {\"filter\": {\"match_all\": {}}, \"boost\": 0.1}", new ConstantScoreQueryBuilder(new MatchAllQueryBuilder()).boost(0.1f) ); addCandidate( diff --git a/qa/reindex-from-old/src/test/java/org/elasticsearch/smoketest/ReindexFromOldRemoteIT.java b/qa/reindex-from-old/src/test/java/org/elasticsearch/smoketest/ReindexFromOldRemoteIT.java index 5506f3e0a6fa6..162e68e402730 100644 --- a/qa/reindex-from-old/src/test/java/org/elasticsearch/smoketest/ReindexFromOldRemoteIT.java +++ b/qa/reindex-from-old/src/test/java/org/elasticsearch/smoketest/ReindexFromOldRemoteIT.java @@ -19,11 +19,11 @@ package org.elasticsearch.smoketest; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.HttpHost; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.entity.StringEntity; -import org.elasticsearch.client.http.util.EntityUtils; +import org.apache.http.HttpEntity; +import org.apache.http.HttpHost; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; +import org.apache.http.util.EntityUtils; import org.elasticsearch.client.Response; import org.elasticsearch.client.RestClient; import org.elasticsearch.test.rest.ESRestTestCase; diff --git a/qa/rolling-upgrade/build.gradle b/qa/rolling-upgrade/build.gradle index b5f841601308e..fc3cf88b272f1 100644 --- a/qa/rolling-upgrade/build.gradle +++ b/qa/rolling-upgrade/build.gradle @@ -61,6 +61,7 @@ for (Version version : wireCompatVersions) { distribution = 'zip' clusterName = 'rolling-upgrade' unicastTransportUri = { seedNode, node, ant -> oldClusterTest.nodes.get(0).transportUri() } + minimumMasterNodes = { 2 } /* Override the data directory so the new node always gets the node we * just stopped's data directory. */ dataDir = { nodeNumber -> oldClusterTest.nodes[1].dataDir } @@ -81,6 +82,7 @@ for (Version version : wireCompatVersions) { distribution = 'zip' clusterName = 'rolling-upgrade' unicastTransportUri = { seedNode, node, ant -> mixedClusterTest.nodes.get(0).transportUri() } + minimumMasterNodes = { 2 } /* Override the data directory so the new node always gets the node we * just stopped's data directory. */ dataDir = { nodeNumber -> oldClusterTest.nodes[0].dataDir} diff --git a/qa/rolling-upgrade/src/test/java/org/elasticsearch/upgrades/RecoveryIT.java b/qa/rolling-upgrade/src/test/java/org/elasticsearch/upgrades/RecoveryIT.java new file mode 100644 index 0000000000000..6c21d3e37ef30 --- /dev/null +++ b/qa/rolling-upgrade/src/test/java/org/elasticsearch/upgrades/RecoveryIT.java @@ -0,0 +1,112 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.upgrades; + +import org.elasticsearch.client.Response; +import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.test.rest.ESRestTestCase; +import org.elasticsearch.test.rest.yaml.ObjectPath; + +import java.util.Collections; +import java.util.List; + +import static org.elasticsearch.cluster.routing.UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING; +import static org.hamcrest.Matchers.equalTo; +import static org.hamcrest.Matchers.hasSize; +import static org.hamcrest.Matchers.notNullValue; + +public class RecoveryIT extends ESRestTestCase { + + @Override + protected boolean preserveIndicesUponCompletion() { + return true; + } + + @Override + protected boolean preserveReposUponCompletion() { + return true; + } + + private enum CLUSTER_TYPE { + OLD, + MIXED, + UPGRADED; + + public static CLUSTER_TYPE parse(String value) { + switch (value) { + case "old_cluster": + return OLD; + case "mixed_cluster": + return MIXED; + case "upgraded_cluster": + return UPGRADED; + default: + throw new AssertionError("unknown cluster type: " + value); + } + } + } + + private final CLUSTER_TYPE clusterType = CLUSTER_TYPE.parse(System.getProperty("tests.rest.suite")); + + @Override + protected Settings restClientSettings() { + return Settings.builder().put(super.restClientSettings()) + // increase the timeout here to 90 seconds to handle long waits for a green + // cluster health. the waits for green need to be longer than a minute to + // account for delayed shards + .put(ESRestTestCase.CLIENT_RETRY_TIMEOUT, "90s") + .put(ESRestTestCase.CLIENT_SOCKET_TIMEOUT, "90s") + .build(); + } + + public void testHistoryUUIDIsGenerated() throws Exception { + final String index = "index_history_uuid"; + if (clusterType == CLUSTER_TYPE.OLD) { + Settings.Builder settings = Settings.builder() + .put(IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.getKey(), 1) + .put(IndexMetaData.INDEX_NUMBER_OF_REPLICAS_SETTING.getKey(), 1) + // if the node with the replica is the first to be restarted, while a replica is still recovering + // then delayed allocation will kick in. When the node comes back, the master will search for a copy + // but the recovering copy will be seen as invalid and the cluster health won't return to GREEN + // before timing out + .put(INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), "100ms"); + createIndex(index, settings.build()); + } else if (clusterType == CLUSTER_TYPE.UPGRADED) { + ensureGreen(); + Response response = client().performRequest("GET", index + "/_stats", Collections.singletonMap("level", "shards")); + assertOK(response); + ObjectPath objectPath = ObjectPath.createFromResponse(response); + List shardStats = objectPath.evaluate("indices." + index + ".shards.0"); + assertThat(shardStats, hasSize(2)); + String expectHistoryUUID = null; + for (int shard = 0; shard < 2; shard++) { + String nodeID = objectPath.evaluate("indices." + index + ".shards.0." + shard + ".routing.node"); + String historyUUID = objectPath.evaluate("indices." + index + ".shards.0." 
+ shard + ".commit.user_data.history_uuid"); + assertThat("no history uuid found for shard on " + nodeID, historyUUID, notNullValue()); + if (expectHistoryUUID == null) { + expectHistoryUUID = historyUUID; + } else { + assertThat("different history uuid found for shard on " + nodeID, historyUUID, equalTo(expectHistoryUUID)); + } + } + } + } + +} diff --git a/qa/rolling-upgrade/src/test/java/org/elasticsearch/upgrades/UpgradeClusterClientYamlTestSuiteIT.java b/qa/rolling-upgrade/src/test/java/org/elasticsearch/upgrades/UpgradeClusterClientYamlTestSuiteIT.java index 6dfdbb987cc07..f3c0256b2c3c5 100644 --- a/qa/rolling-upgrade/src/test/java/org/elasticsearch/upgrades/UpgradeClusterClientYamlTestSuiteIT.java +++ b/qa/rolling-upgrade/src/test/java/org/elasticsearch/upgrades/UpgradeClusterClientYamlTestSuiteIT.java @@ -52,10 +52,11 @@ public static Iterable parameters() throws Exception { @Override protected Settings restClientSettings() { return Settings.builder().put(super.restClientSettings()) - // increase the timeout so that we can actually see the result of failed cluster health - // calls that have a default timeout of 30s - .put(ESRestTestCase.CLIENT_RETRY_TIMEOUT, "40s") - .put(ESRestTestCase.CLIENT_SOCKET_TIMEOUT, "40s") + // increase the timeout here to 90 seconds to handle long waits for a green + // cluster health. the waits for green need to be longer than a minute to + // account for delayed shards + .put(ESRestTestCase.CLIENT_RETRY_TIMEOUT, "90s") + .put(ESRestTestCase.CLIENT_SOCKET_TIMEOUT, "90s") .build(); } } diff --git a/qa/rolling-upgrade/src/test/resources/rest-api-spec/test/mixed_cluster/10_basic.yml b/qa/rolling-upgrade/src/test/resources/rest-api-spec/test/mixed_cluster/10_basic.yml index 36c975bb5321d..b0d6a23cc4095 100644 --- a/qa/rolling-upgrade/src/test/resources/rest-api-spec/test/mixed_cluster/10_basic.yml +++ b/qa/rolling-upgrade/src/test/resources/rest-api-spec/test/mixed_cluster/10_basic.yml @@ -40,7 +40,7 @@ body: {"f1": "v6_mixed", "f2": 10} - do: - indices.flush: + indices.refresh: index: test_index - do: @@ -56,7 +56,7 @@ id: d10 - do: - indices.flush: + indices.refresh: index: test_index - do: diff --git a/qa/rolling-upgrade/src/test/resources/rest-api-spec/test/old_cluster/10_basic.yml b/qa/rolling-upgrade/src/test/resources/rest-api-spec/test/old_cluster/10_basic.yml index 691ae2f6af487..7f2c24e23307b 100644 --- a/qa/rolling-upgrade/src/test/resources/rest-api-spec/test/old_cluster/10_basic.yml +++ b/qa/rolling-upgrade/src/test/resources/rest-api-spec/test/old_cluster/10_basic.yml @@ -46,7 +46,7 @@ - '{"f1": "d_old"}' - do: - indices.flush: + indices.refresh: index: test_index,index_with_replicas - do: diff --git a/qa/rolling-upgrade/src/test/resources/rest-api-spec/test/upgraded_cluster/10_basic.yml b/qa/rolling-upgrade/src/test/resources/rest-api-spec/test/upgraded_cluster/10_basic.yml index dff4e878d7dbe..a3608b0fdedd0 100644 --- a/qa/rolling-upgrade/src/test/resources/rest-api-spec/test/upgraded_cluster/10_basic.yml +++ b/qa/rolling-upgrade/src/test/resources/rest-api-spec/test/upgraded_cluster/10_basic.yml @@ -4,6 +4,9 @@ cluster.health: wait_for_status: green wait_for_nodes: 2 + # wait for long enough that we give delayed unassigned shards to stop being delayed + timeout: 70s + level: shards - do: search: @@ -33,7 +36,7 @@ - '{"f1": "v5_upgraded", "f2": 14}' - do: - indices.flush: + indices.refresh: index: test_index - do: diff --git a/qa/smoke-test-http/src/test/java/org/elasticsearch/http/ContextAndHeaderTransportIT.java 
b/qa/smoke-test-http/src/test/java/org/elasticsearch/http/ContextAndHeaderTransportIT.java index ddf1013266219..749c03598a378 100644 --- a/qa/smoke-test-http/src/test/java/org/elasticsearch/http/ContextAndHeaderTransportIT.java +++ b/qa/smoke-test-http/src/test/java/org/elasticsearch/http/ContextAndHeaderTransportIT.java @@ -19,7 +19,8 @@ package org.elasticsearch.http; -import org.elasticsearch.client.http.message.BasicHeader; +import org.apache.http.message.BasicHeader; +import org.apache.lucene.util.SetOnce; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.ActionRequest; import org.elasticsearch.action.admin.indices.refresh.RefreshRequest; @@ -31,12 +32,14 @@ import org.elasticsearch.action.termvectors.MultiTermVectorsRequest; import org.elasticsearch.client.Client; import org.elasticsearch.client.Response; -import org.elasticsearch.common.inject.AbstractModule; -import org.elasticsearch.common.inject.Inject; -import org.elasticsearch.common.inject.Module; +import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.common.io.stream.NamedWriteableRegistry; import org.elasticsearch.common.network.NetworkModule; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.env.Environment; +import org.elasticsearch.env.NodeEnvironment; import org.elasticsearch.index.query.BoolQueryBuilder; import org.elasticsearch.index.query.GeoShapeQueryBuilder; import org.elasticsearch.index.query.MoreLikeThisQueryBuilder; @@ -46,8 +49,10 @@ import org.elasticsearch.indices.TermsLookup; import org.elasticsearch.plugins.ActionPlugin; import org.elasticsearch.plugins.Plugin; +import org.elasticsearch.script.ScriptService; import org.elasticsearch.test.ESIntegTestCase.ClusterScope; import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.watcher.ResourceWatcherService; import org.junit.After; import org.junit.Before; @@ -282,21 +287,20 @@ private Client transportClient() { public static class ActionLoggingPlugin extends Plugin implements ActionPlugin { - @Override - public Collection createGuiceModules() { - return Collections.singletonList(new ActionLoggingModule()); - } + private final SetOnce loggingFilter = new SetOnce<>(); @Override - public List> getActionFilters() { - return singletonList(LoggingFilter.class); + public Collection createComponents(Client client, ClusterService clusterService, ThreadPool threadPool, + ResourceWatcherService resourceWatcherService, ScriptService scriptService, + NamedXContentRegistry xContentRegistry, Environment environment, + NodeEnvironment nodeEnvironment, NamedWriteableRegistry namedWriteableRegistry) { + loggingFilter.set(new LoggingFilter(clusterService.getSettings(), threadPool)); + return Collections.emptyList(); } - } - public static class ActionLoggingModule extends AbstractModule { @Override - protected void configure() { - bind(LoggingFilter.class).asEagerSingleton(); + public List getActionFilters() { + return singletonList(loggingFilter.get()); } } @@ -305,7 +309,6 @@ public static class LoggingFilter extends ActionFilter.Simple { private final ThreadPool threadPool; - @Inject public LoggingFilter(Settings settings, ThreadPool pool) { super(settings); this.threadPool = pool; diff --git a/qa/smoke-test-http/src/test/java/org/elasticsearch/http/CorsNotSetIT.java b/qa/smoke-test-http/src/test/java/org/elasticsearch/http/CorsNotSetIT.java index 
b4fe7c86254b6..bdda44c1b7118 100644 --- a/qa/smoke-test-http/src/test/java/org/elasticsearch/http/CorsNotSetIT.java +++ b/qa/smoke-test-http/src/test/java/org/elasticsearch/http/CorsNotSetIT.java @@ -19,7 +19,7 @@ package org.elasticsearch.http; -import org.elasticsearch.client.http.message.BasicHeader; +import org.apache.http.message.BasicHeader; import org.elasticsearch.client.Response; import org.elasticsearch.test.ESIntegTestCase; diff --git a/qa/smoke-test-http/src/test/java/org/elasticsearch/http/CorsRegexIT.java b/qa/smoke-test-http/src/test/java/org/elasticsearch/http/CorsRegexIT.java index efa93d4a54b6c..441f56a8631dd 100644 --- a/qa/smoke-test-http/src/test/java/org/elasticsearch/http/CorsRegexIT.java +++ b/qa/smoke-test-http/src/test/java/org/elasticsearch/http/CorsRegexIT.java @@ -18,7 +18,7 @@ */ package org.elasticsearch.http; -import org.elasticsearch.client.http.message.BasicHeader; +import org.apache.http.message.BasicHeader; import org.elasticsearch.client.Response; import org.elasticsearch.client.ResponseException; import org.elasticsearch.common.network.NetworkModule; diff --git a/qa/smoke-test-http/src/test/java/org/elasticsearch/http/DeprecationHttpIT.java b/qa/smoke-test-http/src/test/java/org/elasticsearch/http/DeprecationHttpIT.java index ac4e9cb6cd1ae..948f573a05c8a 100644 --- a/qa/smoke-test-http/src/test/java/org/elasticsearch/http/DeprecationHttpIT.java +++ b/qa/smoke-test-http/src/test/java/org/elasticsearch/http/DeprecationHttpIT.java @@ -18,10 +18,10 @@ */ package org.elasticsearch.http; -import org.elasticsearch.client.http.Header; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.entity.StringEntity; +import org.apache.http.Header; +import org.apache.http.HttpEntity; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; import org.elasticsearch.client.Response; import org.elasticsearch.common.logging.DeprecationLogger; import org.elasticsearch.common.logging.LoggerMessageFormat; diff --git a/qa/smoke-test-http/src/test/java/org/elasticsearch/http/DetailedErrorsDisabledIT.java b/qa/smoke-test-http/src/test/java/org/elasticsearch/http/DetailedErrorsDisabledIT.java index e6eb8f169222a..380937ed010e1 100644 --- a/qa/smoke-test-http/src/test/java/org/elasticsearch/http/DetailedErrorsDisabledIT.java +++ b/qa/smoke-test-http/src/test/java/org/elasticsearch/http/DetailedErrorsDisabledIT.java @@ -22,7 +22,7 @@ import java.io.IOException; import java.util.Collections; -import org.elasticsearch.client.http.util.EntityUtils; +import org.apache.http.util.EntityUtils; import org.elasticsearch.client.Response; import org.elasticsearch.client.ResponseException; import org.elasticsearch.common.network.NetworkModule; diff --git a/qa/smoke-test-http/src/test/java/org/elasticsearch/http/DetailedErrorsEnabledIT.java b/qa/smoke-test-http/src/test/java/org/elasticsearch/http/DetailedErrorsEnabledIT.java index 789d5475b2242..d0b80595a26ee 100644 --- a/qa/smoke-test-http/src/test/java/org/elasticsearch/http/DetailedErrorsEnabledIT.java +++ b/qa/smoke-test-http/src/test/java/org/elasticsearch/http/DetailedErrorsEnabledIT.java @@ -19,7 +19,7 @@ package org.elasticsearch.http; -import org.elasticsearch.client.http.util.EntityUtils; +import org.apache.http.util.EntityUtils; import org.elasticsearch.client.Response; import org.elasticsearch.client.ResponseException; diff --git a/qa/smoke-test-http/src/test/java/org/elasticsearch/http/HttpCompressionIT.java 
b/qa/smoke-test-http/src/test/java/org/elasticsearch/http/HttpCompressionIT.java index 6838d860d7097..20ddd0d230ad4 100644 --- a/qa/smoke-test-http/src/test/java/org/elasticsearch/http/HttpCompressionIT.java +++ b/qa/smoke-test-http/src/test/java/org/elasticsearch/http/HttpCompressionIT.java @@ -18,10 +18,10 @@ */ package org.elasticsearch.http; -import org.elasticsearch.client.http.HttpHeaders; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.entity.StringEntity; -import org.elasticsearch.client.http.message.BasicHeader; +import org.apache.http.HttpHeaders; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; +import org.apache.http.message.BasicHeader; import org.elasticsearch.client.Response; import org.elasticsearch.client.RestClient; import org.elasticsearch.test.rest.ESRestTestCase; diff --git a/qa/smoke-test-http/src/test/java/org/elasticsearch/http/ResponseHeaderPluginIT.java b/qa/smoke-test-http/src/test/java/org/elasticsearch/http/ResponseHeaderPluginIT.java index ca717a3c36be3..ffb23f31f4087 100644 --- a/qa/smoke-test-http/src/test/java/org/elasticsearch/http/ResponseHeaderPluginIT.java +++ b/qa/smoke-test-http/src/test/java/org/elasticsearch/http/ResponseHeaderPluginIT.java @@ -18,7 +18,7 @@ */ package org.elasticsearch.http; -import org.elasticsearch.client.http.message.BasicHeader; +import org.apache.http.message.BasicHeader; import org.elasticsearch.client.Response; import org.elasticsearch.client.ResponseException; import org.elasticsearch.common.settings.Settings; diff --git a/qa/smoke-test-http/src/test/java/org/elasticsearch/http/RestHttpResponseHeadersIT.java b/qa/smoke-test-http/src/test/java/org/elasticsearch/http/RestHttpResponseHeadersIT.java index 224518db6682a..c9e7dc451a053 100644 --- a/qa/smoke-test-http/src/test/java/org/elasticsearch/http/RestHttpResponseHeadersIT.java +++ b/qa/smoke-test-http/src/test/java/org/elasticsearch/http/RestHttpResponseHeadersIT.java @@ -17,7 +17,7 @@ package org.elasticsearch.http; -import org.elasticsearch.client.http.util.EntityUtils; +import org.apache.http.util.EntityUtils; import org.elasticsearch.client.Response; import org.elasticsearch.client.ResponseException; import org.elasticsearch.test.rest.ESRestTestCase; diff --git a/qa/smoke-test-ingest-with-all-dependencies/build.gradle b/qa/smoke-test-ingest-with-all-dependencies/build.gradle index 0bed8f3782775..1d318f6be6674 100644 --- a/qa/smoke-test-ingest-with-all-dependencies/build.gradle +++ b/qa/smoke-test-ingest-with-all-dependencies/build.gradle @@ -30,5 +30,4 @@ dependencies { integTestCluster { plugin ':plugins:ingest-geoip' - setting 'script.max_compilations_per_minute', '1000' } diff --git a/qa/smoke-test-ingest-with-all-dependencies/src/test/resources/rest-api-spec/test/ingest/50_script_processor_using_painless.yml b/qa/smoke-test-ingest-with-all-dependencies/src/test/resources/rest-api-spec/test/ingest/50_script_processor_using_painless.yml index d89395d4d1939..8c6a94b4a5c49 100644 --- a/qa/smoke-test-ingest-with-all-dependencies/src/test/resources/rest-api-spec/test/ingest/50_script_processor_using_painless.yml +++ b/qa/smoke-test-ingest-with-all-dependencies/src/test/resources/rest-api-spec/test/ingest/50_script_processor_using_painless.yml @@ -104,7 +104,6 @@ ] } - match: { error.header.processor_type: "script" } - - match: { error.header.property_name: "source" } - match: { error.type: "script_exception" } - match: { error.reason: "compile error" } diff --git 
a/qa/smoke-test-reindex-with-all-modules/build.gradle b/qa/smoke-test-reindex-with-all-modules/build.gradle index cab01cb9412ec..1d2ef8a325833 100644 --- a/qa/smoke-test-reindex-with-all-modules/build.gradle +++ b/qa/smoke-test-reindex-with-all-modules/build.gradle @@ -21,7 +21,6 @@ apply plugin: 'elasticsearch.standalone-rest-test' apply plugin: 'elasticsearch.rest-test' integTestCluster { - setting 'script.max_compilations_per_minute', '1000' // Whitelist reindexing from the local node so we can test it. setting 'reindex.remote.whitelist', '127.0.0.1:*' -} \ No newline at end of file +} diff --git a/qa/smoke-test-reindex-with-all-modules/src/test/resources/rest-api-spec/test/update_by_query/10_script.yml b/qa/smoke-test-reindex-with-all-modules/src/test/resources/rest-api-spec/test/update_by_query/10_script.yml index c4414229d7efa..ea9fa33e6a9cf 100644 --- a/qa/smoke-test-reindex-with-all-modules/src/test/resources/rest-api-spec/test/update_by_query/10_script.yml +++ b/qa/smoke-test-reindex-with-all-modules/src/test/resources/rest-api-spec/test/update_by_query/10_script.yml @@ -29,6 +29,34 @@ user: notkimchy - match: { hits.total: 1 } +--- +"Update document using short `script` form": + - do: + index: + index: twitter + type: tweet + id: 1 + body: { "user": "kimchy" } + - do: + indices.refresh: {} + + - do: + update_by_query: + index: twitter + refresh: true + body: { "script": "ctx._source.user = \"not\" + ctx._source.user" } + - match: {updated: 1} + - match: {noops: 0} + + - do: + search: + index: twitter + body: + query: + match: + user: notkimchy + - match: { hits.total: 1 } + --- "Noop one doc": - do: @@ -302,7 +330,7 @@ indices.refresh: {} - do: - catch: request + catch: bad_request update_by_query: refresh: true index: twitter diff --git a/qa/vagrant/src/test/resources/packaging/tests/module_and_plugin_test_cases.bash b/qa/vagrant/src/test/resources/packaging/tests/module_and_plugin_test_cases.bash index db94bf983d548..91c06974266fa 100644 --- a/qa/vagrant/src/test/resources/packaging/tests/module_and_plugin_test_cases.bash +++ b/qa/vagrant/src/test/resources/packaging/tests/module_and_plugin_test_cases.bash @@ -450,3 +450,15 @@ fi @test "[$GROUP] test umask" { install_jvm_example $(readlink -m jvm-example-*.zip) 0077 } + +@test "[$GROUP] hostname" { + local temp=`mktemp -d` + cp "$ESCONFIG"/elasticsearch.yml "$temp" + echo 'node.name: ${HOSTNAME}' >> "$ESCONFIG"/elasticsearch.yml + start_elasticsearch_service + wait_for_elasticsearch_status + [ "$(curl -XGET localhost:9200/_cat/nodes?h=name)" == "$HOSTNAME" ] + stop_elasticsearch_service + cp "$temp"/elasticsearch.yml "$ESCONFIG"/elasticsearch.yml + rm -rf "$temp" +} diff --git a/qa/vagrant/src/test/resources/packaging/utils/packages.bash b/qa/vagrant/src/test/resources/packaging/utils/packages.bash index bcd4ec8f52ca7..86e182dbbff87 100644 --- a/qa/vagrant/src/test/resources/packaging/utils/packages.bash +++ b/qa/vagrant/src/test/resources/packaging/utils/packages.bash @@ -94,7 +94,7 @@ verify_package_installation() { assert_file "$ESHOME/bin/elasticsearch-plugin" f root root 755 assert_file "$ESHOME/bin/elasticsearch-translog" f root root 755 assert_file "$ESHOME/lib" d root root 755 - assert_file "$ESCONFIG" d root elasticsearch 750 + assert_file "$ESCONFIG" d root elasticsearch 2750 assert_file "$ESCONFIG/elasticsearch.yml" f root elasticsearch 660 assert_file "$ESCONFIG/jvm.options" f root elasticsearch 660 assert_file "$ESCONFIG/log4j2.properties" f root elasticsearch 660 diff --git 
a/qa/vagrant/src/test/resources/packaging/utils/utils.bash b/qa/vagrant/src/test/resources/packaging/utils/utils.bash index 58d49558b1be0..dc5238d03f46b 100644 --- a/qa/vagrant/src/test/resources/packaging/utils/utils.bash +++ b/qa/vagrant/src/test/resources/packaging/utils/utils.bash @@ -1,6 +1,6 @@ #!/bin/bash -# This file contains some utilities to test the the .deb/.rpm +# This file contains some utilities to test the .deb/.rpm # packages and the SysV/Systemd scripts. # WARNING: This testing file must be executed as root and can diff --git a/qa/wildfly/src/test/java/org/elasticsearch/wildfly/WildflyIT.java b/qa/wildfly/src/test/java/org/elasticsearch/wildfly/WildflyIT.java index e71e6f0c5bed6..baf44ed777d1a 100644 --- a/qa/wildfly/src/test/java/org/elasticsearch/wildfly/WildflyIT.java +++ b/qa/wildfly/src/test/java/org/elasticsearch/wildfly/WildflyIT.java @@ -19,13 +19,13 @@ package org.elasticsearch.wildfly; -import org.elasticsearch.client.http.client.methods.CloseableHttpResponse; -import org.elasticsearch.client.http.client.methods.HttpGet; -import org.elasticsearch.client.http.client.methods.HttpPut; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.entity.StringEntity; -import org.elasticsearch.client.http.impl.client.CloseableHttpClient; -import org.elasticsearch.client.http.impl.client.HttpClientBuilder; +import org.apache.http.client.methods.CloseableHttpResponse; +import org.apache.http.client.methods.HttpGet; +import org.apache.http.client.methods.HttpPut; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; +import org.apache.http.impl.client.CloseableHttpClient; +import org.apache.http.impl.client.HttpClientBuilder; import org.apache.lucene.util.LuceneTestCase; import org.elasticsearch.Build; import org.elasticsearch.Version; diff --git a/rest-api-spec/src/main/resources/rest-api-spec/api/cat.segments.json b/rest-api-spec/src/main/resources/rest-api-spec/api/cat.segments.json index 118f8b6bf9632..3306b2f753b2a 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/api/cat.segments.json +++ b/rest-api-spec/src/main/resources/rest-api-spec/api/cat.segments.json @@ -16,6 +16,11 @@ "type" : "string", "description" : "a short version of the Accept header, e.g. json, yaml" }, + "bytes": { + "type": "enum", + "description" : "The unit in which to display byte values", + "options": [ "b", "k", "kb", "m", "mb", "g", "gb", "t", "tb", "p", "pb" ] + }, "h": { "type": "list", "description" : "Comma-separated list of column names to display" diff --git a/rest-api-spec/src/main/resources/rest-api-spec/api/cat.shards.json b/rest-api-spec/src/main/resources/rest-api-spec/api/cat.shards.json index db46ce909ff6b..2ad714e7225d7 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/api/cat.shards.json +++ b/rest-api-spec/src/main/resources/rest-api-spec/api/cat.shards.json @@ -16,6 +16,11 @@ "type" : "string", "description" : "a short version of the Accept header, e.g. 
json, yaml" }, + "bytes": { + "type": "enum", + "description" : "The unit in which to display byte values", + "options": [ "b", "k", "kb", "m", "mb", "g", "gb", "t", "tb", "p", "pb" ] + }, "local": { "type" : "boolean", "description" : "Return local information, do not retrieve the state from master node (default: false)" diff --git a/rest-api-spec/src/main/resources/rest-api-spec/api/cat.snapshots.json b/rest-api-spec/src/main/resources/rest-api-spec/api/cat.snapshots.json index 90f4ca32730b6..eec22e2e0412d 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/api/cat.snapshots.json +++ b/rest-api-spec/src/main/resources/rest-api-spec/api/cat.snapshots.json @@ -10,7 +10,6 @@ "parts": { "repository": { "type" : "list", - "required": true, "description": "Name of repository from which to fetch the snapshot information" } }, diff --git a/rest-api-spec/src/main/resources/rest-api-spec/api/cluster.put_settings.json b/rest-api-spec/src/main/resources/rest-api-spec/api/cluster.put_settings.json index 393d1350dd3e2..5fcf03102836a 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/api/cluster.put_settings.json +++ b/rest-api-spec/src/main/resources/rest-api-spec/api/cluster.put_settings.json @@ -22,7 +22,8 @@ } }, "body": { - "description": "The settings to be updated. Can be either `transient` or `persistent` (survives cluster restart)." + "description": "The settings to be updated. Can be either `transient` or `persistent` (survives cluster restart).", + "required" : true } } } diff --git a/rest-api-spec/src/main/resources/rest-api-spec/api/count.json b/rest-api-spec/src/main/resources/rest-api-spec/api/count.json index 0e2697cd524d2..96fa4daf12b95 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/api/count.json +++ b/rest-api-spec/src/main/resources/rest-api-spec/api/count.json @@ -39,8 +39,8 @@ "description" : "Specify the node or shard the operation should be performed on (default: random)" }, "routing": { - "type" : "string", - "description" : "Specific routing value" + "type" : "list", + "description" : "A comma-separated list of specific routing values" }, "q": { "type" : "string", @@ -67,6 +67,10 @@ "lenient": { "type" : "boolean", "description" : "Specify whether format-based query failures (such as providing text to a numeric field) should be ignored" + }, + "terminate_after" : { + "type" : "number", + "description" : "The maximum count for each shard, upon reaching which the query execution will terminate early" } } }, diff --git a/rest-api-spec/src/main/resources/rest-api-spec/api/delete_script.json b/rest-api-spec/src/main/resources/rest-api-spec/api/delete_script.json index c61aecd6bb2a8..83bb690cc0428 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/api/delete_script.json +++ b/rest-api-spec/src/main/resources/rest-api-spec/api/delete_script.json @@ -10,11 +10,6 @@ "type" : "string", "description" : "Script ID", "required" : true - }, - "lang" : { - "type" : "string", - "description" : "Script language", - "required" : true } }, "params" : { diff --git a/rest-api-spec/src/main/resources/rest-api-spec/api/get_script.json b/rest-api-spec/src/main/resources/rest-api-spec/api/get_script.json index 1bdc546ad03ac..2240f0e1a0b75 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/api/get_script.json +++ b/rest-api-spec/src/main/resources/rest-api-spec/api/get_script.json @@ -10,11 +10,6 @@ "type" : "string", "description" : "Script ID", "required" : true - }, - "lang" : { - "type" : "string", - "description" : "Script language", - "required" : 
true } }, "params" : { diff --git a/rest-api-spec/src/main/resources/rest-api-spec/api/indices.exists_alias.json b/rest-api-spec/src/main/resources/rest-api-spec/api/indices.exists_alias.json index 8891aebd223ec..aea20b2b634d0 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/api/indices.exists_alias.json +++ b/rest-api-spec/src/main/resources/rest-api-spec/api/indices.exists_alias.json @@ -12,6 +12,7 @@ }, "name": { "type" : "list", + "required" : true, "description" : "A comma-separated list of alias names to return" } }, diff --git a/rest-api-spec/src/main/resources/rest-api-spec/api/indices.exists_template.json b/rest-api-spec/src/main/resources/rest-api-spec/api/indices.exists_template.json index 96c4c53cd9dcf..3fb9d1e207e1e 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/api/indices.exists_template.json +++ b/rest-api-spec/src/main/resources/rest-api-spec/api/indices.exists_template.json @@ -8,7 +8,7 @@ "parts": { "name": { "type": "list", - "required": false, + "required": true, "description": "The comma separated names of the index templates" } }, diff --git a/rest-api-spec/src/main/resources/rest-api-spec/api/indices.open.json b/rest-api-spec/src/main/resources/rest-api-spec/api/indices.open.json index 879ce5a2b9d9a..86c39988e181f 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/api/indices.open.json +++ b/rest-api-spec/src/main/resources/rest-api-spec/api/indices.open.json @@ -34,6 +34,10 @@ "options" : ["open","closed","none","all"], "default" : "closed", "description" : "Whether to expand wildcard expression to concrete indices that are open, closed or both." + }, + "wait_for_active_shards": { + "type" : "string", + "description" : "Sets the number of active shards to wait for before the operation returns." } } }, diff --git a/rest-api-spec/src/main/resources/rest-api-spec/api/indices.split.json b/rest-api-spec/src/main/resources/rest-api-spec/api/indices.split.json new file mode 100644 index 0000000000000..a79fa7b708269 --- /dev/null +++ b/rest-api-spec/src/main/resources/rest-api-spec/api/indices.split.json @@ -0,0 +1,39 @@ +{ + "indices.split": { + "documentation": "http://www.elastic.co/guide/en/elasticsearch/reference/master/indices-split-index.html", + "methods": ["PUT", "POST"], + "url": { + "path": "/{index}/_split/{target}", + "paths": ["/{index}/_split/{target}"], + "parts": { + "index": { + "type" : "string", + "required" : true, + "description" : "The name of the source index to split" + }, + "target": { + "type" : "string", + "required" : true, + "description" : "The name of the target index to split into" + } + }, + "params": { + "timeout": { + "type" : "time", + "description" : "Explicit operation timeout" + }, + "master_timeout": { + "type" : "time", + "description" : "Specify timeout for connection to master" + }, + "wait_for_active_shards": { + "type" : "string", + "description" : "Set the number of active shards to wait for on the shrunken index before the operation returns." 
+ } + } + }, + "body": { + "description" : "The configuration for the target index (`settings` and `aliases`)" + } + } +} diff --git a/rest-api-spec/src/main/resources/rest-api-spec/api/put_script.json b/rest-api-spec/src/main/resources/rest-api-spec/api/put_script.json index 45b97f9f2857a..34bd4f63c285e 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/api/put_script.json +++ b/rest-api-spec/src/main/resources/rest-api-spec/api/put_script.json @@ -11,10 +11,9 @@ "description" : "Script ID", "required" : true }, - "lang" : { + "context" : { "type" : "string", - "description" : "Script language", - "required" : true + "description" : "Script context" } }, "params" : { diff --git a/rest-api-spec/src/main/resources/rest-api-spec/api/reindex_rethrottle.json b/rest-api-spec/src/main/resources/rest-api-spec/api/reindex_rethrottle.json index 4bba41d37d504..4004409ab6883 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/api/reindex_rethrottle.json +++ b/rest-api-spec/src/main/resources/rest-api-spec/api/reindex_rethrottle.json @@ -8,6 +8,7 @@ "parts": { "task_id": { "type": "string", + "required" : true, "description": "The task id to rethrottle" } }, diff --git a/rest-api-spec/src/main/resources/rest-api-spec/api/search_template.json b/rest-api-spec/src/main/resources/rest-api-spec/api/search_template.json index c58cf2199119c..a78295dd4f5a3 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/api/search_template.json +++ b/rest-api-spec/src/main/resources/rest-api-spec/api/search_template.json @@ -7,8 +7,8 @@ "paths": ["/_search/template", "/{index}/_search/template", "/{index}/{type}/_search/template"], "parts": { "index": { - "type" : "list", - "description" : "A comma-separated list of index names to search; use `_all` or empty string to perform the operation on all indices" + "type" : "list", + "description" : "A comma-separated list of index names to search; use `_all` or empty string to perform the operation on all indices" }, "type": { "type" : "list", @@ -17,18 +17,18 @@ }, "params" : { "ignore_unavailable": { - "type" : "boolean", - "description" : "Whether specified concrete indices should be ignored when unavailable (missing or closed)" + "type" : "boolean", + "description" : "Whether specified concrete indices should be ignored when unavailable (missing or closed)" }, "allow_no_indices": { - "type" : "boolean", - "description" : "Whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes `_all` string or when no indices have been specified)" + "type" : "boolean", + "description" : "Whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes `_all` string or when no indices have been specified)" }, "expand_wildcards": { - "type" : "enum", - "options" : ["open","closed","none","all"], - "default" : "open", - "description" : "Whether to expand wildcard expression to concrete indices that are open, closed or both." + "type" : "enum", + "options" : ["open","closed","none","all"], + "default" : "open", + "description" : "Whether to expand wildcard expression to concrete indices that are open, closed or both." 
}, "preference": { "type" : "string", @@ -62,7 +62,8 @@ } }, "body": { - "description": "The search definition template and its params" + "description": "The search definition template and its params", + "required" : true } } } diff --git a/rest-api-spec/src/main/resources/rest-api-spec/api/tasks.cancel.json b/rest-api-spec/src/main/resources/rest-api-spec/api/tasks.cancel.json index 69d21f4ec1def..cffa74934bccc 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/api/tasks.cancel.json +++ b/rest-api-spec/src/main/resources/rest-api-spec/api/tasks.cancel.json @@ -12,7 +12,7 @@ } }, "params": { - "node_id": { + "nodes": { "type": "list", "description": "A comma-separated list of node IDs or names to limit the returned information; use `_local` to return information from the node you're connecting to, leave empty to get information from all nodes" }, @@ -24,7 +24,7 @@ "type": "string", "description": "Cancel tasks with specified parent node." }, - "parent_task": { + "parent_task_id": { "type" : "string", "description" : "Cancel tasks with specified parent task id (node_id:task_number). Set to -1 to cancel all." } diff --git a/rest-api-spec/src/main/resources/rest-api-spec/api/tasks.get.json b/rest-api-spec/src/main/resources/rest-api-spec/api/tasks.get.json index f97206cd16f72..e17acb0512c9b 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/api/tasks.get.json +++ b/rest-api-spec/src/main/resources/rest-api-spec/api/tasks.get.json @@ -8,6 +8,7 @@ "parts": { "task_id": { "type": "string", + "required" : true, "description": "Return the task with specified id (node_id:task_number)" } }, diff --git a/rest-api-spec/src/main/resources/rest-api-spec/api/tasks.list.json b/rest-api-spec/src/main/resources/rest-api-spec/api/tasks.list.json index a966cb0e50716..fbe355ee164b0 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/api/tasks.list.json +++ b/rest-api-spec/src/main/resources/rest-api-spec/api/tasks.list.json @@ -7,7 +7,7 @@ "paths": ["/_tasks"], "parts": {}, "params": { - "node_id": { + "nodes": { "type": "list", "description": "A comma-separated list of node IDs or names to limit the returned information; use `_local` to return information from the node you're connecting to, leave empty to get information from all nodes" }, @@ -23,7 +23,7 @@ "type": "string", "description": "Return tasks with specified parent node." }, - "parent_task": { + "parent_task_id": { "type" : "string", "description" : "Return tasks with specified parent task id (node_id:task_number). Set to -1 to return all." }, diff --git a/rest-api-spec/src/main/resources/rest-api-spec/api/update.json b/rest-api-spec/src/main/resources/rest-api-spec/api/update.json index 7cacda722f5ed..97725917e1e8e 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/api/update.json +++ b/rest-api-spec/src/main/resources/rest-api-spec/api/update.json @@ -88,7 +88,8 @@ } }, "body": { - "description": "The request definition using either `script` or partial `doc`" + "description": "The request definition requires either `script` or partial `doc`", + "required": true } } } diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/README.asciidoc b/rest-api-spec/src/main/resources/rest-api-spec/test/README.asciidoc index db33513962fae..c93873a5be429 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/README.asciidoc +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/README.asciidoc @@ -163,12 +163,18 @@ be caught and tested. 
For instance: The argument to `catch` can be any of: [horizontal] -`missing`:: a 404 response from ES -`conflict`:: a 409 response from ES -`request`:: a generic error response from ES -`param`:: a client-side error indicating an unknown parameter has been passed - to the method -`/foo bar/`:: the text of the error message matches this regular expression +`bad_request`:: a 400 response from ES +`unauthorized`:: a 401 response from ES +`forbidden`:: a 403 response from ES +`missing`:: a 404 response from ES +`request_timeout`:: a 408 response from ES +`conflict`:: a 409 response from ES +`request`:: a 4xx-5xx error response from ES, not equal to any named response + above +`unavailable`:: a 503 response from ES +`param`:: a client-side error indicating an unknown parameter has been passed + to the method +`/foo bar/`:: the text of the error message matches this regular expression If `catch` is specified, then the `response` var must be cleared, and the test should fail if no error is thrown. diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/bulk/10_basic.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/bulk/10_basic.yml index c6ba03a9aeb8d..6bc9f0084b704 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/bulk/10_basic.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/bulk/10_basic.yml @@ -65,7 +65,8 @@ - skip: version: " - 5.4.99" reason: confusing exception messaged caused by empty object fixed in 5.5.0 - + features: ["headers"] + - do: catch: /Malformed action\/metadata line \[3\], expected FIELD_NAME but found \[END_OBJECT\]/ headers: diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/bulk/20_list_of_strings.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/bulk/20_list_of_strings.yml index e25626cf3ae28..def91f4280722 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/bulk/20_list_of_strings.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/bulk/20_list_of_strings.yml @@ -11,8 +11,6 @@ - do: count: - # we count through the primary in case there is a replica that has not yet fully recovered - preference: _primary index: test_index - match: {count: 2} diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/cat.segments/10_basic.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/cat.segments/10_basic.yml index 0ae24068e60bf..3a05a9baa75fb 100755 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/cat.segments/10_basic.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/cat.segments/10_basic.yml @@ -103,7 +103,7 @@ index: index1 - do: - catch: request + catch: bad_request cat.segments: index: index1 diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/cluster.allocation_explain/10_basic.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/cluster.allocation_explain/10_basic.yml index 63724be133176..e88093c5c11ee 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/cluster.allocation_explain/10_basic.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/cluster.allocation_explain/10_basic.yml @@ -47,7 +47,7 @@ - do: indices.create: index: test - body: { "index.number_of_shards": 1, "index.number_of_replicas": 9 } + body: { "settings": { "index.number_of_shards": 1, "index.number_of_replicas": 9 } } - do: cluster.state: diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/count/10_basic.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/count/10_basic.yml index f38d2c315eb78..32256811e0f51 100644 --- 
a/rest-api-spec/src/main/resources/rest-api-spec/test/count/10_basic.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/count/10_basic.yml @@ -58,7 +58,7 @@ setup: --- "count body without query element": - do: - catch: request + catch: bad_request count: index: test body: diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/create/30_internal_version.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/create/30_internal_version.yml index e220d98816161..afd5ea134fe64 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/create/30_internal_version.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/create/30_internal_version.yml @@ -26,7 +26,7 @@ reason: validation logic only fixed from 5.1.2 onwards - do: - catch: request + catch: bad_request create: index: test type: test diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/create/35_external_version.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/create/35_external_version.yml index e29690fe8d03b..ac1f1adcc94a7 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/create/35_external_version.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/create/35_external_version.yml @@ -6,7 +6,7 @@ reason: validation logic only fixed from 5.1.2 onwards - do: - catch: request + catch: bad_request create: index: test type: test @@ -20,7 +20,7 @@ - match: { error.reason: "Validation Failed: 1: create operations only support internal versioning. use index instead;" } - do: - catch: request + catch: bad_request create: index: test type: test diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/exists/30_parent.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/exists/30_parent.yml index 91fdf027c131f..4c92605756a37 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/exists/30_parent.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/exists/30_parent.yml @@ -31,7 +31,7 @@ setup: "Parent omitted": - do: - catch: request + catch: bad_request exists: index: test_1 type: test diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/explain/10_basic.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/explain/10_basic.yml index b5a9212d36b52..5f211435ae976 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/explain/10_basic.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/explain/10_basic.yml @@ -56,7 +56,7 @@ setup: --- "Explain body without query element": - do: - catch: request + catch: bad_request explain: index: test_1 type: test diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/get/30_parent.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/get/30_parent.yml index 353dce8fab7da..04f578b88d6e6 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/get/30_parent.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/get/30_parent.yml @@ -34,7 +34,7 @@ setup: --- "Parent omitted": - do: - catch: request + catch: bad_request get: index: test_1 type: test diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/get_source/30_parent.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/get_source/30_parent.yml index 8c1088e19bb39..fe589c9823472 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/get_source/30_parent.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/get_source/30_parent.yml @@ -32,7 +32,7 @@ setup: "Parent omitted": - do: - catch: request + catch: bad_request get_source: index: test_1 type: test 
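Editor's note: the hunks above and below convert many YAML REST tests from the generic `catch: request` to the more specific `catch: bad_request`, in line with the expanded list of `catch` arguments documented in the README.asciidoc change earlier in this patch. As a rough, hypothetical sketch of the resulting test shape (the test name, index, and body below are illustrative and not taken from any file in this patch), a test that expects a 400 response now reads:

----
---
"Reject a count body without a query element":
  - do:
      # expect an HTTP 400 instead of the old catch-all `catch: request`
      catch: bad_request
      count:
        index: test_index
        body:
          # the `query` wrapper is deliberately missing, so the body is invalid
          match_all: {}
----

Per the updated README, `catch: request` remains available for any other 4xx-5xx response that has no dedicated name.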
diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/index/10_with_id.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/index/10_with_id.yml index 8ac55ec79f626..daac81849fb5e 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/index/10_with_id.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/index/10_with_id.yml @@ -26,7 +26,7 @@ - match: { _source: { foo: bar }} - do: - catch: request + catch: bad_request index: index: idx type: type diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.delete/10_basic.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.delete/10_basic.yml index 40486da9e7e76..783e65001eff0 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.delete/10_basic.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.delete/10_basic.yml @@ -14,7 +14,7 @@ setup: version: " - 5.99.0" reason: delete index doesn't support aliases only from 6.0.0 on - do: - catch: request + catch: bad_request indices.delete: index: alias - do: @@ -42,7 +42,7 @@ setup: version: " - 5.99.0" reason: delete index doesn't support aliases only from 6.0.0 on - do: - catch: request + catch: bad_request indices.delete: index: alias,index2 - do: diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.get/10_basic.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.get/10_basic.yml index b6ac97eb91bfd..e30af208aeb85 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.get/10_basic.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.get/10_basic.yml @@ -160,3 +160,14 @@ setup: - is_true: test_index_2.settings - is_true: test_index_3.settings +--- +"Should return an exception when querying invalid indices": + - skip: + version: " - 5.99.99" + reason: "bad request logic added in 6.0.0" + + - do: + catch: bad_request + indices.get: + index: _foo + diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.open/10_basic.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.open/10_basic.yml index 86a3a441539ab..2fa6b34681b00 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.open/10_basic.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.open/10_basic.yml @@ -20,7 +20,7 @@ index: test_index - do: - catch: request + catch: bad_request search: index: test_index @@ -36,3 +36,27 @@ search: index: test_index +--- +"Open index with wait_for_active_shards set to all": + - skip: + version: " - 6.0.99" + reason: wait_for_active_shards parameter was added in 6.1.0 + + - do: + indices.create: + index: test_index + body: + settings: + number_of_replicas: 0 + + - do: + indices.close: + index: test_index + + - do: + indices.open: + index: test_index + wait_for_active_shards: all + + - match: { acknowledged: true } + - match: { shards_acknowledged: true } diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.open/20_multiple_indices.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.open/20_multiple_indices.yml index 181e010c95c19..944338123d139 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.open/20_multiple_indices.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.open/20_multiple_indices.yml @@ -32,7 +32,7 @@ setup: index: _all - do: - catch: request + catch: bad_request search: index: test_index2 @@ -59,7 +59,7 @@ setup: index: test_* - do: - catch: request + catch: bad_request search: index: 
test_index2 @@ -86,7 +86,7 @@ setup: index: '*' - do: - catch: request + catch: bad_request search: index: test_index3 diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.put_alias/10_basic.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.put_alias/10_basic.yml index 5527c023b13a3..32a5be627658b 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.put_alias/10_basic.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.put_alias/10_basic.yml @@ -39,7 +39,7 @@ index: test_index - do: - catch: request + catch: bad_request indices.put_alias: index: test_index name: test_* @@ -55,7 +55,7 @@ index: foo - do: - catch: request + catch: bad_request indices.put_alias: index: test_index name: foo diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.put_template/10_basic.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.put_template/10_basic.yml index 01bd7afc582b8..3e8b3db468ea9 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.put_template/10_basic.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.put_template/10_basic.yml @@ -100,7 +100,7 @@ - match: {test.settings: {index.number_of_shards: '1', index.number_of_replicas: '0'}} - do: - catch: request + catch: bad_request indices.put_template: name: test create: true @@ -210,3 +210,16 @@ catch: missing indices.get_template: name: "my_template" + +--- +"Put index template without index_patterns": + + - skip: + version: " - 5.99.99" + reason: the error message is updated in v6.0.0 + + - do: + catch: /index patterns are missing/ + indices.put_template: + name: test + body: {} diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.rollover/30_max_size_condition.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.rollover/30_max_size_condition.yml new file mode 100644 index 0000000000000..6e4df0f292915 --- /dev/null +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.rollover/30_max_size_condition.yml @@ -0,0 +1,60 @@ +--- +"Rollover with max_size condition": + + - skip: + version: " - 6.0.99" + reason: max_size condition is introduced in 6.1.0 + + # create index with alias and replica + - do: + indices.create: + index: logs-1 + wait_for_active_shards: 1 + body: + aliases: + logs_search: {} + + # index a document + - do: + index: + index: logs-1 + type: doc + id: "1" + body: { "foo": "hello world" } + refresh: true + + # perform alias rollover with a large max_size, no action. + - do: + indices.rollover: + alias: "logs_search" + wait_for_active_shards: 1 + body: + conditions: + max_size: 100mb + + - match: { conditions: { "[max_size: 100mb]": false } } + - match: { rolled_over: false } + + # perform alias rollover with a small max_size, got action. + - do: + indices.rollover: + alias: "logs_search" + wait_for_active_shards: 1 + body: + conditions: + max_size: 10b + + - match: { conditions: { "[max_size: 10b]": true } } + - match: { rolled_over: true } + + # perform alias rollover on an empty index, no action. 
+ - do: + indices.rollover: + alias: "logs_search" + wait_for_active_shards: 1 + body: + conditions: + max_size: 1b + + - match: { conditions: { "[max_size: 1b]": false } } + - match: { rolled_over: false } diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.segments/10_basic.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.segments/10_basic.yml index 3ad2a3683320e..64d94535a9cb5 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.segments/10_basic.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.segments/10_basic.yml @@ -66,7 +66,7 @@ index: index1 - do: - catch: request + catch: bad_request indices.segments: index: index1 diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.sort/10_basic.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.sort/10_basic.yml index a31c85c7d41b5..6e8be800a1b7b 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.sort/10_basic.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.sort/10_basic.yml @@ -98,7 +98,6 @@ sort: ["rank"] size: 1 - - is_true: terminated_early - match: {hits.total: 8 } - length: {hits.hits: 1 } - match: {hits.hits.0._id: "2" } @@ -113,7 +112,6 @@ track_total_hits: false size: 1 - - match: {terminated_early: true} - match: {hits.total: -1 } - length: {hits.hits: 1 } - match: {hits.hits.0._id: "2" } @@ -134,7 +132,6 @@ body: sort: _doc - - is_false: terminated_early - match: {hits.total: 8 } - length: {hits.hits: 8 } - match: {hits.hits.0._id: "2" } @@ -156,7 +153,6 @@ track_total_hits: false size: 3 - - match: {terminated_early: true } - match: {hits.total: -1 } - length: {hits.hits: 3 } - match: {hits.hits.0._id: "2" } diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.split/10_basic.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.split/10_basic.yml new file mode 100644 index 0000000000000..82881564a2043 --- /dev/null +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.split/10_basic.yml @@ -0,0 +1,101 @@ +--- +"Split index via API": + - skip: + version: " - 6.0.99" + reason: Added in 6.1.0 + - do: + indices.create: + index: source + wait_for_active_shards: 1 + body: + settings: + index.number_of_shards: 1 + index.number_of_replicas: 0 + index.number_of_routing_shards: 2 + - do: + index: + index: source + type: doc + id: "1" + body: { "foo": "hello world" } + + - do: + index: + index: source + type: doc + id: "2" + body: { "foo": "hello world 2" } + + - do: + index: + index: source + type: doc + id: "3" + body: { "foo": "hello world 3" } + + # make it read-only + - do: + indices.put_settings: + index: source + body: + index.blocks.write: true + index.number_of_replicas: 0 + + - do: + cluster.health: + wait_for_status: green + index: source + + # now we do the actual split + - do: + indices.split: + index: "source" + target: "target" + wait_for_active_shards: 1 + master_timeout: 10s + body: + settings: + index.number_of_replicas: 0 + index.number_of_shards: 2 + + - do: + cluster.health: + wait_for_status: green + + - do: + get: + index: target + type: doc + id: "1" + + - match: { _index: target } + - match: { _type: doc } + - match: { _id: "1" } + - match: { _source: { foo: "hello world" } } + + + - do: + get: + index: target + type: doc + id: "2" + + - match: { _index: target } + - match: { _type: doc } + - match: { _id: "2" } + - match: { _source: { foo: "hello world 2" } } + + + - do: + get: + index: target + type: doc + id: "3" + + 
- match: { _index: target } + - match: { _type: doc } + - match: { _id: "3" } + - match: { _source: { foo: "hello world 3" } } + + + diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.split/20_source_mapping.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.split/20_source_mapping.yml new file mode 100644 index 0000000000000..38f36c405a1cb --- /dev/null +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.split/20_source_mapping.yml @@ -0,0 +1,72 @@ +--- +"Split index ignores target template mapping": + - skip: + version: " - 6.0.99" + reason: Added in 6.1.0 + + # create index + - do: + indices.create: + index: source + wait_for_active_shards: 1 + body: + settings: + number_of_shards: 1 + number_of_replicas: 0 + index.number_of_routing_shards: 2 + mappings: + test: + properties: + count: + type: text + + # index document + - do: + index: + index: source + type: test + id: "1" + body: { "count": "1" } + + # create template matching shrink target + - do: + indices.put_template: + name: tpl1 + body: + index_patterns: targ* + mappings: + test: + properties: + count: + type: integer + + # make it read-only + - do: + indices.put_settings: + index: source + body: + index.blocks.write: true + index.number_of_replicas: 0 + + - do: + cluster.health: + wait_for_status: green + index: source + + # now we do the actual split + - do: + indices.split: + index: "source" + target: "target" + wait_for_active_shards: 1 + master_timeout: 10s + body: + settings: + index.number_of_shards: 2 + index.number_of_replicas: 0 + + - do: + cluster.health: + wait_for_status: green + + diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.stats/10_index.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.stats/10_index.yml index b7724e062836e..a0e131024b60f 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.stats/10_index.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.stats/10_index.yml @@ -106,7 +106,7 @@ setup: version: " - 5.0.99" reason: strict stats handling does not exist in 5.0 - do: - catch: request + catch: bad_request indices.stats: metric: [ fieldata ] diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/nodes.stats/10_basic.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/nodes.stats/10_basic.yml index 62664319d8a43..07f32ff413211 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/nodes.stats/10_basic.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/nodes.stats/10_basic.yml @@ -27,7 +27,7 @@ version: " - 5.0.99" reason: strict stats handling does not exist in 5.0 - do: - catch: request + catch: bad_request nodes.stats: metric: [ transprot ] diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/nodes.stats/30_discovery.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/nodes.stats/30_discovery.yml index 2617f76941c54..ad8058876ae49 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/nodes.stats/30_discovery.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/nodes.stats/30_discovery.yml @@ -1,5 +1,8 @@ --- "Discovery stats": + - skip: + version: " - 6.0.99" + reason: "published_cluster_states_received arrived in 6.1.0" - do: cluster.state: {} @@ -15,6 +18,11 @@ - is_true: nodes.$master.name - is_false: nodes.$master.jvm - is_true: nodes.$master.discovery + - is_true: nodes.$master.discovery.cluster_state_queue + - is_true: nodes.$master.discovery.published_cluster_states + - gte: { 
nodes.$master.discovery.published_cluster_states.full_states: 0 } + - gte: { nodes.$master.discovery.published_cluster_states.incompatible_diffs: 0 } + - gte: { nodes.$master.discovery.published_cluster_states.compatible_diffs: 0 } - is_true: nodes.$master.roles - do: @@ -26,4 +34,9 @@ - is_false: nodes.$master.name - is_false: nodes.$master.jvm - is_true: nodes.$master.discovery + - is_true: nodes.$master.discovery.cluster_state_queue + - is_true: nodes.$master.discovery.published_cluster_states + - gte: { nodes.$master.discovery.published_cluster_states.full_states: 0 } + - gte: { nodes.$master.discovery.published_cluster_states.incompatible_diffs: 0 } + - gte: { nodes.$master.discovery.published_cluster_states.compatible_diffs: 0 } - is_false: nodes.$master.roles diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/scroll/12_slices.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/scroll/12_slices.yml index ac66af0095e2d..4acc4d132327e 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/scroll/12_slices.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/scroll/12_slices.yml @@ -103,8 +103,12 @@ setup: --- "Sliced scroll with invalid arguments": + - skip: + version: " - 6.99.99" + reason: Prior versions return 500 rather than 404 + - do: - catch: /query_phase_execution_exception.*The number of slices.*index.max_slices_per_scroll/ + catch: bad_request search: index: test_sliced_scroll size: 1 diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/scroll/20_keep_alive.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/scroll/20_keep_alive.yml new file mode 100644 index 0000000000000..8577835502a6d --- /dev/null +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/scroll/20_keep_alive.yml @@ -0,0 +1,70 @@ +--- + teardown: + + - do: + cluster.put_settings: + body: + transient: + search.max_keep_alive: null + search.default_keep_alive: null + +--- +"Max keep alive": + - skip: + version: " - 6.99.99" + reason: search.max_keep_alive was added in 7.0.0 + + - do: + index: + index: test_scroll + type: test + id: 1 + body: { foo: 1 } + + - do: + index: + index: test_scroll + type: test + id: 2 + body: { foo: 1 } + + - do: + indices.refresh: {} + + - do: + cluster.put_settings: + body: + transient: + search.default_keep_alive: "1m" + search.max_keep_alive: "1m" + + - do: + catch: /.*Keep alive for scroll.*is too large.*/ + search: + index: test_scroll + size: 1 + scroll: 2m + sort: foo + body: + query: + match_all: {} + + - do: + search: + index: test_scroll + size: 1 + scroll: 1m + sort: foo + body: + query: + match_all: {} + + - set: {_scroll_id: scroll_id} + - match: {hits.total: 2 } + - length: {hits.hits: 1 } + + - do: + catch: /.*Keep alive for scroll.*is too large.*/ + scroll: + scroll_id: $scroll_id + scroll: 3m diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/100_avg_metric.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/100_avg_metric.yml new file mode 100644 index 0000000000000..a17bdade6560d --- /dev/null +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/100_avg_metric.yml @@ -0,0 +1,176 @@ +setup: + - do: + indices.create: + index: test_1 + body: + settings: + number_of_replicas: 0 + mappings: + doc: + properties: + int_field: + type : integer + double_field: + type : double + string_field: + type: keyword + + - do: + bulk: + refresh: true + body: + - index: + _index: test_1 + _type: doc + _id: 1 + - int_field: 1 + double_field: 1.0 
+ string_field: foo + - index: + _index: test_1 + _type: doc + _id: 2 + - int_field: 51 + double_field: 51.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 3 + - int_field: 101 + double_field: 101.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 4 + - int_field: 151 + double_field: 151.0 + string_field: foo + +--- +"Basic test": + + - do: + search: + body: + aggs: + the_int_avg: + avg: + field: int_field + the_double_avg: + avg: + field: double_field + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: { aggregations.the_int_avg.value: 76.0 } + - match: { aggregations.the_double_avg.value: 76.0 } + +--- +"Only aggs test": + + - do: + search: + body: + size: 0 + aggs: + the_int_avg: + avg: + field: int_field + the_double_avg: + avg: + field: double_field + + - match: { hits.total: 4 } + - length: { hits.hits: 0 } + - match: { aggregations.the_int_avg.value: 76.0 } + - match: { aggregations.the_double_avg.value: 76.0 } + +--- +"Filtered test": + + - do: + search: + body: + query: + constant_score: + filter: + range: + int_field: + gte: 50 + aggs: + the_int_avg: + avg: + field: int_field + the_double_avg: + avg: + field: double_field + + - match: { hits.total: 3 } + - length: { hits.hits: 3 } + - match: { aggregations.the_int_avg.value: 101.0 } + - match: { aggregations.the_double_avg.value: 101.0 } + + +--- +"Missing field with missing param": + + - do: + search: + body: + aggs: + the_missing_avg: + avg: + field: foo + missing: 1 + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: { aggregations.the_missing_avg.value: 1 } + +--- +"Missing field without missing param": + + - do: + search: + body: + aggs: + the_missing_avg: + avg: + field: foo + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - is_false: aggregations.the_missing_avg.value + +--- +"Metadata test": + + - do: + search: + body: + aggs: + the_int_avg: + meta: + foo: bar + avg: + field: int_field + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: { aggregations.the_int_avg.value: 76.0 } + - match: { aggregations.the_int_avg.meta.foo: "bar" } + +--- +"Aggregating wrong datatype test": + + - do: + catch: bad_request + search: + body: + aggs: + the_string_avg: + avg: + field: string_field + diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/110_max_metric.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/110_max_metric.yml new file mode 100644 index 0000000000000..30b0bafe3b031 --- /dev/null +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/110_max_metric.yml @@ -0,0 +1,175 @@ +setup: + - do: + indices.create: + index: test_1 + body: + settings: + number_of_replicas: 0 + mappings: + doc: + properties: + int_field: + type : integer + double_field: + type : double + string_field: + type: keyword + + - do: + bulk: + refresh: true + body: + - index: + _index: test_1 + _type: doc + _id: 1 + - int_field: 1 + double_field: 1.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 2 + - int_field: 51 + double_field: 51.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 3 + - int_field: 101 + double_field: 101.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 4 + - int_field: 151 + double_field: 151.0 + string_field: foo + +--- +"Basic test": + + - do: + search: + body: + aggs: + the_int_max: + max: + field: int_field + the_double_max: + max: + field: double_field + + - match: { hits.total: 4 } + - 
length: { hits.hits: 4 } + - match: { aggregations.the_int_max.value: 151.0 } + - match: { aggregations.the_double_max.value: 151.0 } + +--- +"Only aggs test": + + - do: + search: + body: + size: 0 + aggs: + the_int_max: + max: + field: int_field + the_double_max: + max: + field: double_field + + - match: { hits.total: 4 } + - length: { hits.hits: 0 } + - match: { aggregations.the_int_max.value: 151.0 } + - match: { aggregations.the_double_max.value: 151.0 } + +--- +"Filtered test": + + - do: + search: + body: + query: + constant_score: + filter: + range: + int_field: + lte: 60 + aggs: + the_int_max: + max: + field: int_field + the_double_max: + max: + field: double_field + + - match: { hits.total: 2 } + - length: { hits.hits: 2 } + - match: { aggregations.the_int_max.value: 51.0 } + - match: { aggregations.the_double_max.value: 51.0 } + + +--- +"Missing field with missing param": + + - do: + search: + body: + aggs: + the_missing_max: + max: + field: foo + missing: 1 + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: { aggregations.the_missing_max.value: 1 } + +--- +"Missing field without missing param": + + - do: + search: + body: + aggs: + the_missing_max: + max: + field: foo + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - is_false: aggregations.the_missing_max.value + +--- +"Metadata test": + + - do: + search: + body: + aggs: + the_int_max: + meta: + foo: bar + max: + field: int_field + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: { aggregations.the_int_max.value: 151.0 } + - match: { aggregations.the_int_max.meta.foo: "bar" } + +--- +"Aggregating wrong datatype test": + + - do: + catch: bad_request + search: + body: + aggs: + the_string_avg: + avg: + field: string_field diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/120_min_metric.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/120_min_metric.yml new file mode 100644 index 0000000000000..f56719dfe6e54 --- /dev/null +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/120_min_metric.yml @@ -0,0 +1,176 @@ +setup: + - do: + indices.create: + index: test_1 + body: + settings: + number_of_replicas: 0 + mappings: + doc: + properties: + int_field: + type : integer + double_field: + type : double + string_field: + type: keyword + + - do: + bulk: + refresh: true + body: + - index: + _index: test_1 + _type: doc + _id: 1 + - int_field: 1 + double_field: 1.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 2 + - int_field: 51 + double_field: 51.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 3 + - int_field: 101 + double_field: 101.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 4 + - int_field: 151 + double_field: 151.0 + string_field: foo + +--- +"Basic test": + + - do: + search: + body: + aggs: + the_int_min: + min: + field: int_field + the_double_min: + min: + field: double_field + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: { aggregations.the_int_min.value: 1.0 } + - match: { aggregations.the_double_min.value: 1.0 } + +--- +"Only aggs test": + + - do: + search: + body: + size: 0 + aggs: + the_int_min: + min: + field: int_field + the_double_min: + min: + field: double_field + + - match: { hits.total: 4 } + - length: { hits.hits: 0 } + - match: { aggregations.the_int_min.value: 1.0 } + - match: { aggregations.the_double_min.value: 1.0 } + +--- +"Filtered test": + + - do: + search: + body: + query: + 
constant_score: + filter: + range: + int_field: + gte: 50 + aggs: + the_int_min: + min: + field: int_field + the_double_min: + min: + field: double_field + + - match: { hits.total: 3 } + - length: { hits.hits: 3 } + - match: { aggregations.the_int_min.value: 51.0 } + - match: { aggregations.the_double_min.value: 51.0 } + + +--- +"Missing field with missing param": + + - do: + search: + body: + aggs: + the_missing_min: + min: + field: foo + missing: 1 + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: { aggregations.the_missing_min.value: 1.0 } + +--- +"Missing field without missing param": + + - do: + search: + body: + aggs: + the_missing_min: + min: + field: foo + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - is_false: aggregations.the_missing_min.value + +--- +"Metadata test": + + - do: + search: + body: + aggs: + the_int_min: + meta: + foo: bar + min: + field: int_field + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: { aggregations.the_int_min.value: 1.0 } + - match: { aggregations.the_int_min.meta.foo: "bar" } + +--- +"Aggregating wrong datatype test": + + - do: + catch: bad_request + search: + body: + aggs: + the_string_min: + min: + field: string_field + diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/130_sum_metric.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/130_sum_metric.yml new file mode 100644 index 0000000000000..9fbb15fdab3df --- /dev/null +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/130_sum_metric.yml @@ -0,0 +1,176 @@ +setup: + - do: + indices.create: + index: test_1 + body: + settings: + number_of_replicas: 0 + mappings: + doc: + properties: + int_field: + type : integer + double_field: + type : double + string_field: + type: keyword + + - do: + bulk: + refresh: true + body: + - index: + _index: test_1 + _type: doc + _id: 1 + - int_field: 1 + double_field: 1.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 2 + - int_field: 51 + double_field: 51.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 3 + - int_field: 101 + double_field: 101.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 4 + - int_field: 151 + double_field: 151.0 + string_field: foo + +--- +"Basic test": + + - do: + search: + body: + aggs: + the_int_sum: + sum: + field: int_field + the_double_sum: + sum: + field: double_field + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: { aggregations.the_int_sum.value: 304.0 } + - match: { aggregations.the_double_sum.value: 304.0 } + +--- +"Only aggs test": + + - do: + search: + body: + size: 0 + aggs: + the_int_sum: + sum: + field: int_field + the_double_sum: + sum: + field: double_field + + - match: { hits.total: 4 } + - length: { hits.hits: 0 } + - match: { aggregations.the_int_sum.value: 304.0 } + - match: { aggregations.the_double_sum.value: 304.0 } + +--- +"Filtered test": + + - do: + search: + body: + query: + constant_score: + filter: + range: + int_field: + gte: 50 + aggs: + the_int_sum: + sum: + field: int_field + the_double_sum: + sum: + field: double_field + + - match: { hits.total: 3 } + - length: { hits.hits: 3 } + - match: { aggregations.the_int_sum.value: 303.0 } + - match: { aggregations.the_double_sum.value: 303.0 } + + +--- +"Missing field with missing param": + + - do: + search: + body: + aggs: + the_missing_sum: + sum: + field: foo + missing: 1 + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: 
{ aggregations.the_missing_sum.value: 4.0 } + +--- +"Missing field without missing param": + + - do: + search: + body: + aggs: + the_missing_sum: + sum: + field: foo + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: { aggregations.the_missing_sum.value: 0.0 } + +--- +"Metadata test": + + - do: + search: + body: + aggs: + the_int_sum: + meta: + foo: bar + sum: + field: int_field + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: { aggregations.the_int_sum.value: 304.0 } + - match: { aggregations.the_int_sum.meta.foo: "bar" } + +--- +"Aggregating wrong datatype test": + + - do: + catch: bad_request + search: + body: + aggs: + the_string_sum: + sum: + field: string_field + diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/140_value_count_metric.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/140_value_count_metric.yml new file mode 100644 index 0000000000000..088f81a0db300 --- /dev/null +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/140_value_count_metric.yml @@ -0,0 +1,175 @@ +setup: + - do: + indices.create: + index: test_1 + body: + settings: + number_of_replicas: 0 + mappings: + doc: + properties: + int_field: + type : integer + double_field: + type : double + string_field: + type: keyword + + - do: + bulk: + refresh: true + body: + - index: + _index: test_1 + _type: doc + _id: 1 + - int_field: 1 + double_field: 1.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 2 + - int_field: 51 + double_field: 51.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 3 + - int_field: 101 + double_field: 101.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 4 + - int_field: 151 + double_field: 151.0 + string_field: foo + +--- +"Basic test": + + - do: + search: + body: + aggs: + the_int_value_count: + value_count: + field: int_field + the_double_value_count: + value_count: + field: double_field + the_string_value_count: + value_count: + field: string_field + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: { aggregations.the_int_value_count.value: 4 } + - match: { aggregations.the_double_value_count.value: 4 } + - match: { aggregations.the_string_value_count.value: 4 } + +--- +"Only aggs test": + + - do: + search: + body: + size: 0 + aggs: + the_int_value_count: + value_count: + field: int_field + the_double_value_count: + value_count: + field: double_field + the_string_value_count: + value_count: + field: string_field + + - match: { hits.total: 4 } + - length: { hits.hits: 0 } + - match: { aggregations.the_int_value_count.value: 4 } + - match: { aggregations.the_double_value_count.value: 4 } + - match: { aggregations.the_string_value_count.value: 4 } + +--- +"Filtered test": + + - do: + search: + body: + query: + constant_score: + filter: + range: + int_field: + gte: 50 + aggs: + the_int_value_count: + value_count: + field: int_field + the_double_value_count: + value_count: + field: double_field + the_string_value_count: + value_count: + field: string_field + + - match: { hits.total: 3 } + - length: { hits.hits: 3 } + - match: { aggregations.the_int_value_count.value: 3 } + - match: { aggregations.the_double_value_count.value: 3 } + - match: { aggregations.the_string_value_count.value: 3 } + + +--- +"Missing field with missing param": + + - do: + search: + body: + aggs: + the_missing_value_count: + value_count: + field: foo + missing: 1 + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } 
+ - match: { aggregations.the_missing_value_count.value: 4 } + +--- +"Missing field without missing param": + + - do: + search: + body: + aggs: + the_missing_value_count: + value_count: + field: foo + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - is_false: aggregations.the_missing_value_count.value + +--- +"Metadata test": + + - do: + search: + body: + aggs: + the_int_value_count: + meta: + foo: bar + value_count: + field: int_field + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: { aggregations.the_int_value_count.value: 4 } + - match: { aggregations.the_int_value_count.meta.foo: "bar" } diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/150_stats_metric.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/150_stats_metric.yml new file mode 100644 index 0000000000000..a27f722509875 --- /dev/null +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/150_stats_metric.yml @@ -0,0 +1,196 @@ +setup: + - do: + indices.create: + index: test_1 + body: + settings: + number_of_replicas: 0 + mappings: + doc: + properties: + int_field: + type : integer + double_field: + type : double + string_field: + type: keyword + + - do: + bulk: + refresh: true + body: + - index: + _index: test_1 + _type: doc + _id: 1 + - int_field: 1 + double_field: 1.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 2 + - int_field: 51 + double_field: 51.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 3 + - int_field: 101 + double_field: 101.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 4 + - int_field: 151 + double_field: 151.0 + string_field: foo + +--- +"Basic test": + + - do: + search: + body: + aggs: + the_int_stats: + stats: + field: int_field + the_double_stats: + stats: + field: double_field + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: { aggregations.the_int_stats.count: 4 } + - match: { aggregations.the_int_stats.min: 1.0 } + - match: { aggregations.the_int_stats.max: 151.0 } + - match: { aggregations.the_int_stats.avg: 76.0 } + - match: { aggregations.the_int_stats.sum: 304.0 } + - match: { aggregations.the_double_stats.count: 4 } + - match: { aggregations.the_double_stats.min: 1.0 } + - match: { aggregations.the_double_stats.max: 151.0 } + - match: { aggregations.the_double_stats.avg: 76.0 } + - match: { aggregations.the_double_stats.sum: 304.0 } + +--- +"Only aggs test": + + - do: + search: + body: + size: 0 + aggs: + the_int_stats: + stats: + field: int_field + the_double_stats: + stats: + field: double_field + + - match: { hits.total: 4 } + - length: { hits.hits: 0 } + - match: { aggregations.the_int_stats.count: 4 } + - match: { aggregations.the_int_stats.min: 1.0 } + - match: { aggregations.the_int_stats.max: 151.0 } + - match: { aggregations.the_int_stats.avg: 76.0 } + - match: { aggregations.the_int_stats.sum: 304.0 } + - match: { aggregations.the_double_stats.count: 4 } + - match: { aggregations.the_double_stats.min: 1.0 } + - match: { aggregations.the_double_stats.max: 151.0 } + - match: { aggregations.the_double_stats.avg: 76.0 } + - match: { aggregations.the_double_stats.sum: 304.0 } + +--- +"Filtered test": + + - do: + search: + body: + query: + constant_score: + filter: + range: + int_field: + gte: 50 + aggs: + the_int_stats: + stats: + field: int_field + the_double_stats: + stats: + field: double_field + + - match: { hits.total: 3 } + - length: { hits.hits: 3 } + - match: { 
aggregations.the_int_stats.count: 3 } + - match: { aggregations.the_int_stats.min: 51.0 } + - match: { aggregations.the_int_stats.max: 151.0 } + - match: { aggregations.the_int_stats.avg: 101.0 } + - match: { aggregations.the_int_stats.sum: 303.0 } + - match: { aggregations.the_double_stats.count: 3 } + - match: { aggregations.the_double_stats.min: 51.0 } + - match: { aggregations.the_double_stats.max: 151.0 } + - match: { aggregations.the_double_stats.avg: 101.0 } + - match: { aggregations.the_double_stats.sum: 303.0 } + + +--- +"Missing field with missing param": + + - do: + search: + body: + aggs: + the_missing_stats: + stats: + field: foo + missing: 1 + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: { aggregations.the_missing_stats.count: 4 } + - match: { aggregations.the_missing_stats.min: 1.0 } + - match: { aggregations.the_missing_stats.max: 1.0 } + - match: { aggregations.the_missing_stats.avg: 1.0 } + - match: { aggregations.the_missing_stats.sum: 4.0 } + +--- +"Missing field without missing param": + + - do: + search: + body: + aggs: + the_missing_stats: + stats: + field: foo + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - is_false: aggregations.the_missing_stats.value + +--- +"Metadata test": + + - do: + search: + body: + aggs: + the_int_stats: + meta: + foo: bar + stats: + field: int_field + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: { aggregations.the_int_stats.count: 4 } + - match: { aggregations.the_int_stats.min: 1.0 } + - match: { aggregations.the_int_stats.max: 151.0 } + - match: { aggregations.the_int_stats.avg: 76.0 } + - match: { aggregations.the_int_stats.sum: 304.0 } + + diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/160_extended_stats_metric.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/160_extended_stats_metric.yml new file mode 100644 index 0000000000000..aff30d17de167 --- /dev/null +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/160_extended_stats_metric.yml @@ -0,0 +1,291 @@ +setup: + - do: + indices.create: + index: test_1 + body: + settings: + number_of_replicas: 0 + mappings: + doc: + properties: + int_field: + type : integer + double_field: + type : double + string_field: + type: keyword + + - do: + bulk: + refresh: true + body: + - index: + _index: test_1 + _type: doc + _id: 1 + - int_field: 1 + double_field: 1.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 2 + - int_field: 51 + double_field: 51.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 3 + - int_field: 101 + double_field: 101.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 4 + - int_field: 151 + double_field: 151.0 + string_field: foo + +--- +"Basic test": + + - do: + search: + body: + aggs: + the_int_extended_stats: + extended_stats: + field: int_field + the_double_extended_stats: + extended_stats: + field: double_field + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: { aggregations.the_int_extended_stats.count: 4 } + - match: { aggregations.the_int_extended_stats.min: 1.0 } + - match: { aggregations.the_int_extended_stats.max: 151.0 } + - match: { aggregations.the_int_extended_stats.avg: 76.0 } + - match: { aggregations.the_int_extended_stats.sum: 304.0 } + - match: { aggregations.the_int_extended_stats.sum_of_squares: 35604.0 } + - match: { aggregations.the_int_extended_stats.std_deviation: 55.90169943749474 } + - match: { 
aggregations.the_int_extended_stats.std_deviation_bounds.upper: 187.80339887498948 } + - match: { aggregations.the_int_extended_stats.std_deviation_bounds.lower: -35.80339887498948 } + - match: { aggregations.the_double_extended_stats.count: 4 } + - match: { aggregations.the_double_extended_stats.min: 1.0 } + - match: { aggregations.the_double_extended_stats.max: 151.0 } + - match: { aggregations.the_double_extended_stats.avg: 76.0 } + - match: { aggregations.the_double_extended_stats.sum: 304.0 } + - match: { aggregations.the_double_extended_stats.sum_of_squares: 35604.0 } + - match: { aggregations.the_double_extended_stats.std_deviation: 55.90169943749474 } + - match: { aggregations.the_double_extended_stats.std_deviation_bounds.upper: 187.80339887498948 } + - match: { aggregations.the_double_extended_stats.std_deviation_bounds.lower: -35.80339887498948 } + +--- +"Only aggs test": + + - do: + search: + body: + size: 0 + aggs: + the_int_extended_stats: + extended_stats: + field: int_field + the_double_extended_stats: + extended_stats: + field: double_field + + - match: { hits.total: 4 } + - length: { hits.hits: 0 } + - match: { aggregations.the_int_extended_stats.count: 4 } + - match: { aggregations.the_int_extended_stats.min: 1.0 } + - match: { aggregations.the_int_extended_stats.max: 151.0 } + - match: { aggregations.the_int_extended_stats.avg: 76.0 } + - match: { aggregations.the_int_extended_stats.sum: 304.0 } + - match: { aggregations.the_int_extended_stats.sum_of_squares: 35604.0 } + - match: { aggregations.the_int_extended_stats.std_deviation: 55.90169943749474 } + - match: { aggregations.the_int_extended_stats.variance: 3125.0 } + - match: { aggregations.the_int_extended_stats.std_deviation_bounds.upper: 187.80339887498948 } + - match: { aggregations.the_int_extended_stats.std_deviation_bounds.lower: -35.80339887498948 } + - match: { aggregations.the_double_extended_stats.count: 4 } + - match: { aggregations.the_double_extended_stats.min: 1.0 } + - match: { aggregations.the_double_extended_stats.max: 151.0 } + - match: { aggregations.the_double_extended_stats.avg: 76.0 } + - match: { aggregations.the_double_extended_stats.sum: 304.0 } + - match: { aggregations.the_double_extended_stats.sum_of_squares: 35604.0 } + - match: { aggregations.the_double_extended_stats.std_deviation: 55.90169943749474 } + - match: { aggregations.the_double_extended_stats.variance: 3125.0 } + - match: { aggregations.the_double_extended_stats.std_deviation_bounds.upper: 187.80339887498948 } + - match: { aggregations.the_double_extended_stats.std_deviation_bounds.lower: -35.80339887498948 } + +--- +"Filtered test": + + - do: + search: + body: + query: + constant_score: + filter: + range: + int_field: + gte: 50 + aggs: + the_int_extended_stats: + extended_stats: + field: int_field + the_double_extended_stats: + extended_stats: + field: double_field + + - match: { hits.total: 3 } + - length: { hits.hits: 3 } + - match: { aggregations.the_int_extended_stats.count: 3 } + - match: { aggregations.the_int_extended_stats.min: 51.0 } + - match: { aggregations.the_int_extended_stats.max: 151.0 } + - match: { aggregations.the_int_extended_stats.avg: 101.0 } + - match: { aggregations.the_int_extended_stats.sum: 303.0 } + - match: { aggregations.the_int_extended_stats.sum_of_squares: 35603.0 } + - match: { aggregations.the_int_extended_stats.variance: 1666.6666666666667 } + - match: { aggregations.the_int_extended_stats.std_deviation: 40.824829046386306 } + - match: { 
aggregations.the_int_extended_stats.std_deviation_bounds.upper: 182.6496580927726 } + - match: { aggregations.the_int_extended_stats.std_deviation_bounds.lower: 19.35034190722739 } + - match: { aggregations.the_double_extended_stats.count: 3 } + - match: { aggregations.the_double_extended_stats.min: 51.0 } + - match: { aggregations.the_double_extended_stats.max: 151.0 } + - match: { aggregations.the_double_extended_stats.avg: 101.0 } + - match: { aggregations.the_double_extended_stats.sum: 303.0 } + - match: { aggregations.the_double_extended_stats.sum_of_squares: 35603.0 } + - match: { aggregations.the_double_extended_stats.variance: 1666.6666666666667 } + - match: { aggregations.the_double_extended_stats.std_deviation: 40.824829046386306 } + - match: { aggregations.the_double_extended_stats.std_deviation_bounds.upper: 182.6496580927726 } + - match: { aggregations.the_double_extended_stats.std_deviation_bounds.lower: 19.35034190722739 } + + +--- +"Missing field with missing param": + + - do: + search: + body: + aggs: + the_missing_extended_stats: + extended_stats: + field: foo + missing: 1 + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: { aggregations.the_missing_extended_stats.count: 4 } + - match: { aggregations.the_missing_extended_stats.min: 1.0 } + - match: { aggregations.the_missing_extended_stats.max: 1.0 } + - match: { aggregations.the_missing_extended_stats.avg: 1.0 } + - match: { aggregations.the_missing_extended_stats.sum: 4.0 } + - match: { aggregations.the_missing_extended_stats.sum_of_squares: 4.0 } + - match: { aggregations.the_missing_extended_stats.variance: 0.0 } + - match: { aggregations.the_missing_extended_stats.std_deviation: 0.0 } + - match: { aggregations.the_missing_extended_stats.std_deviation_bounds.upper: 1.0 } + - match: { aggregations.the_missing_extended_stats.std_deviation_bounds.lower: 1.0 } + +--- +"Missing field without missing param": + + - do: + search: + body: + aggs: + the_missing_extended_stats: + extended_stats: + field: foo + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - is_false: aggregations.the_missing_extended_stats.value + +--- +"Metadata test": + + - do: + search: + body: + aggs: + the_int_extended_stats: + meta: + foo: bar + extended_stats: + field: int_field + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: { aggregations.the_int_extended_stats.count: 4 } + - match: { aggregations.the_int_extended_stats.min: 1.0 } + - match: { aggregations.the_int_extended_stats.max: 151.0 } + - match: { aggregations.the_int_extended_stats.avg: 76.0 } + - match: { aggregations.the_int_extended_stats.sum: 304.0 } + - match: { aggregations.the_int_extended_stats.sum_of_squares: 35604.0 } + - match: { aggregations.the_int_extended_stats.std_deviation: 55.90169943749474 } + - match: { aggregations.the_int_extended_stats.std_deviation_bounds.upper: 187.80339887498948 } + - match: { aggregations.the_int_extended_stats.std_deviation_bounds.lower: -35.80339887498948 } + +--- +"Sigma test": + + - do: + search: + body: + aggs: + the_int_extended_stats: + extended_stats: + field: int_field + sigma: 3 + the_double_extended_stats: + extended_stats: + field: double_field + sigma: 3 + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: { aggregations.the_int_extended_stats.count: 4 } + - match: { aggregations.the_int_extended_stats.min: 1.0 } + - match: { aggregations.the_int_extended_stats.max: 151.0 } + - match: { aggregations.the_int_extended_stats.avg: 76.0 } + - match: { 
aggregations.the_int_extended_stats.sum: 304.0 } + - match: { aggregations.the_int_extended_stats.sum_of_squares: 35604.0 } + - match: { aggregations.the_int_extended_stats.std_deviation: 55.90169943749474 } + - match: { aggregations.the_int_extended_stats.std_deviation_bounds.upper: 243.7050983124842 } + - match: { aggregations.the_int_extended_stats.std_deviation_bounds.lower: -91.70509831248421 } + - match: { aggregations.the_double_extended_stats.count: 4 } + - match: { aggregations.the_double_extended_stats.min: 1.0 } + - match: { aggregations.the_double_extended_stats.max: 151.0 } + - match: { aggregations.the_double_extended_stats.avg: 76.0 } + - match: { aggregations.the_double_extended_stats.sum: 304.0 } + - match: { aggregations.the_double_extended_stats.sum_of_squares: 35604.0 } + - match: { aggregations.the_double_extended_stats.std_deviation: 55.90169943749474 } + - match: { aggregations.the_double_extended_stats.std_deviation_bounds.upper: 243.7050983124842 } + - match: { aggregations.the_double_extended_stats.std_deviation_bounds.lower: -91.70509831248421 } + +--- +"Bad sigma test": + + - do: + catch: /\[sigma\] must be greater than or equal to 0. Found \[-1.0\] in \[the_int_extended_stats\]/ + search: + body: + aggs: + the_int_extended_stats: + extended_stats: + field: int_field + sigma: -1 + + - do: + catch: /parsing_exception/ + search: + body: + aggs: + the_int_extended_stats: + extended_stats: + field: int_field + sigma: "foo" diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/170_cardinality_metric.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/170_cardinality_metric.yml new file mode 100644 index 0000000000000..f706c5d864090 --- /dev/null +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/170_cardinality_metric.yml @@ -0,0 +1,213 @@ +setup: + - do: + indices.create: + index: test_1 + body: + settings: + number_of_replicas: 0 + mappings: + doc: + properties: + int_field: + type : integer + double_field: + type : double + string_field: + type: keyword + + - do: + bulk: + refresh: true + body: + - index: + _index: test_1 + _type: doc + _id: 1 + - int_field: 1 + double_field: 1.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 2 + - int_field: 51 + double_field: 51.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 3 + - int_field: 101 + double_field: 101.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 4 + - int_field: 151 + double_field: 151.0 + string_field: foo + +--- +"Basic test": + + - do: + search: + body: + aggs: + distinct_int: + cardinality: + field: int_field + distinct_double: + cardinality: + field: double_field + distinct_string: + cardinality: + field: string_field + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: { aggregations.distinct_int.value: 4 } + - match: { aggregations.distinct_double.value: 4 } + - match: { aggregations.distinct_string.value: 1 } + + - do: + search: + body: + aggs: + distinct_int: + cardinality: + field: int_field + precision_threshold: 100 + distinct_double: + cardinality: + field: double_field + precision_threshold: 100 + distinct_string: + cardinality: + field: string_field + precision_threshold: 100 + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: { aggregations.distinct_int.value: 4 } + - match: { aggregations.distinct_double.value: 4 } + - match: { aggregations.distinct_string.value: 1 } + +--- +"Only aggs test": + + - do: + search: + 
body: + size: 0 + aggs: + distinct_int: + cardinality: + field: int_field + distinct_double: + cardinality: + field: double_field + distinct_string: + cardinality: + field: string_field + + - match: { hits.total: 4 } + - length: { hits.hits: 0 } + - match: { aggregations.distinct_int.value: 4 } + - match: { aggregations.distinct_double.value: 4 } + - match: { aggregations.distinct_string.value: 1 } + +--- +"Filtered test": + + - do: + search: + body: + query: + constant_score: + filter: + range: + int_field: + gte: 50 + aggs: + distinct_int: + cardinality: + field: int_field + distinct_double: + cardinality: + field: double_field + distinct_string: + cardinality: + field: string_field + + - match: { hits.total: 3 } + - length: { hits.hits: 3 } + - match: { aggregations.distinct_int.value: 3 } + - match: { aggregations.distinct_double.value: 3 } + - match: { aggregations.distinct_string.value: 1 } + + +--- +"Missing field with missing param": + + - do: + search: + body: + aggs: + distinct_missing: + cardinality: + field: missing_field + missing: "foo" + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: { aggregations.distinct_missing.value: 1 } + +--- +"Missing field without missing param": + + - do: + search: + body: + aggs: + distinct_missing: + cardinality: + field: missing_field + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - is_false: aggregations.distinct_missing.value + +--- +"Metadata test": + + - do: + search: + body: + aggs: + distinct_missing: + meta: + foo: bar + cardinality: + field: int_field + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: { aggregations.distinct_missing.value: 4 } + - match: { aggregations.distinct_missing.meta.foo: "bar" } + +--- +"Invalid Precision test": + + - do: + catch: /\[precisionThreshold\] must be greater than or equal to 0. 
Found \[-1\] in \[distinct_int\]/ + search: + body: + aggs: + distinct_int: + cardinality: + field: int_field + precision_threshold: -1 + + diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/180_percentiles_tdigest_metric.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/180_percentiles_tdigest_metric.yml new file mode 100644 index 0000000000000..4301e6824e16a --- /dev/null +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/180_percentiles_tdigest_metric.yml @@ -0,0 +1,375 @@ +setup: + - do: + indices.create: + index: test_1 + body: + settings: + number_of_replicas: 0 + mappings: + doc: + properties: + int_field: + type : integer + double_field: + type : double + string_field: + type: keyword + + - do: + bulk: + refresh: true + body: + - index: + _index: test_1 + _type: doc + _id: 1 + - int_field: 1 + double_field: 1.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 2 + - int_field: 51 + double_field: 51.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 3 + - int_field: 101 + double_field: 101.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 4 + - int_field: 151 + double_field: 151.0 + string_field: foo + +--- +"Basic test": + + - do: + search: + body: + aggs: + percentiles_int: + percentiles: + field: int_field + percentiles_double: + percentiles: + field: double_field + + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: + aggregations.percentiles_int.values: + "1.0": 2.5 + "5.0": 8.500000000000002 + "25.0": 38.5 + "50.0": 76.0 + "75.0": 113.5 + "95.0": 143.49999999999997 + "99.0": 149.5 + - match: + aggregations.percentiles_double.values: + "1.0": 2.5 + "5.0": 8.500000000000002 + "25.0": 38.5 + "50.0": 76.0 + "75.0": 113.5 + "95.0": 143.49999999999997 + "99.0": 149.5 + + - do: + search: + body: + aggs: + percentiles_int: + percentiles: + field: int_field + tdigest: + compression: 200 + percentiles_double: + percentiles: + field: double_field + tdigest: + compression: 200 + + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: + aggregations.percentiles_int.values: + "1.0": 2.5 + "5.0": 8.500000000000002 + "25.0": 38.5 + "50.0": 76.0 + "75.0": 113.5 + "95.0": 143.49999999999997 + "99.0": 149.5 + - match: + aggregations.percentiles_double.values: + "1.0": 2.5 + "5.0": 8.500000000000002 + "25.0": 38.5 + "50.0": 76.0 + "75.0": 113.5 + "95.0": 143.49999999999997 + "99.0": 149.5 + +--- +"Only aggs test": + + - do: + search: + body: + size: 0 + aggs: + percentiles_int: + percentiles: + field: int_field + percentiles_double: + percentiles: + field: double_field + + - match: { hits.total: 4 } + - length: { hits.hits: 0 } + - match: + aggregations.percentiles_int.values: + "1.0": 2.5 + "5.0": 8.500000000000002 + "25.0": 38.5 + "50.0": 76.0 + "75.0": 113.5 + "95.0": 143.49999999999997 + "99.0": 149.5 + - match: + aggregations.percentiles_double.values: + "1.0": 2.5 + "5.0": 8.500000000000002 + "25.0": 38.5 + "50.0": 76.0 + "75.0": 113.5 + "95.0": 143.49999999999997 + "99.0": 149.5 + +--- +"Filtered test": + + - do: + search: + body: + query: + constant_score: + filter: + range: + int_field: + gte: 50 + aggs: + percentiles_int: + percentiles: + field: int_field + percentiles_double: + percentiles: + field: double_field + + - match: { hits.total: 3 } + - length: { hits.hits: 3 } + - match: + aggregations.percentiles_int.values: + "1.0": 52.0 + "5.0": 56.0 + "25.0": 76.0 + "50.0": 101.0 + "75.0": 126.0 + "95.0": 146.0 + 
"99.0": 150.0 + - match: + aggregations.percentiles_double.values: + "1.0": 52.0 + "5.0": 56.0 + "25.0": 76.0 + "50.0": 101.0 + "75.0": 126.0 + "95.0": 146.0 + "99.0": 150.0 + +--- +"Missing field with missing param": + + - do: + search: + body: + aggs: + percentiles_missing: + percentiles: + field: missing_field + missing: 1.0 + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: + aggregations.percentiles_missing.values: + "1.0": 1.0 + "5.0": 1.0 + "25.0": 1.0 + "50.0": 1.0 + "75.0": 1.0 + "95.0": 1.0 + "99.0": 1.0 + +--- +"Missing field without missing param": + + - do: + search: + body: + aggs: + percentiles_missing: + percentiles: + field: missing_field + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - is_false: aggregations.percentiles_missing.value + +--- +"Metadata test": + + - do: + search: + body: + aggs: + percentiles_int: + meta: + foo: bar + percentiles: + field: int_field + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: { aggregations.percentiles_int.meta.foo: "bar" } + - match: + aggregations.percentiles_int.values: + "1.0": 2.5 + "5.0": 8.500000000000002 + "25.0": 38.5 + "50.0": 76.0 + "75.0": 113.5 + "95.0": 143.49999999999997 + "99.0": 149.5 + +--- +"Invalid params test": + + - do: + catch: /\[compression\] must be greater than or equal to 0. Found \[-1.0\] in \[percentiles_int\]/ + search: + body: + aggs: + percentiles_int: + percentiles: + field: int_field + tdigest: + compression: -1 + + - do: + catch: /\[percents\] must not be empty/ + search: + body: + aggs: + percentiles_int: + percentiles: + field: int_field + percents: [] + + - do: + catch: bad_request + search: + body: + aggs: + percentiles_int: + percentiles: + field: int_field + percents: null + + - do: + catch: bad_request + search: + body: + aggs: + percentiles_int: + percentiles: + field: int_field + percents: ["foo"] + + - do: + catch: bad_request + search: + body: + aggs: + percentiles_string: + percentiles: + field: string_field + +--- +"Explicit Percents test": + + - do: + search: + body: + aggs: + percentiles_int: + percentiles: + field: int_field + percents: [5.0, 25.0, 50.0] + percentiles_double: + percentiles: + field: double_field + percents: [5.0, 25.0, 50.0] + + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: + aggregations.percentiles_int.values: + 5.0: 8.500000000000002 + 25.0: 38.5 + 50.0: 76.0 + - match: + aggregations.percentiles_double.values: + 5.0: 8.500000000000002 + 25.0: 38.5 + 50.0: 76.0 + +--- +"Non-keyed test": + + - do: + search: + body: + aggs: + percentiles_int: + percentiles: + field: int_field + percents: [5.0, 25.0, 50.0] + keyed: false + + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: + aggregations.percentiles_int.values: + - key: 5.0 + value: 8.500000000000002 + - key: 25.0 + value: 38.5 + - key: 50.0 + value: 76.0 + + + diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/190_percentiles_hdr_metric.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/190_percentiles_hdr_metric.yml new file mode 100644 index 0000000000000..426faae503517 --- /dev/null +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/190_percentiles_hdr_metric.yml @@ -0,0 +1,447 @@ +setup: + - do: + indices.create: + index: test_1 + body: + settings: + number_of_replicas: 0 + mappings: + doc: + properties: + int_field: + type : integer + double_field: + type : double + string_field: + type: keyword + + - do: + bulk: + refresh: true + 
body: + - index: + _index: test_1 + _type: doc + _id: 1 + - int_field: 1 + double_field: 1.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 2 + - int_field: 51 + double_field: 51.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 3 + - int_field: 101 + double_field: 101.0 + string_field: foo + - index: + _index: test_1 + _type: doc + _id: 4 + - int_field: 151 + double_field: 151.0 + string_field: foo + +--- +"Basic test": + + - do: + search: + body: + aggs: + percentiles_int: + percentiles: + field: int_field + hdr: {} + percentiles_double: + percentiles: + field: double_field + hdr: {} + + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: + aggregations.percentiles_int.values: + "1.0": 1.0 + "5.0": 1.0 + "25.0": 1.0 + "50.0": 51.0302734375 + "75.0": 101.0615234375 + "95.0": 151.1240234375 + "99.0": 151.1240234375 + - match: + aggregations.percentiles_double.values: + "1.0": 1.0 + "5.0": 1.0 + "25.0": 1.0 + "50.0": 51.0302734375 + "75.0": 101.0615234375 + "95.0": 151.1240234375 + "99.0": 151.1240234375 + + - do: + search: + body: + aggs: + percentiles_int: + percentiles: + field: int_field + hdr: + number_of_significant_value_digits: 3 + percentiles_double: + percentiles: + field: double_field + hdr: + number_of_significant_value_digits: 3 + + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: + aggregations.percentiles_int.values: + "1.0": 1.0 + "5.0": 1.0 + "25.0": 1.0 + "50.0": 51.0302734375 + "75.0": 101.0615234375 + "95.0": 151.1240234375 + "99.0": 151.1240234375 + - match: + aggregations.percentiles_double.values: + "1.0": 1.0 + "5.0": 1.0 + "25.0": 1.0 + "50.0": 51.0302734375 + "75.0": 101.0615234375 + "95.0": 151.1240234375 + "99.0": 151.1240234375 + +--- +"Only aggs test": + + - do: + search: + body: + size: 0 + aggs: + percentiles_int: + percentiles: + field: int_field + hdr: {} + percentiles_double: + percentiles: + field: double_field + hdr: {} + + - match: { hits.total: 4 } + - length: { hits.hits: 0 } + - match: + aggregations.percentiles_int.values: + "1.0": 1.0 + "5.0": 1.0 + "25.0": 1.0 + "50.0": 51.0302734375 + "75.0": 101.0615234375 + "95.0": 151.1240234375 + "99.0": 151.1240234375 + - match: + aggregations.percentiles_double.values: + "1.0": 1.0 + "5.0": 1.0 + "25.0": 1.0 + "50.0": 51.0302734375 + "75.0": 101.0615234375 + "95.0": 151.1240234375 + "99.0": 151.1240234375 + +--- +"Filtered test": + + - do: + search: + body: + query: + constant_score: + filter: + range: + int_field: + gte: 50 + aggs: + percentiles_int: + percentiles: + field: int_field + hdr: {} + percentiles_double: + percentiles: + field: double_field + hdr: {} + + - match: { hits.total: 3 } + - length: { hits.hits: 3 } + - match: + aggregations.percentiles_int.values: + "1.0": 51.0 + "5.0": 51.0 + "25.0": 51.0 + "50.0": 101.03125 + "75.0": 101.03125 + "95.0": 151.09375 + "99.0": 151.09375 + - match: + aggregations.percentiles_double.values: + "1.0": 51.0 + "5.0": 51.0 + "25.0": 51.0 + "50.0": 101.03125 + "75.0": 101.03125 + "95.0": 151.09375 + "99.0": 151.09375 + +--- +"Missing field with missing param": + + - do: + search: + body: + aggs: + percentiles_missing: + percentiles: + field: missing_field + missing: 1.0 + hdr: {} + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: + aggregations.percentiles_missing.values: + "1.0": 1.0 + "5.0": 1.0 + "25.0": 1.0 + "50.0": 1.0 + "75.0": 1.0 + "95.0": 1.0 + "99.0": 1.0 + +--- +"Missing field without missing param": + + - do: + search: + body: + aggs: + 
percentiles_missing: + percentiles: + field: missing_field + hdr: {} + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - is_false: aggregations.percentiles_missing.value + +--- +"Metadata test": + + - do: + search: + body: + aggs: + percentiles_int: + meta: + foo: bar + percentiles: + field: int_field + hdr: {} + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: { aggregations.percentiles_int.meta.foo: "bar" } + - match: + aggregations.percentiles_int.values: + "1.0": 1.0 + "5.0": 1.0 + "25.0": 1.0 + "50.0": 51.0302734375 + "75.0": 101.0615234375 + "95.0": 151.1240234375 + "99.0": 151.1240234375 + +--- +"Invalid params test": + + - do: + catch: /\[numberOfSignificantValueDigits\] must be between 0 and 5/ + search: + body: + aggs: + percentiles_int: + percentiles: + field: int_field + hdr: + number_of_significant_value_digits: -1 + + - do: + catch: /\[numberOfSignificantValueDigits\] must be between 0 and 5/ + search: + body: + aggs: + percentiles_int: + percentiles: + field: int_field + hdr: + number_of_significant_value_digits: 10 + + - do: + catch: bad_request + search: + body: + aggs: + percentiles_int: + percentiles: + field: int_field + hdr: + number_of_significant_value_digits: null + + - do: + catch: /\[percents\] must not be empty/ + search: + body: + aggs: + percentiles_int: + percentiles: + field: int_field + hdr: {} + percents: [] + + - do: + catch: bad_request + search: + body: + aggs: + percentiles_int: + percentiles: + field: int_field + hdr: {} + percents: null + + - do: + catch: bad_request + search: + body: + aggs: + percentiles_int: + percentiles: + field: int_field + hdr: {} + percents: ["foo"] + + - do: + catch: bad_request + search: + body: + aggs: + percentiles_string: + percentiles: + field: string_field + hdr: {} + +--- +"Explicit Percents test": + + - do: + search: + body: + aggs: + percentiles_int: + percentiles: + field: int_field + percents: [5.0, 25.0, 50.0] + hdr: {} + percentiles_double: + percentiles: + field: double_field + percents: [5.0, 25.0, 50.0] + hdr: {} + + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: + aggregations.percentiles_int.values: + "5.0": 1.0 + "25.0": 1.0 + "50.0": 51.0302734375 + - match: + aggregations.percentiles_double.values: + "5.0": 1.0 + "25.0": 1.0 + "50.0": 51.0302734375 + +--- +"Non-keyed test": + + - do: + search: + body: + aggs: + percentiles_int: + percentiles: + field: int_field + percents: [5.0, 25.0, 50.0] + keyed: false + hdr: {} + + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: + aggregations.percentiles_int.values: + - key: 5.0 + value: 1.0 + - key: 25.0 + value: 1.0 + - key: 50.0 + value: 51.0302734375 + + +--- +"Negative values test": + + - do: + index: + index: test_1 + type: doc + id: 5 + refresh: true + body: { int_field: -10 } + + - do: + search: + body: + aggs: + percentiles_int: + percentiles: + field: int_field + hdr: {} + + + - match: { hits.total: 4 } + - length: { hits.hits: 4 } + - match: + aggregations.percentiles_int.values: + "1.0": 1.0 + "5.0": 1.0 + "25.0": 1.0 + "50.0": 51.0302734375 + "75.0": 101.0615234375 + "95.0": 151.1240234375 + "99.0": 151.1240234375 + - match: { _shards.failures.0.reason.type: array_index_out_of_bounds_exception } + + diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/20_terms.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/20_terms.yml index 9cc30bbcd1b45..5ac79a898816b 100644 --- 
a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/20_terms.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/20_terms.yml @@ -20,9 +20,6 @@ setup: type: double number: type: long - scaled_float: - type: scaled_float - scaling_factor: 100 date: type: date @@ -300,56 +297,6 @@ setup: - match: { aggregations.double_terms.buckets.1.doc_count: 1 } ---- -"Scaled float test": - - skip: - version: " - 5.2.0" - reason: scaled_float were considered as longs in aggregations, this was fixed in 5.2.0 - - - do: - index: - index: test_1 - type: test - id: 1 - body: { "scaled_float": 9.99 } - - - do: - index: - index: test_1 - type: test - id: 2 - body: { "scaled_float": 9.994 } - - - do: - index: - index: test_1 - type: test - id: 3 - body: { "scaled_float": 8.99 } - - - do: - indices.refresh: {} - - - do: - search: - body: { "size" : 0, "aggs" : { "scaled_float_terms" : { "terms" : { "field" : "scaled_float" } } } } - - - match: { hits.total: 3 } - - - length: { aggregations.scaled_float_terms.buckets: 2 } - - - match: { aggregations.scaled_float_terms.buckets.0.key: 9.99 } - - - is_false: aggregations.scaled_float_terms.buckets.0.key_as_string - - - match: { aggregations.scaled_float_terms.buckets.0.doc_count: 2 } - - - match: { aggregations.scaled_float_terms.buckets.1.key: 8.99 } - - - is_false: aggregations.scaled_float_terms.buckets.1.key_as_string - - - match: { aggregations.scaled_float_terms.buckets.1.doc_count: 1 } - --- "Date test": - do: diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/search/160_exists_query.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search/160_exists_query.yml new file mode 100644 index 0000000000000..f3380f513966d --- /dev/null +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search/160_exists_query.yml @@ -0,0 +1,1289 @@ +setup: + - skip: + features: ["headers"] + + - do: + indices.create: + index: test + body: + mappings: + test: + dynamic: false + properties: + binary: + type: binary + doc_values: true + boolean: + type: boolean + date: + type: date + geo_point: + type: geo_point + geo_shape: + type: geo_shape + ip: + type: ip + keyword: + type: keyword + byte: + type: byte + double: + type: double + float: + type: float + half_float: + type: half_float + integer: + type: integer + long: + type: long + short: + type: short + object: + type: object + properties: + inner1: + type: keyword + inner2: + type: keyword + text: + type: text + + - do: + headers: + Content-Type: application/json + index: + index: "test" + type: "test" + id: 1 + body: + binary: "YWJjZGUxMjM0" + boolean: true + date: "2017-01-01" + geo_point: [0.0, 20.0] + geo_shape: + type: "point" + coordinates: [0.0, 20.0] + ip: "192.168.0.1" + keyword: "foo" + byte: 1 + double: 1.0 + float: 1.0 + half_float: 1.0 + integer: 1 + long: 1 + short: 1 + object: + inner1: "foo" + inner2: "bar" + text: "foo bar" + + - do: + headers: + Content-Type: application/json + index: + index: "test" + type: "test" + id: 2 + body: + binary: "YWJjZGUxMjM0" + boolean: false + date: "2017-01-01" + geo_point: [0.0, 20.0] + geo_shape: + type: "point" + coordinates: [0.0, 20.0] + ip: "192.168.0.1" + keyword: "foo" + byte: 1 + double: 1.0 + float: 1.0 + half_float: 1.0 + integer: 1 + long: 1 + short: 1 + object: + inner1: "foo" + text: "foo bar" + + - do: + headers: + Content-Type: application/json + index: + index: "test" + type: "test" + id: 3 + routing: "route_me" + body: + binary: "YWJjZGUxMjM0" + boolean: true + date: "2017-01-01" + 
geo_point: [0.0, 20.0] + geo_shape: + type: "point" + coordinates: [0.0, 20.0] + ip: "192.168.0.1" + keyword: "foo" + byte: 1 + double: 1.0 + float: 1.0 + half_float: 1.0 + integer: 1 + long: 1 + short: 1 + object: + inner2: "bar" + text: "foo bar" + + - do: + index: + index: "test" + type: "test" + id: 4 + body: {} + + - do: + indices.create: + index: test-no-dv + body: + mappings: + test: + dynamic: false + properties: + binary: + type: binary + doc_values: false + store: true + boolean: + type: boolean + doc_values: false + date: + type: date + doc_values: false + geo_point: + type: geo_point + doc_values: false + geo_shape: + type: geo_shape + ip: + type: ip + doc_values: false + keyword: + type: keyword + doc_values: false + byte: + type: byte + doc_values: false + double: + type: double + doc_values: false + float: + type: float + doc_values: false + half_float: + type: half_float + doc_values: false + integer: + type: integer + doc_values: false + long: + type: long + doc_values: false + short: + type: short + doc_values: false + object: + type: object + properties: + inner1: + type: keyword + doc_values: false + inner2: + type: keyword + doc_values: false + text: + type: text + doc_values: false + + - do: + headers: + Content-Type: application/json + index: + index: "test-no-dv" + type: "test" + id: 1 + body: + binary: "YWJjZGUxMjM0" + boolean: true + date: "2017-01-01" + geo_point: [0.0, 20.0] + geo_shape: + type: "point" + coordinates: [0.0, 20.0] + ip: "192.168.0.1" + keyword: "foo" + byte: 1 + double: 1.0 + float: 1.0 + half_float: 1.0 + integer: 1 + long: 1 + short: 1 + object: + inner1: "foo" + inner2: "bar" + text: "foo bar" + + - do: + headers: + Content-Type: application/json + index: + index: "test-no-dv" + type: "test" + id: 2 + body: + binary: "YWJjZGUxMjM0" + boolean: false + date: "2017-01-01" + geo_point: [0.0, 20.0] + geo_shape: + type: "point" + coordinates: [0.0, 20.0] + ip: "192.168.0.1" + keyword: "foo" + byte: 1 + double: 1.0 + float: 1.0 + half_float: 1.0 + integer: 1 + long: 1 + short: 1 + object: + inner1: "foo" + text: "foo bar" + + - do: + headers: + Content-Type: application/json + index: + index: "test-no-dv" + type: "test" + id: 3 + routing: "route_me" + body: + binary: "YWJjZGUxMjM0" + boolean: true + date: "2017-01-01" + geo_point: [0.0, 20.0] + geo_shape: + type: "point" + coordinates: [0.0, 20.0] + ip: "192.168.0.1" + keyword: "foo" + byte: 1 + double: 1.0 + float: 1.0 + half_float: 1.0 + integer: 1 + long: 1 + short: 1 + object: + inner2: "bar" + text: "foo bar" + + - do: + index: + index: "test-no-dv" + type: "test" + id: 4 + body: {} + + - do: + indices.create: + index: test-unmapped + body: + mappings: + test: + dynamic: false + properties: + unrelated: + type: keyword + + - do: + index: + index: "test-unmapped" + type: "test" + id: 1 + body: + unrelated: "foo" + + - do: + indices.create: + index: test-empty + body: + mappings: + test: + dynamic: false + properties: + binary: + type: binary + date: + type: date + geo_point: + type: geo_point + geo_shape: + type: geo_shape + ip: + type: ip + keyword: + type: keyword + byte: + type: byte + double: + type: double + float: + type: float + half_float: + type: half_float + integer: + type: integer + long: + type: long + short: + type: short + object: + type: object + properties: + inner1: + type: keyword + inner2: + type: keyword + text: + type: text + + - do: + indices.refresh: + index: [test, test-unmapped, test-empty, test-no-dv] + +--- +"Test exists query on mapped binary field": + - do: + search: 
+ index: test + body: + query: + exists: + field: binary + + - match: {hits.total: 3} + +--- +"Test exists query on mapped boolean field": + - do: + search: + index: test + body: + query: + exists: + field: boolean + + - match: {hits.total: 3} + +--- +"Test exists query on mapped date field": + - do: + search: + index: test + body: + query: + exists: + field: date + + - match: {hits.total: 3} + +--- +"Test exists query on mapped geo_point field": + - do: + search: + index: test + body: + query: + exists: + field: geo_point + + - match: {hits.total: 3} + +--- +"Test exists query on mapped geo_shape field": + - do: + search: + index: test + body: + query: + exists: + field: geo_shape + + - match: {hits.total: 3} + +--- +"Test exists query on mapped ip field": + - do: + search: + index: test + body: + query: + exists: + field: ip + + - match: {hits.total: 3} + +--- +"Test exists query on mapped keyword field": + - do: + search: + index: test + body: + query: + exists: + field: keyword + + - match: {hits.total: 3} + +--- +"Test exists query on mapped byte field": + - do: + search: + index: test + body: + query: + exists: + field: byte + + - match: {hits.total: 3} + +--- +"Test exists query on mapped double field": + - do: + search: + index: test + body: + query: + exists: + field: double + + - match: {hits.total: 3} + +--- +"Test exists query on mapped float field": + - do: + search: + index: test + body: + query: + exists: + field: float + + - match: {hits.total: 3} + +--- +"Test exists query on mapped half_float field": + - do: + search: + index: test + body: + query: + exists: + field: half_float + + - match: {hits.total: 3} + +--- +"Test exists query on mapped integer field": + - do: + search: + index: test + body: + query: + exists: + field: integer + + - match: {hits.total: 3} + +--- +"Test exists query on mapped long field": + - do: + search: + index: test + body: + query: + exists: + field: long + + - match: {hits.total: 3} + +--- +"Test exists query on mapped short field": + - do: + search: + index: test + body: + query: + exists: + field: short + + - match: {hits.total: 3} + +--- +"Test exists query on mapped object field": + - do: + search: + index: test + body: + query: + exists: + field: object + + - match: {hits.total: 3} + +--- +"Test exists query on mapped object inner field": + - do: + search: + index: test + body: + query: + exists: + field: object.inner1 + + - match: {hits.total: 2} + +--- +"Test exists query on mapped text field": + - do: + search: + index: test + body: + query: + exists: + field: text + + - match: {hits.total: 3} + +--- +"Test exists query on _id field": + - do: + search: + index: test + body: + query: + exists: + field: _id + + - match: {hits.total: 4} + +--- +"Test exists query on _uid field": + - skip: + version: " - 6.0.99" + reason: exists on _uid not supported prior to 6.1.0 + - do: + search: + index: test + body: + query: + exists: + field: _uid + + - match: {hits.total: 4} + +--- +"Test exists query on _index field": + - skip: + version: " - 6.0.99" + reason: exists on _index not supported prior to 6.1.0 + - do: + search: + index: test + body: + query: + exists: + field: _index + + - match: {hits.total: 4} + +--- +"Test exists query on _type field": + - skip: + version: " - 6.0.99" + reason: exists on _type not supported prior to 6.1.0 + - do: + search: + index: test + body: + query: + exists: + field: _type + + - match: {hits.total: 4} + +--- +"Test exists query on _routing field": + - do: + search: + index: test + body: + query: + exists: + 
field: _routing + + - match: {hits.total: 1} + +--- +"Test exists query on _seq_no field": + - do: + search: + index: test + body: + query: + exists: + field: _seq_no + + - match: {hits.total: 4} + +--- +"Test exists query on _source field": + - skip: + version: " - 6.0.99" + reason: exists on _source not supported prior to 6.1.0 + - do: + catch: /query_shard_exception/ + search: + index: test + body: + query: + exists: + field: _source + +--- +"Test exists query on _version field": + - do: + search: + index: test + body: + query: + exists: + field: _version + + - match: {hits.total: 4} + +--- +"Test exists query on unmapped binary field": + - do: + search: + index: test-unmapped + body: + query: + exists: + field: binary + + - match: {hits.total: 0} + +--- +"Test exists query on unmapped boolean field": + - do: + search: + index: test-unmapped + body: + query: + exists: + field: boolean + + - match: {hits.total: 0} + +--- +"Test exists query on unmapped date field": + - do: + search: + index: test-unmapped + body: + query: + exists: + field: date + + - match: {hits.total: 0} + +--- +"Test exists query on unmapped geo_point field": + - do: + search: + index: test-unmapped + body: + query: + exists: + field: geo_point + + - match: {hits.total: 0} + +--- +"Test exists query on unmapped geo_shape field": + - do: + search: + index: test-unmapped + body: + query: + exists: + field: geo_shape + + - match: {hits.total: 0} + +--- +"Test exists query on unmapped ip field": + - do: + search: + index: test-unmapped + body: + query: + exists: + field: ip + + - match: {hits.total: 0} + +--- +"Test exists query on unmapped keyword field": + - do: + search: + index: test-unmapped + body: + query: + exists: + field: keyword + + - match: {hits.total: 0} + +--- +"Test exists query on unmapped byte field": + - do: + search: + index: test-unmapped + body: + query: + exists: + field: byte + + - match: {hits.total: 0} + +--- +"Test exists query on unmapped double field": + - do: + search: + index: test-unmapped + body: + query: + exists: + field: double + + - match: {hits.total: 0} + +--- +"Test exists query on unmapped float field": + - do: + search: + index: test-unmapped + body: + query: + exists: + field: float + + - match: {hits.total: 0} + +--- +"Test exists query on unmapped half_float field": + - do: + search: + index: test-unmapped + body: + query: + exists: + field: half_float + + - match: {hits.total: 0} + +--- +"Test exists query on unmapped integer field": + - do: + search: + index: test-unmapped + body: + query: + exists: + field: integer + + - match: {hits.total: 0} + +--- +"Test exists query on unmapped long field": + - do: + search: + index: test-unmapped + body: + query: + exists: + field: long + + - match: {hits.total: 0} + +--- +"Test exists query on unmapped short field": + - do: + search: + index: test-unmapped + body: + query: + exists: + field: short + + - match: {hits.total: 0} + +--- +"Test exists query on unmapped object field": + - do: + search: + index: test-unmapped + body: + query: + exists: + field: object + + - match: {hits.total: 0} + +--- +"Test exists query on unmapped object inner field": + - do: + search: + index: test-unmapped + body: + query: + exists: + field: object.inner1 + + - match: {hits.total: 0} + +--- +"Test exists query on unmapped text field": + - do: + search: + index: test-unmapped + body: + query: + exists: + field: text + + - match: {hits.total: 0} + +--- +"Test exists query on binary field in empty index": + - do: + search: + index: test-empty + body: + 
query: + exists: + field: binary + + - match: {hits.total: 0} + +--- +"Test exists query on boolean field in empty index": + - do: + search: + index: test-empty + body: + query: + exists: + field: boolean + + - match: {hits.total: 0} + +--- +"Test exists query on date field in empty index": + - do: + search: + index: test-empty + body: + query: + exists: + field: date + + - match: {hits.total: 0} + +--- +"Test exists query on geo_point field in empty index": + - do: + search: + index: test-empty + body: + query: + exists: + field: geo_point + + - match: {hits.total: 0} + +--- +"Test exists query on geo_shape field in empty index": + - do: + search: + index: test-empty + body: + query: + exists: + field: geo_shape + + - match: {hits.total: 0} + +--- +"Test exists query on ip field in empty index": + - do: + search: + index: test-empty + body: + query: + exists: + field: ip + + - match: {hits.total: 0} + +--- +"Test exists query on keyword field in empty index": + - do: + search: + index: test-empty + body: + query: + exists: + field: keyword + + - match: {hits.total: 0} + +--- +"Test exists query on byte field in empty index": + - do: + search: + index: test-empty + body: + query: + exists: + field: byte + + - match: {hits.total: 0} + +--- +"Test exists query on double field in empty index": + - do: + search: + index: test-empty + body: + query: + exists: + field: double + + - match: {hits.total: 0} + +--- +"Test exists query on float field in empty index": + - do: + search: + index: test-empty + body: + query: + exists: + field: float + + - match: {hits.total: 0} + +--- +"Test exists query on half_float field in empty index": + - do: + search: + index: test-empty + body: + query: + exists: + field: half_float + + - match: {hits.total: 0} + +--- +"Test exists query on integer field in empty index": + - do: + search: + index: test-empty + body: + query: + exists: + field: integer + + - match: {hits.total: 0} + +--- +"Test exists query on long field in empty index": + - do: + search: + index: test-empty + body: + query: + exists: + field: long + + - match: {hits.total: 0} + +--- +"Test exists query on short field in empty index": + - do: + search: + index: test-empty + body: + query: + exists: + field: short + + - match: {hits.total: 0} + +--- +"Test exists query on object field in empty index": + - do: + search: + index: test-empty + body: + query: + exists: + field: object + + - match: {hits.total: 0} + +--- +"Test exists query on object inner field in empty index": + - do: + search: + index: test-empty + body: + query: + exists: + field: object.inner1 + + - match: {hits.total: 0} + +--- +"Test exists query on text field in empty index": + - do: + search: + index: test-empty + body: + query: + exists: + field: text + + - match: {hits.total: 0} + +--- +"Test exists query on mapped binary field with no doc values": + - do: + search: + index: test-no-dv + body: + query: + exists: + field: binary + + - match: {hits.total: 3} + +--- +"Test exists query on mapped boolean field with no doc values": + - do: + search: + index: test-no-dv + body: + query: + exists: + field: boolean + + - match: {hits.total: 3} + +--- +"Test exists query on mapped date field with no doc values": + - do: + search: + index: test-no-dv + body: + query: + exists: + field: date + + - match: {hits.total: 3} + +--- +"Test exists query on mapped geo_point field with no doc values": + - do: + search: + index: test-no-dv + body: + query: + exists: + field: geo_point + + - match: {hits.total: 3} + +--- +"Test exists query on 
mapped geo_shape field with no doc values": + - do: + search: + index: test-no-dv + body: + query: + exists: + field: geo_shape + + - match: {hits.total: 3} + +--- +"Test exists query on mapped ip field with no doc values": + - do: + search: + index: test-no-dv + body: + query: + exists: + field: ip + + - match: {hits.total: 3} + +--- +"Test exists query on mapped keyword field with no doc values": + - do: + search: + index: test-no-dv + body: + query: + exists: + field: keyword + + - match: {hits.total: 3} + +--- +"Test exists query on mapped byte field with no doc values": + - do: + search: + index: test-no-dv + body: + query: + exists: + field: byte + + - match: {hits.total: 3} + +--- +"Test exists query on mapped double field with no doc values": + - do: + search: + index: test-no-dv + body: + query: + exists: + field: double + + - match: {hits.total: 3} + +--- +"Test exists query on mapped float field with no doc values": + - do: + search: + index: test-no-dv + body: + query: + exists: + field: float + + - match: {hits.total: 3} + +--- +"Test exists query on mapped half_float field with no doc values": + - do: + search: + index: test-no-dv + body: + query: + exists: + field: half_float + + - match: {hits.total: 3} + +--- +"Test exists query on mapped integer field with no doc values": + - do: + search: + index: test-no-dv + body: + query: + exists: + field: integer + + - match: {hits.total: 3} + +--- +"Test exists query on mapped long field with no doc values": + - do: + search: + index: test-no-dv + body: + query: + exists: + field: long + + - match: {hits.total: 3} + +--- +"Test exists query on mapped short field with no doc values": + - do: + search: + index: test-no-dv + body: + query: + exists: + field: short + + - match: {hits.total: 3} + +--- +"Test exists query on mapped object field with no doc values": + - do: + search: + index: test-no-dv + body: + query: + exists: + field: object + + - match: {hits.total: 3} + +--- +"Test exists query on mapped object inner field with no doc values": + - do: + search: + index: test-no-dv + body: + query: + exists: + field: object.inner1 + + - match: {hits.total: 2} + +--- +"Test exists query on mapped text field with no doc values": + - do: + search: + index: test-no-dv + body: + query: + exists: + field: text + + - match: {hits.total: 3} diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/search/20_default_values.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search/20_default_values.yml index 5cdde2cb6965d..52fbd19185335 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/search/20_default_values.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search/20_default_values.yml @@ -69,7 +69,7 @@ setup: "Search body without query element": - do: - catch: request + catch: bad_request search: body: match: diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/search/30_limits.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search/30_limits.yml index 49c9333c489ec..3ee998224522c 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/search/30_limits.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search/30_limits.yml @@ -1,4 +1,12 @@ setup: + - do: + indices.create: + index: test_1 + body: + settings: + index.max_docvalue_fields_search: 2 + index.max_script_fields: 2 + - do: index: index: test_1 @@ -32,7 +40,7 @@ setup: search: index: test_1 scroll: 5m - from: 10000 + size: 10010 --- "Rescore window limits": @@ -50,3 +58,31 @@ setup: match_all: {} 
               query_weight: 1
               rescore_query_weight: 2
+
+---
+"Docvalues_fields size limit":
+
+  - do:
+      catch: /Trying to retrieve too many docvalue_fields\. Must be less than or equal to[:] \[2\] but was \[3\]\. This limit can be set by changing the \[index.max_docvalue_fields_search\] index level setting\./
+      search:
+        index: test_1
+        body:
+          query:
+            match_all: {}
+          docvalue_fields: ["one", "two", "three"]
+
+---
+"Script_fields size limit":
+
+  - do:
+      catch: /Trying to retrieve too many script_fields\. Must be less than or equal to[:] \[2\] but was \[3\]\. This limit can be set by changing the \[index.max_script_fields\] index level setting\./
+      search:
+        index: test_1
+        body:
+          query:
+            match_all: {}
+          script_fields: {
+            "test1" : { "script" : { "lang": "painless", "source": "1" }},
+            "test2" : { "script" : { "lang": "painless", "source": "1" }},
+            "test3" : { "script" : { "lang": "painless", "source": "1" }}
+          }
diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/search/issue4895.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search/issue4895.yml
index 993cbed26475d..96a2ca4854a18 100644
--- a/rest-api-spec/src/main/resources/rest-api-spec/test/search/issue4895.yml
+++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search/issue4895.yml
@@ -22,7 +22,7 @@ setup:
 "Test with _local preference placed in query body - should fail":
   - do:
-      catch: request
+      catch: bad_request
       search:
         index: test
         type: test
diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/search/issue9606.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search/issue9606.yml
index 5421ae56a9ee3..3e46531c034eb 100644
--- a/rest-api-spec/src/main/resources/rest-api-spec/test/search/issue9606.yml
+++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search/issue9606.yml
@@ -19,7 +19,7 @@ setup:
 "Test search_type=query_and_fetch not supported from REST layer":
   - do:
-      catch: request
+      catch: bad_request
       search:
         index: test
         type: test
@@ -33,7 +33,7 @@ setup:
 "Test search_type=dfs_query_and_fetch not supported from REST layer":
   - do:
-      catch: request
+      catch: bad_request
       search:
         index: test
         type: test
diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/suggest/20_completion.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/suggest/20_completion.yml
index 860c43bbcbf0e..3ac9b4ee2ddc4 100644
--- a/rest-api-spec/src/main/resources/rest-api-spec/test/suggest/20_completion.yml
+++ b/rest-api-spec/src/main/resources/rest-api-spec/test/suggest/20_completion.yml
@@ -291,3 +291,42 @@ setup:
   - match: { suggest.result.0.options.1._type: "test" }
   - match: { suggest.result.0.options.1._source.title: "title_bar" }
   - match: { suggest.result.0.options.1._source.count: 4 }
+
+---
+"Skip duplicates should work":
+  - skip:
+      version: " - 6.0.99"
+      reason: skip_duplicates was added in 6.1
+
+  - do:
+      index:
+        index: test
+        type: test
+        id: 1
+        body:
+          suggest_1: "bar"
+
+  - do:
+      index:
+        index: test
+        type: test
+        id: 2
+        body:
+          suggest_1: "bar"
+
+  - do:
+      indices.refresh: {}
+
+  - do:
+      search:
+        body:
+          suggest:
+            result:
+              text: "b"
+              completion:
+                field: suggest_1
+                skip_duplicates: true
+
+  - length: { suggest.result: 1 }
+  - length: { suggest.result.0.options: 1 }
+  - match: { suggest.result.0.options.0.text: "bar" }
diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/suggest/30_context.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/suggest/30_context.yml
index 778094ec90baf..f0d97382eeb8e 100644
---
a/rest-api-spec/src/main/resources/rest-api-spec/test/suggest/30_context.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/suggest/30_context.yml @@ -276,4 +276,76 @@ setup: - length: { suggest.result: 1 } - length: { suggest.result.0.options: 1 } - - match: { suggest.result.0.options.0.text: "Marriot in Berlin" } + - match: { suggest.result.0.options.0.text: "Marriot in Berlin" } + +--- +"Skip duplicates with contexts should work": + - skip: + version: " - 6.0.99" + reason: skip_duplicates was added in 6.1 + + - do: + index: + index: test + type: test + id: 1 + body: + suggest_context: + input: "foo" + contexts: + color: "red" + + - do: + index: + index: test + type: test + id: 1 + body: + suggest_context: + input: "foo" + contexts: + color: "red" + + - do: + index: + index: test + type: test + id: 2 + body: + suggest_context: + input: "foo" + contexts: + color: "blue" + + - do: + indices.refresh: {} + + - do: + search: + body: + suggest: + result: + text: "foo" + completion: + field: suggest_context + skip_duplicates: true + contexts: + color: "red" + + - length: { suggest.result: 1 } + - length: { suggest.result.0.options: 1 } + - match: { suggest.result.0.options.0.text: "foo" } + + - do: + search: + body: + suggest: + result: + text: "foo" + completion: + skip_duplicates: true + field: suggest_context + + - length: { suggest.result: 1 } + - length: { suggest.result.0.options: 1 } + - match: { suggest.result.0.options.0.text: "foo" } diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/update/50_parent.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/update/50_parent.yml index 82508f951e04c..e65f80d705cb2 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/update/50_parent.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/update/50_parent.yml @@ -65,7 +65,7 @@ setup: body: { foo: bar } - do: - catch: request + catch: bad_request update: index: test_1 type: test diff --git a/settings.gradle b/settings.gradle index e34e55eb3fd61..698f5600684bb 100644 --- a/settings.gradle +++ b/settings.gradle @@ -35,6 +35,7 @@ List projects = [ 'modules:lang-expression', 'modules:lang-mustache', 'modules:lang-painless', + 'modules:mapper-extras', 'modules:parent-join', 'modules:percolator', 'modules:reindex', @@ -65,6 +66,7 @@ List projects = [ 'qa:auto-create-index', 'qa:evil-tests', 'qa:full-cluster-restart', + 'qa:integration-bwc', 'qa:mixed-cluster', 'qa:multi-cluster-search', 'qa:no-bootstrap-tests', @@ -98,7 +100,6 @@ if (isEclipse) { // eclipse cannot handle an intermediate dependency between main and test, so we must create separate projects // for core-src and core-tests projects << 'core-tests' - projects << 'client:rest-tests' } include projects.toArray(new String[0]) @@ -122,11 +123,6 @@ if (isEclipse) { project(":core").buildFileName = 'eclipse-build.gradle' project(":core-tests").projectDir = new File(rootProject.projectDir, 'core/src/test') project(":core-tests").buildFileName = 'eclipse-build.gradle' - - project(":client:rest").projectDir = new File(rootProject.projectDir, 'client/rest/src/main') - project(":client:rest").buildFileName = 'eclipse-build.gradle' - project(":client:rest-tests").projectDir = new File(rootProject.projectDir, 'client/rest/src/test') - project(":client:rest-tests").buildFileName = 'eclipse-build.gradle' } /** diff --git a/test/fixtures/hdfs-fixture/src/main/java/hdfs/MiniHDFS.java b/test/fixtures/hdfs-fixture/src/main/java/hdfs/MiniHDFS.java index 7d41d94e99a3d..73f4e443b0769 100644 --- 
a/test/fixtures/hdfs-fixture/src/main/java/hdfs/MiniHDFS.java
+++ b/test/fixtures/hdfs-fixture/src/main/java/hdfs/MiniHDFS.java
@@ -19,7 +19,9 @@ package hdfs;
 
+import java.io.File;
 import java.lang.management.ManagementFactory;
+import java.net.URL;
 import java.nio.charset.StandardCharsets;
 import java.nio.file.Files;
 import java.nio.file.Path;
@@ -29,9 +31,11 @@ import java.util.Arrays;
 import java.util.List;
 
+import org.apache.commons.io.FileUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FileUtil;
 import org.apache.hadoop.fs.permission.AclEntry;
 import org.apache.hadoop.fs.permission.AclEntryType;
 import org.apache.hadoop.fs.permission.FsAction;
@@ -100,15 +104,35 @@ public static void main(String[] args) throws Exception {
         }
         MiniDFSCluster dfs = builder.build();
 
-        // Set the elasticsearch user directory up
-        if (UserGroupInformation.isSecurityEnabled()) {
-            FileSystem fs = dfs.getFileSystem();
-            org.apache.hadoop.fs.Path esUserPath = new org.apache.hadoop.fs.Path("/user/elasticsearch");
+        // Configure contents of the filesystem
+        org.apache.hadoop.fs.Path esUserPath = new org.apache.hadoop.fs.Path("/user/elasticsearch");
+        try (FileSystem fs = dfs.getFileSystem()) {
+
+            // Set the elasticsearch user directory up
             fs.mkdirs(esUserPath);
-            List<AclEntry> acls = new ArrayList<>();
-            acls.add(new AclEntry.Builder().setType(AclEntryType.USER).setName("elasticsearch").setPermission(FsAction.ALL).build());
-            fs.modifyAclEntries(esUserPath, acls);
-            fs.close();
+            if (UserGroupInformation.isSecurityEnabled()) {
+                List<AclEntry> acls = new ArrayList<>();
+                acls.add(new AclEntry.Builder().setType(AclEntryType.USER).setName("elasticsearch").setPermission(FsAction.ALL).build());
+                fs.modifyAclEntries(esUserPath, acls);
+            }
+
+            // Install a pre-existing repository into HDFS
+            String directoryName = "readonly-repository";
+            String archiveName = directoryName + ".tar.gz";
+            URL readOnlyRepositoryArchiveURL = MiniHDFS.class.getClassLoader().getResource(archiveName);
+            if (readOnlyRepositoryArchiveURL != null) {
+                Path tempDirectory = Files.createTempDirectory(MiniHDFS.class.getName());
+                File readOnlyRepositoryArchive = tempDirectory.resolve(archiveName).toFile();
+                FileUtils.copyURLToFile(readOnlyRepositoryArchiveURL, readOnlyRepositoryArchive);
+                FileUtil.unTar(readOnlyRepositoryArchive, tempDirectory.toFile());
+
+                fs.copyFromLocalFile(true, true,
+                    new org.apache.hadoop.fs.Path(tempDirectory.resolve(directoryName).toAbsolutePath().toUri()),
+                    esUserPath.suffix("/existing/" + directoryName)
+                );
+
+                FileUtils.deleteDirectory(tempDirectory.toFile());
+            }
         }
 
         // write our PID file
diff --git a/test/fixtures/hdfs-fixture/src/main/resources/readonly-repository.tar.gz b/test/fixtures/hdfs-fixture/src/main/resources/readonly-repository.tar.gz
new file mode 100644
index 0000000000000..2cdb6d77c07d0
Binary files /dev/null and b/test/fixtures/hdfs-fixture/src/main/resources/readonly-repository.tar.gz differ
diff --git a/test/framework/build.gradle b/test/framework/build.gradle
index cc132a9d71820..09382763057c9 100644
--- a/test/framework/build.gradle
+++ b/test/framework/build.gradle
@@ -27,7 +27,6 @@ dependencies {
   compile "org.hamcrest:hamcrest-all:${versions.hamcrest}"
   compile "org.apache.lucene:lucene-test-framework:${versions.lucene}"
   compile "org.apache.lucene:lucene-codecs:${versions.lucene}"
-  compile "org.elasticsearch.client:elasticsearch-rest-client:${version}"
   compile
"commons-logging:commons-logging:${versions.commonslogging}" compile "commons-codec:commons-codec:${versions.commonscodec}" compile "org.elasticsearch:securemock:${versions.securemock}" diff --git a/test/framework/src/main/java/org/elasticsearch/bootstrap/BootstrapForTesting.java b/test/framework/src/main/java/org/elasticsearch/bootstrap/BootstrapForTesting.java index 62bf89f6d3116..e4ecd02615e91 100644 --- a/test/framework/src/main/java/org/elasticsearch/bootstrap/BootstrapForTesting.java +++ b/test/framework/src/main/java/org/elasticsearch/bootstrap/BootstrapForTesting.java @@ -102,19 +102,19 @@ public class BootstrapForTesting { Permissions perms = new Permissions(); Security.addClasspathPermissions(perms); // java.io.tmpdir - Security.addPath(perms, "java.io.tmpdir", javaTmpDir, "read,readlink,write,delete"); + FilePermissionUtils.addDirectoryPath(perms, "java.io.tmpdir", javaTmpDir, "read,readlink,write,delete"); // custom test config file if (Strings.hasLength(System.getProperty("tests.config"))) { - perms.add(new FilePermission(System.getProperty("tests.config"), "read,readlink")); + FilePermissionUtils.addSingleFilePath(perms, PathUtils.get(System.getProperty("tests.config")), "read,readlink"); } // jacoco coverage output file final boolean testsCoverage = Booleans.parseBoolean(System.getProperty("tests.coverage", "false")); if (testsCoverage) { Path coverageDir = PathUtils.get(System.getProperty("tests.coverage.dir")); - perms.add(new FilePermission(coverageDir.resolve("jacoco.exec").toString(), "read,write")); + FilePermissionUtils.addSingleFilePath(perms, coverageDir.resolve("jacoco.exec"), "read,write"); // in case we get fancy and use the -integration goals later: - perms.add(new FilePermission(coverageDir.resolve("jacoco-it.exec").toString(), "read,write")); + FilePermissionUtils.addSingleFilePath(perms, coverageDir.resolve("jacoco-it.exec"), "read,write"); } // intellij hack: intellij test runner wants setIO and will // screw up all test logging without it! diff --git a/test/framework/src/main/java/org/elasticsearch/bootstrap/ESElasticsearchCliTestCase.java b/test/framework/src/main/java/org/elasticsearch/bootstrap/ESElasticsearchCliTestCase.java index 1b55ca5806a89..b3ebdb6a69b29 100644 --- a/test/framework/src/main/java/org/elasticsearch/bootstrap/ESElasticsearchCliTestCase.java +++ b/test/framework/src/main/java/org/elasticsearch/bootstrap/ESElasticsearchCliTestCase.java @@ -52,7 +52,9 @@ void runTest( final int status = Elasticsearch.main(args, new Elasticsearch() { @Override protected Environment createEnv(final Terminal terminal, final Map settings) throws UserException { - final Settings realSettings = Settings.builder().put("path.home", home).put(settings).build(); + Settings.Builder builder = Settings.builder().put("path.home", home); + settings.forEach((k,v) -> builder.put(k, v)); + final Settings realSettings = builder.build(); return new Environment(realSettings, home.resolve("config")); } @Override diff --git a/test/framework/src/main/java/org/elasticsearch/client/RestClientBuilderTestCase.java b/test/framework/src/main/java/org/elasticsearch/client/RestClientBuilderTestCase.java new file mode 100644 index 0000000000000..086dca0d94911 --- /dev/null +++ b/test/framework/src/main/java/org/elasticsearch/client/RestClientBuilderTestCase.java @@ -0,0 +1,48 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. 
Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.client; + +import java.util.HashMap; +import java.util.Map; + +import joptsimple.internal.Strings; +import org.apache.http.Header; +import org.elasticsearch.test.ESTestCase; + +/** + * A test case with access to internals of a RestClient. + */ +public abstract class RestClientBuilderTestCase extends ESTestCase { + /** Checks the given rest client has the provided default headers. */ + public void assertHeaders(RestClient client, Map expectedHeaders) { + expectedHeaders = new HashMap<>(expectedHeaders); // copy so we can remove as we check + for (Header header : client.defaultHeaders) { + String name = header.getName(); + String expectedValue = expectedHeaders.remove(name); + if (expectedValue == null) { + fail("Found unexpected header in rest client: " + name); + } + assertEquals(expectedValue, header.getValue()); + } + if (expectedHeaders.isEmpty() == false) { + fail("Missing expected headers in rest client: " + Strings.join(expectedHeaders.keySet(), ", ")); + } + } +} diff --git a/test/framework/src/main/java/org/elasticsearch/cluster/MockInternalClusterInfoService.java b/test/framework/src/main/java/org/elasticsearch/cluster/MockInternalClusterInfoService.java index 4f6e8bb192482..dd8f90b9e719e 100644 --- a/test/framework/src/main/java/org/elasticsearch/cluster/MockInternalClusterInfoService.java +++ b/test/framework/src/main/java/org/elasticsearch/cluster/MockInternalClusterInfoService.java @@ -67,7 +67,7 @@ public static NodeStats makeStats(String nodeName, DiskUsage usage) { null, null, null, null, null, fsInfo, null, null, null, - null, null, null); + null, null, null, null); } public MockInternalClusterInfoService(Settings settings, ClusterService clusterService, ThreadPool threadPool, NodeClient client, diff --git a/test/framework/src/main/java/org/elasticsearch/cluster/routing/ShardRoutingHelper.java b/test/framework/src/main/java/org/elasticsearch/cluster/routing/ShardRoutingHelper.java index 8eadc728a1f54..aa61292ed0868 100644 --- a/test/framework/src/main/java/org/elasticsearch/cluster/routing/ShardRoutingHelper.java +++ b/test/framework/src/main/java/org/elasticsearch/cluster/routing/ShardRoutingHelper.java @@ -46,14 +46,6 @@ public static ShardRouting initialize(ShardRouting routing, String nodeId, long return routing.initialize(nodeId, null, expectedSize); } - public static ShardRouting reinitPrimary(ShardRouting routing) { - return routing.reinitializePrimaryShard(); - } - - public static ShardRouting reinitPrimary(ShardRouting routing, UnassignedInfo.Reason reason, RecoverySource recoverySource) { - return routing.reinitializePrimaryShard().updateUnassigned(new UnassignedInfo(reason, "test_reinit"), recoverySource); - } - public static ShardRouting initWithSameId(ShardRouting copy, RecoverySource recoverySource) { return new ShardRouting(copy.shardId(), copy.currentNodeId(), copy.relocatingNodeId(), copy.primary(), ShardRoutingState.INITIALIZING, 
recoverySource, new UnassignedInfo(UnassignedInfo.Reason.REINITIALIZED, null), diff --git a/test/framework/src/main/java/org/elasticsearch/cluster/routing/TestShardRouting.java b/test/framework/src/main/java/org/elasticsearch/cluster/routing/TestShardRouting.java index a4ac6fad241a5..2291c3d39e200 100644 --- a/test/framework/src/main/java/org/elasticsearch/cluster/routing/TestShardRouting.java +++ b/test/framework/src/main/java/org/elasticsearch/cluster/routing/TestShardRouting.java @@ -39,6 +39,10 @@ public static ShardRouting newShardRouting(String index, int shardId, String cur return newShardRouting(new ShardId(index, IndexMetaData.INDEX_UUID_NA_VALUE, shardId), currentNodeId, primary, state); } + public static ShardRouting newShardRouting(ShardId shardId, String currentNodeId, boolean primary, RecoverySource recoverySource, ShardRoutingState state) { + return new ShardRouting(shardId, currentNodeId, null, primary, state, recoverySource, buildUnassignedInfo(state), buildAllocationId(state), -1); + } + public static ShardRouting newShardRouting(ShardId shardId, String currentNodeId, boolean primary, ShardRoutingState state) { return new ShardRouting(shardId, currentNodeId, null, primary, state, buildRecoveryTarget(primary, state), buildUnassignedInfo(state), buildAllocationId(state), -1); } diff --git a/core/src/main/java/org/elasticsearch/common/settings/loader/JsonSettingsLoader.java b/test/framework/src/main/java/org/elasticsearch/env/TestEnvironment.java similarity index 60% rename from core/src/main/java/org/elasticsearch/common/settings/loader/JsonSettingsLoader.java rename to test/framework/src/main/java/org/elasticsearch/env/TestEnvironment.java index 02f7a5c37a065..aa2e03ae22ac1 100644 --- a/core/src/main/java/org/elasticsearch/common/settings/loader/JsonSettingsLoader.java +++ b/test/framework/src/main/java/org/elasticsearch/env/TestEnvironment.java @@ -17,22 +17,21 @@ * under the License. */ -package org.elasticsearch.common.settings.loader; +package org.elasticsearch.env; -import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.common.settings.Settings; /** - * Settings loader that loads (parses) the settings in a json format by flattening them - * into a map. + * Provides a convenience method for tests to construct an Environment when the config path does not matter. + * This is in the test framework to force people who construct an Environment in production code to think + * about what the config path needs to be set to. */ -public class JsonSettingsLoader extends XContentSettingsLoader { +public class TestEnvironment { - public JsonSettingsLoader(boolean allowNullValues) { - super(allowNullValues); + private TestEnvironment() { } - @Override - public XContentType contentType() { - return XContentType.JSON; + public static Environment newEnvironment(Settings settings) { + return new Environment(settings, null); } } diff --git a/test/framework/src/main/java/org/elasticsearch/index/analysis/AnalysisTestsHelper.java b/test/framework/src/main/java/org/elasticsearch/index/analysis/AnalysisTestsHelper.java index 146f99ed17b57..a7153f904b100 100644 --- a/test/framework/src/main/java/org/elasticsearch/index/analysis/AnalysisTestsHelper.java +++ b/test/framework/src/main/java/org/elasticsearch/index/analysis/AnalysisTestsHelper.java @@ -39,7 +39,7 @@ public static ESTestCase.TestAnalysis createTestAnalysisFromClassPath(final Path final String resource, final AnalysisPlugin... 
plugins) throws IOException { final Settings settings = Settings.builder() - .loadFromStream(resource, AnalysisTestsHelper.class.getResourceAsStream(resource)) + .loadFromStream(resource, AnalysisTestsHelper.class.getResourceAsStream(resource), false) .put(Environment.PATH_HOME_SETTING.getKey(), baseDir.toString()) .build(); diff --git a/test/framework/src/main/java/org/elasticsearch/index/engine/EngineTestCase.java b/test/framework/src/main/java/org/elasticsearch/index/engine/EngineTestCase.java new file mode 100644 index 0000000000000..5c2ef977b163e --- /dev/null +++ b/test/framework/src/main/java/org/elasticsearch/index/engine/EngineTestCase.java @@ -0,0 +1,441 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.index.engine; + +import org.apache.lucene.analysis.Analyzer; +import org.apache.lucene.codecs.Codec; +import org.apache.lucene.document.Field; +import org.apache.lucene.document.NumericDocValuesField; +import org.apache.lucene.document.StoredField; +import org.apache.lucene.document.TextField; +import org.apache.lucene.index.IndexWriter; +import org.apache.lucene.index.IndexWriterConfig; +import org.apache.lucene.index.LiveIndexWriterConfig; +import org.apache.lucene.index.MergePolicy; +import org.apache.lucene.index.Term; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.ReferenceManager; +import org.apache.lucene.search.Sort; +import org.apache.lucene.store.Directory; +import org.apache.lucene.util.BytesRef; +import org.apache.lucene.util.IOUtils; +import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.Version; +import org.elasticsearch.action.index.IndexRequest; +import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.cluster.routing.AllocationId; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.bytes.BytesArray; +import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.lucene.Lucene; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.util.BigArrays; +import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.index.Index; +import org.elasticsearch.index.IndexSettings; +import org.elasticsearch.index.VersionType; +import org.elasticsearch.index.codec.CodecService; +import org.elasticsearch.index.mapper.IdFieldMapper; +import org.elasticsearch.index.mapper.Mapping; +import org.elasticsearch.index.mapper.ParseContext; +import org.elasticsearch.index.mapper.ParsedDocument; +import org.elasticsearch.index.mapper.SeqNoFieldMapper; +import org.elasticsearch.index.mapper.SourceFieldMapper; +import org.elasticsearch.index.mapper.Uid; +import 
org.elasticsearch.index.seqno.SeqNoStats; +import org.elasticsearch.index.seqno.SequenceNumbers; +import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.shard.ShardId; +import org.elasticsearch.index.store.DirectoryService; +import org.elasticsearch.index.store.Store; +import org.elasticsearch.index.translog.Translog; +import org.elasticsearch.index.translog.TranslogConfig; +import org.elasticsearch.test.DummyShardLock; +import org.elasticsearch.test.ESTestCase; +import org.elasticsearch.test.IndexSettingsModule; +import org.elasticsearch.threadpool.TestThreadPool; +import org.elasticsearch.threadpool.ThreadPool; +import org.junit.After; +import org.junit.Before; + +import java.io.IOException; +import java.nio.charset.Charset; +import java.nio.file.Path; +import java.util.Arrays; +import java.util.Collections; +import java.util.List; +import java.util.function.BiFunction; +import java.util.function.ToLongBiFunction; + +import static java.util.Collections.emptyList; +import static org.elasticsearch.index.translog.TranslogDeletionPolicies.createTranslogDeletionPolicy; + +public abstract class EngineTestCase extends ESTestCase { + + protected final ShardId shardId = new ShardId(new Index("index", "_na_"), 0); + protected final AllocationId allocationId = AllocationId.newInitializing(); + protected static final IndexSettings INDEX_SETTINGS = IndexSettingsModule.newIndexSettings("index", Settings.EMPTY); + + protected ThreadPool threadPool; + + protected Store store; + protected Store storeReplica; + + protected InternalEngine engine; + protected InternalEngine replicaEngine; + + protected IndexSettings defaultSettings; + protected String codecName; + protected Path primaryTranslogDir; + protected Path replicaTranslogDir; + + @Override + @Before + public void setUp() throws Exception { + super.setUp(); + + CodecService codecService = new CodecService(null, logger); + String name = Codec.getDefault().getName(); + if (Arrays.asList(codecService.availableCodecs()).contains(name)) { + // some codecs are read only so we only take the ones that we have in the service and randomly + // selected by lucene test case. 
+ codecName = name; + } else { + codecName = "default"; + } + defaultSettings = IndexSettingsModule.newIndexSettings("test", Settings.builder() + .put(IndexSettings.INDEX_GC_DELETES_SETTING.getKey(), "1h") // make sure this doesn't kick in on us + .put(EngineConfig.INDEX_CODEC_SETTING.getKey(), codecName) + .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) + .put(IndexSettings.MAX_REFRESH_LISTENERS_PER_SHARD.getKey(), + between(10, 10 * IndexSettings.MAX_REFRESH_LISTENERS_PER_SHARD.get(Settings.EMPTY))) + .build()); // TODO randomize more settings + threadPool = new TestThreadPool(getClass().getName()); + store = createStore(); + storeReplica = createStore(); + Lucene.cleanLuceneIndex(store.directory()); + Lucene.cleanLuceneIndex(storeReplica.directory()); + primaryTranslogDir = createTempDir("translog-primary"); + engine = createEngine(store, primaryTranslogDir); + LiveIndexWriterConfig currentIndexWriterConfig = engine.getCurrentIndexWriterConfig(); + + assertEquals(engine.config().getCodec().getName(), codecService.codec(codecName).getName()); + assertEquals(currentIndexWriterConfig.getCodec().getName(), codecService.codec(codecName).getName()); + if (randomBoolean()) { + engine.config().setEnableGcDeletes(false); + } + replicaTranslogDir = createTempDir("translog-replica"); + replicaEngine = createEngine(storeReplica, replicaTranslogDir); + currentIndexWriterConfig = replicaEngine.getCurrentIndexWriterConfig(); + + assertEquals(replicaEngine.config().getCodec().getName(), codecService.codec(codecName).getName()); + assertEquals(currentIndexWriterConfig.getCodec().getName(), codecService.codec(codecName).getName()); + if (randomBoolean()) { + engine.config().setEnableGcDeletes(false); + } + } + + public EngineConfig copy(EngineConfig config, EngineConfig.OpenMode openMode) { + return copy(config, openMode, config.getAnalyzer()); + } + + public EngineConfig copy(EngineConfig config, EngineConfig.OpenMode openMode, Analyzer analyzer) { + return new EngineConfig(openMode, config.getShardId(), config.getAllocationId(), config.getThreadPool(), config.getIndexSettings(), + config.getWarmer(), config.getStore(), config.getMergePolicy(), analyzer, config.getSimilarity(), + new CodecService(null, logger), config.getEventListener(), config.getQueryCache(), config.getQueryCachingPolicy(), + config.getForceNewHistoryUUID(), config.getTranslogConfig(), config.getFlushMergesAfter(), config.getRefreshListeners(), + config.getIndexSort(), config.getTranslogRecoveryRunner()); + } + + @Override + @After + public void tearDown() throws Exception { + super.tearDown(); + if (engine != null && engine.isClosed.get() == false) { + engine.getTranslog().getDeletionPolicy().assertNoOpenTranslogRefs(); + } + if (replicaEngine != null && replicaEngine.isClosed.get() == false) { + replicaEngine.getTranslog().getDeletionPolicy().assertNoOpenTranslogRefs(); + } + IOUtils.close( + replicaEngine, storeReplica, + engine, store); + terminate(threadPool); + } + + + protected static ParseContext.Document testDocumentWithTextField() { + return testDocumentWithTextField("test"); + } + + protected static ParseContext.Document testDocumentWithTextField(String value) { + ParseContext.Document document = testDocument(); + document.add(new TextField("value", value, Field.Store.YES)); + return document; + } + + + protected static ParseContext.Document testDocument() { + return new ParseContext.Document(); + } + + public static ParsedDocument createParsedDoc(String id, String routing) { + return testParsedDocument(id, 
routing, testDocumentWithTextField(), new BytesArray("{ \"value\" : \"test\" }"), null); + } + + protected static ParsedDocument testParsedDocument( + String id, String routing, ParseContext.Document document, BytesReference source, Mapping mappingUpdate) { + Field uidField = new Field("_id", Uid.encodeId(id), IdFieldMapper.Defaults.FIELD_TYPE); + Field versionField = new NumericDocValuesField("_version", 0); + SeqNoFieldMapper.SequenceIDFields seqID = SeqNoFieldMapper.SequenceIDFields.emptySeqID(); + document.add(uidField); + document.add(versionField); + document.add(seqID.seqNo); + document.add(seqID.seqNoDocValue); + document.add(seqID.primaryTerm); + BytesRef ref = source.toBytesRef(); + document.add(new StoredField(SourceFieldMapper.NAME, ref.bytes, ref.offset, ref.length)); + return new ParsedDocument(versionField, seqID, id, "test", routing, Arrays.asList(document), source, XContentType.JSON, + mappingUpdate); + } + + protected Store createStore() throws IOException { + return createStore(newDirectory()); + } + + protected Store createStore(final Directory directory) throws IOException { + return createStore(INDEX_SETTINGS, directory); + } + + protected Store createStore(final IndexSettings indexSettings, final Directory directory) throws IOException { + final DirectoryService directoryService = new DirectoryService(shardId, indexSettings) { + @Override + public Directory newDirectory() throws IOException { + return directory; + } + }; + return new Store(shardId, indexSettings, directoryService, new DummyShardLock(shardId)); + } + + protected Translog createTranslog() throws IOException { + return createTranslog(primaryTranslogDir); + } + + protected Translog createTranslog(Path translogPath) throws IOException { + TranslogConfig translogConfig = new TranslogConfig(shardId, translogPath, INDEX_SETTINGS, BigArrays.NON_RECYCLING_INSTANCE); + return new Translog(translogConfig, null, createTranslogDeletionPolicy(INDEX_SETTINGS), () -> SequenceNumbers.UNASSIGNED_SEQ_NO); + } + + protected InternalEngine createEngine(Store store, Path translogPath) throws IOException { + return createEngine(defaultSettings, store, translogPath, newMergePolicy(), null); + } + + protected InternalEngine createEngine( + Store store, + Path translogPath, + BiFunction sequenceNumbersServiceSupplier) throws IOException { + return createEngine(defaultSettings, store, translogPath, newMergePolicy(), null, sequenceNumbersServiceSupplier); + } + + protected InternalEngine createEngine( + Store store, + Path translogPath, + BiFunction sequenceNumbersServiceSupplier, + ToLongBiFunction seqNoForOperation) throws IOException { + return createEngine( + defaultSettings, store, translogPath, newMergePolicy(), null, sequenceNumbersServiceSupplier, seqNoForOperation, null); + } + + protected InternalEngine createEngine( + IndexSettings indexSettings, Store store, Path translogPath, MergePolicy mergePolicy) throws IOException { + return createEngine(indexSettings, store, translogPath, mergePolicy, null); + + } + + protected InternalEngine createEngine(IndexSettings indexSettings, Store store, Path translogPath, MergePolicy mergePolicy, + @Nullable IndexWriterFactory indexWriterFactory) throws IOException { + return createEngine(indexSettings, store, translogPath, mergePolicy, indexWriterFactory, null); + } + + protected InternalEngine createEngine( + IndexSettings indexSettings, + Store store, + Path translogPath, + MergePolicy mergePolicy, + @Nullable IndexWriterFactory indexWriterFactory, + @Nullable BiFunction 
sequenceNumbersServiceSupplier) throws IOException { + return createEngine( + indexSettings, store, translogPath, mergePolicy, indexWriterFactory, sequenceNumbersServiceSupplier, null, null); + } + + protected InternalEngine createEngine( + IndexSettings indexSettings, + Store store, + Path translogPath, + MergePolicy mergePolicy, + @Nullable IndexWriterFactory indexWriterFactory, + @Nullable BiFunction sequenceNumbersServiceSupplier, + @Nullable ToLongBiFunction seqNoForOperation) throws IOException { + return createEngine( + indexSettings, + store, + translogPath, + mergePolicy, + indexWriterFactory, + sequenceNumbersServiceSupplier, + seqNoForOperation, + null); + } + + protected InternalEngine createEngine( + IndexSettings indexSettings, + Store store, + Path translogPath, + MergePolicy mergePolicy, + @Nullable IndexWriterFactory indexWriterFactory, + @Nullable BiFunction sequenceNumbersServiceSupplier, + @Nullable ToLongBiFunction seqNoForOperation, + @Nullable Sort indexSort) throws IOException { + EngineConfig config = config(indexSettings, store, translogPath, mergePolicy, null, indexSort); + InternalEngine internalEngine = createInternalEngine(indexWriterFactory, sequenceNumbersServiceSupplier, seqNoForOperation, config); + if (config.getOpenMode() == EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG) { + internalEngine.recoverFromTranslog(); + } + return internalEngine; + } + + @FunctionalInterface + public interface IndexWriterFactory { + + IndexWriter createWriter(Directory directory, IndexWriterConfig iwc) throws IOException; + } + + public static InternalEngine createInternalEngine( + @Nullable final IndexWriterFactory indexWriterFactory, + @Nullable final BiFunction sequenceNumbersServiceSupplier, + @Nullable final ToLongBiFunction seqNoForOperation, + final EngineConfig config) { + if (sequenceNumbersServiceSupplier == null) { + return new InternalEngine(config) { + @Override + IndexWriter createWriter(Directory directory, IndexWriterConfig iwc) throws IOException { + return (indexWriterFactory != null) ? + indexWriterFactory.createWriter(directory, iwc) : + super.createWriter(directory, iwc); + } + + @Override + protected long doGenerateSeqNoForOperation(final Operation operation) { + return seqNoForOperation != null + ? seqNoForOperation.applyAsLong(this, operation) + : super.doGenerateSeqNoForOperation(operation); + } + }; + } else { + return new InternalEngine(config, sequenceNumbersServiceSupplier) { + @Override + IndexWriter createWriter(Directory directory, IndexWriterConfig iwc) throws IOException { + return (indexWriterFactory != null) ? + indexWriterFactory.createWriter(directory, iwc) : + super.createWriter(directory, iwc); + } + + @Override + protected long doGenerateSeqNoForOperation(final Operation operation) { + return seqNoForOperation != null + ? 
seqNoForOperation.applyAsLong(this, operation) + : super.doGenerateSeqNoForOperation(operation); + } + }; + } + + } + + public EngineConfig config(IndexSettings indexSettings, Store store, Path translogPath, MergePolicy mergePolicy, + ReferenceManager.RefreshListener refreshListener) { + return config(indexSettings, store, translogPath, mergePolicy, refreshListener, null); + } + + public EngineConfig config(IndexSettings indexSettings, Store store, Path translogPath, MergePolicy mergePolicy, + ReferenceManager.RefreshListener refreshListener, Sort indexSort) { + IndexWriterConfig iwc = newIndexWriterConfig(); + TranslogConfig translogConfig = new TranslogConfig(shardId, translogPath, indexSettings, BigArrays.NON_RECYCLING_INSTANCE); + final EngineConfig.OpenMode openMode; + try { + if (Lucene.indexExists(store.directory()) == false) { + openMode = EngineConfig.OpenMode.CREATE_INDEX_AND_TRANSLOG; + } else { + openMode = EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG; + } + } catch (IOException e) { + throw new ElasticsearchException("can't find index?", e); + } + Engine.EventListener listener = new Engine.EventListener() { + @Override + public void onFailedEngine(String reason, @Nullable Exception e) { + // we don't need to notify anybody in this test + } + }; + final TranslogHandler handler = new TranslogHandler(xContentRegistry(), IndexSettingsModule.newIndexSettings(shardId.getIndexName(), + indexSettings.getSettings())); + final List refreshListenerList = + refreshListener == null ? emptyList() : Collections.singletonList(refreshListener); + EngineConfig config = new EngineConfig(openMode, shardId, allocationId.getId(), threadPool, indexSettings, null, store, + mergePolicy, iwc.getAnalyzer(), iwc.getSimilarity(), new CodecService(null, logger), listener, + IndexSearcher.getDefaultQueryCache(), IndexSearcher.getDefaultQueryCachingPolicy(), false, translogConfig, + TimeValue.timeValueMinutes(5), refreshListenerList, indexSort, handler); + + return config; + } + + protected static final BytesReference B_1 = new BytesArray(new byte[]{1}); + protected static final BytesReference B_2 = new BytesArray(new byte[]{2}); + protected static final BytesReference B_3 = new BytesArray(new byte[]{3}); + protected static final BytesArray SOURCE = bytesArray("{}"); + + protected static BytesArray bytesArray(String string) { + return new BytesArray(string.getBytes(Charset.defaultCharset())); + } + + protected Term newUid(String id) { + return new Term("_id", Uid.encodeId(id)); + } + + protected Term newUid(ParsedDocument doc) { + return newUid(doc.id()); + } + + protected Engine.Get newGet(boolean realtime, ParsedDocument doc) { + return new Engine.Get(realtime, doc.type(), doc.id(), newUid(doc)); + } + + protected Engine.Index indexForDoc(ParsedDocument doc) { + return new Engine.Index(newUid(doc), doc); + } + + protected Engine.Index replicaIndexForDoc(ParsedDocument doc, long version, long seqNo, + boolean isRetry) { + return new Engine.Index(newUid(doc), doc, seqNo, 1, version, VersionType.EXTERNAL, + Engine.Operation.Origin.REPLICA, System.nanoTime(), + IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP, isRetry); + } + +} diff --git a/test/framework/src/main/java/org/elasticsearch/index/engine/TranslogHandler.java b/test/framework/src/main/java/org/elasticsearch/index/engine/TranslogHandler.java new file mode 100644 index 0000000000000..6834d124c499a --- /dev/null +++ b/test/framework/src/main/java/org/elasticsearch/index/engine/TranslogHandler.java @@ -0,0 +1,145 @@ +/* + * Licensed to Elasticsearch under 
one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.index.engine; + +import org.apache.lucene.analysis.standard.StandardAnalyzer; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; +import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.index.IndexSettings; +import org.elasticsearch.index.analysis.AnalyzerScope; +import org.elasticsearch.index.analysis.IndexAnalyzers; +import org.elasticsearch.index.analysis.NamedAnalyzer; +import org.elasticsearch.index.mapper.DocumentMapper; +import org.elasticsearch.index.mapper.DocumentMapperForType; +import org.elasticsearch.index.mapper.MapperService; +import org.elasticsearch.index.mapper.Mapping; +import org.elasticsearch.index.mapper.RootObjectMapper; +import org.elasticsearch.index.shard.IndexShard; +import org.elasticsearch.index.similarity.SimilarityService; +import org.elasticsearch.index.translog.Translog; +import org.elasticsearch.indices.IndicesModule; +import org.elasticsearch.indices.mapper.MapperRegistry; + +import java.io.IOException; +import java.util.HashMap; +import java.util.Map; +import java.util.concurrent.atomic.AtomicLong; + +import static java.util.Collections.emptyList; +import static java.util.Collections.emptyMap; +import static org.elasticsearch.index.mapper.SourceToParse.source; + +public class TranslogHandler implements EngineConfig.TranslogRecoveryRunner { + + private final MapperService mapperService; + public Mapping mappingUpdate = null; + private final Map recoveredTypes = new HashMap<>(); + + private final AtomicLong appliedOperations = new AtomicLong(); + + long appliedOperations() { + return appliedOperations.get(); + } + + public TranslogHandler(NamedXContentRegistry xContentRegistry, IndexSettings indexSettings) { + NamedAnalyzer defaultAnalyzer = new NamedAnalyzer("default", AnalyzerScope.INDEX, new StandardAnalyzer()); + IndexAnalyzers indexAnalyzers = + new IndexAnalyzers(indexSettings, defaultAnalyzer, defaultAnalyzer, defaultAnalyzer, emptyMap(), emptyMap()); + SimilarityService similarityService = new SimilarityService(indexSettings, null, emptyMap()); + MapperRegistry mapperRegistry = new IndicesModule(emptyList()).getMapperRegistry(); + mapperService = new MapperService(indexSettings, indexAnalyzers, xContentRegistry, similarityService, mapperRegistry, + () -> null); + } + + private DocumentMapperForType docMapper(String type) { + RootObjectMapper.Builder rootBuilder = new RootObjectMapper.Builder(type); + DocumentMapper.Builder b = new DocumentMapper.Builder(rootBuilder, mapperService); + return new DocumentMapperForType(b.build(mapperService), mappingUpdate); + } + + private void applyOperation(Engine engine, Engine.Operation operation) throws IOException { + switch (operation.operationType()) { + case INDEX: + Engine.Index 
engineIndex = (Engine.Index) operation; + Mapping update = engineIndex.parsedDoc().dynamicMappingsUpdate(); + if (engineIndex.parsedDoc().dynamicMappingsUpdate() != null) { + recoveredTypes.compute(engineIndex.type(), (k, mapping) -> mapping == null ? update : mapping.merge(update, false)); + } + engine.index(engineIndex); + break; + case DELETE: + engine.delete((Engine.Delete) operation); + break; + case NO_OP: + engine.noOp((Engine.NoOp) operation); + break; + default: + throw new IllegalStateException("No operation defined for [" + operation + "]"); + } + } + + /** + * Returns the recovered types modifying the mapping during the recovery + */ + public Map getRecoveredTypes() { + return recoveredTypes; + } + + @Override + public int run(Engine engine, Translog.Snapshot snapshot) throws IOException { + int opsRecovered = 0; + Translog.Operation operation; + while ((operation = snapshot.next()) != null) { + applyOperation(engine, convertToEngineOp(operation, Engine.Operation.Origin.LOCAL_TRANSLOG_RECOVERY)); + opsRecovered++; + appliedOperations.incrementAndGet(); + } + return opsRecovered; + } + + private Engine.Operation convertToEngineOp(Translog.Operation operation, Engine.Operation.Origin origin) { + switch (operation.opType()) { + case INDEX: + final Translog.Index index = (Translog.Index) operation; + final String indexName = mapperService.index().getName(); + final Engine.Index engineIndex = IndexShard.prepareIndex(docMapper(index.type()), + mapperService.getIndexSettings().getIndexVersionCreated(), + source(indexName, index.type(), index.id(), index.source(), XContentFactory.xContentType(index.source())) + .routing(index.routing()).parent(index.parent()), index.seqNo(), index.primaryTerm(), + index.version(), index.versionType().versionTypeForReplicationAndRecovery(), origin, + index.getAutoGeneratedIdTimestamp(), true); + return engineIndex; + case DELETE: + final Translog.Delete delete = (Translog.Delete) operation; + final Engine.Delete engineDelete = new Engine.Delete(delete.type(), delete.id(), delete.uid(), delete.seqNo(), + delete.primaryTerm(), delete.version(), delete.versionType().versionTypeForReplicationAndRecovery(), + origin, System.nanoTime()); + return engineDelete; + case NO_OP: + final Translog.NoOp noOp = (Translog.NoOp) operation; + final Engine.NoOp engineNoOp = + new Engine.NoOp(noOp.seqNo(), noOp.primaryTerm(), origin, System.nanoTime(), noOp.reason()); + return engineNoOp; + default: + throw new IllegalStateException("No operation defined for [" + operation + "]"); + } + } + +} diff --git a/core/src/test/java/org/elasticsearch/index/mapper/AbstractNumericFieldMapperTestCase.java b/test/framework/src/main/java/org/elasticsearch/index/mapper/AbstractNumericFieldMapperTestCase.java similarity index 100% rename from core/src/test/java/org/elasticsearch/index/mapper/AbstractNumericFieldMapperTestCase.java rename to test/framework/src/main/java/org/elasticsearch/index/mapper/AbstractNumericFieldMapperTestCase.java diff --git a/test/framework/src/main/java/org/elasticsearch/index/mapper/FieldTypeTestCase.java b/test/framework/src/main/java/org/elasticsearch/index/mapper/FieldTypeTestCase.java index ae91a791535d9..2b6f4c38a902b 100644 --- a/test/framework/src/main/java/org/elasticsearch/index/mapper/FieldTypeTestCase.java +++ b/test/framework/src/main/java/org/elasticsearch/index/mapper/FieldTypeTestCase.java @@ -19,11 +19,13 @@ package org.elasticsearch.index.mapper; import org.apache.lucene.analysis.standard.StandardAnalyzer; +import 
org.apache.lucene.search.Query; import org.elasticsearch.Version; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.analysis.AnalyzerScope; import org.elasticsearch.index.analysis.NamedAnalyzer; +import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.index.similarity.BM25SimilarityProvider; import org.elasticsearch.test.ESTestCase; @@ -285,6 +287,8 @@ public void testCheckTypeName() { public MappedFieldType clone() {return null;} @Override public String typeName() { return fieldType.typeName();} + @Override + public Query existsQuery(QueryShardContext context) { return null; } }; try { fieldType.checkCompatibility(bogus, conflicts, random().nextBoolean()); @@ -299,6 +303,8 @@ public void testCheckTypeName() { public MappedFieldType clone() {return null;} @Override public String typeName() { return "othertype";} + @Override + public Query existsQuery(QueryShardContext context) { return null; } }; try { fieldType.checkCompatibility(other, conflicts, random().nextBoolean()); diff --git a/test/framework/src/main/java/org/elasticsearch/index/mapper/MockFieldMapper.java b/test/framework/src/main/java/org/elasticsearch/index/mapper/MockFieldMapper.java index d8c9c8d797b10..b374a6b40346b 100644 --- a/test/framework/src/main/java/org/elasticsearch/index/mapper/MockFieldMapper.java +++ b/test/framework/src/main/java/org/elasticsearch/index/mapper/MockFieldMapper.java @@ -19,12 +19,17 @@ package org.elasticsearch.index.mapper; -import java.io.IOException; -import java.util.List; - +import org.apache.lucene.index.Term; +import org.apache.lucene.search.DocValuesFieldExistsQuery; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermQuery; import org.elasticsearch.Version; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.query.QueryShardContext; + +import java.io.IOException; +import java.util.List; // this sucks how much must be overridden just do get a dummy field mapper... 
public class MockFieldMapper extends FieldMapper { @@ -66,6 +71,15 @@ public MappedFieldType clone() { public String typeName() { return "faketype"; } + + @Override + public Query existsQuery(QueryShardContext context) { + if (hasDocValues()) { + return new DocValuesFieldExistsQuery(name()); + } else { + return new TermQuery(new Term(FieldNamesFieldMapper.NAME, name())); + } + } } @Override diff --git a/test/framework/src/main/java/org/elasticsearch/index/shard/IndexShardTestCase.java b/test/framework/src/main/java/org/elasticsearch/index/shard/IndexShardTestCase.java index 58208fd7bd87f..d463fdbd17bdd 100644 --- a/test/framework/src/main/java/org/elasticsearch/index/shard/IndexShardTestCase.java +++ b/test/framework/src/main/java/org/elasticsearch/index/shard/IndexShardTestCase.java @@ -58,7 +58,7 @@ import org.elasticsearch.index.mapper.Mapping; import org.elasticsearch.index.mapper.SourceToParse; import org.elasticsearch.index.mapper.Uid; -import org.elasticsearch.index.seqno.SequenceNumbersService; +import org.elasticsearch.index.seqno.SequenceNumbers; import org.elasticsearch.index.similarity.SimilarityService; import org.elasticsearch.index.store.DirectoryService; import org.elasticsearch.index.store.Store; @@ -160,7 +160,9 @@ protected IndexShard newShard(boolean primary) throws IOException { * @param shardRouting the {@link ShardRouting} to use for this shard * @param listeners an optional set of listeners to add to the shard */ - protected IndexShard newShard(ShardRouting shardRouting, IndexingOperationListener... listeners) throws IOException { + protected IndexShard newShard( + final ShardRouting shardRouting, + final IndexingOperationListener... listeners) throws IOException { assert shardRouting.initializing() : shardRouting; Settings settings = Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0) @@ -197,9 +199,7 @@ protected IndexShard newShard(ShardId shardId, boolean primary, IndexingOperatio */ protected IndexShard newShard(ShardId shardId, boolean primary, String nodeId, IndexMetaData indexMetaData, @Nullable IndexSearcherWrapper searcherWrapper) throws IOException { - ShardRouting shardRouting = TestShardRouting.newShardRouting(shardId, nodeId, primary, ShardRoutingState.INITIALIZING, - primary ? RecoverySource.StoreRecoverySource.EMPTY_STORE_INSTANCE : RecoverySource.PeerRecoverySource.INSTANCE); - return newShard(shardRouting, indexMetaData, searcherWrapper, null); + return newShard(shardId, primary, nodeId, indexMetaData, searcherWrapper, () -> {}); } /** @@ -211,11 +211,10 @@ protected IndexShard newShard(ShardId shardId, boolean primary, String nodeId, I * (ready to recover from another shard) */ protected IndexShard newShard(ShardId shardId, boolean primary, String nodeId, IndexMetaData indexMetaData, - Runnable globalCheckpointSyncer, - @Nullable IndexSearcherWrapper searcherWrapper) throws IOException { + @Nullable IndexSearcherWrapper searcherWrapper, Runnable globalCheckpointSyncer) throws IOException { ShardRouting shardRouting = TestShardRouting.newShardRouting(shardId, nodeId, primary, ShardRoutingState.INITIALIZING, primary ? 
RecoverySource.StoreRecoverySource.EMPTY_STORE_INSTANCE : RecoverySource.PeerRecoverySource.INSTANCE); - return newShard(shardRouting, indexMetaData, searcherWrapper, null); + return newShard(shardRouting, indexMetaData, searcherWrapper, null, globalCheckpointSyncer); } @@ -229,40 +228,45 @@ protected IndexShard newShard(ShardId shardId, boolean primary, String nodeId, I */ protected IndexShard newShard(ShardRouting routing, IndexMetaData indexMetaData, IndexingOperationListener... listeners) throws IOException { - return newShard(routing, indexMetaData, null, null, listeners); + return newShard(routing, indexMetaData, null, null, () -> {}, listeners); } /** * creates a new initializing shard. The shard will will be put in its proper path under the * current node id the shard is assigned to. - * @param routing shard routing to use + * @param routing shard routing to use * @param indexMetaData indexMetaData for the shard, including any mapping * @param indexSearcherWrapper an optional wrapper to be used during searchers + * @param globalCheckpointSyncer callback for syncing global checkpoints * @param listeners an optional set of listeners to add to the shard */ protected IndexShard newShard(ShardRouting routing, IndexMetaData indexMetaData, @Nullable IndexSearcherWrapper indexSearcherWrapper, @Nullable EngineFactory engineFactory, + Runnable globalCheckpointSyncer, IndexingOperationListener... listeners) throws IOException { // add node id as name to settings for proper logging final ShardId shardId = routing.shardId(); final NodeEnvironment.NodePath nodePath = new NodeEnvironment.NodePath(createTempDir()); ShardPath shardPath = new ShardPath(false, nodePath.resolve(shardId), nodePath.resolve(shardId), shardId); - return newShard(routing, shardPath, indexMetaData, indexSearcherWrapper, engineFactory, listeners); + return newShard(routing, shardPath, indexMetaData, indexSearcherWrapper, engineFactory, globalCheckpointSyncer, listeners); } /** * creates a new initializing shard. - * @param routing shard routing to use - * @param shardPath path to use for shard data - * @param indexMetaData indexMetaData for the shard, including any mapping - * @param indexSearcherWrapper an optional wrapper to be used during searchers - * @param listeners an optional set of listeners to add to the shard + * + * @param routing shard routing to use + * @param shardPath path to use for shard data + * @param indexMetaData indexMetaData for the shard, including any mapping + * @param indexSearcherWrapper an optional wrapper to be used during searchers + * @param globalCheckpointSyncer callback for syncing global checkpoints + * @param listeners an optional set of listeners to add to the shard */ protected IndexShard newShard(ShardRouting routing, ShardPath shardPath, IndexMetaData indexMetaData, @Nullable IndexSearcherWrapper indexSearcherWrapper, @Nullable EngineFactory engineFactory, + Runnable globalCheckpointSyncer, IndexingOperationListener... 
listeners) throws IOException { final Settings nodeSettings = Settings.builder().put("node.name", routing.currentNodeId()).build(); final IndexSettings indexSettings = new IndexSettings(indexMetaData, nodeSettings); @@ -279,9 +283,9 @@ protected IndexShard newShard(ShardRouting routing, ShardPath shardPath, IndexMe }; final Engine.Warmer warmer = searcher -> { }; - indexShard = new IndexShard(routing, indexSettings, shardPath, store, () ->null, indexCache, mapperService, similarityService, + indexShard = new IndexShard(routing, indexSettings, shardPath, store, () -> null, indexCache, mapperService, similarityService, engineFactory, indexEventListener, indexSearcherWrapper, threadPool, - BigArrays.NON_RECYCLING_INSTANCE, warmer, Collections.emptyList(), Arrays.asList(listeners)); + BigArrays.NON_RECYCLING_INSTANCE, warmer, Collections.emptyList(), Arrays.asList(listeners), globalCheckpointSyncer); success = true; } finally { if (success == false) { @@ -311,7 +315,14 @@ protected IndexShard reinitShard(IndexShard current, IndexingOperationListener.. */ protected IndexShard reinitShard(IndexShard current, ShardRouting routing, IndexingOperationListener... listeners) throws IOException { closeShards(current); - return newShard(routing, current.shardPath(), current.indexSettings().getIndexMetaData(), null, current.engineFactory, listeners); + return newShard( + routing, + current.shardPath(), + current.indexSettings().getIndexMetaData(), + null, + current.engineFactory, + current.getGlobalCheckpointSyncer(), + listeners); } /** @@ -440,7 +451,7 @@ protected final void recoverReplica(final IndexShard replica, if (snapshot.size() > 0) { startingSeqNo = PeerRecoveryTargetService.getStartingSeqNo(recoveryTarget); } else { - startingSeqNo = SequenceNumbersService.UNASSIGNED_SEQ_NO; + startingSeqNo = SequenceNumbers.UNASSIGNED_SEQ_NO; } final StartRecoveryRequest request = new StartRecoveryRequest(replica.shardId(), targetAllocationId, diff --git a/test/framework/src/main/java/org/elasticsearch/index/translog/TranslogDeletionPolicies.java b/test/framework/src/main/java/org/elasticsearch/index/translog/TranslogDeletionPolicies.java new file mode 100644 index 0000000000000..3ab55b687bd20 --- /dev/null +++ b/test/framework/src/main/java/org/elasticsearch/index/translog/TranslogDeletionPolicies.java @@ -0,0 +1,39 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.index.translog; + +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.IndexSettings; + +public class TranslogDeletionPolicies { + + public static TranslogDeletionPolicy createTranslogDeletionPolicy() { + return new TranslogDeletionPolicy( + IndexSettings.INDEX_TRANSLOG_RETENTION_SIZE_SETTING.getDefault(Settings.EMPTY).getBytes(), + IndexSettings.INDEX_TRANSLOG_RETENTION_AGE_SETTING.getDefault(Settings.EMPTY).getMillis() + ); + } + + public static TranslogDeletionPolicy createTranslogDeletionPolicy(IndexSettings indexSettings) { + return new TranslogDeletionPolicy(indexSettings.getTranslogRetentionSize().getBytes(), + indexSettings.getTranslogRetentionAge().getMillis()); + } + +} diff --git a/test/framework/src/main/java/org/elasticsearch/indices/analysis/AnalysisFactoryTestCase.java b/test/framework/src/main/java/org/elasticsearch/indices/analysis/AnalysisFactoryTestCase.java index ce0e58558dd42..ff67f874fda0d 100644 --- a/test/framework/src/main/java/org/elasticsearch/indices/analysis/AnalysisFactoryTestCase.java +++ b/test/framework/src/main/java/org/elasticsearch/indices/analysis/AnalysisFactoryTestCase.java @@ -112,6 +112,8 @@ private static String toCamelCase(String s) { .put("arabicnormalization", MovedToAnalysisCommon.class) .put("arabicstem", MovedToAnalysisCommon.class) .put("asciifolding", MovedToAnalysisCommon.class) + .put("bengalinormalization", MovedToAnalysisCommon.class) + .put("bengalistem", MovedToAnalysisCommon.class) .put("brazilianstem", MovedToAnalysisCommon.class) .put("bulgarianstem", MovedToAnalysisCommon.class) .put("cjkbigram", MovedToAnalysisCommon.class) @@ -191,7 +193,6 @@ private static String toCamelCase(String s) { .put("flattengraph", MovedToAnalysisCommon.class) // TODO: these tokenfilters are not yet exposed: useful? 
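For reference, a minimal sketch of how the TranslogDeletionPolicies helper introduced above might be used from a test. It assumes the test framework is on the classpath; the test class name, the retention values, and the use of the existing IndexSettingsModule helper are illustrative, not part of this change.
---------------------------------------------------------------------------
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.index.IndexSettings;
import org.elasticsearch.index.translog.TranslogDeletionPolicies;
import org.elasticsearch.index.translog.TranslogDeletionPolicy;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.test.IndexSettingsModule;

public class TranslogDeletionPolicyUsageTests extends ESTestCase {

    public void testCreateDeletionPolicies() {
        // policy built from the default translog retention size/age settings
        TranslogDeletionPolicy defaults = TranslogDeletionPolicies.createTranslogDeletionPolicy();
        assertNotNull(defaults);

        // policy derived from explicit index settings
        IndexSettings indexSettings = IndexSettingsModule.newIndexSettings("test", Settings.builder()
            .put(IndexSettings.INDEX_TRANSLOG_RETENTION_SIZE_SETTING.getKey(), "256mb")
            .put(IndexSettings.INDEX_TRANSLOG_RETENTION_AGE_SETTING.getKey(), "30m")
            .build());
        TranslogDeletionPolicy fromSettings = TranslogDeletionPolicies.createTranslogDeletionPolicy(indexSettings);
        assertNotNull(fromSettings);
    }
}
---------------------------------------------------------------------------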
- // suggest stop .put("suggeststop", Void.class) // capitalizes tokens diff --git a/test/framework/src/main/java/org/elasticsearch/repositories/ESBlobStoreContainerTestCase.java b/test/framework/src/main/java/org/elasticsearch/repositories/ESBlobStoreContainerTestCase.java index 42029b053a4b0..18be4e9437770 100644 --- a/test/framework/src/main/java/org/elasticsearch/repositories/ESBlobStoreContainerTestCase.java +++ b/test/framework/src/main/java/org/elasticsearch/repositories/ESBlobStoreContainerTestCase.java @@ -142,7 +142,7 @@ public void testVerifyOverwriteFails() throws IOException { } } - private void writeBlob(final BlobContainer container, final String blobName, final BytesArray bytesArray) throws IOException { + protected void writeBlob(final BlobContainer container, final String blobName, final BytesArray bytesArray) throws IOException { try (InputStream stream = bytesArray.streamInput()) { container.writeBlob(blobName, stream, bytesArray.length()); } diff --git a/test/framework/src/main/java/org/elasticsearch/repositories/ESBlobStoreTestCase.java b/test/framework/src/main/java/org/elasticsearch/repositories/ESBlobStoreTestCase.java index e7f8edb1fa208..35a17c2a8dd83 100644 --- a/test/framework/src/main/java/org/elasticsearch/repositories/ESBlobStoreTestCase.java +++ b/test/framework/src/main/java/org/elasticsearch/repositories/ESBlobStoreTestCase.java @@ -78,7 +78,7 @@ public static byte[] randomBytes(int length) { return data; } - private static void writeBlob(BlobContainer container, String blobName, BytesArray bytesArray) throws IOException { + protected static void writeBlob(BlobContainer container, String blobName, BytesArray bytesArray) throws IOException { try (InputStream stream = bytesArray.streamInput()) { container.writeBlob(blobName, stream, bytesArray.length()); } diff --git a/test/framework/src/main/java/org/elasticsearch/script/MockScriptEngine.java b/test/framework/src/main/java/org/elasticsearch/script/MockScriptEngine.java index 7ad3beea049cb..da3757d77b46e 100644 --- a/test/framework/src/main/java/org/elasticsearch/script/MockScriptEngine.java +++ b/test/framework/src/main/java/org/elasticsearch/script/MockScriptEngine.java @@ -34,6 +34,7 @@ import java.util.HashMap; import java.util.Map; import java.util.function.Function; +import java.util.function.Predicate; import static java.util.Collections.emptyMap; @@ -99,6 +100,9 @@ public String execute() { }; }; return context.factoryClazz.cast(factory); + } else if (context.instanceClazz.equals(FilterScript.class)) { + FilterScript.Factory factory = mockCompiled::createFilterScript; + return context.factoryClazz.cast(factory); } else if (context.instanceClazz.equals(SimilarityScript.class)) { SimilarityScript.Factory factory = mockCompiled::createSimilarityScript; return context.factoryClazz.cast(factory); @@ -153,6 +157,11 @@ public SearchScript.LeafFactory createSearchScript(Map params, S return new MockSearchScript(lookup, context, script != null ? script : ctx -> source); } + + public FilterScript.LeafFactory createFilterScript(Map params, SearchLookup lookup) { + return new MockFilterScript(lookup, params, script); + } + public SimilarityScript createSimilarityScript() { return new MockSimilarityScript(script != null ? 
script : ctx -> 42d); } @@ -243,6 +252,39 @@ public boolean needs_score() { } } + + public static class MockFilterScript implements FilterScript.LeafFactory { + + private final Function, Object> script; + private final Map vars; + private final SearchLookup lookup; + + public MockFilterScript(SearchLookup lookup, Map vars, Function, Object> script) { + this.lookup = lookup; + this.vars = vars; + this.script = script; + } + + public FilterScript newInstance(LeafReaderContext context) throws IOException { + LeafSearchLookup leafLookup = lookup.getLeafSearchLookup(context); + Map ctx = new HashMap<>(leafLookup.asMap()); + if (vars != null) { + ctx.putAll(vars); + } + return new FilterScript(ctx, lookup, context) { + @Override + public boolean execute() { + return (boolean) script.apply(ctx); + } + + @Override + public void setDocument(int doc) { + leafLookup.setDocument(doc); + } + }; + } + } + public class MockSimilarityScript extends SimilarityScript { private final Function, Object> script; diff --git a/test/framework/src/main/java/org/elasticsearch/search/RandomSearchRequestGenerator.java b/test/framework/src/main/java/org/elasticsearch/search/RandomSearchRequestGenerator.java index e4f5f2cc39ea0..10a166f57b7d9 100644 --- a/test/framework/src/main/java/org/elasticsearch/search/RandomSearchRequestGenerator.java +++ b/test/framework/src/main/java/org/elasticsearch/search/RandomSearchRequestGenerator.java @@ -37,7 +37,7 @@ import org.elasticsearch.search.collapse.CollapseBuilder; import org.elasticsearch.search.fetch.subphase.FetchSourceContext; import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder; -import org.elasticsearch.search.rescore.RescoreBuilder; +import org.elasticsearch.search.rescore.RescorerBuilder; import org.elasticsearch.search.searchafter.SearchAfterBuilder; import org.elasticsearch.search.slice.SliceBuilder; import org.elasticsearch.search.sort.ScriptSortBuilder; @@ -115,7 +115,7 @@ public static SearchRequest randomSearchRequest(Supplier ra public static SearchSourceBuilder randomSearchSourceBuilder( Supplier randomHighlightBuilder, Supplier randomSuggestBuilder, - Supplier> randomRescoreBuilder, + Supplier> randomRescoreBuilder, Supplier> randomExtBuilders, Supplier randomCollapseBuilder) { SearchSourceBuilder builder = new SearchSourceBuilder(); diff --git a/test/framework/src/main/java/org/elasticsearch/search/aggregations/AggregatorTestCase.java b/test/framework/src/main/java/org/elasticsearch/search/aggregations/AggregatorTestCase.java index bc9de3e06aaac..d3e83f03d3afb 100644 --- a/test/framework/src/main/java/org/elasticsearch/search/aggregations/AggregatorTestCase.java +++ b/test/framework/src/main/java/org/elasticsearch/search/aggregations/AggregatorTestCase.java @@ -91,6 +91,7 @@ public abstract class AggregatorTestCase extends ESTestCase { private static final String NESTEDFIELD_PREFIX = "nested_"; private List releasables = new ArrayList<>(); + private static final String TYPE_NAME = "type"; /** Create a factory for the given aggregation builder. 
*/ protected AggregatorFactory createAggregatorFactory(AggregationBuilder aggregationBuilder, @@ -104,6 +105,7 @@ protected AggregatorFactory createAggregatorFactory(AggregationBuilder aggreg MapperService mapperService = mapperServiceMock(); when(mapperService.getIndexSettings()).thenReturn(indexSettings); when(mapperService.hasNested()).thenReturn(false); + when(mapperService.types()).thenReturn(Collections.singleton(TYPE_NAME)); when(searchContext.mapperService()).thenReturn(mapperService); IndexFieldDataService ifds = new IndexFieldDataService(indexSettings, new IndicesFieldDataCache(Settings.EMPTY, new IndexFieldDataCache.Listener() { @@ -115,7 +117,7 @@ public Object answer(InvocationOnMock invocationOnMock) throws Throwable { } }); - SearchLookup searchLookup = new SearchLookup(mapperService, ifds::getForField, new String[]{"type"}); + SearchLookup searchLookup = new SearchLookup(mapperService, ifds::getForField, new String[]{TYPE_NAME}); when(searchContext.lookup()).thenReturn(searchLookup); QueryShardContext queryShardContext = queryShardContextMock(mapperService, fieldTypes, circuitBreakerService); diff --git a/test/framework/src/main/java/org/elasticsearch/test/AbstractQueryTestCase.java b/test/framework/src/main/java/org/elasticsearch/test/AbstractQueryTestCase.java index 0b0d95f5c7fff..43904d1f1f9eb 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/AbstractQueryTestCase.java +++ b/test/framework/src/main/java/org/elasticsearch/test/AbstractQueryTestCase.java @@ -20,7 +20,6 @@ package org.elasticsearch.test; import com.fasterxml.jackson.core.io.JsonStringEncoder; - import org.apache.lucene.search.BoostQuery; import org.apache.lucene.search.Query; import org.apache.lucene.search.TermQuery; @@ -64,6 +63,7 @@ import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.common.xcontent.json.JsonXContent; import org.elasticsearch.env.Environment; +import org.elasticsearch.env.TestEnvironment; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.analysis.IndexAnalyzers; @@ -138,19 +138,17 @@ public abstract class AbstractQueryTestCase> public static final String STRING_FIELD_NAME = "mapped_string"; protected static final String STRING_FIELD_NAME_2 = "mapped_string_2"; protected static final String INT_FIELD_NAME = "mapped_int"; - protected static final String INT_RANGE_FIELD_NAME = "mapped_int_range"; protected static final String DOUBLE_FIELD_NAME = "mapped_double"; protected static final String BOOLEAN_FIELD_NAME = "mapped_boolean"; protected static final String DATE_FIELD_NAME = "mapped_date"; - protected static final String DATE_RANGE_FIELD_NAME = "mapped_date_range"; protected static final String OBJECT_FIELD_NAME = "mapped_object"; protected static final String GEO_POINT_FIELD_NAME = "mapped_geo_point"; protected static final String GEO_SHAPE_FIELD_NAME = "mapped_geo_shape"; - protected static final String[] MAPPED_FIELD_NAMES = new String[]{STRING_FIELD_NAME, INT_FIELD_NAME, INT_RANGE_FIELD_NAME, - DOUBLE_FIELD_NAME, BOOLEAN_FIELD_NAME, DATE_FIELD_NAME, DATE_RANGE_FIELD_NAME, OBJECT_FIELD_NAME, GEO_POINT_FIELD_NAME, + protected static final String[] MAPPED_FIELD_NAMES = new String[]{STRING_FIELD_NAME, INT_FIELD_NAME, + DOUBLE_FIELD_NAME, BOOLEAN_FIELD_NAME, DATE_FIELD_NAME, OBJECT_FIELD_NAME, GEO_POINT_FIELD_NAME, GEO_SHAPE_FIELD_NAME}; - private static final String[] MAPPED_LEAF_FIELD_NAMES = new String[]{STRING_FIELD_NAME, INT_FIELD_NAME, INT_RANGE_FIELD_NAME, - DOUBLE_FIELD_NAME, 
BOOLEAN_FIELD_NAME, DATE_FIELD_NAME, DATE_RANGE_FIELD_NAME, GEO_POINT_FIELD_NAME, }; + private static final String[] MAPPED_LEAF_FIELD_NAMES = new String[]{STRING_FIELD_NAME, INT_FIELD_NAME, + DOUBLE_FIELD_NAME, BOOLEAN_FIELD_NAME, DATE_FIELD_NAME, GEO_POINT_FIELD_NAME, }; private static final int NUMBER_OF_TESTQUERIES = 20; protected static Version indexVersionCreated; @@ -413,7 +411,9 @@ static List> alterateQueries(Set queries, Set> alterateQueries(Set queries, Set persistent = metaData.persistentSettings().getAsMap(); + final Set persistent = metaData.persistentSettings().keySet(); assertThat("test leaves persistent cluster metadata behind: " + persistent, persistent.size(), equalTo(0)); - final Map transientSettings = new HashMap<>(metaData.transientSettings().getAsMap()); + final Set transientSettings = new HashSet<>(metaData.transientSettings().keySet()); if (isInternalCluster() && internalCluster().getAutoManageMinMasterNode()) { // this is set by the test infra transientSettings.remove(ElectMasterService.DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING.getKey()); } assertThat("test leaves transient cluster metadata behind: " + transientSettings, - transientSettings.keySet(), empty()); + transientSettings, empty()); } ensureClusterSizeConsistency(); ensureClusterStateConsistency(); @@ -1694,7 +1700,7 @@ protected Settings nodeSettings(int nodeOrdinal) { .put(DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_LOW_DISK_WATERMARK_SETTING.getKey(), "1b") .put(DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_HIGH_DISK_WATERMARK_SETTING.getKey(), "1b") .put(DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_DISK_FLOOD_STAGE_WATERMARK_SETTING.getKey(), "1b") - .put(ScriptService.SCRIPT_MAX_COMPILATIONS_PER_MINUTE.getKey(), 2048) + .put(ScriptService.SCRIPT_MAX_COMPILATIONS_RATE.getKey(), "2048/1m") // by default we never cache below 10k docs in a segment, // bypass this limit so that caching gets some testing in // integration tests that usually create few documents @@ -1963,7 +1969,7 @@ public Path randomRepoPath() { * Returns path to a random directory that can be used to create a temporary file system repo */ public static Path randomRepoPath(Settings settings) { - Environment environment = new Environment(settings); + Environment environment = TestEnvironment.newEnvironment(settings); Path[] repoFiles = environment.repoFiles(); assert repoFiles.length > 0; Path path; @@ -2113,48 +2119,7 @@ protected String routingKeyForShard(String index, int shard) { return internalCluster().routingKeyForShard(resolveIndex(index), shard, random()); } - /** - * Return settings that could be used to start a node that has the given zipped home directory. - */ - protected Settings prepareBackwardsDataDir(Path backwardsIndex, Object... 
settings) throws IOException { - Path indexDir = createTempDir(); - Path dataDir = indexDir.resolve("data"); - try (InputStream stream = Files.newInputStream(backwardsIndex)) { - TestUtil.unzip(stream, indexDir); - } - assertTrue(Files.exists(dataDir)); - - // list clusters in the datapath, ignoring anything from extrasfs - final Path[] list; - try (DirectoryStream stream = Files.newDirectoryStream(dataDir)) { - List dirs = new ArrayList<>(); - for (Path p : stream) { - if (!p.getFileName().toString().startsWith("extra")) { - dirs.add(p); - } - } - list = dirs.toArray(new Path[0]); - } - if (list.length != 1) { - StringBuilder builder = new StringBuilder("Backwards index must contain exactly one cluster\n"); - for (Path line : list) { - builder.append(line.toString()).append('\n'); - } - throw new IllegalStateException(builder.toString()); - } - Path src = list[0].resolve(NodeEnvironment.NODES_FOLDER); - Path dest = dataDir.resolve(NodeEnvironment.NODES_FOLDER); - assertTrue(Files.exists(src)); - Files.move(src, dest); - assertFalse(Files.exists(src)); - assertTrue(Files.exists(dest)); - Settings.Builder builder = Settings.builder() - .put(settings) - .put(Environment.PATH_DATA_SETTING.getKey(), dataDir.toAbsolutePath()); - - return builder.build(); - } @Override protected NamedXContentRegistry xContentRegistry() { @@ -2238,4 +2203,44 @@ public static Index resolveIndex(String index) { String uuid = getIndexResponse.getSettings().get(index).get(IndexMetaData.SETTING_INDEX_UUID); return new Index(index, uuid); } + + protected void assertSeqNos() throws Exception { + assertBusy(() -> { + IndicesStatsResponse stats = client().admin().indices().prepareStats().clear().get(); + for (IndexStats indexStats : stats.getIndices().values()) { + for (IndexShardStats indexShardStats : indexStats.getIndexShards().values()) { + Optional maybePrimary = Stream.of(indexShardStats.getShards()) + .filter(s -> s.getShardRouting().active() && s.getShardRouting().primary()) + .findFirst(); + if (maybePrimary.isPresent() == false) { + continue; + } + ShardStats primary = maybePrimary.get(); + final SeqNoStats primarySeqNoStats = primary.getSeqNoStats(); + final ShardRouting primaryShardRouting = primary.getShardRouting(); + assertThat(primaryShardRouting + " should have set the global checkpoint", + primarySeqNoStats.getGlobalCheckpoint(), not(equalTo(SequenceNumbers.UNASSIGNED_SEQ_NO))); + final DiscoveryNode node = clusterService().state().nodes().get(primaryShardRouting.currentNodeId()); + final IndicesService indicesService = + internalCluster().getInstance(IndicesService.class, node.getName()); + final IndexShard indexShard = indicesService.getShardOrNull(primaryShardRouting.shardId()); + final ObjectLongMap globalCheckpoints = indexShard.getInSyncGlobalCheckpoints(); + for (ShardStats shardStats : indexShardStats) { + final SeqNoStats seqNoStats = shardStats.getSeqNoStats(); + assertThat(shardStats.getShardRouting() + " local checkpoint mismatch", + seqNoStats.getLocalCheckpoint(), equalTo(primarySeqNoStats.getLocalCheckpoint())); + assertThat(shardStats.getShardRouting() + " global checkpoint mismatch", + seqNoStats.getGlobalCheckpoint(), equalTo(primarySeqNoStats.getGlobalCheckpoint())); + assertThat(shardStats.getShardRouting() + " max seq no mismatch", + seqNoStats.getMaxSeqNo(), equalTo(primarySeqNoStats.getMaxSeqNo())); + // the local knowledge on the primary of the global checkpoint equals the global checkpoint on the shard + assertThat( + seqNoStats.getGlobalCheckpoint(), + 
equalTo(globalCheckpoints.get(shardStats.getShardRouting().allocationId().getId()))); + } + } + } + }); + } + } diff --git a/test/framework/src/main/java/org/elasticsearch/test/ESSingleNodeTestCase.java b/test/framework/src/main/java/org/elasticsearch/test/ESSingleNodeTestCase.java index f9ef50a772dac..0363a938dd18f 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/ESSingleNodeTestCase.java +++ b/test/framework/src/main/java/org/elasticsearch/test/ESSingleNodeTestCase.java @@ -112,9 +112,9 @@ public void tearDown() throws Exception { super.tearDown(); assertAcked(client().admin().indices().prepareDelete("*").get()); MetaData metaData = client().admin().cluster().prepareState().get().getState().getMetaData(); - assertThat("test leaves persistent cluster metadata behind: " + metaData.persistentSettings().getAsMap(), + assertThat("test leaves persistent cluster metadata behind: " + metaData.persistentSettings().keySet(), metaData.persistentSettings().size(), equalTo(0)); - assertThat("test leaves transient cluster metadata behind: " + metaData.transientSettings().getAsMap(), + assertThat("test leaves transient cluster metadata behind: " + metaData.transientSettings().keySet(), metaData.transientSettings().size(), equalTo(0)); if (resetNodeAfterTest()) { assert NODE != null; @@ -170,7 +170,7 @@ private Node newNode() { // This needs to tie into the ESIntegTestCase#indexSettings() method .put(Environment.PATH_SHARED_DATA_SETTING.getKey(), createTempDir().getParent()) .put("node.name", "node_s_0") - .put(ScriptService.SCRIPT_MAX_COMPILATIONS_PER_MINUTE.getKey(), 1000) + .put(ScriptService.SCRIPT_MAX_COMPILATIONS_RATE.getKey(), "1000/1m") .put(EsExecutors.PROCESSORS_SETTING.getKey(), 1) // limit the number of threads created .put(NetworkModule.HTTP_ENABLED.getKey(), false) .put("transport.type", getTestTransportType()) diff --git a/test/framework/src/main/java/org/elasticsearch/test/ESTestCase.java b/test/framework/src/main/java/org/elasticsearch/test/ESTestCase.java index a0777de5dc32e..e10411e5a435e 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/ESTestCase.java +++ b/test/framework/src/main/java/org/elasticsearch/test/ESTestCase.java @@ -39,7 +39,6 @@ import org.apache.logging.log4j.status.StatusLogger; import org.apache.lucene.util.LuceneTestCase; import org.apache.lucene.util.LuceneTestCase.SuppressCodecs; -import org.apache.lucene.util.SetOnce; import org.apache.lucene.util.TestRuleMarkFailure; import org.apache.lucene.util.TestUtil; import org.apache.lucene.util.TimeUnits; @@ -78,6 +77,7 @@ import org.elasticsearch.common.xcontent.json.JsonXContent; import org.elasticsearch.env.Environment; import org.elasticsearch.env.NodeEnvironment; +import org.elasticsearch.env.TestEnvironment; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.analysis.AnalysisRegistry; @@ -134,7 +134,6 @@ import java.util.Set; import java.util.concurrent.ExecutorService; import java.util.concurrent.TimeUnit; -import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; import java.util.function.BooleanSupplier; import java.util.function.Consumer; @@ -342,7 +341,7 @@ protected final void assertWarnings(String... 
expectedWarnings) { final Set actualWarningValues = actualWarnings.stream().map(DeprecationLogger::extractWarningValueFromWarningHeader).collect(Collectors.toSet()); for (String msg : expectedWarnings) { - assertThat(actualWarningValues, hasItem(DeprecationLogger.escape(msg))); + assertThat(actualWarningValues, hasItem(DeprecationLogger.escapeAndEncode(msg))); } assertEquals("Expected " + expectedWarnings.length + " warnings but found " + actualWarnings.size() + "\nExpected: " + Arrays.asList(expectedWarnings) + "\nActual: " + actualWarnings, @@ -767,8 +766,8 @@ public static boolean terminate(ExecutorService... services) throws InterruptedE return terminated; } - public static boolean terminate(ThreadPool service) throws InterruptedException { - return ThreadPool.terminate(service, 10, TimeUnit.SECONDS); + public static boolean terminate(ThreadPool threadPool) throws InterruptedException { + return ThreadPool.terminate(threadPool, 10, TimeUnit.SECONDS); } /** @@ -812,8 +811,8 @@ public NodeEnvironment newNodeEnvironment(Settings settings) throws IOException Settings build = Settings.builder() .put(settings) .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toAbsolutePath()) - .putArray(Environment.PATH_DATA_SETTING.getKey(), tmpPaths()).build(); - return new NodeEnvironment(build, new Environment(build)); + .putList(Environment.PATH_DATA_SETTING.getKey(), tmpPaths()).build(); + return new NodeEnvironment(build, TestEnvironment.newEnvironment(build)); } /** Return consistent index settings for the provided index version. */ @@ -1207,7 +1206,7 @@ public static TestAnalysis createTestAnalysis(Index index, Settings nodeSettings */ public static TestAnalysis createTestAnalysis(IndexSettings indexSettings, Settings nodeSettings, AnalysisPlugin... 
analysisPlugins) throws IOException { - Environment env = new Environment(nodeSettings); + Environment env = TestEnvironment.newEnvironment(nodeSettings); AnalysisModule analysisModule = new AnalysisModule(env, Arrays.asList(analysisPlugins)); AnalysisRegistry analysisRegistry = analysisModule.getAnalysisRegistry(); return new TestAnalysis(analysisRegistry.build(indexSettings), diff --git a/test/framework/src/main/java/org/elasticsearch/test/InternalSettingsPlugin.java b/test/framework/src/main/java/org/elasticsearch/test/InternalSettingsPlugin.java index 12920a5f1504e..e1c555b811064 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/InternalSettingsPlugin.java +++ b/test/framework/src/main/java/org/elasticsearch/test/InternalSettingsPlugin.java @@ -22,6 +22,7 @@ import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.index.IndexService; import org.elasticsearch.plugins.Plugin; import java.util.Arrays; @@ -44,7 +45,12 @@ public final class InternalSettingsPlugin extends Plugin { @Override public List> getSettings() { - return Arrays.asList(VERSION_CREATED, MERGE_ENABLED, - INDEX_CREATION_DATE_SETTING, PROVIDED_NAME_SETTING, TRANSLOG_RETENTION_CHECK_INTERVAL_SETTING); + return Arrays.asList( + VERSION_CREATED, + MERGE_ENABLED, + INDEX_CREATION_DATE_SETTING, + PROVIDED_NAME_SETTING, + TRANSLOG_RETENTION_CHECK_INTERVAL_SETTING, + IndexService.GLOBAL_CHECKPOINT_SYNC_INTERVAL_SETTING); } } diff --git a/test/framework/src/main/java/org/elasticsearch/test/InternalTestCluster.java b/test/framework/src/main/java/org/elasticsearch/test/InternalTestCluster.java index 29bfbff29b20b..9ad0afdf6a494 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/InternalTestCluster.java +++ b/test/framework/src/main/java/org/elasticsearch/test/InternalTestCluster.java @@ -337,7 +337,8 @@ public InternalTestCluster(long clusterSeed, Path baseDir, builder.put(DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_HIGH_DISK_WATERMARK_SETTING.getKey(), "1b"); builder.put(DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_DISK_FLOOD_STAGE_WATERMARK_SETTING.getKey(), "1b"); // Some tests make use of scripting quite a bit, so increase the limit for integration tests - builder.put(ScriptService.SCRIPT_MAX_COMPILATIONS_PER_MINUTE.getKey(), 1000); + builder.put(ScriptService.SCRIPT_MAX_COMPILATIONS_RATE.getKey(), "1000/1m"); + builder.put(OperationRouting.USE_ADAPTIVE_REPLICA_SELECTION_SETTING.getKey(), random.nextBoolean()); if (TEST_NIGHTLY) { builder.put(ThrottlingAllocationDecider.CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_INCOMING_RECOVERIES_SETTING.getKey(), RandomNumbers.randomIntBetween(random, 5, 10)); builder.put(ThrottlingAllocationDecider.CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_OUTGOING_RECOVERIES_SETTING.getKey(), RandomNumbers.randomIntBetween(random, 5, 10)); @@ -2068,7 +2069,8 @@ public void ensureEstimatedStats() { NodeService nodeService = getInstanceFromNode(NodeService.class, nodeAndClient.node); CommonStatsFlags flags = new CommonStatsFlags(Flag.FieldData, Flag.QueryCache, Flag.Segments); - NodeStats stats = nodeService.stats(flags, false, false, false, false, false, false, false, false, false, false, false); + NodeStats stats = nodeService.stats(flags, + false, false, false, false, false, false, false, false, false, false, false, false); assertThat("Fielddata size must be 0 on node: " + stats.getNode(), 
stats.getIndices().getFieldData().getMemorySizeInBytes(), equalTo(0L)); assertThat("Query cache size must be 0 on node: " + stats.getNode(), stats.getIndices().getQueryCache().getMemorySizeInBytes(), equalTo(0L)); assertThat("FixedBitSet cache size must be 0 on node: " + stats.getNode(), stats.getIndices().getSegments().getBitsetMemoryInBytes(), equalTo(0L)); diff --git a/test/framework/src/main/java/org/elasticsearch/test/MockLogAppender.java b/test/framework/src/main/java/org/elasticsearch/test/MockLogAppender.java index 42977105058c1..b35dc9563ce5c 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/MockLogAppender.java +++ b/test/framework/src/main/java/org/elasticsearch/test/MockLogAppender.java @@ -26,9 +26,11 @@ import java.util.ArrayList; import java.util.List; +import java.util.regex.Pattern; import static org.hamcrest.CoreMatchers.equalTo; import static org.hamcrest.MatcherAssert.assertThat; +import static org.junit.Assert.assertTrue; /** * Test appender that can be used to verify that certain events were logged correctly @@ -122,6 +124,37 @@ public void assertMatched() { } } + public static class PatternSeenEventExcpectation implements LoggingExpectation { + + protected final String name; + protected final String logger; + protected final Level level; + protected final String pattern; + volatile boolean saw; + + public PatternSeenEventExcpectation(String name, String logger, Level level, String pattern) { + this.name = name; + this.logger = logger; + this.level = level; + this.pattern = pattern; + } + + @Override + public void match(LogEvent event) { + if (event.getLevel().equals(level) && event.getLoggerName().equals(logger)) { + if (Pattern.matches(pattern, event.getMessage().getFormattedMessage())) { + saw = true; + } + } + } + + @Override + public void assertMatched() { + assertThat(name, saw, equalTo(true)); + } + + } + private static String getLoggerName(String name) { if (name.startsWith("org.elasticsearch.")) { name = name.substring("org.elasticsearch.".length()); diff --git a/test/framework/src/main/java/org/elasticsearch/test/RandomObjects.java b/test/framework/src/main/java/org/elasticsearch/test/RandomObjects.java index 67999b82a2fe6..1868fc34a991f 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/RandomObjects.java +++ b/test/framework/src/main/java/org/elasticsearch/test/RandomObjects.java @@ -108,7 +108,7 @@ public static Tuple, List> randomStoredFieldValues(Random r //with CBOR we get back a float expectedParsedValues.add(randomFloat); } else if (xContentType == XContentType.SMILE) { - //with SMILE we get back a double + //with SMILE we get back a double (this will change in Jackson 2.9 where it will return a Float) expectedParsedValues.add(randomFloat.doubleValue()); } else { //with JSON AND YAML we get back a double, but with float precision. 
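A minimal sketch of how the PatternSeenEventExcpectation added above might be wired up; the logger name, level, message, and regex are illustrative assumptions, and the appender lifecycle follows the usual MockLogAppender pattern.
---------------------------------------------------------------------------
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.common.logging.Loggers;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.test.MockLogAppender;

public class PatternExpectationUsageTests extends ESTestCase {

    public void testLogsRecoveryMessage() throws Exception {
        Logger logger = LogManager.getLogger("org.elasticsearch.example");
        MockLogAppender appender = new MockLogAppender();
        appender.start();
        // matches any INFO event of the given logger whose formatted message fits the regex
        appender.addExpectation(new MockLogAppender.PatternSeenEventExcpectation(
            "recovery message", "org.elasticsearch.example", Level.INFO, "recovered \\[\\d+\\] indices .*"));
        Loggers.addAppender(logger, appender);
        try {
            logger.info("recovered [42] indices into cluster_state");
            appender.assertAllExpectationsMatched();
        } finally {
            Loggers.removeAppender(logger, appender);
            appender.stop();
        }
    }
}
---------------------------------------------------------------------------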
diff --git a/test/framework/src/main/java/org/elasticsearch/test/TestSearchContext.java b/test/framework/src/main/java/org/elasticsearch/test/TestSearchContext.java index bf4364aa855fb..9d03383561614 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/TestSearchContext.java +++ b/test/framework/src/main/java/org/elasticsearch/test/TestSearchContext.java @@ -24,15 +24,12 @@ import org.apache.lucene.util.Counter; import org.elasticsearch.action.search.SearchTask; import org.elasticsearch.action.search.SearchType; -import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.util.BigArrays; -import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.index.IndexService; import org.elasticsearch.index.cache.bitset.BitsetFilterCache; import org.elasticsearch.index.engine.Engine; import org.elasticsearch.index.fielddata.IndexFieldData; -import org.elasticsearch.index.fielddata.IndexFieldDataService; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.mapper.ObjectMapper; @@ -58,7 +55,7 @@ import org.elasticsearch.search.internal.ShardSearchRequest; import org.elasticsearch.search.profile.Profilers; import org.elasticsearch.search.query.QuerySearchResult; -import org.elasticsearch.search.rescore.RescoreSearchContext; +import org.elasticsearch.search.rescore.RescoreContext; import org.elasticsearch.search.sort.SortAndFormats; import org.elasticsearch.search.suggest.SuggestionSearchContext; import org.elasticsearch.threadpool.ThreadPool; @@ -219,12 +216,12 @@ public void suggest(SuggestionSearchContext suggest) { } @Override - public List rescore() { + public List rescore() { return Collections.emptyList(); } @Override - public void addRescore(RescoreSearchContext rescore) { + public void addRescore(RescoreContext rescore) { } @Override diff --git a/test/framework/src/main/java/org/elasticsearch/test/VersionUtils.java b/test/framework/src/main/java/org/elasticsearch/test/VersionUtils.java index 8b2f51cf8a9a3..74a9b58a78e37 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/VersionUtils.java +++ b/test/framework/src/main/java/org/elasticsearch/test/VersionUtils.java @@ -100,9 +100,17 @@ static Tuple, List> resolveReleasedVersions(Version curre Version unreleased = versions.remove(unreleasedIndex); if (unreleased.revision == 0) { - /* If the last unreleased version is itself a patch release then gradle enforces - * that there is yet another unreleased version before that. */ - unreleasedIndex--; + /* + * If the last unreleased version is itself a patch release then Gradle enforces that there is yet another unreleased version + * before that. However, we have to skip alpha/betas/RCs too (e.g., consider when the version constants are ..., 5.6.3, 5.6.4, + * 6.0.0-alpha1, ..., 6.0.0-rc1, 6.0.0-rc2, 6.0.0, 6.1.0 on the 6.x branch. In this case, we will have pruned 6.0.0 and 6.1.0 as + * unreleased versions, but we also need to prune 5.6.4. At this point though, unreleasedIndex will be pointing to 6.0.0-rc2, so + * we have to skip backwards until we find a non-alpha/beta/RC again. Then we can prune that version as an unreleased version + * too. 
+ */ + do { + unreleasedIndex--; + } while (versions.get(unreleasedIndex).isRelease() == false); Version earlierUnreleased = versions.remove(unreleasedIndex); return new Tuple<>(unmodifiableList(versions), unmodifiableList(Arrays.asList(earlierUnreleased, unreleased, current))); } diff --git a/test/framework/src/main/java/org/elasticsearch/test/client/RandomizingClient.java b/test/framework/src/main/java/org/elasticsearch/test/client/RandomizingClient.java index b144898d643d0..e1a6ba030fde8 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/client/RandomizingClient.java +++ b/test/framework/src/main/java/org/elasticsearch/test/client/RandomizingClient.java @@ -29,7 +29,6 @@ import org.elasticsearch.common.unit.TimeValue; import java.util.Arrays; -import java.util.EnumSet; import java.util.Random; import java.util.concurrent.TimeUnit; @@ -52,7 +51,7 @@ public RandomizingClient(Client client, Random random) { SearchType.DFS_QUERY_THEN_FETCH, SearchType.QUERY_THEN_FETCH)); if (random.nextInt(10) == 0) { - defaultPreference = RandomPicks.randomFrom(random, EnumSet.of(Preference.PRIMARY_FIRST, Preference.LOCAL)).type(); + defaultPreference = Preference.LOCAL.type(); } else if (random.nextInt(10) == 0) { String s = TestUtil.randomRealisticUnicodeString(random, 1, 10); defaultPreference = s.startsWith("_") ? null : s; // '_' is a reserved character diff --git a/test/framework/src/main/java/org/elasticsearch/test/discovery/ClusterDiscoveryConfiguration.java b/test/framework/src/main/java/org/elasticsearch/test/discovery/ClusterDiscoveryConfiguration.java index 7e3f9a21e4386..f873ec4fb933c 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/discovery/ClusterDiscoveryConfiguration.java +++ b/test/framework/src/main/java/org/elasticsearch/test/discovery/ClusterDiscoveryConfiguration.java @@ -129,7 +129,7 @@ public Settings nodeSettings(int nodeOrdinal) { unicastHosts[i] = IP_ADDR + ":" + (unicastHostPorts[unicastHostOrdinals[i]]); } } - builder.putArray("discovery.zen.ping.unicast.hosts", unicastHosts); + builder.putList("discovery.zen.ping.unicast.hosts", unicastHosts); return builder.put(super.nodeSettings(nodeOrdinal)).build(); } diff --git a/test/framework/src/main/java/org/elasticsearch/test/discovery/TestZenDiscovery.java b/test/framework/src/main/java/org/elasticsearch/test/discovery/TestZenDiscovery.java index 63212cddc39b1..d224d9c519c8a 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/discovery/TestZenDiscovery.java +++ b/test/framework/src/main/java/org/elasticsearch/test/discovery/TestZenDiscovery.java @@ -83,7 +83,7 @@ private TestZenDiscovery(Settings settings, ThreadPool threadPool, TransportServ ClusterApplier clusterApplier, ClusterSettings clusterSettings, UnicastHostsProvider hostsProvider, AllocationService allocationService) { super(settings, threadPool, transportService, namedWriteableRegistry, masterService, clusterApplier, clusterSettings, - hostsProvider, allocationService); + hostsProvider, allocationService, Collections.emptyList()); } @Override diff --git a/test/framework/src/main/java/org/elasticsearch/test/hamcrest/CollectionAssertions.java b/test/framework/src/main/java/org/elasticsearch/test/hamcrest/CollectionAssertions.java index b21e94d30a720..2225a5d711e5c 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/hamcrest/CollectionAssertions.java +++ b/test/framework/src/main/java/org/elasticsearch/test/hamcrest/CollectionAssertions.java @@ -30,4 +30,8 @@ public class CollectionAssertions { public static Matcher 
hasKey(final String key) { return new CollectionMatchers.ImmutableOpenMapHasKeyMatcher(key); } + + public static Matcher hasAllKeys(final String... keys) { + return new CollectionMatchers.ImmutableOpenMapHasAllKeysMatcher(keys); + } } diff --git a/test/framework/src/main/java/org/elasticsearch/test/hamcrest/CollectionMatchers.java b/test/framework/src/main/java/org/elasticsearch/test/hamcrest/CollectionMatchers.java index 521ba58b0efc9..02a50d4164570 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/hamcrest/CollectionMatchers.java +++ b/test/framework/src/main/java/org/elasticsearch/test/hamcrest/CollectionMatchers.java @@ -22,6 +22,9 @@ import org.hamcrest.Description; import org.hamcrest.TypeSafeMatcher; +import java.util.Arrays; +import java.util.List; + /** * Matchers for easier handling of our custom collections, * for example ImmutableOpenMap @@ -56,4 +59,45 @@ public void describeTo(Description description) { } } + public static class ImmutableOpenMapHasAllKeysMatcher extends TypeSafeMatcher { + + private final List keys; + private String missingKey; + + public ImmutableOpenMapHasAllKeysMatcher(final String... keys) { + this.keys = Arrays.asList(keys); + } + + @Override + protected boolean matchesSafely(ImmutableOpenMap item) { + for (String key: keys) { + if (!item.containsKey(key)) { + missingKey = key; + return false; + } + } + + return true; + } + + @Override + public void describeMismatchSafely(final ImmutableOpenMap map, final Description mismatchDescription) { + if (map.size() == 0) { + mismatchDescription.appendText("was empty"); + } else { + mismatchDescription.appendText("was ").appendValue(map.keys()); + } + } + + @Override + public void describeTo(Description description) { + description + .appendText("ImmutableOpenMap should contain all keys ") + .appendValue(keys) + .appendText(", but key [") + .appendValue(missingKey) + .appendText("] is missing"); + } + } + } diff --git a/test/framework/src/main/java/org/elasticsearch/test/hamcrest/ElasticsearchAssertions.java b/test/framework/src/main/java/org/elasticsearch/test/hamcrest/ElasticsearchAssertions.java index c1facb772c79d..bf2ffc5236e3f 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/hamcrest/ElasticsearchAssertions.java +++ b/test/framework/src/main/java/org/elasticsearch/test/hamcrest/ElasticsearchAssertions.java @@ -55,6 +55,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; +import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.ToXContent; @@ -688,7 +689,12 @@ public static void assertVersionSerializable(Version version, Streamable streama input = new NamedWriteableAwareStreamInput(input, namedWriteableRegistry); } input.setVersion(version); - newInstance.readFrom(input); + // This is here since some Streamables are being converted into Writeables + // and the readFrom method throws an exception if called + Streamable newInstanceFromStream = tryCreateFromStream(streamable, input); + if (newInstanceFromStream == null) { + newInstance.readFrom(input); + } assertThat("Stream should be fully read with version [" + version + "] for streamable [" + streamable + "]", input.available(), equalTo(0)); BytesReference newBytes = serialize(version, streamable); @@ -749,6 +755,33 @@ private static Streamable 
tryCreateNewInstance(Streamable streamable) throws NoS } } + /** + * This attempts to construct a new {@link Streamable} object that is in the process of + * being converted from {@link Streamable} to {@link Writeable}. Assuming this constructs + * the object successfully, #readFrom should not be called on the constructed object. + * + * @param streamable the object to retrieve the type of class to construct the new instance from + * @param in the stream to read the object from + * @return the newly constructed object from reading the stream + * @throws NoSuchMethodException if the constructor cannot be found + * @throws InstantiationException if the class represents an abstract class + * @throws IllegalAccessException if this {@code Constructor} object + * is enforcing Java language access control and the underlying + * constructor is inaccessible. + * @throws InvocationTargetException if the underlying constructor + * throws an exception. + */ + private static Streamable tryCreateFromStream(Streamable streamable, StreamInput in) throws NoSuchMethodException, + InstantiationException, IllegalAccessException, InvocationTargetException { + try { + Class clazz = streamable.getClass(); + Constructor constructor = clazz.getConstructor(StreamInput.class); + return constructor.newInstance(in); + } catch (NoSuchMethodException e) { + return null; + } + } + /** * Applies basic assertions on the SearchResponse. This method checks if all shards were successful, if * any of the shards threw an exception and if the response is serializable. diff --git a/test/framework/src/main/java/org/elasticsearch/test/junit/listeners/ReproduceInfoPrinter.java b/test/framework/src/main/java/org/elasticsearch/test/junit/listeners/ReproduceInfoPrinter.java index 72fb5221ed728..ff6efa3830023 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/junit/listeners/ReproduceInfoPrinter.java +++ b/test/framework/src/main/java/org/elasticsearch/test/junit/listeners/ReproduceInfoPrinter.java @@ -141,7 +141,7 @@ public ReproduceErrorMessageBuilder appendESProperties() { appendProperties(ESIntegTestCase.TESTS_ENABLE_MOCK_MODULES); } appendProperties("tests.assertion.disabled", "tests.security.manager", "tests.nightly", "tests.jvms", - "tests.client.ratio", "tests.heap.size", "tests.bwc", "tests.bwc.version"); + "tests.client.ratio", "tests.heap.size", "tests.bwc", "tests.bwc.version", "build.snapshot"); if (System.getProperty("tests.jvm.argline") != null && !System.getProperty("tests.jvm.argline").isEmpty()) { appendOpt("tests.jvm.argline", "\"" + System.getProperty("tests.jvm.argline") + "\""); } diff --git a/test/framework/src/main/java/org/elasticsearch/test/rest/ESRestTestCase.java b/test/framework/src/main/java/org/elasticsearch/test/rest/ESRestTestCase.java index 3dde971c5574a..cef820ac0096a 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/rest/ESRestTestCase.java +++ b/test/framework/src/main/java/org/elasticsearch/test/rest/ESRestTestCase.java @@ -19,17 +19,20 @@ package org.elasticsearch.test.rest; -import org.elasticsearch.client.http.Header; -import org.elasticsearch.client.http.HttpHost; -import org.elasticsearch.client.http.message.BasicHeader; -import org.elasticsearch.client.http.nio.conn.ssl.SSLIOSessionStrategy; -import org.elasticsearch.client.http.ssl.SSLContexts; +import org.apache.http.Header; +import org.apache.http.HttpHost; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; +import org.apache.http.message.BasicHeader; +import 
org.apache.http.nio.conn.ssl.SSLIOSessionStrategy; +import org.apache.http.ssl.SSLContexts; import org.apache.lucene.util.IOUtils; import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksAction; import org.elasticsearch.client.Response; import org.elasticsearch.client.ResponseException; import org.elasticsearch.client.RestClient; import org.elasticsearch.client.RestClientBuilder; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.PathUtils; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; @@ -41,6 +44,7 @@ import org.junit.AfterClass; import org.junit.Before; +import javax.net.ssl.SSLContext; import java.io.IOException; import java.io.InputStream; import java.nio.file.Files; @@ -51,16 +55,19 @@ import java.security.NoSuchAlgorithmException; import java.security.cert.CertificateException; import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; import java.util.HashSet; import java.util.List; import java.util.Map; import java.util.Set; - -import javax.net.ssl.SSLContext; +import java.util.concurrent.TimeUnit; import static java.util.Collections.singletonMap; import static java.util.Collections.sort; import static java.util.Collections.unmodifiableList; +import static org.hamcrest.Matchers.anyOf; +import static org.hamcrest.Matchers.equalTo; /** * Superclass for tests that interact with an external test cluster using Elasticsearch's {@link RestClient}. @@ -287,7 +294,7 @@ private void waitForClusterStateUpdatesToFinish() throws Exception { } catch (IOException e) { fail("cannot get cluster's pending tasks: " + e.getMessage()); } - }); + }, 30, TimeUnit.SECONDS); } /** @@ -380,4 +387,28 @@ private Set runningTasks(Response response) throws IOException { } return runningTasks; } + + protected void assertOK(Response response) { + assertThat(response.getStatusLine().getStatusCode(), anyOf(equalTo(200), equalTo(201))); + } + + protected void ensureGreen() throws IOException { + Map params = new HashMap<>(); + params.put("wait_for_status", "green"); + params.put("wait_for_no_relocating_shards", "true"); + params.put("timeout", "70s"); + params.put("level", "shards"); + assertOK(client().performRequest("GET", "_cluster/health", params)); + } + + protected void createIndex(String name, Settings settings) throws IOException { + createIndex(name, settings, ""); + } + + protected void createIndex(String name, Settings settings, String mapping) throws IOException { + assertOK(client().performRequest("PUT", name, Collections.emptyMap(), + new StringEntity("{ \"settings\": " + Strings.toString(settings) + + ", \"mappings\" : {" + mapping + "} }", ContentType.APPLICATION_JSON))); + } + } diff --git a/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ClientYamlDocsTestClient.java b/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ClientYamlDocsTestClient.java index 8b892a020440e..dacd67ccadc32 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ClientYamlDocsTestClient.java +++ b/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ClientYamlDocsTestClient.java @@ -19,12 +19,12 @@ package org.elasticsearch.test.rest.yaml; +import org.apache.http.HttpEntity; +import org.apache.http.HttpHost; import org.elasticsearch.Version; import org.elasticsearch.client.Response; import org.elasticsearch.client.ResponseException; import org.elasticsearch.client.RestClient; -import org.elasticsearch.client.http.HttpEntity; -import 
org.elasticsearch.client.http.HttpHost; import org.elasticsearch.test.rest.yaml.restspec.ClientYamlSuiteRestSpec; import java.io.IOException; diff --git a/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ClientYamlTestClient.java b/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ClientYamlTestClient.java index d31af19b3f169..b4704bd9ed896 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ClientYamlTestClient.java +++ b/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ClientYamlTestClient.java @@ -19,18 +19,19 @@ package org.elasticsearch.test.rest.yaml; import com.carrotsearch.randomizedtesting.RandomizedTest; + +import org.apache.http.Header; +import org.apache.http.HttpEntity; +import org.apache.http.HttpHost; +import org.apache.http.client.methods.HttpGet; +import org.apache.http.entity.ContentType; +import org.apache.http.message.BasicHeader; +import org.apache.http.util.EntityUtils; import org.apache.logging.log4j.Logger; import org.elasticsearch.Version; import org.elasticsearch.client.Response; import org.elasticsearch.client.ResponseException; import org.elasticsearch.client.RestClient; -import org.elasticsearch.client.http.Header; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.HttpHost; -import org.elasticsearch.client.http.client.methods.HttpGet; -import org.elasticsearch.client.http.entity.ContentType; -import org.elasticsearch.client.http.message.BasicHeader; -import org.elasticsearch.client.http.util.EntityUtils; import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.test.rest.yaml.restspec.ClientYamlSuiteRestApi; import org.elasticsearch.test.rest.yaml.restspec.ClientYamlSuiteRestPath; @@ -42,6 +43,9 @@ import java.util.HashMap; import java.util.List; import java.util.Map; +import java.util.Map.Entry; +import java.util.Set; +import java.util.stream.Collectors; /** * Used by {@link ESClientYamlSuiteTestCase} to execute REST requests according to the tests written in yaml suite files. 
Wraps a @@ -80,24 +84,40 @@ public ClientYamlTestResponse callApi(String apiName, Map params //divide params between ones that go within query string and ones that go within path Map pathParts = new HashMap<>(); Map queryStringParams = new HashMap<>(); + + Set apiRequiredPathParts = restApi.getPathParts().entrySet().stream().filter(e -> e.getValue() == true).map(Entry::getKey) + .collect(Collectors.toSet()); + Set apiRequiredParameters = restApi.getParams().entrySet().stream().filter(e -> e.getValue() == true).map(Entry::getKey) + .collect(Collectors.toSet()); + for (Map.Entry entry : params.entrySet()) { - if (restApi.getPathParts().contains(entry.getKey())) { + if (restApi.getPathParts().containsKey(entry.getKey())) { pathParts.put(entry.getKey(), entry.getValue()); + apiRequiredPathParts.remove(entry.getKey()); + } else if (restApi.getParams().containsKey(entry.getKey()) + || restSpec.isGlobalParameter(entry.getKey()) + || restSpec.isClientParameter(entry.getKey())) { + queryStringParams.put(entry.getKey(), entry.getValue()); + apiRequiredParameters.remove(entry.getKey()); } else { - if (restApi.getParams().contains(entry.getKey()) || restSpec.isGlobalParameter(entry.getKey()) - || restSpec.isClientParameter(entry.getKey())) { - queryStringParams.put(entry.getKey(), entry.getValue()); - } else { - throw new IllegalArgumentException("param [" + entry.getKey() + "] not supported in [" - + restApi.getName() + "] " + "api"); - } + throw new IllegalArgumentException( + "path/param [" + entry.getKey() + "] not supported by [" + restApi.getName() + "] " + "api"); } } + if (false == apiRequiredPathParts.isEmpty()) { + throw new IllegalArgumentException( + "missing required path part: " + apiRequiredPathParts + " by [" + restApi.getName() + "] api"); + } + if (false == apiRequiredParameters.isEmpty()) { + throw new IllegalArgumentException( + "missing required parameter: " + apiRequiredParameters + " by [" + restApi.getName() + "] api"); + } + List supportedMethods = restApi.getSupportedMethods(pathParts.keySet()); String requestMethod; if (entity != null) { - if (!restApi.isBodySupported()) { + if (false == restApi.isBodySupported()) { throw new IllegalArgumentException("body is not supported by [" + restApi.getName() + "] api"); } String contentType = entity.getContentType().getValue(); diff --git a/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ClientYamlTestExecutionContext.java b/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ClientYamlTestExecutionContext.java index 441411434fafc..bea9aab3ff784 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ClientYamlTestExecutionContext.java +++ b/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ClientYamlTestExecutionContext.java @@ -19,9 +19,9 @@ package org.elasticsearch.test.rest.yaml; import com.carrotsearch.randomizedtesting.RandomizedTest; -import org.elasticsearch.client.http.HttpEntity; -import org.elasticsearch.client.http.entity.ByteArrayEntity; -import org.elasticsearch.client.http.entity.ContentType; +import org.apache.http.HttpEntity; +import org.apache.http.entity.ByteArrayEntity; +import org.apache.http.entity.ContentType; import org.apache.logging.log4j.Logger; import org.apache.lucene.util.BytesRef; import org.elasticsearch.Version; diff --git a/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ClientYamlTestResponse.java b/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ClientYamlTestResponse.java index 631e29535575f..793a71a95a2a3 100644 
--- a/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ClientYamlTestResponse.java +++ b/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ClientYamlTestResponse.java @@ -18,9 +18,9 @@ */ package org.elasticsearch.test.rest.yaml; -import org.elasticsearch.client.http.Header; -import org.elasticsearch.client.http.client.methods.HttpHead; -import org.elasticsearch.client.http.util.EntityUtils; +import org.apache.http.Header; +import org.apache.http.client.methods.HttpHead; +import org.apache.http.util.EntityUtils; import org.elasticsearch.client.Response; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.xcontent.NamedXContentRegistry; diff --git a/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ESClientYamlSuiteTestCase.java b/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ESClientYamlSuiteTestCase.java index b46f151fd3806..5ee78c6942dec 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ESClientYamlSuiteTestCase.java +++ b/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ESClientYamlSuiteTestCase.java @@ -20,7 +20,8 @@ package org.elasticsearch.test.rest.yaml; import com.carrotsearch.randomizedtesting.RandomizedTest; -import org.elasticsearch.client.http.HttpHost; +import org.apache.http.HttpHost; +import org.apache.lucene.util.IOUtils; import org.elasticsearch.Version; import org.elasticsearch.client.Response; import org.elasticsearch.client.ResponseException; diff --git a/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ObjectPath.java b/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ObjectPath.java index c5bbc2be0cc55..7b5952c7a5eb4 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ObjectPath.java +++ b/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ObjectPath.java @@ -18,7 +18,7 @@ */ package org.elasticsearch.test.rest.yaml; -import org.elasticsearch.client.http.util.EntityUtils; +import org.apache.http.util.EntityUtils; import org.elasticsearch.client.Response; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.bytes.BytesReference; diff --git a/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/Stash.java b/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/Stash.java index 04d2ffac9f4ec..e2eefc6376ad1 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/Stash.java +++ b/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/Stash.java @@ -22,7 +22,8 @@ import org.apache.logging.log4j.Logger; import org.elasticsearch.common.Strings; import org.elasticsearch.common.logging.Loggers; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContent.Params; +import org.elasticsearch.common.xcontent.ToXContentFragment; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; @@ -38,7 +39,7 @@ * Allows to cache the last obtained test response and or part of it within variables * that can be used as input values in following requests and assertions. 
*/ -public class Stash implements ToXContent { +public class Stash implements ToXContentFragment { private static final Pattern EXTENDED_KEY = Pattern.compile("\\$\\{([^}]+)\\}"); private static final Pattern PATH = Pattern.compile("\\$_path"); diff --git a/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/restspec/ClientYamlSuiteRestApi.java b/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/restspec/ClientYamlSuiteRestApi.java index 66b77911cc0d8..72c94762a4314 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/restspec/ClientYamlSuiteRestApi.java +++ b/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/restspec/ClientYamlSuiteRestApi.java @@ -18,10 +18,11 @@ */ package org.elasticsearch.test.rest.yaml.restspec; -import org.elasticsearch.client.http.client.methods.HttpPost; -import org.elasticsearch.client.http.client.methods.HttpPut; +import org.apache.http.client.methods.HttpPost; +import org.apache.http.client.methods.HttpPut; import java.util.ArrayList; +import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.Set; @@ -35,8 +36,8 @@ public class ClientYamlSuiteRestApi { private final String name; private List methods = new ArrayList<>(); private List paths = new ArrayList<>(); - private List pathParts = new ArrayList<>(); - private List params = new ArrayList<>(); + private Map pathParts = new HashMap<>(); + private Map params = new HashMap<>(); private Body body = Body.NOT_SUPPORTED; public enum Body { @@ -98,20 +99,28 @@ void addPath(String path) { this.paths.add(path); } - public List getPathParts() { + /** + * Gets all path parts supported by the api. For every path part defines if it + * is required or optional. + */ + public Map getPathParts() { return pathParts; } - void addPathPart(String pathPart) { - this.pathParts.add(pathPart); + void addPathPart(String pathPart, boolean required) { + this.pathParts.put(pathPart, required); } - public List getParams() { + /** + * Gets all parameters supported by the api. For every parameter defines if it + * is required or optional. 
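With pathParts and params now maps from name to a required flag, the parser (further below) reads that flag through a small POJO and a lenient ObjectParser. Written out with its generic type parameters, the idiom looks roughly like this; a sketch that mirrors, but is not, the framework code:

import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.xcontent.ObjectParser;

// Tiny POJO + static ObjectParser: only the "required" field is declared,
// every other field in the spec object is ignored (ignoreUnknownFields = true).
class ParameterSpec {

    static final ObjectParser<ParameterSpec, Void> PARSER =
            new ObjectParser<>("parameter", true, ParameterSpec::new);
    static {
        PARSER.declareBoolean(ParameterSpec::setRequired, new ParseField("required"));
    }

    private boolean required;

    boolean isRequired() {
        return required;
    }

    void setRequired(boolean required) {
        this.required = required;
    }
}

The rest-api-spec parser then calls addPathPart and addParam with the parsed isRequired() value, so the maps above end up keyed by name with the required flag as the value.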
+ */ + public Map getParams() { return params; } - void addParam(String param) { - this.params.add(param); + void addParam(String param, boolean required) { + this.params.put(param, required); } void setBodyOptional() { diff --git a/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/restspec/ClientYamlSuiteRestApiParser.java b/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/restspec/ClientYamlSuiteRestApiParser.java index 8abcfc35f27a3..66dc0a5705400 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/restspec/ClientYamlSuiteRestApiParser.java +++ b/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/restspec/ClientYamlSuiteRestApiParser.java @@ -18,6 +18,8 @@ */ package org.elasticsearch.test.rest.yaml.restspec; +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.xcontent.ObjectParser; import org.elasticsearch.common.xcontent.XContentParser; import java.io.IOException; @@ -27,6 +29,11 @@ */ public class ClientYamlSuiteRestApiParser { + private static final ObjectParser PARAMETER_PARSER = new ObjectParser<>("parameter", true, Parameter::new); + static { + PARAMETER_PARSER.declareBoolean(Parameter::setRequired, new ParseField("required")); + } + public ClientYamlSuiteRestApi parse(String location, XContentParser parser) throws IOException { while ( parser.nextToken() != XContentParser.Token.FIELD_NAME ) { @@ -57,7 +64,6 @@ public ClientYamlSuiteRestApi parse(String location, XContentParser parser) thro if (parser.currentToken() == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); } - if (parser.currentToken() == XContentParser.Token.START_ARRAY && "paths".equals(currentFieldName)) { while (parser.nextToken() == XContentParser.Token.VALUE_STRING) { String path = parser.text(); @@ -71,30 +77,30 @@ public ClientYamlSuiteRestApi parse(String location, XContentParser parser) thro if (parser.currentToken() == XContentParser.Token.START_OBJECT && "parts".equals(currentFieldName)) { while (parser.nextToken() == XContentParser.Token.FIELD_NAME) { String part = parser.currentName(); - if (restApi.getPathParts().contains(part)) { + if (restApi.getPathParts().containsKey(part)) { throw new IllegalArgumentException("Found duplicate part [" + part + "]"); } - restApi.addPathPart(part); parser.nextToken(); if (parser.currentToken() != XContentParser.Token.START_OBJECT) { throw new IllegalArgumentException("Expected parts field in rest api definition to contain an object"); } - parser.skipChildren(); + restApi.addPathPart(part, PARAMETER_PARSER.parse(parser, null).isRequired()); } } if (parser.currentToken() == XContentParser.Token.START_OBJECT && "params".equals(currentFieldName)) { while (parser.nextToken() == XContentParser.Token.FIELD_NAME) { + String param = parser.currentName(); - if (restApi.getParams().contains(param)) { + if (restApi.getParams().containsKey(param)) { throw new IllegalArgumentException("Found duplicate param [" + param + "]"); } - restApi.addParam(parser.currentName()); + parser.nextToken(); if (parser.currentToken() != XContentParser.Token.START_OBJECT) { throw new IllegalArgumentException("Expected params field in rest api definition to contain an object"); } - parser.skipChildren(); + restApi.addParam(param, PARAMETER_PARSER.parse(parser, null).isRequired()); } } @@ -124,7 +130,7 @@ public ClientYamlSuiteRestApi parse(String location, XContentParser parser) thro } } } - if (!requiredFound) { + if (false == requiredFound) { restApi.setBodyOptional(); } } @@ -146,4 +152,14 
@@ public ClientYamlSuiteRestApi parse(String location, XContentParser parser) thro return restApi; } + + private static class Parameter { + private boolean required; + public boolean isRequired() { + return required; + } + public void setRequired(boolean required) { + this.required = required; + } + } } diff --git a/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/section/DoSection.java b/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/section/DoSection.java index b906090d08fd0..082040fb1eb47 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/section/DoSection.java +++ b/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/section/DoSection.java @@ -263,7 +263,7 @@ void checkWarningHeaders(final List warningHeaders) { final List missing = new ArrayList<>(); // LinkedHashSet so that missing expected warnings come back in a predictable order which is nice for testing final Set expected = - new LinkedHashSet<>(expectedWarningHeaders.stream().map(DeprecationLogger::escape).collect(Collectors.toList())); + new LinkedHashSet<>(expectedWarningHeaders.stream().map(DeprecationLogger::escapeAndEncode).collect(Collectors.toList())); for (final String header : warningHeaders) { final Matcher matcher = WARNING_HEADER_PATTERN.matcher(header); final boolean matches = matcher.matches(); @@ -320,6 +320,7 @@ private String formatStatusCodeMessage(ClientYamlTestResponse restTestResponse, private static Map>> catches = new HashMap<>(); static { + catches.put("bad_request", tuple("400", equalTo(400))); catches.put("unauthorized", tuple("401", equalTo(401))); catches.put("forbidden", tuple("403", equalTo(403))); catches.put("missing", tuple("404", equalTo(404))); @@ -327,6 +328,7 @@ private String formatStatusCodeMessage(ClientYamlTestResponse restTestResponse, catches.put("conflict", tuple("409", equalTo(409))); catches.put("unavailable", tuple("503", equalTo(503))); catches.put("request", tuple("4xx|5xx", allOf(greaterThanOrEqualTo(400), + not(equalTo(400)), not(equalTo(401)), not(equalTo(403)), not(equalTo(404)), diff --git a/test/framework/src/main/java/org/elasticsearch/test/transport/CapturingTransport.java b/test/framework/src/main/java/org/elasticsearch/test/transport/CapturingTransport.java index 2ccddf6bc5437..81fc934ca6d7e 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/transport/CapturingTransport.java +++ b/test/framework/src/main/java/org/elasticsearch/test/transport/CapturingTransport.java @@ -39,7 +39,7 @@ import org.elasticsearch.transport.TransportRequest; import org.elasticsearch.transport.TransportRequestOptions; import org.elasticsearch.transport.TransportResponse; -import org.elasticsearch.transport.TransportServiceAdapter; +import org.elasticsearch.transport.TransportService; import org.elasticsearch.transport.TransportStats; import java.io.IOException; @@ -60,7 +60,7 @@ /** A transport class that doesn't send anything but rather captures all requests for inspection from tests */ public class CapturingTransport implements Transport { - private TransportServiceAdapter adapter; + private TransportService transportService; public static class CapturedRequest { public final DiscoveryNode node; @@ -137,7 +137,7 @@ public void clear() { /** simulate a response for the given requestId */ public void handleResponse(final long requestId, final TransportResponse response) { - adapter.onResponseReceived(requestId).handleResponse(response); + transportService.onResponseReceived(requestId).handleResponse(response); } /** @@ 
-189,7 +189,7 @@ public void handleRemoteError(final long requestId, final Throwable t) { * @param e the failure */ public void handleError(final long requestId, final TransportException e) { - adapter.onResponseReceived(requestId).handleException(e); + transportService.onResponseReceived(requestId).handleException(e); } @Override @@ -220,8 +220,8 @@ public TransportStats getStats() { } @Override - public void transportServiceAdapter(TransportServiceAdapter adapter) { - this.adapter = adapter; + public void setTransportService(TransportService transportService) { + this.transportService = transportService; } @Override diff --git a/test/framework/src/main/java/org/elasticsearch/test/transport/MockTransportService.java b/test/framework/src/main/java/org/elasticsearch/test/transport/MockTransportService.java index 98736a7a98ec1..f5efa9a8c56fa 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/transport/MockTransportService.java +++ b/test/framework/src/main/java/org/elasticsearch/test/transport/MockTransportService.java @@ -20,6 +20,7 @@ package org.elasticsearch.test.transport; import com.carrotsearch.randomizedtesting.SysGlobals; + import org.elasticsearch.Version; import org.elasticsearch.cluster.ClusterModule; import org.elasticsearch.cluster.node.DiscoveryNode; @@ -44,7 +45,6 @@ import org.elasticsearch.node.Node; import org.elasticsearch.plugins.Plugin; import org.elasticsearch.tasks.TaskManager; -import org.elasticsearch.test.ESTestCase; import org.elasticsearch.test.tasks.MockTaskManager; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.ConnectTransportException; @@ -58,7 +58,6 @@ import org.elasticsearch.transport.TransportRequest; import org.elasticsearch.transport.TransportRequestOptions; import org.elasticsearch.transport.TransportService; -import org.elasticsearch.transport.TransportServiceAdapter; import org.elasticsearch.transport.TransportStats; import java.io.IOException; @@ -73,6 +72,7 @@ import java.util.Set; import java.util.concurrent.ConcurrentMap; import java.util.concurrent.CopyOnWriteArrayList; +import java.util.concurrent.ExecutorService; import java.util.concurrent.LinkedBlockingDeque; import java.util.concurrent.atomic.AtomicBoolean; import java.util.function.Function; @@ -101,16 +101,16 @@ public List> getSettings() { } public static MockTransportService createNewService(Settings settings, Version version, ThreadPool threadPool, - @Nullable ClusterSettings clusterSettings) { + @Nullable ClusterSettings clusterSettings) { // some tests use MockTransportService to do network based testing. Yet, we run tests in multiple JVMs that means // concurrent tests could claim port that another JVM just released and if that test tries to simulate a disconnect it might // be smart enough to re-connect depending on what is tested. 
To reduce the risk, since this is very hard to debug we use // a different default port range per JVM unless the incoming settings override it int basePort = 10300 + (JVM_ORDINAL * 100); // use a non-default port otherwise some cluster in this JVM might reuse a port - settings = Settings.builder().put(TcpTransport.PORT.getKey(), basePort + "-" + (basePort+100)).put(settings).build(); + settings = Settings.builder().put(TcpTransport.PORT.getKey(), basePort + "-" + (basePort + 100)).put(settings).build(); NamedWriteableRegistry namedWriteableRegistry = new NamedWriteableRegistry(ClusterModule.getNamedWriteables()); final Transport transport = new MockTcpTransport(settings, threadPool, BigArrays.NON_RECYCLING_INSTANCE, - new NoneCircuitBreakerService(), namedWriteableRegistry, new NetworkService(Collections.emptyList()), version); + new NoneCircuitBreakerService(), namedWriteableRegistry, new NetworkService(Collections.emptyList()), version); return createNewService(settings, transport, version, threadPool, clusterSettings); } @@ -118,8 +118,8 @@ public static MockTransportService createNewService(Settings settings, Transport @Nullable ClusterSettings clusterSettings) { return new MockTransportService(settings, transport, threadPool, TransportService.NOOP_TRANSPORT_INTERCEPTOR, boundAddress -> - new DiscoveryNode(Node.NODE_NAME_SETTING.get(settings), UUIDs.randomBase64UUID(), boundAddress.publishAddress(), - Node.NODE_ATTRIBUTES.get(settings).getAsMap(), DiscoveryNode.getRolesFromSettings(settings), version), + new DiscoveryNode(Node.NODE_NAME_SETTING.get(settings), UUIDs.randomBase64UUID(), boundAddress.publishAddress(), + Node.NODE_ATTRIBUTES.getAsMap(settings), DiscoveryNode.getRolesFromSettings(settings), version), clusterSettings); } @@ -128,11 +128,11 @@ public static MockTransportService createNewService(Settings settings, Transport /** * Build the service. * - * @param clusterSettings if non null the the {@linkplain TransportService} will register with the {@link ClusterSettings} for settings - * updates for {@link #TRACE_LOG_EXCLUDE_SETTING} and {@link #TRACE_LOG_INCLUDE_SETTING}. + * @param clusterSettings if non null the {@linkplain TransportService} will register with the {@link ClusterSettings} for settings + * updates for {@link #TRACE_LOG_EXCLUDE_SETTING} and {@link #TRACE_LOG_INCLUDE_SETTING}. */ public MockTransportService(Settings settings, Transport transport, ThreadPool threadPool, TransportInterceptor interceptor, - @Nullable ClusterSettings clusterSettings) { + @Nullable ClusterSettings clusterSettings) { this(settings, transport, threadPool, interceptor, (boundAddress) -> DiscoveryNode.createLocal(settings, boundAddress.publishAddress(), settings.get(Node.NODE_NAME_SETTING.getKey(), UUIDs.randomBase64UUID())), clusterSettings); @@ -141,8 +141,8 @@ public MockTransportService(Settings settings, Transport transport, ThreadPool t /** * Build the service. * - * @param clusterSettings if non null the the {@linkplain TransportService} will register with the {@link ClusterSettings} for settings - * updates for {@link #TRACE_LOG_EXCLUDE_SETTING} and {@link #TRACE_LOG_INCLUDE_SETTING}. + * @param clusterSettings if non null the {@linkplain TransportService} will register with the {@link ClusterSettings} for settings + * updates for {@link #TRACE_LOG_EXCLUDE_SETTING} and {@link #TRACE_LOG_INCLUDE_SETTING}. 
*/ public MockTransportService(Settings settings, Transport transport, ThreadPool threadPool, TransportInterceptor interceptor, Function localNodeFactory, @@ -163,11 +163,22 @@ public static TransportAddress[] extractTransportAddresses(TransportService tran protected TaskManager createTaskManager() { if (MockTaskManager.USE_MOCK_TASK_MANAGER_SETTING.get(settings)) { return new MockTaskManager(settings); - } else { + } else { return super.createTaskManager(); } } + private volatile String executorName; + + public void setExecutorName(final String executorName) { + this.executorName = executorName; + } + + @Override + protected ExecutorService getExecutorService() { + return executorName == null ? super.getExecutorService() : getThreadPool().executor(executorName); + } + /** * Clears all the registered rules. */ @@ -347,7 +358,7 @@ public void addUnresponsiveRule(TransportAddress transportAddress, final TimeVal final long startTime = System.currentTimeMillis(); addDelegate(transportAddress, new ClearableTransport(original) { - private final Queue requestsToSendWhenCleared = new LinkedBlockingDeque(); + private final Queue requestsToSendWhenCleared = new LinkedBlockingDeque<>(); private boolean cleared = false; TimeValue getDelay() { @@ -419,8 +430,7 @@ protected void sendRequest(Connection connection, long requestId, String action, RequestHandlerRegistry reg = MockTransportService.this.getRequestHandler(action); BytesStreamOutput bStream = new BytesStreamOutput(); request.writeTo(bStream); - final TransportRequest clonedRequest = reg.newRequest(); - clonedRequest.readFrom(bStream.bytes().streamInput()); + final TransportRequest clonedRequest = reg.newRequest(bStream.bytes().streamInput()); Runnable runnable = new AbstractRunnable() { AtomicBoolean requestSent = new AtomicBoolean(); @@ -548,8 +558,8 @@ public DelegateTransport(Transport transport) { } @Override - public void transportServiceAdapter(TransportServiceAdapter service) { - transport.transportServiceAdapter(service); + public void setTransportService(TransportService service) { + transport.setTransportService(service); } @Override @@ -642,7 +652,9 @@ public void stop() { } @Override - public void close() { transport.close(); } + public void close() { + transport.close(); + } @Override public Map profileBoundAddresses() { @@ -705,55 +717,47 @@ public void clearTracers() { } @Override - protected Adapter createAdapter() { - return new MockAdapter(); + protected boolean traceEnabled() { + return super.traceEnabled() || activeTracers.isEmpty() == false; } - class MockAdapter extends Adapter { - - @Override - protected boolean traceEnabled() { - return super.traceEnabled() || activeTracers.isEmpty() == false; - } - - @Override - protected void traceReceivedRequest(long requestId, String action) { - super.traceReceivedRequest(requestId, action); - for (Tracer tracer : activeTracers) { - tracer.receivedRequest(requestId, action); - } + @Override + protected void traceReceivedRequest(long requestId, String action) { + super.traceReceivedRequest(requestId, action); + for (Tracer tracer : activeTracers) { + tracer.receivedRequest(requestId, action); } + } - @Override - protected void traceResponseSent(long requestId, String action) { - super.traceResponseSent(requestId, action); - for (Tracer tracer : activeTracers) { - tracer.responseSent(requestId, action); - } + @Override + protected void traceResponseSent(long requestId, String action) { + super.traceResponseSent(requestId, action); + for (Tracer tracer : activeTracers) { + 
tracer.responseSent(requestId, action); } + } - @Override - protected void traceResponseSent(long requestId, String action, Exception e) { - super.traceResponseSent(requestId, action, e); - for (Tracer tracer : activeTracers) { - tracer.responseSent(requestId, action, e); - } + @Override + protected void traceResponseSent(long requestId, String action, Exception e) { + super.traceResponseSent(requestId, action, e); + for (Tracer tracer : activeTracers) { + tracer.responseSent(requestId, action, e); } + } - @Override - protected void traceReceivedResponse(long requestId, DiscoveryNode sourceNode, String action) { - super.traceReceivedResponse(requestId, sourceNode, action); - for (Tracer tracer : activeTracers) { - tracer.receivedResponse(requestId, sourceNode, action); - } + @Override + protected void traceReceivedResponse(long requestId, DiscoveryNode sourceNode, String action) { + super.traceReceivedResponse(requestId, sourceNode, action); + for (Tracer tracer : activeTracers) { + tracer.receivedResponse(requestId, sourceNode, action); } + } - @Override - protected void traceRequestSent(DiscoveryNode node, long requestId, String action, TransportRequestOptions options) { - super.traceRequestSent(node, requestId, action, options); - for (Tracer tracer : activeTracers) { - tracer.requestSent(node, requestId, action, options); - } + @Override + protected void traceRequestSent(DiscoveryNode node, long requestId, String action, TransportRequestOptions options) { + super.traceRequestSent(node, requestId, action, options); + for (Tracer tracer : activeTracers) { + tracer.requestSent(node, requestId, action, options); } } @@ -803,6 +807,7 @@ public Transport getOriginalTransport() { public Transport.Connection openConnection(DiscoveryNode node, ConnectionProfile profile) throws IOException { FilteredConnection filteredConnection = new FilteredConnection(super.openConnection(node, profile)) { final AtomicBoolean closed = new AtomicBoolean(false); + @Override public void close() throws IOException { try { diff --git a/test/framework/src/main/java/org/elasticsearch/transport/AbstractSimpleTransportTestCase.java b/test/framework/src/main/java/org/elasticsearch/transport/AbstractSimpleTransportTestCase.java index 2e252d112df2b..9230a4eb248fc 100644 --- a/test/framework/src/main/java/org/elasticsearch/transport/AbstractSimpleTransportTestCase.java +++ b/test/framework/src/main/java/org/elasticsearch/transport/AbstractSimpleTransportTestCase.java @@ -83,8 +83,10 @@ import static java.util.Collections.emptyMap; import static java.util.Collections.emptySet; +import static org.hamcrest.Matchers.containsString; import static org.hamcrest.Matchers.empty; import static org.hamcrest.Matchers.equalTo; +import static org.hamcrest.Matchers.hasToString; import static org.hamcrest.Matchers.instanceOf; import static org.hamcrest.Matchers.notNullValue; import static org.hamcrest.Matchers.startsWith; @@ -147,14 +149,14 @@ public void onNodeDisconnected(DiscoveryNode node) { private MockTransportService buildService(final String name, final Version version, ClusterSettings clusterSettings, Settings settings, boolean acceptRequests, boolean doHandshake) { MockTransportService service = build( - Settings.builder() - .put(settings) - .put(Node.NODE_NAME_SETTING.getKey(), name) - .put(TransportService.TRACE_LOG_INCLUDE_SETTING.getKey(), "") - .put(TransportService.TRACE_LOG_EXCLUDE_SETTING.getKey(), "NOTHING") - .build(), - version, - clusterSettings, doHandshake); + Settings.builder() + .put(settings) + 
.put(Node.NODE_NAME_SETTING.getKey(), name) + .put(TransportService.TRACE_LOG_INCLUDE_SETTING.getKey(), "") + .put(TransportService.TRACE_LOG_EXCLUDE_SETTING.getKey(), "NOTHING") + .build(), + version, + clusterSettings, doHandshake); if (acceptRequests) { service.acceptIncomingRequests(); } @@ -1047,9 +1049,11 @@ private static class Tracer extends MockTransportService.Tracer { public volatile boolean sawResponseReceived; public AtomicReference expectedEvents = new AtomicReference<>(); + Tracer(Set actions) { this.actions = actions; } + @Override public void receivedRequest(long requestId, String action) { super.receivedRequest(requestId, action); @@ -1446,7 +1450,7 @@ public void handleResponse(StringMessageResponse response) { public void handleException(TransportException exp) { Throwable cause = ExceptionsHelper.unwrapCause(exp); assertThat(cause, instanceOf(ConnectTransportException.class)); - assertThat(((ConnectTransportException)cause).node(), equalTo(nodeA)); + assertThat(((ConnectTransportException) cause).node(), equalTo(nodeA)); } }); @@ -1456,7 +1460,7 @@ public void handleException(TransportException exp) { } catch (Exception e) { Throwable cause = ExceptionsHelper.unwrapCause(e); assertThat(cause, instanceOf(ConnectTransportException.class)); - assertThat(((ConnectTransportException)cause).node(), equalTo(nodeA)); + assertThat(((ConnectTransportException) cause).node(), equalTo(nodeA)); } // wait for the transport to process the sending failure and disconnect from node @@ -1586,26 +1590,26 @@ public void testBlockingIncomingRequests() throws Exception { CountDownLatch latch = new CountDownLatch(1); serviceA.sendRequest(connection, "action", new TestRequest(), TransportRequestOptions.EMPTY, new TransportResponseHandler() { - @Override - public TestResponse newInstance() { - return new TestResponse(); - } + @Override + public TestResponse newInstance() { + return new TestResponse(); + } - @Override - public void handleResponse(TestResponse response) { - latch.countDown(); - } + @Override + public void handleResponse(TestResponse response) { + latch.countDown(); + } - @Override - public void handleException(TransportException exp) { - latch.countDown(); - } + @Override + public void handleException(TransportException exp) { + latch.countDown(); + } - @Override - public String executor() { - return ThreadPool.Names.SAME; - } - }); + @Override + public String executor() { + return ThreadPool.Names.SAME; + } + }); assertFalse(requestProcessed.get()); @@ -1859,14 +1863,20 @@ public String executor() { public void testRegisterHandlerTwice() { serviceB.registerRequestHandler("action1", TestRequest::new, randomFrom(ThreadPool.Names.SAME, ThreadPool.Names.GENERIC), - (request, message) -> {throw new AssertionError("boom");}); + (request, message) -> { + throw new AssertionError("boom"); + }); expectThrows(IllegalArgumentException.class, () -> serviceB.registerRequestHandler("action1", TestRequest::new, randomFrom(ThreadPool.Names.SAME, ThreadPool.Names.GENERIC), - (request, message) -> {throw new AssertionError("boom");}) + (request, message) -> { + throw new AssertionError("boom"); + }) ); serviceA.registerRequestHandler("action1", TestRequest::new, randomFrom(ThreadPool.Names.SAME, ThreadPool.Names.GENERIC), - (request, message) -> {throw new AssertionError("boom");}); + (request, message) -> { + throw new AssertionError("boom"); + }); } public void testTimeoutPerConnection() throws IOException { @@ -1914,11 +1924,12 @@ public void testTimeoutPerConnection() throws IOException { 
public void testHandshakeWithIncompatVersion() { assumeTrue("only tcp transport has a handshake method", serviceA.getOriginalTransport() instanceof TcpTransport); NamedWriteableRegistry namedWriteableRegistry = new NamedWriteableRegistry(Collections.emptyList()); + Version version = Version.fromString("2.0.0"); try (MockTcpTransport transport = new MockTcpTransport(Settings.EMPTY, threadPool, BigArrays.NON_RECYCLING_INSTANCE, - new NoneCircuitBreakerService(), namedWriteableRegistry, new NetworkService(Collections.emptyList()), - Version.fromString("2.0.0"))) { - transport.transportServiceAdapter(serviceA.new Adapter()); - transport.start(); + new NoneCircuitBreakerService(), namedWriteableRegistry, new NetworkService(Collections.emptyList()), version); + MockTransportService service = MockTransportService.createNewService(Settings.EMPTY, transport, version, threadPool, null)) { + service.start(); + service.acceptIncomingRequests(); DiscoveryNode node = new DiscoveryNode("TS_TPC", "TS_TPC", transport.boundAddress().publishAddress(), emptyMap(), emptySet(), version0); ConnectionProfile.Builder builder = new ConnectionProfile.Builder(); @@ -1937,9 +1948,10 @@ public void testHandshakeUpdatesVersion() throws IOException { NamedWriteableRegistry namedWriteableRegistry = new NamedWriteableRegistry(Collections.emptyList()); Version version = VersionUtils.randomVersionBetween(random(), Version.CURRENT.minimumCompatibilityVersion(), Version.CURRENT); try (MockTcpTransport transport = new MockTcpTransport(Settings.EMPTY, threadPool, BigArrays.NON_RECYCLING_INSTANCE, - new NoneCircuitBreakerService(), namedWriteableRegistry, new NetworkService(Collections.emptyList()),version)) { - transport.transportServiceAdapter(serviceA.new Adapter()); - transport.start(); + new NoneCircuitBreakerService(), namedWriteableRegistry, new NetworkService(Collections.emptyList()), version); + MockTransportService service = MockTransportService.createNewService(Settings.EMPTY, transport, version, threadPool, null)) { + service.start(); + service.acceptIncomingRequests(); DiscoveryNode node = new DiscoveryNode("TS_TPC", "TS_TPC", transport.boundAddress().publishAddress(), emptyMap(), emptySet(), Version.fromString("2.0.0")); @@ -1956,24 +1968,26 @@ public void testHandshakeUpdatesVersion() throws IOException { } } - public void testTcpHandshake() throws IOException, InterruptedException { assumeTrue("only tcp transport has a handshake method", serviceA.getOriginalTransport() instanceof TcpTransport); TcpTransport originalTransport = (TcpTransport) serviceA.getOriginalTransport(); NamedWriteableRegistry namedWriteableRegistry = new NamedWriteableRegistry(Collections.emptyList()); - try (MockTcpTransport transport = new MockTcpTransport(Settings.EMPTY, threadPool, BigArrays.NON_RECYCLING_INSTANCE, + MockTcpTransport transport = new MockTcpTransport(Settings.EMPTY, threadPool, BigArrays.NON_RECYCLING_INSTANCE, new NoneCircuitBreakerService(), namedWriteableRegistry, new NetworkService(Collections.emptyList())) { @Override protected String handleRequest(MockChannel mockChannel, String profileName, StreamInput stream, long requestId, int messageLengthBytes, Version version, InetSocketAddress remoteAddress, byte status) throws IOException { return super.handleRequest(mockChannel, profileName, stream, requestId, messageLengthBytes, version, remoteAddress, - (byte)(status & ~(1<<3))); // we flip the isHandshake bit back and act like the handler is not found + (byte) (status & ~(1 << 3))); // we flip the isHandshake bit back and 
act like the handler is not found } - }) { - transport.transportServiceAdapter(serviceA.new Adapter()); - transport.start(); + }; + + try (MockTransportService service = MockTransportService.createNewService(Settings.EMPTY, transport, Version.CURRENT, threadPool, + null)) { + service.start(); + service.acceptIncomingRequests(); // this acts like a node that doesn't have support for handshakes DiscoveryNode node = new DiscoveryNode("TS_TPC", "TS_TPC", transport.boundAddress().publishAddress(), emptyMap(), emptySet(), version0); @@ -1986,7 +2000,7 @@ protected String handleRequest(MockChannel mockChannel, String profileName, Stre TcpTransport.NodeChannels connection = originalTransport.openConnection( new DiscoveryNode("TS_TPC", "TS_TPC", service.boundAddress().publishAddress(), emptyMap(), emptySet(), version0), null - ) ) { + )) { Version version = originalTransport.executeHandshake(connection.getNode(), connection.channel(TransportRequestOptions.Type.PING), TimeValue.timeValueSeconds(10)); assertEquals(version, Version.CURRENT); @@ -2105,8 +2119,8 @@ public String executor() { } }; - serviceB.sendRequest(nodeA, "action", new TestRequest(randomFrom("fail", "pass")), transportResponseHandler); - serviceA.sendRequest(nodeA, "action", new TestRequest(randomFrom("fail", "pass")), transportResponseHandler); + serviceB.sendRequest(nodeA, "action", new TestRequest(randomFrom("fail", "pass")), transportResponseHandler); + serviceA.sendRequest(nodeA, "action", new TestRequest(randomFrom("fail", "pass")), transportResponseHandler); latch.await(); } @@ -2303,22 +2317,22 @@ public String executor() { TransportRequestOptions.Type.STATE); try (Transport.Connection connection = serviceC.openConnection(serviceB.getLocalNode(), builder.build())) { assertBusy(() -> { // netty for instance invokes this concurrently so we better use assert busy here - TransportStats transportStats = serviceC.transport.getStats(); // we did a single round-trip to do the initial handshake - assertEquals(1, transportStats.getRxCount()); - assertEquals(1, transportStats.getTxCount()); - assertEquals(25, transportStats.getRxSize().getBytes()); - assertEquals(45, transportStats.getTxSize().getBytes()); - }); + TransportStats transportStats = serviceC.transport.getStats(); // we did a single round-trip to do the initial handshake + assertEquals(1, transportStats.getRxCount()); + assertEquals(1, transportStats.getTxCount()); + assertEquals(25, transportStats.getRxSize().getBytes()); + assertEquals(45, transportStats.getTxSize().getBytes()); + }); serviceC.sendRequest(connection, "action", new TestRequest("hello world"), TransportRequestOptions.EMPTY, transportResponseHandler); receivedLatch.await(); assertBusy(() -> { // netty for instance invokes this concurrently so we better use assert busy here - TransportStats transportStats = serviceC.transport.getStats(); // request has ben send - assertEquals(1, transportStats.getRxCount()); - assertEquals(2, transportStats.getTxCount()); - assertEquals(25, transportStats.getRxSize().getBytes()); - assertEquals(91, transportStats.getTxSize().getBytes()); - }); + TransportStats transportStats = serviceC.transport.getStats(); // request has ben send + assertEquals(1, transportStats.getRxCount()); + assertEquals(2, transportStats.getTxCount()); + assertEquals(25, transportStats.getRxSize().getBytes()); + assertEquals(91, transportStats.getTxSize().getBytes()); + }); sendResponseLatch.countDown(); responseLatch.await(); stats = serviceC.transport.getStats(); // response has been received @@ 
-2398,22 +2412,22 @@ public String executor() { TransportRequestOptions.Type.STATE); try (Transport.Connection connection = serviceC.openConnection(serviceB.getLocalNode(), builder.build())) { assertBusy(() -> { // netty for instance invokes this concurrently so we better use assert busy here - TransportStats transportStats = serviceC.transport.getStats(); // request has ben send - assertEquals(1, transportStats.getRxCount()); - assertEquals(1, transportStats.getTxCount()); - assertEquals(25, transportStats.getRxSize().getBytes()); - assertEquals(45, transportStats.getTxSize().getBytes()); - }); + TransportStats transportStats = serviceC.transport.getStats(); // request has ben send + assertEquals(1, transportStats.getRxCount()); + assertEquals(1, transportStats.getTxCount()); + assertEquals(25, transportStats.getRxSize().getBytes()); + assertEquals(45, transportStats.getTxSize().getBytes()); + }); serviceC.sendRequest(connection, "action", new TestRequest("hello world"), TransportRequestOptions.EMPTY, transportResponseHandler); receivedLatch.await(); assertBusy(() -> { // netty for instance invokes this concurrently so we better use assert busy here - TransportStats transportStats = serviceC.transport.getStats(); // request has ben send - assertEquals(1, transportStats.getRxCount()); - assertEquals(2, transportStats.getTxCount()); - assertEquals(25, transportStats.getRxSize().getBytes()); - assertEquals(91, transportStats.getTxSize().getBytes()); - }); + TransportStats transportStats = serviceC.transport.getStats(); // request has ben send + assertEquals(1, transportStats.getRxCount()); + assertEquals(2, transportStats.getTxCount()); + assertEquals(25, transportStats.getRxSize().getBytes()); + assertEquals(91, transportStats.getTxSize().getBytes()); + }); sendResponseLatch.countDown(); responseLatch.await(); stats = serviceC.transport.getStats(); // exception response has been received @@ -2443,8 +2457,8 @@ public void testTransportProfilesWithPortAndHost() { .put("transport.profiles.some_profile.port", "8900-9000") .put("transport.profiles.some_profile.bind_host", "_local:ipv4_") .put("transport.profiles.some_other_profile.port", "8700-8800") - .putArray("transport.profiles.some_other_profile.bind_host", hosts) - .putArray("transport.profiles.some_other_profile.publish_host", "_local:ipv4_") + .putList("transport.profiles.some_other_profile.bind_host", hosts) + .putList("transport.profiles.some_other_profile.publish_host", "_local:ipv4_") .build(), version0, null, true)) { serviceC.start(); @@ -2600,4 +2614,33 @@ public void testProfilesIncludesDefault() { assertEquals(new HashSet<>(Arrays.asList("default", "test")), profileSettings.stream().map(s -> s.profileName).collect(Collectors .toSet())); } + + public void testChannelCloseWhileConnecting() throws IOException { + try (MockTransportService service = build(Settings.builder().put("name", "close").build(), version0, null, true)) { + service.setExecutorName(ThreadPool.Names.SAME); // make sure stuff is executed in a blocking fashion + service.addConnectionListener(new TransportConnectionListener() { + @Override + public void onConnectionOpened(final Transport.Connection connection) { + try { + closeConnectionChannel(service.getOriginalTransport(), connection); + } catch (final IOException e) { + throw new AssertionError(e); + } + } + }); + final ConnectionProfile.Builder builder = new ConnectionProfile.Builder(); + builder.addConnections(1, + TransportRequestOptions.Type.BULK, + TransportRequestOptions.Type.PING, + 
TransportRequestOptions.Type.RECOVERY, + TransportRequestOptions.Type.REG, + TransportRequestOptions.Type.STATE); + final ConnectTransportException e = + expectThrows(ConnectTransportException.class, () -> service.openConnection(nodeA, builder.build())); + assertThat(e, hasToString(containsString(("a channel closed while connecting")))); + } + } + + protected abstract void closeConnectionChannel(Transport transport, Transport.Connection connection) throws IOException; + } diff --git a/test/framework/src/main/java/org/elasticsearch/transport/MockTcpTransport.java b/test/framework/src/main/java/org/elasticsearch/transport/MockTcpTransport.java index bbfccb8229ad1..6d5b94dd67a05 100644 --- a/test/framework/src/main/java/org/elasticsearch/transport/MockTcpTransport.java +++ b/test/framework/src/main/java/org/elasticsearch/transport/MockTcpTransport.java @@ -117,12 +117,12 @@ protected InetSocketAddress getLocalAddress(MockChannel mockChannel) { @Override protected MockChannel bind(final String name, InetSocketAddress address) throws IOException { MockServerSocket socket = new MockServerSocket(); - socket.bind(address); socket.setReuseAddress(TCP_REUSE_ADDRESS.get(settings)); ByteSizeValue tcpReceiveBufferSize = TCP_RECEIVE_BUFFER_SIZE.get(settings); if (tcpReceiveBufferSize.getBytes() > 0) { socket.setReceiveBufferSize(tcpReceiveBufferSize.bytesAsInt()); } + socket.bind(address); MockChannel serverMockChannel = new MockChannel(socket, name); CountDownLatch started = new CountDownLatch(1); executor.execute(new AbstractRunnable() { @@ -176,7 +176,8 @@ private void readMessage(MockChannel mockChannel, StreamInput input) throws IOEx } @Override - protected NodeChannels connectToChannels(DiscoveryNode node, ConnectionProfile profile, + protected NodeChannels connectToChannels(DiscoveryNode node, + ConnectionProfile profile, Consumer onChannelClose) throws IOException { final MockChannel[] mockChannels = new MockChannel[1]; final NodeChannels nodeChannels = new NodeChannels(node, mockChannels, LIGHT_PROFILE); // we always use light here @@ -242,8 +243,22 @@ protected void sendMessage(MockChannel mockChannel, BytesReference reference, Ac } @Override - protected void closeChannels(List channel, boolean blocking) throws IOException { - IOUtils.close(channel); + protected void closeChannels(List channels, boolean blocking, boolean doNotLinger) throws IOException { + if (doNotLinger) { + for (MockChannel channel : channels) { + if (channel.activeChannel != null) { + /* We set SO_LINGER timeout to 0 to ensure that when we shutdown the node we don't have a gazillion connections sitting + * in TIME_WAIT to free up resources quickly. This is really the only part where we close the connection from the server + * side otherwise the client (node) initiates the TCP closing sequence which doesn't cause these issues. 
Setting this + * by default from the beginning can have unexpected side-effects an should be avoided, our protocol is designed + * in a way that clients close connection which is how it should be*/ + if (channel.isOpen.get()) { + channel.activeChannel.setSoLinger(true, 0); + } + } + } + } + IOUtils.close(channels); } @Override diff --git a/test/framework/src/main/java/org/elasticsearch/transport/nio/AcceptingSelector.java b/test/framework/src/main/java/org/elasticsearch/transport/nio/AcceptingSelector.java index f43a061500526..e116d6421706d 100644 --- a/test/framework/src/main/java/org/elasticsearch/transport/nio/AcceptingSelector.java +++ b/test/framework/src/main/java/org/elasticsearch/transport/nio/AcceptingSelector.java @@ -51,21 +51,22 @@ public AcceptingSelector(AcceptorEventHandler eventHandler, Selector selector) t } @Override - void doSelect(int timeout) throws IOException, ClosedSelectorException { - setUpNewServerChannels(); - - int ready = selector.select(timeout); - if (ready > 0) { - Set selectionKeys = selector.selectedKeys(); - Iterator keyIterator = selectionKeys.iterator(); - while (keyIterator.hasNext()) { - SelectionKey sk = keyIterator.next(); - keyIterator.remove(); - acceptChannel(sk); + void processKey(SelectionKey selectionKey) { + NioServerSocketChannel serverChannel = (NioServerSocketChannel) selectionKey.attachment(); + if (selectionKey.isAcceptable()) { + try { + eventHandler.acceptChannel(serverChannel); + } catch (IOException e) { + eventHandler.acceptException(serverChannel, e); } } } + @Override + void preSelect() { + setUpNewServerChannels(); + } + @Override void cleanup() { channelsToClose.addAll(newChannels); @@ -74,6 +75,7 @@ void cleanup() { /** * Schedules a NioServerSocketChannel to be registered with this selector. The channel will by queued and * eventually registered next time through the event loop. 
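The SO_LINGER handling in closeChannels above is plain JDK behaviour: with a linger time of zero, close() resets the connection instead of going through the normal FIN/TIME_WAIT sequence, so shutting a node down does not leave piles of server-side sockets in TIME_WAIT. A minimal illustration against the JDK socket APIs (host and port are placeholders):

import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.StandardSocketOptions;
import java.nio.channels.SocketChannel;

public class SoLingerZeroExample {
    public static void main(String[] args) throws Exception {
        // Blocking-socket form, matching MockChannel's activeChannel.setSoLinger(true, 0) above.
        try (Socket socket = new Socket("localhost", 9300)) {
            socket.setSoLinger(true, 0);
        } // close() now resets the connection; this side skips TIME_WAIT

        // NIO form, matching the nio transport further below.
        try (SocketChannel channel = SocketChannel.open(new InetSocketAddress("localhost", 9300))) {
            if (channel.supportedOptions().contains(StandardSocketOptions.SO_LINGER)) {
                channel.setOption(StandardSocketOptions.SO_LINGER, 0);
            }
        }
    }
}

Doing this only at shutdown, and only on the server side, keeps the normal client-initiated close sequence for every other case, which is why the comment warns against enabling it by default.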
+ * * @param serverSocketChannel the channel to register */ public void scheduleForRegistration(NioServerSocketChannel serverSocketChannel) { @@ -82,7 +84,7 @@ public void scheduleForRegistration(NioServerSocketChannel serverSocketChannel) wakeup(); } - private void setUpNewServerChannels() throws ClosedChannelException { + private void setUpNewServerChannels() { NioServerSocketChannel newChannel; while ((newChannel = this.newChannels.poll()) != null) { assert newChannel.getSelector() == this : "The channel must be registered with the selector with which it was created"; @@ -101,23 +103,4 @@ private void setUpNewServerChannels() throws ClosedChannelException { } } } - - private void acceptChannel(SelectionKey sk) { - NioServerSocketChannel serverChannel = (NioServerSocketChannel) sk.attachment(); - if (sk.isValid()) { - try { - if (sk.isAcceptable()) { - try { - eventHandler.acceptChannel(serverChannel); - } catch (IOException e) { - eventHandler.acceptException(serverChannel, e); - } - } - } catch (CancelledKeyException ex) { - eventHandler.genericServerChannelException(serverChannel, ex); - } - } else { - eventHandler.genericServerChannelException(serverChannel, new CancelledKeyException()); - } - } } diff --git a/test/framework/src/main/java/org/elasticsearch/transport/nio/AcceptorEventHandler.java b/test/framework/src/main/java/org/elasticsearch/transport/nio/AcceptorEventHandler.java index 7228cf4f050e1..3de846fd61f6b 100644 --- a/test/framework/src/main/java/org/elasticsearch/transport/nio/AcceptorEventHandler.java +++ b/test/framework/src/main/java/org/elasticsearch/transport/nio/AcceptorEventHandler.java @@ -20,6 +20,7 @@ package org.elasticsearch.transport.nio; import org.apache.logging.log4j.Logger; +import org.apache.logging.log4j.message.ParameterizedMessage; import org.elasticsearch.transport.nio.channel.ChannelFactory; import org.elasticsearch.transport.nio.channel.NioServerSocketChannel; import org.elasticsearch.transport.nio.channel.NioSocketChannel; @@ -48,7 +49,7 @@ public AcceptorEventHandler(Logger logger, OpenChannels openChannels, Supplier new ParameterizedMessage("exception while accepting new channel from server channel: {}", + nioServerChannel), exception); } } diff --git a/test/framework/src/main/java/org/elasticsearch/transport/nio/ESSelector.java b/test/framework/src/main/java/org/elasticsearch/transport/nio/ESSelector.java index 9030d5781458e..ba0fae3ee3127 100644 --- a/test/framework/src/main/java/org/elasticsearch/transport/nio/ESSelector.java +++ b/test/framework/src/main/java/org/elasticsearch/transport/nio/ESSelector.java @@ -24,9 +24,12 @@ import java.io.Closeable; import java.io.IOException; +import java.nio.channels.CancelledKeyException; import java.nio.channels.ClosedSelectorException; +import java.nio.channels.SelectionKey; import java.nio.channels.Selector; import java.util.Collections; +import java.util.Iterator; import java.util.Set; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentLinkedQueue; @@ -40,8 +43,8 @@ * {@link #close()} is called. This instance handles closing of channels. Users should call * {@link #queueChannelClose(NioChannel)} to schedule a channel for close by this selector. *

    - * Children of this class should implement the specific {@link #doSelect(int)} and {@link #cleanup()} - * functionality. + * Children of this class should implement the specific {@link #processKey(SelectionKey)}, + * {@link #preSelect()}, and {@link #cleanup()} functionality. */ public abstract class ESSelector implements Closeable { @@ -98,7 +101,26 @@ public void runLoop() { void singleLoop() { try { closePendingChannels(); - doSelect(300); + preSelect(); + + int ready = selector.select(300); + if (ready > 0) { + Set selectionKeys = selector.selectedKeys(); + Iterator keyIterator = selectionKeys.iterator(); + while (keyIterator.hasNext()) { + SelectionKey sk = keyIterator.next(); + keyIterator.remove(); + if (sk.isValid()) { + try { + processKey(sk); + } catch (CancelledKeyException cke) { + eventHandler.genericChannelException((NioChannel) sk.attachment(), cke); + } + } else { + eventHandler.genericChannelException((NioChannel) sk.attachment(), new CancelledKeyException()); + } + } + } } catch (ClosedSelectorException e) { if (isOpen()) { throw e; @@ -117,13 +139,19 @@ void cleanupAndCloseChannels() { } /** - * Should implement the specific select logic. This will be called once per {@link #singleLoop()} + * Called by the base {@link ESSelector} class when there is a {@link SelectionKey} to be handled. * - * @param timeout to pass to the raw select operation - * @throws IOException thrown by the raw select operation - * @throws ClosedSelectorException thrown if the raw selector is closed + * @param selectionKey the key to be handled + * @throws CancelledKeyException thrown when the key has already been cancelled + */ + abstract void processKey(SelectionKey selectionKey) throws CancelledKeyException; + + /** + * Called immediately prior to a raw {@link Selector#select()} call. Should be used to implement + * channel registration, handling queued writes, and other work that is not specifically processing + * a selection key. */ - abstract void doSelect(int timeout) throws IOException, ClosedSelectorException; + abstract void preSelect(); /** * Called once as the selector is being closed. 
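The net effect of the ESSelector change is that the select loop itself lives in the base class and the two subclasses only contribute preSelect() and processKey(). Stripped of the Elasticsearch channel types, the template-method shape is roughly (a sketch, not the actual class):

import java.io.IOException;
import java.nio.channels.CancelledKeyException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;

abstract class SimpleSelectorLoop {

    protected final Selector selector;

    protected SimpleSelectorLoop(Selector selector) {
        this.selector = selector;
    }

    /** Work done before blocking in select(), e.g. registering queued channels or flushing queued writes. */
    abstract void preSelect() throws IOException;

    /** Handle a single ready, valid key. */
    abstract void processKey(SelectionKey key) throws CancelledKeyException;

    /** Invoked when a key is invalid or processing it hit a CancelledKeyException. */
    abstract void handleCancelled(SelectionKey key, CancelledKeyException e);

    void singleLoop() throws IOException {
        preSelect();
        if (selector.select(300) > 0) {
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isValid()) {
                    try {
                        processKey(key);
                    } catch (CancelledKeyException e) {
                        handleCancelled(key, e);
                    }
                } else {
                    handleCancelled(key, new CancelledKeyException());
                }
            }
        }
    }
}

AcceptingSelector's processKey accepts the pending connection, while SocketSelector's (in the hunk further below) dispatches on readyOps() for connect, write, and read, so the cancelled-key bookkeeping no longer has to be duplicated in each subclass.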
diff --git a/test/framework/src/main/java/org/elasticsearch/transport/nio/EventHandler.java b/test/framework/src/main/java/org/elasticsearch/transport/nio/EventHandler.java index 382a6728771c6..04e1b21b1b065 100644 --- a/test/framework/src/main/java/org/elasticsearch/transport/nio/EventHandler.java +++ b/test/framework/src/main/java/org/elasticsearch/transport/nio/EventHandler.java @@ -20,9 +20,9 @@ package org.elasticsearch.transport.nio; import org.apache.logging.log4j.Logger; +import org.apache.logging.log4j.message.ParameterizedMessage; import org.elasticsearch.transport.nio.channel.CloseFuture; import org.elasticsearch.transport.nio.channel.NioChannel; -import org.elasticsearch.transport.nio.channel.NioSocketChannel; import java.io.IOException; import java.nio.channels.Selector; @@ -31,7 +31,7 @@ public abstract class EventHandler { protected final Logger logger; - public EventHandler(Logger logger) { + EventHandler(Logger logger) { this.logger = logger; } @@ -40,8 +40,8 @@ public EventHandler(Logger logger) { * * @param exception the exception */ - public void selectException(IOException exception) { - logger.warn("io exception during select", exception); + void selectException(IOException exception) { + logger.warn(new ParameterizedMessage("io exception during select [thread={}]", Thread.currentThread().getName()), exception); } /** @@ -49,8 +49,9 @@ public void selectException(IOException exception) { * * @param exception the exception */ - public void closeSelectorException(IOException exception) { - logger.warn("io exception while closing selector", exception); + void closeSelectorException(IOException exception) { + logger.warn(new ParameterizedMessage("io exception while closing selector [thread={}]", Thread.currentThread().getName()), + exception); } /** @@ -58,7 +59,7 @@ public void closeSelectorException(IOException exception) { * * @param exception that was uncaught */ - public void uncaughtException(Exception exception) { + void uncaughtException(Exception exception) { Thread thread = Thread.currentThread(); thread.getUncaughtExceptionHandler().uncaughtException(thread, exception); } @@ -68,13 +69,35 @@ public void uncaughtException(Exception exception) { * * @param channel that should be closed */ - public void handleClose(NioChannel channel) { + void handleClose(NioChannel channel) { channel.closeFromSelector(); CloseFuture closeFuture = channel.getCloseFuture(); assert closeFuture.isDone() : "Should always be done as we are on the selector thread"; IOException closeException = closeFuture.getCloseException(); if (closeException != null) { - logger.debug("exception while closing channel", closeException); + closeException(channel, closeException); } } + + /** + * This method is called when an attempt to close a channel throws an exception. + * + * @param channel that was being closed + * @param exception that occurred + */ + void closeException(NioChannel channel, Exception exception) { + logger.debug(() -> new ParameterizedMessage("exception while closing channel: {}", channel), exception); + } + + /** + * This method is called when handling an event from a channel fails due to an unexpected exception. + * An example would be if checking ready ops on a {@link java.nio.channels.SelectionKey} threw + * {@link java.nio.channels.CancelledKeyException}. 
+ * + * @param channel that caused the exception + * @param exception that was thrown + */ + void genericChannelException(NioChannel channel, Exception exception) { + logger.debug(() -> new ParameterizedMessage("exception while handling event for channel: {}", channel), exception); + } } diff --git a/test/framework/src/main/java/org/elasticsearch/transport/nio/NioClient.java b/test/framework/src/main/java/org/elasticsearch/transport/nio/NioClient.java index 27ddca978786f..ee0b32db0149a 100644 --- a/test/framework/src/main/java/org/elasticsearch/transport/nio/NioClient.java +++ b/test/framework/src/main/java/org/elasticsearch/transport/nio/NioClient.java @@ -56,7 +56,9 @@ public NioClient(Logger logger, OpenChannels openChannels, Supplier closeListener) throws IOException { boolean allowedToConnect = semaphore.tryAcquire(); if (allowedToConnect == false) { @@ -136,10 +138,6 @@ private void closeChannels(ArrayList connections, Exception e) for (final NioSocketChannel socketChannel : connections) { try { socketChannel.closeAsync().awaitClose(); - } catch (InterruptedException inner) { - logger.trace("exception while closing channel", e); - e.addSuppressed(inner); - Thread.currentThread().interrupt(); } catch (Exception inner) { logger.trace("exception while closing channel", e); e.addSuppressed(inner); diff --git a/test/framework/src/main/java/org/elasticsearch/transport/nio/NioTransport.java b/test/framework/src/main/java/org/elasticsearch/transport/nio/NioTransport.java index a621925140090..9eabcc56f28cd 100644 --- a/test/framework/src/main/java/org/elasticsearch/transport/nio/NioTransport.java +++ b/test/framework/src/main/java/org/elasticsearch/transport/nio/NioTransport.java @@ -19,6 +19,7 @@ package org.elasticsearch.transport.nio; +import java.net.StandardSocketOptions; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.action.ActionListener; @@ -28,7 +29,6 @@ import org.elasticsearch.common.network.NetworkService; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.common.util.concurrent.EsExecutors; import org.elasticsearch.indices.breaker.CircuitBreakerService; @@ -99,7 +99,19 @@ protected NioServerSocketChannel bind(String name, InetSocketAddress address) th } @Override - protected void closeChannels(List channels, boolean blocking) throws IOException { + protected void closeChannels(List channels, boolean blocking, boolean doNotLinger) throws IOException { + if (doNotLinger) { + for (NioChannel channel : channels) { + /* We set SO_LINGER timeout to 0 to ensure that when we shutdown the node we don't have a gazillion connections sitting + * in TIME_WAIT to free up resources quickly. This is really the only part where we close the connection from the server + * side otherwise the client (node) initiates the TCP closing sequence which doesn't cause these issues. 
Setting this + * by default from the beginning can have unexpected side-effects an should be avoided, our protocol is designed + * in a way that clients close connection which is how it should be*/ + if (channel.isOpen() && channel.getRawChannel().supportedOptions().contains(StandardSocketOptions.SO_LINGER)) { + channel.getRawChannel().setOption(StandardSocketOptions.SO_LINGER, 0); + } + } + } ArrayList futures = new ArrayList<>(channels.size()); for (final NioChannel channel : channels) { if (channel != null && channel.isOpen()) { @@ -119,11 +131,7 @@ protected void closeChannels(List channels, boolean blocking) throws for (CloseFuture future : futures) { try { future.awaitClose(); - IOException closeException = future.getCloseException(); - if (closeException != null) { - closingExceptions = addClosingException(closingExceptions, closeException); - } - } catch (InterruptedException e) { + } catch (Exception e) { closingExceptions = addClosingException(closingExceptions, e); } } diff --git a/test/framework/src/main/java/org/elasticsearch/transport/nio/SocketEventHandler.java b/test/framework/src/main/java/org/elasticsearch/transport/nio/SocketEventHandler.java index 6905f7957b3c9..58958a2b3ce3f 100644 --- a/test/framework/src/main/java/org/elasticsearch/transport/nio/SocketEventHandler.java +++ b/test/framework/src/main/java/org/elasticsearch/transport/nio/SocketEventHandler.java @@ -20,6 +20,8 @@ package org.elasticsearch.transport.nio; import org.apache.logging.log4j.Logger; +import org.apache.logging.log4j.message.ParameterizedMessage; +import org.elasticsearch.transport.nio.channel.NioChannel; import org.elasticsearch.transport.nio.channel.NioSocketChannel; import org.elasticsearch.transport.nio.channel.SelectionKeyUtils; import org.elasticsearch.transport.nio.channel.WriteContext; @@ -47,7 +49,7 @@ public SocketEventHandler(Logger logger, BiConsumer * * @param channel that was registered */ - public void handleRegistration(NioSocketChannel channel) { + void handleRegistration(NioSocketChannel channel) { SelectionKeyUtils.setConnectAndReadInterested(channel); } @@ -57,8 +59,8 @@ public void handleRegistration(NioSocketChannel channel) { * @param channel that was registered * @param exception that occurred */ - public void registrationException(NioSocketChannel channel, Exception exception) { - logger.trace("failed to register channel", exception); + void registrationException(NioSocketChannel channel, Exception exception) { + logger.debug(() -> new ParameterizedMessage("failed to register socket channel: {}", channel), exception); exceptionCaught(channel, exception); } @@ -68,7 +70,7 @@ public void registrationException(NioSocketChannel channel, Exception exception) * * @param channel that was registered */ - public void handleConnect(NioSocketChannel channel) { + void handleConnect(NioSocketChannel channel) { SelectionKeyUtils.removeConnectInterested(channel); } @@ -78,10 +80,9 @@ public void handleConnect(NioSocketChannel channel) { * @param channel that was connecting * @param exception that occurred */ - public void connectException(NioSocketChannel channel, Exception exception) { - logger.trace("failed to connect to channel", exception); + void connectException(NioSocketChannel channel, Exception exception) { + logger.debug(() -> new ParameterizedMessage("failed to connect to socket channel: {}", channel), exception); exceptionCaught(channel, exception); - } /** @@ -90,7 +91,7 @@ public void connectException(NioSocketChannel channel, Exception exception) { * * @param channel that 
can be read */ - public void handleRead(NioSocketChannel channel) throws IOException { + void handleRead(NioSocketChannel channel) throws IOException { int bytesRead = channel.getReadContext().read(); if (bytesRead == -1) { handleClose(channel); @@ -103,8 +104,8 @@ public void handleRead(NioSocketChannel channel) throws IOException { * @param channel that was being read * @param exception that occurred */ - public void readException(NioSocketChannel channel, Exception exception) { - logger.trace("failed to read from channel", exception); + void readException(NioSocketChannel channel, Exception exception) { + logger.debug(() -> new ParameterizedMessage("exception while reading from socket channel: {}", channel), exception); exceptionCaught(channel, exception); } @@ -114,7 +115,7 @@ public void readException(NioSocketChannel channel, Exception exception) { * * @param channel that can be read */ - public void handleWrite(NioSocketChannel channel) throws IOException { + void handleWrite(NioSocketChannel channel) throws IOException { WriteContext channelContext = channel.getWriteContext(); channelContext.flushChannel(); if (channelContext.hasQueuedWriteOps()) { @@ -130,8 +131,8 @@ public void handleWrite(NioSocketChannel channel) throws IOException { * @param channel that was being written to * @param exception that occurred */ - public void writeException(NioSocketChannel channel, Exception exception) { - logger.trace("failed to write to channel", exception); + void writeException(NioSocketChannel channel, Exception exception) { + logger.debug(() -> new ParameterizedMessage("exception while writing to socket channel: {}", channel), exception); exceptionCaught(channel, exception); } @@ -143,9 +144,9 @@ public void writeException(NioSocketChannel channel, Exception exception) { * @param channel that caused the exception * @param exception that was thrown */ - public void genericChannelException(NioSocketChannel channel, Exception exception) { - logger.trace("event handling failed", exception); - exceptionCaught(channel, exception); + void genericChannelException(NioChannel channel, Exception exception) { + super.genericChannelException(channel, exception); + exceptionCaught((NioSocketChannel) channel, exception); } private void exceptionCaught(NioSocketChannel channel, Exception e) { diff --git a/test/framework/src/main/java/org/elasticsearch/transport/nio/SocketSelector.java b/test/framework/src/main/java/org/elasticsearch/transport/nio/SocketSelector.java index b4da075f0fcc9..9c90463421a81 100644 --- a/test/framework/src/main/java/org/elasticsearch/transport/nio/SocketSelector.java +++ b/test/framework/src/main/java/org/elasticsearch/transport/nio/SocketSelector.java @@ -24,13 +24,10 @@ import org.elasticsearch.transport.nio.channel.WriteContext; import java.io.IOException; -import java.nio.channels.CancelledKeyException; import java.nio.channels.ClosedChannelException; import java.nio.channels.ClosedSelectorException; import java.nio.channels.SelectionKey; import java.nio.channels.Selector; -import java.util.Iterator; -import java.util.Set; import java.util.concurrent.ConcurrentLinkedQueue; /** @@ -54,17 +51,30 @@ public SocketSelector(SocketEventHandler eventHandler, Selector selector) throws } @Override - void doSelect(int timeout) throws IOException, ClosedSelectorException { - setUpNewChannels(); - handleQueuedWrites(); + void processKey(SelectionKey selectionKey) { + NioSocketChannel nioSocketChannel = (NioSocketChannel) selectionKey.attachment(); + int ops = selectionKey.readyOps(); + 
if ((ops & SelectionKey.OP_CONNECT) != 0) { + attemptConnect(nioSocketChannel, true); + } + + if (nioSocketChannel.isConnectComplete()) { + if ((ops & SelectionKey.OP_WRITE) != 0) { + handleWrite(nioSocketChannel); + } - int ready = selector.select(timeout); - if (ready > 0) { - Set selectionKeys = selector.selectedKeys(); - processKeys(selectionKeys); + if ((ops & SelectionKey.OP_READ) != 0) { + handleRead(nioSocketChannel); + } } } + @Override + void preSelect() { + setUpNewChannels(); + handleQueuedWrites(); + } + @Override void cleanup() { WriteOperation op; @@ -122,38 +132,6 @@ public void queueWriteInChannelBuffer(WriteOperation writeOperation) { } } - private void processKeys(Set selectionKeys) { - Iterator keyIterator = selectionKeys.iterator(); - while (keyIterator.hasNext()) { - SelectionKey sk = keyIterator.next(); - keyIterator.remove(); - NioSocketChannel nioSocketChannel = (NioSocketChannel) sk.attachment(); - if (sk.isValid()) { - try { - int ops = sk.readyOps(); - if ((ops & SelectionKey.OP_CONNECT) != 0) { - attemptConnect(nioSocketChannel, true); - } - - if (nioSocketChannel.isConnectComplete()) { - if ((ops & SelectionKey.OP_WRITE) != 0) { - handleWrite(nioSocketChannel); - } - - if ((ops & SelectionKey.OP_READ) != 0) { - handleRead(nioSocketChannel); - } - } - } catch (CancelledKeyException e) { - eventHandler.genericChannelException(nioSocketChannel, e); - } - } else { - eventHandler.genericChannelException(nioSocketChannel, new CancelledKeyException()); - } - } - } - - private void handleWrite(NioSocketChannel nioSocketChannel) { try { eventHandler.handleWrite(nioSocketChannel); diff --git a/test/framework/src/main/java/org/elasticsearch/transport/nio/channel/AbstractNioChannel.java b/test/framework/src/main/java/org/elasticsearch/transport/nio/channel/AbstractNioChannel.java index c02312aab51d6..c550785fac517 100644 --- a/test/framework/src/main/java/org/elasticsearch/transport/nio/channel/AbstractNioChannel.java +++ b/test/framework/src/main/java/org/elasticsearch/transport/nio/channel/AbstractNioChannel.java @@ -113,7 +113,7 @@ public void closeFromSelector() { closeRawChannel(); closedOnThisCall = closeFuture.channelClosed(this); } catch (IOException e) { - closedOnThisCall = closeFuture.channelCloseThrewException(this, e); + closedOnThisCall = closeFuture.channelCloseThrewException(e); } finally { if (closedOnThisCall) { selector.removeRegisteredChannel(this); @@ -162,5 +162,4 @@ void setSelectionKey(SelectionKey selectionKey) { void closeRawChannel() throws IOException { socketChannel.close(); } - } diff --git a/test/framework/src/main/java/org/elasticsearch/transport/nio/channel/ChannelFactory.java b/test/framework/src/main/java/org/elasticsearch/transport/nio/channel/ChannelFactory.java index c25936ce7fc01..f2f92e94e509d 100644 --- a/test/framework/src/main/java/org/elasticsearch/transport/nio/channel/ChannelFactory.java +++ b/test/framework/src/main/java/org/elasticsearch/transport/nio/channel/ChannelFactory.java @@ -21,6 +21,7 @@ import org.apache.lucene.util.IOUtils; +import org.elasticsearch.action.ActionListener; import org.elasticsearch.mocksocket.PrivilegedSocketAccess; import org.elasticsearch.transport.TcpTransport; import org.elasticsearch.transport.nio.AcceptingSelector; @@ -55,7 +56,7 @@ public NioSocketChannel openNioChannel(InetSocketAddress remoteAddress, SocketSe SocketChannel rawChannel = rawChannelFactory.openNioChannel(remoteAddress); NioSocketChannel channel = new NioSocketChannel(NioChannel.CLIENT, rawChannel, selector); 
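As a side note, here is a minimal sketch (not part of this patch) of the event-loop shape implied by the new `preSelect()`/`processKey()`/`cleanup()` hooks; the class name and the 300ms timeout are illustrative assumptions, not the actual `ESSelector` code:

-------------------------------------------------
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;

// Illustrative base loop: subclasses provide the three hooks and the base class
// owns selection and key iteration, so SocketSelector and AcceptingSelector no
// longer duplicate the select/iterate boilerplate.
abstract class SelectorLoopSketch {

    private final Selector selector;

    SelectorLoopSketch(Selector selector) {
        this.selector = selector;
    }

    abstract void preSelect();                  // e.g. register new channels, queue writes
    abstract void processKey(SelectionKey key); // dispatch readyOps for a single channel
    abstract void cleanup();                    // release resources on shutdown

    void singleLoop() throws IOException {
        preSelect();
        if (selector.select(300) > 0) {         // timeout chosen arbitrarily for this sketch
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isValid()) {
                    processKey(key);
                }
            }
        }
    }
}
-------------------------------------------------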
channel.setContexts(new TcpReadContext(channel, handler), new TcpWriteContext(channel)); - channel.getCloseFuture().setListener(closeListener); + channel.getCloseFuture().addListener(ActionListener.wrap(closeListener::accept, (e) -> closeListener.accept(channel))); scheduleChannel(channel, selector); return channel; } @@ -65,7 +66,7 @@ public NioSocketChannel acceptNioChannel(NioServerSocketChannel serverChannel, S SocketChannel rawChannel = rawChannelFactory.acceptNioChannel(serverChannel); NioSocketChannel channel = new NioSocketChannel(serverChannel.getProfile(), rawChannel, selector); channel.setContexts(new TcpReadContext(channel, handler), new TcpWriteContext(channel)); - channel.getCloseFuture().setListener(closeListener); + channel.getCloseFuture().addListener(ActionListener.wrap(closeListener::accept, (e) -> closeListener.accept(channel))); scheduleChannel(channel, selector); return channel; } diff --git a/test/framework/src/main/java/org/elasticsearch/transport/nio/channel/CloseFuture.java b/test/framework/src/main/java/org/elasticsearch/transport/nio/channel/CloseFuture.java index c27ba306e0e60..5932de8fef708 100644 --- a/test/framework/src/main/java/org/elasticsearch/transport/nio/channel/CloseFuture.java +++ b/test/framework/src/main/java/org/elasticsearch/transport/nio/channel/CloseFuture.java @@ -19,35 +19,37 @@ package org.elasticsearch.transport.nio.channel; -import org.apache.lucene.util.SetOnce; -import org.elasticsearch.common.util.concurrent.BaseFuture; +import org.elasticsearch.action.support.PlainListenableActionFuture; import java.io.IOException; import java.util.concurrent.ExecutionException; import java.util.concurrent.TimeUnit; import java.util.concurrent.TimeoutException; -import java.util.function.Consumer; -public class CloseFuture extends BaseFuture { - - private final SetOnce> listener = new SetOnce<>(); +public class CloseFuture extends PlainListenableActionFuture { @Override public boolean cancel(boolean mayInterruptIfRunning) { throw new UnsupportedOperationException("Cannot cancel close future"); } - public void awaitClose() throws InterruptedException, IOException { + public void awaitClose() throws IOException { try { super.get(); + } catch (InterruptedException e) { + Thread.currentThread().interrupt(); + throw new IllegalStateException("Future got interrupted", e); } catch (ExecutionException e) { throw (IOException) e.getCause(); } } - public void awaitClose(long timeout, TimeUnit unit) throws InterruptedException, TimeoutException, IOException { + public void awaitClose(long timeout, TimeUnit unit) throws TimeoutException, IOException { try { super.get(timeout, unit); + } catch (InterruptedException e) { + Thread.currentThread().interrupt(); + throw new IllegalStateException("Future got interrupted", e); } catch (ExecutionException e) { throw (IOException) e.getCause(); } @@ -76,31 +78,13 @@ public boolean isClosed() { return super.isDone(); } - public void setListener(Consumer listener) { - this.listener.set(listener); - } - boolean channelClosed(NioChannel channel) { - boolean set = set(channel); - if (set) { - Consumer listener = this.listener.get(); - if (listener != null) { - listener.accept(channel); - } - } - return set; + return set(channel); } - boolean channelCloseThrewException(NioChannel channel, IOException ex) { - boolean set = setException(ex); - if (set) { - Consumer listener = this.listener.get(); - if (listener != null) { - listener.accept(channel); - } - } - return set; + boolean channelCloseThrewException(IOException ex) { + 
return setException(ex); } } diff --git a/test/framework/src/main/java/org/elasticsearch/transport/nio/channel/NioServerSocketChannel.java b/test/framework/src/main/java/org/elasticsearch/transport/nio/channel/NioServerSocketChannel.java index a0524064cae27..4c6c0b2b65acd 100644 --- a/test/framework/src/main/java/org/elasticsearch/transport/nio/channel/NioServerSocketChannel.java +++ b/test/framework/src/main/java/org/elasticsearch/transport/nio/channel/NioServerSocketChannel.java @@ -37,4 +37,12 @@ public NioServerSocketChannel(String profile, ServerSocketChannel socketChannel, public ChannelFactory getChannelFactory() { return channelFactory; } + + @Override + public String toString() { + return "NioServerSocketChannel{" + + "profile=" + getProfile() + + ", localAddress=" + getLocalAddress() + + '}'; + } } diff --git a/test/framework/src/main/java/org/elasticsearch/transport/nio/channel/NioSocketChannel.java b/test/framework/src/main/java/org/elasticsearch/transport/nio/channel/NioSocketChannel.java index 0f63fc41ca444..6d41ad563a4e3 100644 --- a/test/framework/src/main/java/org/elasticsearch/transport/nio/channel/NioSocketChannel.java +++ b/test/framework/src/main/java/org/elasticsearch/transport/nio/channel/NioSocketChannel.java @@ -155,6 +155,15 @@ public ConnectFuture getConnectFuture() { return connectFuture; } + @Override + public String toString() { + return "NioSocketChannel{" + + "profile=" + getProfile() + + ", localAddress=" + getLocalAddress() + + ", remoteAddress=" + remoteAddress + + '}'; + } + private boolean internalFinish() throws IOException { try { return socketChannel.finishConnect(); diff --git a/test/framework/src/test/java/org/elasticsearch/test/rest/yaml/ClientYamlTestExecutionContextTests.java b/test/framework/src/test/java/org/elasticsearch/test/rest/yaml/ClientYamlTestExecutionContextTests.java index d77674bbc532a..2150baf59eab0 100644 --- a/test/framework/src/test/java/org/elasticsearch/test/rest/yaml/ClientYamlTestExecutionContextTests.java +++ b/test/framework/src/test/java/org/elasticsearch/test/rest/yaml/ClientYamlTestExecutionContextTests.java @@ -19,7 +19,7 @@ package org.elasticsearch.test.rest.yaml; -import org.elasticsearch.client.http.HttpEntity; +import org.apache.http.HttpEntity; import org.elasticsearch.test.ESTestCase; import java.io.IOException; diff --git a/test/framework/src/test/java/org/elasticsearch/test/rest/yaml/restspec/ClientYamlSuiteRestApiParserTests.java b/test/framework/src/test/java/org/elasticsearch/test/rest/yaml/restspec/ClientYamlSuiteRestApiParserTests.java index 6acdd935400f4..ddf89f9a6fcca 100644 --- a/test/framework/src/test/java/org/elasticsearch/test/rest/yaml/restspec/ClientYamlSuiteRestApiParserTests.java +++ b/test/framework/src/test/java/org/elasticsearch/test/rest/yaml/restspec/ClientYamlSuiteRestApiParserTests.java @@ -22,7 +22,9 @@ import org.elasticsearch.test.rest.yaml.section.AbstractClientYamlTestFragmentParserTestCase; import static org.hamcrest.Matchers.contains; +import static org.hamcrest.Matchers.containsInAnyOrder; import static org.hamcrest.Matchers.equalTo; +import static org.hamcrest.Matchers.hasEntry; import static org.hamcrest.Matchers.notNullValue; public class ClientYamlSuiteRestApiParserTests extends AbstractClientYamlTestFragmentParserTestCase { @@ -39,11 +41,13 @@ public void testParseRestSpecIndexApi() throws Exception { assertThat(restApi.getPaths().get(0), equalTo("/{index}/{type}")); assertThat(restApi.getPaths().get(1), equalTo("/{index}/{type}/{id}")); 
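The `toString()` overrides above exist mainly so the lazy `ParameterizedMessage` log lines in `SocketEventHandler` print something useful about the channel. A hedged, stand-alone illustration of that Log4j 2 pattern follows; the class and method names here are invented for the example:

-------------------------------------------------
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.message.ParameterizedMessage;

class ChannelLoggingSketch {

    private static final Logger logger = LogManager.getLogger(ChannelLoggingSketch.class);

    void onConnectFailure(Object channel, Exception exception) {
        // The supplier defers building the message, and therefore calling
        // channel.toString(), until the debug level is actually enabled.
        logger.debug(() -> new ParameterizedMessage(
            "failed to connect to socket channel: {}", channel), exception);
    }
}
-------------------------------------------------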
assertThat(restApi.getPathParts().size(), equalTo(3)); - assertThat(restApi.getPathParts().get(0), equalTo("id")); - assertThat(restApi.getPathParts().get(1), equalTo("index")); - assertThat(restApi.getPathParts().get(2), equalTo("type")); + assertThat(restApi.getPathParts().keySet(), containsInAnyOrder("id", "index", "type")); + assertThat(restApi.getPathParts(), hasEntry("index", true)); + assertThat(restApi.getPathParts(), hasEntry("type", true)); + assertThat(restApi.getPathParts(), hasEntry("id", false)); assertThat(restApi.getParams().size(), equalTo(4)); - assertThat(restApi.getParams(), contains("wait_for_active_shards", "op_type", "parent", "refresh")); + assertThat(restApi.getParams().keySet(), containsInAnyOrder("wait_for_active_shards", "op_type", "parent", "refresh")); + restApi.getParams().entrySet().stream().forEach(e -> assertThat(e.getValue(), equalTo(false))); assertThat(restApi.isBodySupported(), equalTo(true)); assertThat(restApi.isBodyRequired(), equalTo(true)); } @@ -59,7 +63,7 @@ public void testParseRestSpecGetTemplateApi() throws Exception { assertThat(restApi.getPaths().get(0), equalTo("/_template")); assertThat(restApi.getPaths().get(1), equalTo("/_template/{name}")); assertThat(restApi.getPathParts().size(), equalTo(1)); - assertThat(restApi.getPathParts().get(0), equalTo("name")); + assertThat(restApi.getPathParts(), hasEntry("name", false)); assertThat(restApi.getParams().size(), equalTo(0)); assertThat(restApi.isBodySupported(), equalTo(false)); assertThat(restApi.isBodyRequired(), equalTo(false)); @@ -78,10 +82,11 @@ public void testParseRestSpecCountApi() throws Exception { assertThat(restApi.getPaths().get(1), equalTo("/{index}/_count")); assertThat(restApi.getPaths().get(2), equalTo("/{index}/{type}/_count")); assertThat(restApi.getPathParts().size(), equalTo(2)); - assertThat(restApi.getPathParts().get(0), equalTo("index")); - assertThat(restApi.getPathParts().get(1), equalTo("type")); + assertThat(restApi.getPathParts().keySet(), containsInAnyOrder("index", "type")); + restApi.getPathParts().entrySet().stream().forEach(e -> assertThat(e.getValue(), equalTo(false))); assertThat(restApi.getParams().size(), equalTo(1)); - assertThat(restApi.getParams().get(0), equalTo("ignore_unavailable")); + assertThat(restApi.getParams().keySet(), contains("ignore_unavailable")); + assertThat(restApi.getParams(), hasEntry("ignore_unavailable", false)); assertThat(restApi.isBodySupported(), equalTo(true)); assertThat(restApi.isBodyRequired(), equalTo(false)); } diff --git a/test/framework/src/test/java/org/elasticsearch/test/test/InternalTestClusterTests.java b/test/framework/src/test/java/org/elasticsearch/test/test/InternalTestClusterTests.java index a45a54d27bfff..25c96da81fa16 100644 --- a/test/framework/src/test/java/org/elasticsearch/test/test/InternalTestClusterTests.java +++ b/test/framework/src/test/java/org/elasticsearch/test/test/InternalTestClusterTests.java @@ -116,15 +116,16 @@ public static void assertClusters(InternalTestCluster cluster0, InternalTestClus } public static void assertSettings(Settings left, Settings right, boolean checkClusterUniqueSettings) { - Set> entries0 = left.getAsMap().entrySet(); - Map entries1 = right.getAsMap(); + Set keys0 = left.keySet(); + Set keys1 = right.keySet(); assertThat("--> left:\n" + left.toDelimitedString('\n') + "\n-->right:\n" + right.toDelimitedString('\n'), - entries0.size(), equalTo(entries1.size())); - for (Map.Entry entry : entries0) { - if (clusterUniqueSettings.contains(entry.getKey()) && 
checkClusterUniqueSettings == false) { + keys0.size(), equalTo(keys1.size())); + for (String key : keys0) { + if (clusterUniqueSettings.contains(key) && checkClusterUniqueSettings == false) { continue; } - assertThat(entries1, hasEntry(entry.getKey(), entry.getValue())); + assertTrue("key [" + key + "] is missing in " + keys1, keys1.contains(key)); + assertEquals(right.get(key), left.get(key)); } } @@ -137,10 +138,9 @@ private void assertMMNinNodeSetting(InternalTestCluster cluster, int masterNodes private void assertMMNinNodeSetting(String node, InternalTestCluster cluster, int masterNodes) { final int minMasterNodes = masterNodes / 2 + 1; Settings nodeSettings = cluster.client(node).admin().cluster().prepareNodesInfo(node).get().getNodes().get(0).getSettings(); - assertThat("node setting of node [" + node + "] has the wrong min_master_node setting: [" + assertEquals("node setting of node [" + node + "] has the wrong min_master_node setting: [" + nodeSettings.get(DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING.getKey()) + "]", - nodeSettings.getAsMap(), - hasEntry(DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING.getKey(), Integer.toString(minMasterNodes))); + DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING.get(nodeSettings).intValue(), minMasterNodes); } private void assertMMNinClusterSetting(InternalTestCluster cluster, int masterNodes) { @@ -149,10 +149,9 @@ private void assertMMNinClusterSetting(InternalTestCluster cluster, int masterNo Settings stateSettings = cluster.client(node).admin().cluster().prepareState().setLocal(true) .get().getState().getMetaData().settings(); - assertThat("dynamic setting for node [" + node + "] has the wrong min_master_node setting : [" + assertEquals("dynamic setting for node [" + node + "] has the wrong min_master_node setting : [" + stateSettings.get(DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING.getKey()) + "]", - stateSettings.getAsMap(), - hasEntry(DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING.getKey(), Integer.toString(minMasterNodes))); + DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING.get(stateSettings).intValue(), minMasterNodes); } } diff --git a/test/framework/src/test/java/org/elasticsearch/transport/MockTcpTransportTests.java b/test/framework/src/test/java/org/elasticsearch/transport/MockTcpTransportTests.java index b32680d9da466..b1a3a914be89e 100644 --- a/test/framework/src/test/java/org/elasticsearch/transport/MockTcpTransportTests.java +++ b/test/framework/src/test/java/org/elasticsearch/transport/MockTcpTransportTests.java @@ -33,6 +33,7 @@ import java.util.Collections; public class MockTcpTransportTests extends AbstractSimpleTransportTestCase { + @Override protected MockTransportService build(Settings settings, Version version, ClusterSettings clusterSettings, boolean doHandshake) { NamedWriteableRegistry namedWriteableRegistry = new NamedWriteableRegistry(Collections.emptyList()); @@ -53,4 +54,13 @@ protected Version executeHandshake(DiscoveryNode node, MockChannel mockChannel, mockTransportService.start(); return mockTransportService; } + + @Override + protected void closeConnectionChannel(Transport transport, Transport.Connection connection) throws IOException { + final MockTcpTransport t = (MockTcpTransport) transport; + @SuppressWarnings("unchecked") final TcpTransport.NodeChannels channels = + (TcpTransport.NodeChannels) connection; + t.closeChannels(channels.getChannels().subList(0, randomIntBetween(1, channels.getChannels().size())), true, false); + } + } diff --git a/test/framework/src/test/java/org/elasticsearch/transport/nio/AcceptingSelectorTests.java 
b/test/framework/src/test/java/org/elasticsearch/transport/nio/AcceptingSelectorTests.java index 05d3b292b0a8d..140c44133d3fd 100644 --- a/test/framework/src/test/java/org/elasticsearch/transport/nio/AcceptingSelectorTests.java +++ b/test/framework/src/test/java/org/elasticsearch/transport/nio/AcceptingSelectorTests.java @@ -46,7 +46,6 @@ public class AcceptingSelectorTests extends ESTestCase { private NioServerSocketChannel serverChannel; private AcceptorEventHandler eventHandler; private TestSelectionKey selectionKey; - private HashSet keySet = new HashSet<>(); @Before public void setUp() throws Exception { @@ -64,14 +63,12 @@ public void setUp() throws Exception { when(serverChannel.getSelectionKey()).thenReturn(selectionKey); when(serverChannel.getSelector()).thenReturn(selector); when(serverChannel.isOpen()).thenReturn(true); - when(rawSelector.selectedKeys()).thenReturn(keySet); - when(rawSelector.select(0)).thenReturn(1); } public void testRegisteredChannel() throws IOException, PrivilegedActionException { selector.scheduleForRegistration(serverChannel); - selector.doSelect(0); + selector.preSelect(); verify(eventHandler).serverChannelRegistered(serverChannel); Set registeredChannels = selector.getRegisteredChannels(); @@ -83,7 +80,7 @@ public void testClosedChannelWillNotBeRegistered() throws Exception { when(serverChannel.isOpen()).thenReturn(false); selector.scheduleForRegistration(serverChannel); - selector.doSelect(0); + selector.preSelect(); verify(eventHandler).registrationException(same(serverChannel), any(ClosedChannelException.class)); @@ -98,7 +95,7 @@ public void testRegisterChannelFailsDueToException() throws Exception { ClosedChannelException closedChannelException = new ClosedChannelException(); doThrow(closedChannelException).when(serverChannel).register(); - selector.doSelect(0); + selector.preSelect(); verify(eventHandler).registrationException(serverChannel, closedChannelException); @@ -109,21 +106,19 @@ public void testRegisterChannelFailsDueToException() throws Exception { public void testAcceptEvent() throws IOException { selectionKey.setReadyOps(SelectionKey.OP_ACCEPT); - keySet.add(selectionKey); - selector.doSelect(0); + selector.processKey(selectionKey); verify(eventHandler).acceptChannel(serverChannel); } public void testAcceptException() throws IOException { selectionKey.setReadyOps(SelectionKey.OP_ACCEPT); - keySet.add(selectionKey); IOException ioException = new IOException(); doThrow(ioException).when(eventHandler).acceptChannel(serverChannel); - selector.doSelect(0); + selector.processKey(selectionKey); verify(eventHandler).acceptException(serverChannel, ioException); } @@ -131,7 +126,7 @@ public void testAcceptException() throws IOException { public void testCleanup() throws IOException { selector.scheduleForRegistration(serverChannel); - selector.doSelect(0); + selector.preSelect(); assertEquals(1, selector.getRegisteredChannels().size()); diff --git a/test/framework/src/test/java/org/elasticsearch/transport/nio/ESSelectorTests.java b/test/framework/src/test/java/org/elasticsearch/transport/nio/ESSelectorTests.java index 53705fcf5216b..afcd42dcb528e 100644 --- a/test/framework/src/test/java/org/elasticsearch/transport/nio/ESSelectorTests.java +++ b/test/framework/src/test/java/org/elasticsearch/transport/nio/ESSelectorTests.java @@ -24,8 +24,12 @@ import org.junit.Before; import java.io.IOException; +import java.nio.channels.CancelledKeyException; import java.nio.channels.ClosedSelectorException; +import java.nio.channels.SelectionKey; +import 
java.nio.channels.Selector; +import static org.mockito.Matchers.anyInt; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.verify; import static org.mockito.Mockito.when; @@ -34,12 +38,14 @@ public class ESSelectorTests extends ESTestCase { private ESSelector selector; private EventHandler handler; + private Selector rawSelector; @Before public void setUp() throws Exception { super.setUp(); handler = mock(EventHandler.class); - selector = new TestSelector(handler); + rawSelector = mock(Selector.class); + selector = new TestSelector(handler, rawSelector); } public void testQueueChannelForClosed() throws IOException { @@ -61,9 +67,8 @@ public void testQueueChannelForClosed() throws IOException { } public void testSelectorClosedExceptionIsNotCaughtWhileRunning() throws IOException { - ((TestSelector) this.selector).setClosedSelectorException(new ClosedSelectorException()); - boolean closedSelectorExceptionCaught = false; + when(rawSelector.select(anyInt())).thenThrow(new ClosedSelectorException()); try { this.selector.singleLoop(); } catch (ClosedSelectorException e) { @@ -75,7 +80,8 @@ public void testSelectorClosedExceptionIsNotCaughtWhileRunning() throws IOExcept public void testIOExceptionWhileSelect() throws IOException { IOException ioException = new IOException(); - ((TestSelector) this.selector).setIOException(ioException); + + when(rawSelector.select(anyInt())).thenThrow(ioException); this.selector.singleLoop(); @@ -84,34 +90,23 @@ public void testIOExceptionWhileSelect() throws IOException { private static class TestSelector extends ESSelector { - private ClosedSelectorException closedSelectorException; - private IOException ioException; - - protected TestSelector(EventHandler eventHandler) throws IOException { - super(eventHandler); + TestSelector(EventHandler eventHandler, Selector selector) throws IOException { + super(eventHandler, selector); } @Override - void doSelect(int timeout) throws IOException, ClosedSelectorException { - if (closedSelectorException != null) { - throw closedSelectorException; - } - if (ioException != null) { - throw ioException; - } + void processKey(SelectionKey selectionKey) throws CancelledKeyException { + } @Override - void cleanup() { + void preSelect() { } - public void setClosedSelectorException(ClosedSelectorException exception) { - this.closedSelectorException = exception; - } + @Override + void cleanup() { - public void setIOException(IOException ioException) { - this.ioException = ioException; } } diff --git a/test/framework/src/test/java/org/elasticsearch/transport/nio/SimpleNioTransportTests.java b/test/framework/src/test/java/org/elasticsearch/transport/nio/SimpleNioTransportTests.java index 2ba2e4cc02a85..f4e21f7093be1 100644 --- a/test/framework/src/test/java/org/elasticsearch/transport/nio/SimpleNioTransportTests.java +++ b/test/framework/src/test/java/org/elasticsearch/transport/nio/SimpleNioTransportTests.java @@ -53,7 +53,7 @@ public class SimpleNioTransportTests extends AbstractSimpleTransportTestCase { public static MockTransportService nioFromThreadPool(Settings settings, ThreadPool threadPool, final Version version, - ClusterSettings clusterSettings, boolean doHandshake) { + ClusterSettings clusterSettings, boolean doHandshake) { NamedWriteableRegistry namedWriteableRegistry = new NamedWriteableRegistry(Collections.emptyList()); NetworkService networkService = new NetworkService(Collections.emptyList()); Transport transport = new NioTransport(settings, threadPool, @@ -96,6 +96,13 @@ protected 
MockTransportService build(Settings settings, Version version, Cluster return transportService; } + @Override + protected void closeConnectionChannel(Transport transport, Transport.Connection connection) throws IOException { + final NioTransport t = (NioTransport) transport; + @SuppressWarnings("unchecked") TcpTransport.NodeChannels channels = (TcpTransport.NodeChannels) connection; + t.closeChannels(channels.getChannels().subList(0, randomIntBetween(1, channels.getChannels().size())), true, false); + } + public void testConnectException() throws UnknownHostException { try { serviceA.connectToNode(new DiscoveryNode("C", new TransportAddress(InetAddress.getByName("localhost"), 9876), diff --git a/test/framework/src/test/java/org/elasticsearch/transport/nio/SocketSelectorTests.java b/test/framework/src/test/java/org/elasticsearch/transport/nio/SocketSelectorTests.java index 50ce4a55b2960..cb266831530c8 100644 --- a/test/framework/src/test/java/org/elasticsearch/transport/nio/SocketSelectorTests.java +++ b/test/framework/src/test/java/org/elasticsearch/transport/nio/SocketSelectorTests.java @@ -53,7 +53,6 @@ public class SocketSelectorTests extends ESTestCase { private NioSocketChannel channel; private TestSelectionKey selectionKey; private WriteContext writeContext; - private HashSet keySet = new HashSet<>(); private ActionListener listener; private NetworkBytesReference bufferReference = NetworkBytesReference.wrap(new BytesArray(new byte[1])); @@ -72,8 +71,6 @@ public void setUp() throws Exception { this.socketSelector = new SocketSelector(eventHandler, rawSelector); this.socketSelector.setThread(); - when(rawSelector.selectedKeys()).thenReturn(keySet); - when(rawSelector.select(0)).thenReturn(1); when(channel.isOpen()).thenReturn(true); when(channel.getSelectionKey()).thenReturn(selectionKey); when(channel.getWriteContext()).thenReturn(writeContext); @@ -84,7 +81,7 @@ public void setUp() throws Exception { public void testRegisterChannel() throws Exception { socketSelector.scheduleForRegistration(channel); - socketSelector.doSelect(0); + socketSelector.preSelect(); verify(eventHandler).handleRegistration(channel); @@ -97,7 +94,7 @@ public void testClosedChannelWillNotBeRegistered() throws Exception { when(channel.isOpen()).thenReturn(false); socketSelector.scheduleForRegistration(channel); - socketSelector.doSelect(0); + socketSelector.preSelect(); verify(eventHandler).registrationException(same(channel), any(ClosedChannelException.class)); verify(channel, times(0)).finishConnect(); @@ -113,7 +110,7 @@ public void testRegisterChannelFailsDueToException() throws Exception { ClosedChannelException closedChannelException = new ClosedChannelException(); doThrow(closedChannelException).when(channel).register(); - socketSelector.doSelect(0); + socketSelector.preSelect(); verify(eventHandler).registrationException(channel, closedChannelException); verify(channel, times(0)).finishConnect(); @@ -128,7 +125,7 @@ public void testSuccessfullyRegisterChannelWillConnect() throws Exception { when(channel.finishConnect()).thenReturn(true); - socketSelector.doSelect(0); + socketSelector.preSelect(); verify(eventHandler).handleConnect(channel); } @@ -138,7 +135,7 @@ public void testConnectIncompleteWillNotNotify() throws Exception { when(channel.finishConnect()).thenReturn(false); - socketSelector.doSelect(0); + socketSelector.preSelect(); verify(eventHandler, times(0)).handleConnect(channel); } @@ -156,7 +153,7 @@ public void testQueueWriteChannelIsNoLongerWritable() throws Exception { 
socketSelector.queueWrite(writeOperation); when(channel.isWritable()).thenReturn(false); - socketSelector.doSelect(0); + socketSelector.preSelect(); verify(writeContext, times(0)).queueWriteOperations(writeOperation); verify(listener).onFailure(any(ClosedChannelException.class)); @@ -172,7 +169,7 @@ public void testQueueWriteSelectionKeyThrowsException() throws Exception { when(channel.isWritable()).thenReturn(true); when(channel.getSelectionKey()).thenReturn(selectionKey); when(selectionKey.interestOps(anyInt())).thenThrow(cancelledKeyException); - socketSelector.doSelect(0); + socketSelector.preSelect(); verify(writeContext, times(0)).queueWriteOperations(writeOperation); verify(listener).onFailure(cancelledKeyException); @@ -185,7 +182,7 @@ public void testQueueWriteSuccessful() throws Exception { assertTrue((selectionKey.interestOps() & SelectionKey.OP_WRITE) == 0); when(channel.isWritable()).thenReturn(true); - socketSelector.doSelect(0); + socketSelector.preSelect(); verify(writeContext).queueWriteOperations(writeOperation); assertTrue((selectionKey.interestOps() & SelectionKey.OP_WRITE) != 0); @@ -219,42 +216,36 @@ public void testQueueDirectlyInChannelBufferSelectionKeyThrowsException() throws } public void testConnectEvent() throws Exception { - keySet.add(selectionKey); - selectionKey.setReadyOps(SelectionKey.OP_CONNECT); when(channel.finishConnect()).thenReturn(true); - socketSelector.doSelect(0); + socketSelector.processKey(selectionKey); verify(eventHandler).handleConnect(channel); } public void testConnectEventFinishUnsuccessful() throws Exception { - keySet.add(selectionKey); - selectionKey.setReadyOps(SelectionKey.OP_CONNECT); when(channel.finishConnect()).thenReturn(false); - socketSelector.doSelect(0); + socketSelector.processKey(selectionKey); verify(eventHandler, times(0)).handleConnect(channel); } public void testConnectEventFinishThrowException() throws Exception { - keySet.add(selectionKey); IOException ioException = new IOException(); selectionKey.setReadyOps(SelectionKey.OP_CONNECT); when(channel.finishConnect()).thenThrow(ioException); - socketSelector.doSelect(0); + socketSelector.processKey(selectionKey); verify(eventHandler, times(0)).handleConnect(channel); verify(eventHandler).connectException(channel, ioException); } public void testWillNotConsiderWriteOrReadUntilConnectionComplete() throws Exception { - keySet.add(selectionKey); IOException ioException = new IOException(); selectionKey.setReadyOps(SelectionKey.OP_WRITE | SelectionKey.OP_READ); @@ -262,54 +253,48 @@ public void testWillNotConsiderWriteOrReadUntilConnectionComplete() throws Excep doThrow(ioException).when(eventHandler).handleWrite(channel); when(channel.isConnectComplete()).thenReturn(false); - socketSelector.doSelect(0); + socketSelector.processKey(selectionKey); verify(eventHandler, times(0)).handleWrite(channel); verify(eventHandler, times(0)).handleRead(channel); } public void testSuccessfulWriteEvent() throws Exception { - keySet.add(selectionKey); - selectionKey.setReadyOps(SelectionKey.OP_WRITE); - socketSelector.doSelect(0); + socketSelector.processKey(selectionKey); verify(eventHandler).handleWrite(channel); } public void testWriteEventWithException() throws Exception { - keySet.add(selectionKey); IOException ioException = new IOException(); selectionKey.setReadyOps(SelectionKey.OP_WRITE); doThrow(ioException).when(eventHandler).handleWrite(channel); - socketSelector.doSelect(0); + socketSelector.processKey(selectionKey); verify(eventHandler).writeException(channel, ioException); } 
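Taken together, these test changes replace stubbing of `rawSelector.select()`/`selectedKeys()` with direct calls to `preSelect()` and `processKey(...)`. A rough sketch of the pattern is below; it is not a new test in this patch, it reuses the static Mockito imports of the surrounding test class, and the `TestSelectionKey` constructor argument and `attach(...)` wiring are assumptions based on the tests above:

-------------------------------------------------
public void sketchOfReadDispatch() throws Exception {
    SocketEventHandler eventHandler = mock(SocketEventHandler.class);
    NioSocketChannel channel = mock(NioSocketChannel.class);
    TestSelectionKey selectionKey = new TestSelectionKey(0);
    selectionKey.attach(channel);

    SocketSelector socketSelector = new SocketSelector(eventHandler, mock(Selector.class));
    socketSelector.setThread();
    when(channel.isConnectComplete()).thenReturn(true);

    selectionKey.setReadyOps(SelectionKey.OP_READ);
    socketSelector.processKey(selectionKey);    // no real select() call involved

    verify(eventHandler).handleRead(channel);
}
-------------------------------------------------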
public void testSuccessfulReadEvent() throws Exception { - keySet.add(selectionKey); - selectionKey.setReadyOps(SelectionKey.OP_READ); - socketSelector.doSelect(0); + socketSelector.processKey(selectionKey); verify(eventHandler).handleRead(channel); } public void testReadEventWithException() throws Exception { - keySet.add(selectionKey); IOException ioException = new IOException(); selectionKey.setReadyOps(SelectionKey.OP_READ); doThrow(ioException).when(eventHandler).handleRead(channel); - socketSelector.doSelect(0); + socketSelector.processKey(selectionKey); verify(eventHandler).readException(channel, ioException); } @@ -319,7 +304,7 @@ public void testCleanup() throws Exception { socketSelector.scheduleForRegistration(channel); - socketSelector.doSelect(0); + socketSelector.preSelect(); NetworkBytesReference networkBuffer = NetworkBytesReference.wrap(new BytesArray(new byte[1])); socketSelector.queueWrite(new WriteOperation(mock(NioSocketChannel.class), networkBuffer, listener)); diff --git a/test/framework/src/test/java/org/elasticsearch/transport/nio/channel/NioServerSocketChannelTests.java b/test/framework/src/test/java/org/elasticsearch/transport/nio/channel/NioServerSocketChannelTests.java index 6f05d3c1f34c6..367df0c78f4c8 100644 --- a/test/framework/src/test/java/org/elasticsearch/transport/nio/channel/NioServerSocketChannelTests.java +++ b/test/framework/src/test/java/org/elasticsearch/transport/nio/channel/NioServerSocketChannelTests.java @@ -19,6 +19,7 @@ package org.elasticsearch.transport.nio.channel; +import org.elasticsearch.action.ActionListener; import org.elasticsearch.test.ESTestCase; import org.elasticsearch.transport.nio.AcceptingSelector; import org.elasticsearch.transport.nio.AcceptorEventHandler; @@ -33,6 +34,7 @@ import java.util.concurrent.TimeoutException; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicReference; +import java.util.function.Consumer; import java.util.function.Supplier; import static org.mockito.Mockito.mock; @@ -64,10 +66,11 @@ public void testClose() throws IOException, TimeoutException, InterruptedExcepti CountDownLatch latch = new CountDownLatch(1); NioChannel channel = new DoNotCloseServerChannel("nio", mock(ServerSocketChannel.class), mock(ChannelFactory.class), selector); - channel.getCloseFuture().setListener((c) -> { + Consumer listener = (c) -> { ref.set(c); latch.countDown(); - }); + }; + channel.getCloseFuture().addListener(ActionListener.wrap(listener::accept, (e) -> listener.accept(channel))); CloseFuture closeFuture = channel.getCloseFuture(); diff --git a/test/framework/src/test/java/org/elasticsearch/transport/nio/channel/NioSocketChannelTests.java b/test/framework/src/test/java/org/elasticsearch/transport/nio/channel/NioSocketChannelTests.java index 3d039b41a8a68..75ec57b2603db 100644 --- a/test/framework/src/test/java/org/elasticsearch/transport/nio/channel/NioSocketChannelTests.java +++ b/test/framework/src/test/java/org/elasticsearch/transport/nio/channel/NioSocketChannelTests.java @@ -19,6 +19,7 @@ package org.elasticsearch.transport.nio.channel; +import org.elasticsearch.action.ActionListener; import org.elasticsearch.test.ESTestCase; import org.elasticsearch.transport.nio.SocketEventHandler; import org.elasticsearch.transport.nio.SocketSelector; @@ -34,6 +35,7 @@ import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicReference; import java.util.function.BiConsumer; +import java.util.function.Consumer; import static 
org.hamcrest.Matchers.instanceOf; import static org.mockito.Mockito.mock; @@ -67,10 +69,11 @@ public void testClose() throws IOException, TimeoutException, InterruptedExcepti NioSocketChannel socketChannel = new DoNotCloseChannel(NioChannel.CLIENT, mock(SocketChannel.class), selector); socketChannel.setContexts(mock(ReadContext.class), mock(WriteContext.class)); - socketChannel.getCloseFuture().setListener((c) -> { + Consumer listener = (c) -> { ref.set(c); latch.countDown(); - }); + }; + socketChannel.getCloseFuture().addListener(ActionListener.wrap(listener::accept, (e) -> listener.accept(socketChannel))); CloseFuture closeFuture = socketChannel.getCloseFuture(); assertFalse(closeFuture.isClosed());
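Finally, since `CloseFuture` now extends `PlainListenableActionFuture`, the old `Consumer<NioChannel>` close callbacks are adapted via `ActionListener.wrap`, as seen in `ChannelFactory` and in these channel tests. A small sketch of that adapter; the helper class and method names are hypothetical:

-------------------------------------------------
import java.util.function.Consumer;

import org.elasticsearch.action.ActionListener;
import org.elasticsearch.transport.nio.channel.NioChannel;

final class CloseListeners {

    private CloseListeners() {}

    // Wraps a plain Consumer so the channel is reported both when the close
    // completes normally and when it fails with an exception.
    static ActionListener<NioChannel> adapt(NioChannel channel, Consumer<NioChannel> listener) {
        return ActionListener.wrap(listener::accept, e -> listener.accept(channel));
    }
}
-------------------------------------------------

With such a helper the call sites above would read `channel.getCloseFuture().addListener(CloseListeners.adapt(channel, closeListener))`, which is equivalent to the inline wrapping the patch uses.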